
Two Paths to AI Regulation: Capability vs. Use Case in State-Level Approaches

State-level AI regulation remains a dynamic landscape in the US as lawmakers tackle the complex risks posed by modern AI. Two influential 2024 measures, Colorado’s SB 24-205 (Consumer Protections for Artificial Intelligence) and California’s proposed SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), illustrate two emerging approaches: one focuses on the capabilities and power of AI models themselves, and the other on specific high-stakes uses of AI in areas like employment or healthcare. Understanding these frameworks offers a window into how state AI laws might evolve in 2025 and beyond.

The Colorado Artificial Intelligence Act (CAIA) centers on “consequential decisions” that can materially affect individuals’ lives in areas such as education, employment, financial services, housing, healthcare, and legal services. The law, which takes effect on February 1, 2026, imposes comprehensive documentation and disclosure requirements on high-risk AI systems, defined as systems that make, or are a substantial factor in making, these consequential decisions. To prevent algorithmic discrimination in these critical areas, the law requires both developers and deployers to implement risk management programs, conduct impact assessments, and maintain active monitoring of AI systems used for these determinations. This approach demands robust internal compliance regimes that must be reevaluated at least annually and within 90 days of any substantial modification to a high-risk AI system.

California’s SB 1047 took a distinct regulatory approach. Rather than focusing on specific use cases, the bill would have regulated AI systems based on both the quantity of computing power used to train them and the associated costs, specifically targeting models requiring over 10^26 floating-point operations and more than $100 million in cloud computing resources to train. These covered models would have faced additional oversight, including mandatory third-party audits and the implementation of full shutdown capabilities. The bill also would have prohibited using these models in ways that could create an “unreasonable risk” of causing critical harm, which the bill defined to include mass casualties from weapons, major cyberattacks on critical infrastructure, and other grave harms to public safety and security. Although vetoed, the bill’s underlying principle of tying regulation to an AI model’s scale and computing requirements represents one potential approach to identifying and regulating powerful AI systems.

While Governor Newsom vetoed SB 1047, reporting indicates that future AI regulation in California will likely emerge through a revised framework that balances computational capabilities and specific use cases as regulatory triggers. Rather than relying primarily on computing power thresholds (like SB 1047's $100 million training cost benchmark), new legislation will likely maintain some focus on model capabilities while adding more emphasis on the specific risks posed by AI systems in high-risk environments and critical decision-making contexts.

Comparing these two frameworks offers businesses and counsel practical lessons for preparing for future state AI regulation. Colorado’s use-case-driven approach compels developers and deployers to embed rigorous bias-mitigation strategies in critical sectors like education and healthcare. California’s capability-based approach, though vetoed in its initial form, would have required companies training large-scale, high-capacity models to implement third-party audits and full shutdown capabilities to mitigate the risks of catastrophic misuse. Taken together, these approaches illustrate how future regulations may operate on two levels: controlling the inherent power of AI models and governing their specific uses in high-risk environments. Organizations that integrate both perspectives, prioritizing safety and fairness in day-to-day AI applications while managing the broader risks of advanced AI, will be better positioned to achieve compliance as new regulations solidify.