
Our Take on AI


High-Risk AI Regulation Arrives In Colorado

On May 17, 2024, Colorado Governor Jared Polis signed the Colorado AI Act into law, positioning the state at the forefront of artificial intelligence (AI) regulation in the United States. As the nation's first comprehensive AI governance law, the Colorado AI Act sets rigorous standards for businesses employing "high-risk" AI systems in critical sectors such as education, employment, finance, government services, healthcare, housing, insurance, and legal services.

High-risk AI systems are defined as "any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision." A "consequential decision," in turn, is one that has "a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of," the services listed above. The law does, however, exempt certain narrow AI systems and technologies.

Developers of high-risk AI systems have a duty of care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. They must provide deployers, consumers, and the Colorado attorney general with a collection of disclosures and documentation along with the system. Additionally, developers must publish statements summarizing their risk management practices concerning algorithmic discrimination and must report known or reasonably foreseeable risks of algorithmic discrimination to the attorney general within 90 days of discovery.

Deployers of high-risk AI systems also have a duty of care to avoid known or reasonably foreseeable risks of algorithmic discrimination. They must establish mandatory AI governance programs and risk management policies, which the law describes as an "iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system." Deployers are also required to conduct annual impact assessments and assessments within 90 days of substantial modifications to the AI system. Transparency is a key aspect of the law, with deployers obligated to provide consumer notification and opt-out rights, the principal reason for adverse decisions, and the opportunity for consumers to correct data and appeal AI-driven decisions.

Starting February 1, 2026, the Colorado attorney general will begin enforcement of the Colorado AI Act. Violations will be treated as violations of Colorado's consumer protection statute, with penalties of up to $20,000 per violation, counted separately for each consumer or transaction involved. The attorney general is also authorized to promulgate rules in six areas to facilitate the law's implementation.

In light of this legislation, businesses should begin aligning their practices with its requirements now. Developers should compile the documentation needed for disclosures and impact assessments and draft public statements on risk management. Deployers should develop risk management policies and programs, conduct impact assessments, review systems for algorithmic discrimination, create processes for consumer data correction and appeal of AI-driven decisions, and draft public statements on deployed high-risk AI systems and risk management.