Colorado’s New AI Law Has Serious Implications for Many AI Developers and Users

The Colorado Artificial Intelligence Act (AI Act) is one of the most comprehensive AI laws outside of the EU, and it brings serious implications for AI developers and users alike.
- The Colorado AI Act has been signed by the state’s governor and will take effect on February 1, 2026.
- The law applies to developers and deployers (users) of “high-risk AI systems” in areas like employment, finance, and legal services.
- Requirements under the law range from producing technical documentation to running algorithmic impact assessments and implementing a risk management program.
Who’s covered by the Colorado AI Act?
To figure out whether the Colorado AI Act applies to you, you’ll need to understand how the law defines several key terms.
The Colorado AI Act targets two main types of entities:
- Developer: A person doing business in Colorado that develops or substantially modifies a high-risk AI system
- Deployer: A person doing business in Colorado that deploys (uses) a high-risk AI system
Now, let’s define a “high-risk AI system.” First, here’s what “AI system” means:
Any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
For an AI system to be “high-risk,” it must make, or “be a substantial factor in making,” a “consequential decision.”
So what’s a “consequential decision”? It’s a decision that has “a material legal or similarly significant effect” on whether to provide a service, or on the cost or terms of that service, in any of the following areas:
- Education enrollment or an education opportunity
- Employment or an employment opportunity
- A financial or lending service
- An essential government service
- Healthcare services
- Housing
- Insurance
- A legal service
The law excludes several types of systems from its scope, including antivirus software, spam filters, and—in case you were wondering—calculators.
So, for example:
- A software company creating a recruitment app that screens job applicants could be a developer
- A business using that software to find new hires could be a deployer
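To make the definitional logic concrete, here’s a minimal Python sketch of the coverage test as described above. The enum values, the `EXCLUDED_TECHNOLOGIES` set, and the `is_high_risk` function are our own illustrative shorthand, not terms from the statute, and any real coverage determination would turn on the law’s full text.

```python
from enum import Enum, auto

# The eight "consequential decision" areas listed in the Act.
class DecisionArea(Enum):
    EDUCATION = auto()
    EMPLOYMENT = auto()
    FINANCIAL_OR_LENDING = auto()
    ESSENTIAL_GOVERNMENT_SERVICE = auto()
    HEALTHCARE = auto()
    HOUSING = auto()
    INSURANCE = auto()
    LEGAL = auto()

# A few of the technologies the Act expressly excludes (illustrative,
# not the statute's full list of exclusions or their conditions).
EXCLUDED_TECHNOLOGIES = {"antivirus software", "spam filter", "calculator"}

def is_high_risk(
    is_ai_system: bool,
    decision_area: DecisionArea | None,
    substantial_factor: bool,
    technology_type: str,
) -> bool:
    """First-pass check mirroring the Act's definitions: an AI system is
    "high-risk" if it makes, or is a substantial factor in making, a
    consequential decision in a listed area and no exclusion applies."""
    if not is_ai_system or technology_type in EXCLUDED_TECHNOLOGIES:
        return False
    return decision_area is not None and substantial_factor

# The recruitment-app example from above: it screens applicants, so it
# is a substantial factor in employment decisions.
print(is_high_risk(True, DecisionArea.EMPLOYMENT, True, "recruitment app"))  # True
print(is_high_risk(True, None, False, "spam filter"))  # False
```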
Avoiding ‘algorithmic discrimination’
The Colorado AI Act requires both developers and deployers of high-risk AI systems to avoid “algorithmic discrimination,” which means the use of a high-risk AI system to cause “unlawful differential treatment or impact” on the basis of:
- Age
- Color
- Disability
- Ethnicity
- Genetic information
- Limited English language proficiency
- National origin
- Race
- Religion
- Reproductive health
- Sex
- Veteran status
- Any other legally protected classification
Developers and deployers must notify the Colorado Attorney General of any known or reasonably foreseeable risk of algorithmic discrimination.
If a developer or deployer complies with this requirement and its other obligations under the Colorado AI Act, it benefits from a rebuttable presumption that it used reasonable care to avoid algorithmic discrimination.
But this will involve quite a lot of work—both developers and deployers have substantial obligations under the law.
What are developers’ obligations under the Colorado AI Act?
The Colorado AI Act’s main obligation on developers is to provide information to deployers of their high-risk AI systems, including:
- The system’s purpose, intended benefits, and reasonably foreseeable uses, including harmful or inappropriate uses
- Summaries of the training data
- Any reasonably foreseeable limitations or risks of algorithmic discrimination
- How the system has been tested and evaluated
This is just a small part of the very detailed documentation developers must provide to deployers.
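As a rough illustration of what that documentation might cover, here’s a minimal Python sketch of a disclosure record built from the items listed above. The class and field names, and the `ResumeRanker` example, are hypothetical labels of our own, not the statute’s language.

```python
from dataclasses import dataclass

@dataclass
class DeveloperDisclosure:
    """Illustrative record of information a developer hands to deployers."""
    system_name: str
    purpose: str
    intended_benefits: str
    foreseeable_uses: list[str]       # including harmful or inappropriate uses
    training_data_summary: str
    discrimination_risks: list[str]   # reasonably foreseeable limitations and risks
    evaluation_summary: str           # how the system was tested and evaluated

# Hypothetical example for the recruitment app discussed earlier.
disclosure = DeveloperDisclosure(
    system_name="ResumeRanker",
    purpose="Screen job applicants for interview shortlisting",
    intended_benefits="Faster, more consistent initial screening",
    foreseeable_uses=["shortlisting candidates", "misuse: fully automated rejection"],
    training_data_summary="Anonymized historical applications and outcomes",
    discrimination_risks=["proxy features correlated with protected classes"],
    evaluation_summary="Disparate-impact testing across demographic groups",
)
```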
A developer must also post a notice on its website listing its high-risk AI systems and explaining its risk mitigation measures.
What are deployers’ obligations under the Colorado AI Act?
Deployers’ obligations under the Colorado AI Act include:
- Implementing a “risk management policy and program” informed by recognized frameworks such as the NIST AI Risk Management Framework, ISO/IEC 42001, or an equivalent standard
- Completing an “algorithmic impact assessment” for each high-risk AI system
- Providing extensive information to consumers about how they use high-risk AI systems and the risks involved
- Allowing consumers to appeal consequential decisions and obtain human review
- Upholding consumers’ related rights under the Colorado Privacy Act, where relevant
Again, this is a lot of work—implementing a risk management policy alone could require a deployer to make substantial changes to its operations. Colorado clearly takes the risks arising from AI-based decision-making very seriously indeed.
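To give a feel for the record-keeping involved, here’s a minimal Python sketch of a per-system algorithmic impact assessment record. The field names paraphrase the kinds of content the Act calls for; they are our own shorthand, and the statute’s actual list of required contents and its review cadence govern.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative per-system algorithmic impact assessment record."""
    system_name: str
    assessment_date: date
    purpose_and_use: str          # purpose, intended use cases, deployment context
    discrimination_analysis: str  # known or foreseeable risks and mitigations
    data_categories: str          # categories of data the system processes
    performance_metrics: str      # how performance and limitations are evaluated
    transparency_measures: str    # notices and disclosures provided to consumers
    monitoring_plan: str          # post-deployment monitoring and safeguards

# A deployer might maintain one such record per high-risk system and
# refresh it on a regular review cycle.
```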
Colorado AI Act: Key takeaways
- The Colorado AI Act is one of the world’s most ambitious AI laws. It imposes extensive obligations on organizations that develop and use “high-risk AI systems.”
- Both developers and deployers of high-risk AI systems must make reasonable efforts to avoid algorithmic discrimination and notify the Attorney General if such discrimination has occurred or is reasonably foreseeable.
- Developers must provide deployers with extensive technical documentation explaining the uses and training of their high-risk AI systems, among other obligations.
- Deployers’ obligations include implementing a risk management program, providing notice to consumers, conducting algorithmic impact assessments, and allowing consumers to appeal AI-based decisions.