On May 17, Colorado passed comprehensive AI legislation. Learn more about the consumer protections in this bill and what it means for developers and deployers of AI systems.
Lauren Diethelm
AI Content Marketing Specialist
May 24, 2024
On May 20, 2024, the Colorado state legislature announced that the governor had signed Colorado’s comprehensive AI act into law on May 17. The “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” act, also called the Colorado AI Act (CAIA), is the first comprehensive, risk-based approach to AI regulation in the US.
Set to take effect on February 1, 2026, the CAIA aims to protect Colorado consumers from algorithmic discrimination and ensure transparency and accountability from developers and deployers of AI systems.
The CAIA defines an AI system as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” Although the phrasing differs, this definition is similar to how the Utah AI bill defines AI.
A few other key definitions to understand in the CAIA include:
Algorithmic discrimination: Any condition in which the use of an AI system results in unlawful differential treatment of, or impact on, an individual or group of individuals protected under state or federal law
Consequential decision: Any decision that has a material or similarly significant effect on the provision or denial to any consumer of opportunities or services in areas like employment, education, financial services, health care, and housing
The Colorado AI Act focuses largely on high-risk AI systems, which it defines as any AI system that makes, or is a substantial factor in making, a consequential decision.
This definition of high-risk system differs from that of the EU AI Act, which focuses more specifically on identifying systems that have a significant harmful impact on health, safety, and fundamental rights. The CAIA definition of a high-risk system does not include (see the illustrative sketch after this list):
An AI system that’s intended to perform a narrow procedural task; or
An AI system that’s intended to detect decision-making patterns, or deviations from prior decision-making patterns, and is not intended to replace or influence a previously completed human assessment
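To make these definitions a bit more concrete, here’s a minimal Python sketch of the high-risk test described above. The field names, flags, and example are illustrative assumptions of ours, not language from the Act, and a real determination would turn on legal analysis rather than a boolean check.

```python
from dataclasses import dataclass

# Illustrative only: these flags are assumptions meant to mirror the CAIA's
# high-risk definition and its carve-outs, not an official compliance test.
@dataclass
class AISystemProfile:
    makes_or_substantially_influences_consequential_decision: bool
    narrow_procedural_task_only: bool
    detects_patterns_without_replacing_human_assessment: bool

def is_high_risk_under_caia(profile: AISystemProfile) -> bool:
    """Rough sketch of the high-risk test summarized above."""
    if not profile.makes_or_substantially_influences_consequential_decision:
        return False
    # The Act excludes certain systems even when they touch consequential decisions
    if profile.narrow_procedural_task_only:
        return False
    if profile.detects_patterns_without_replacing_human_assessment:
        return False
    return True

# Example: a resume-screening model that substantially influences hiring decisions
screening_model = AISystemProfile(
    makes_or_substantially_influences_consequential_decision=True,
    narrow_procedural_task_only=False,
    detects_patterns_without_replacing_human_assessment=False,
)
print(is_high_risk_under_caia(screening_model))  # True
```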
The CAIA requires a developer or deployer of a high-risk AI system to use reasonable care to avoid algorithmic discrimination in the system.
Once the bill is in effect, developers and deployers of high-risk systems will each have specific requirements to ensure that consumers are protected from potential discrimination, and that there’s sufficient transparency and accountability in the model.
Under this bill, developers must maintain specific documentation, including a policy to comply with federal and state copyright laws, as well as a detailed summary of the content that was used to train the model.
Developers must also create, implement, maintain, and make documentation available to deployers who want to use the AI system. This documentation needs to:
Disclose a general statement regarding the reasonably foreseeable uses and known harmful or inappropriate uses of the system
Share information about the type of data used to train the system
State the purpose, intended benefits, and use of the high-risk system
Developers are also required to provide documentation explaining how the system was tested and evaluated for performance, and how risks were mitigated, before it was made available to the deployer.
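As a rough illustration of how a developer team might organize this material internally, the sketch below models the disclosure items as a simple Python record. The field names are assumptions that loosely track the duties summarized above; the CAIA does not prescribe a schema.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names are our own mapping of the developer
# documentation duties described above, not a format defined by the CAIA.
@dataclass
class DeveloperDocumentation:
    foreseeable_uses: str                   # reasonably foreseeable uses of the system
    known_harmful_or_inappropriate_uses: str
    training_data_types: list[str]          # types of data used to train the system
    purpose_and_intended_benefits: str
    evaluation_and_mitigation_summary: str  # how performance was tested and risks mitigated
    copyright_compliance_policy: str        # policy for complying with federal and state copyright laws

    def missing_sections(self) -> list[str]:
        """List any sections still blank before sharing the package with a deployer."""
        return [name for name, value in vars(self).items() if not value]
```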
Deployers, meaning those who use a high-risk AI system, will be required to use reasonable care to protect consumers from any known or potential risks of using AI. The CAIA requires deployers to:
Implement a risk management policy and program that is iteratively planned, implemented, and then regularly reviewed and updated
Complete impact assessments for deployed high-risk AI systems at least annually, and within 90 days of any intentional and substantial modification to the system
Notify the state attorney general within 90 days of discovering that a high-risk system has caused, or is likely to have caused, algorithmic discrimination (these timing requirements are sketched below)
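For teams that want to track these deadlines operationally, here’s a minimal sketch of the two timing rules above: the annual and post-modification impact assessment cadence, and the 90-day attorney general notification window. The function names and the idea of tracking dates this way are illustrative assumptions, not requirements of the Act.

```python
from datetime import date, timedelta

# Minimal sketch, assuming a deployer records these dates internally; the
# 365-day and 90-day windows follow the cadence described above, but nothing
# here is an official interpretation of the CAIA's deadlines.
ANNUAL_WINDOW = timedelta(days=365)
NINETY_DAYS = timedelta(days=90)

def next_impact_assessment_due(last_assessment: date,
                               last_substantial_modification: date | None = None) -> date:
    """Earlier of the annual deadline and the 90-day post-modification deadline."""
    due = last_assessment + ANNUAL_WINDOW
    if last_substantial_modification is not None:
        due = min(due, last_substantial_modification + NINETY_DAYS)
    return due

def attorney_general_notice_deadline(discrimination_discovered: date) -> date:
    """Notification is due within 90 days of discovering algorithmic discrimination."""
    return discrimination_discovered + NINETY_DAYS

# Example: assessed on the effective date, then substantially modified mid-June
print(next_impact_assessment_due(date(2026, 2, 1), date(2026, 6, 15)))  # 2026-09-13
```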
Deployers have additional requirements designed to increase transparency around AI and ensure that consumers interacting with it are protected.
They must provide consumers with the opportunity to correct any personal data that the AI system uses to make any consequential decisions, and must provide the opportunity to appeal any adverse consequential decisions resulting from that system.
Developers and deployers are also required to disclose to consumers when they’re interacting with AI, and must provide information on how to opt out of the processing of their personal data – a requirement that mirrors transparency requirements in other state privacy laws as well as Utah’s AI bill.
Like the Utah AI bill, the CAIA continues the growing trend of state-level initiatives that are filling the void left by the absence of federal legislation regulating AI. However, unlike Utah, which focuses more on establishing working groups and commercial communications using generative AI, the CAIA takes a more comprehensive and risk-based approach to AI regulation.
As the map of comprehensive state AI legislation continues to fill out, it will become more important for organizations to choose which frameworks they want to align their AI governance operations to, and to proactively communicate those values to the rest of their workforce.
To learn more about how aligning to key frameworks early on can help steer your AI governance program, see which frameworks OneTrust’s AI governance committee chose and how we integrated that guidance in our own program.