Colorado is scrapping its risk-based AI regulation in favor of a transparency-focused approach, marking a significant departure from the European model that influenced the original law. The Colorado AI Policy Work Group, convened by Governor Jared Polis, unveiled a proposal on March 17, 2026, to repeal and replace the Colorado Artificial Intelligence Act (CAIA), which had been loosely modeled on the EU AI Act. The shift reflects mounting pressure from federal regulators, industry concerns about innovation, and the risk of losing federal broadband funding.

What Changed in Colorado's AI Regulation Approach?

The original Colorado AI Act, which was set to take effect in June 2026, focused on "high-risk AI systems" that make or influence decisions about consumers in critical areas such as employment, lending, housing, and healthcare. It required developers and deployers to use reasonable care to prevent algorithmic discrimination and to conduct impact assessments.

The new proposal fundamentally restructures this framework by narrowing the scope and shifting compliance obligations. Instead of regulating broad "high-risk AI systems," the proposal introduces "Covered Automated Decision-Making Technologies" (ADMTs), a category that applies only when AI output "materially influences" a consequential decision. The definition excludes advertising, marketing, content recommendations, search results, content moderation, cybersecurity, fraud prevention, and spam filtering. This narrower scope significantly reduces the number of systems subject to regulation.

How Does the New Framework Shift Compliance Obligations?

- Developer Duties: The proposal replaces the duty of care with documentation requirements. Developers must provide deployers with information about intended uses, known harmful uses, training data categories, system limitations, and usage instructions, rather than implementing proactive safeguards.
- Transparency Focus: The framework pivots from prescriptive lifecycle governance (audits, risk management plans, impact assessments) to notice and transparency obligations, aligning it more closely with U.S. consumer protection traditions than with European regulatory models.
- Liability Recalibration: The proposal narrows liability exposure for both AI developers and deployers, though it leaves open questions about how existing discrimination and consumer protection laws will be enforced in practice.

This structural shift represents a fundamental philosophical change. The original CAIA imposed affirmative duties on companies to prevent harm before deployment. The new framework relies on transparency and documentation, allowing companies to disclose risks rather than eliminate them upfront.

Why Is Colorado Making This Change Now?

The timing of the proposal suggests multiple pressures converging on Colorado's legislature. President Trump's Executive Order on AI, issued in March 2026, directs the Department of Commerce to identify states with "onerous" AI laws that could disqualify them from federal broadband funding under the Broadband Equity, Access, and Deployment (BEAD) Program. Colorado's updated framework may be designed to avoid that list and preserve federal funding eligibility.

Additionally, the Trump administration released a National AI Legislative Framework on March 20, 2026, calling for federal preemption of state AI laws that impose "undue burdens." Colorado's shift toward a lighter-touch transparency model may be an attempt to avoid federal preemption entirely.

The Federal Trade Commission is also preparing guidance on how the FTC Act applies to AI systems, including circumstances in which state laws requiring AI systems to alter their outputs might be preempted. A framework that moves away from prescriptive output monitoring could sidestep these preemption risks.
Colorado's legislative session runs through May 13, 2026, giving lawmakers nearly two months to review and revise the proposal before the June 2026 implementation deadline for the original law.

What Does This Mean for AI Companies and Consumers?

For AI developers and deployers, the new framework significantly reduces the compliance burden. Companies will no longer need to conduct algorithmic discrimination assessments, implement risk management plans, or provide consumers with appeal mechanisms for adverse decisions. Instead, they must document system capabilities and limitations. This approach favors innovation and reduces friction for smaller companies that lacked the resources to meet the original law's requirements.

For consumers, the shift is more ambiguous. Transparency obligations may provide visibility into how AI systems work, but without proactive safeguards, consumers lose the protection of mandatory impact assessments and discrimination prevention measures. The proposal leaves enforcement of discrimination and consumer protection laws to existing legal frameworks, which may or may not adequately address AI-specific harms.

The Colorado proposal signals a broader trend: as federal pressure mounts and preemption threats loom, states may abandon the European regulatory model in favor of lighter-touch frameworks that emphasize transparency over prevention. This approach aligns with the Trump administration's stated preference for minimally burdensome national standards and could influence how other states approach AI regulation in the coming months.