The Hidden Cost of AI: Why Laws Can't Keep Up With the Real Harms
AI regulation is failing to address the most serious harms the technology is already causing. While governments race to pass laws targeting algorithmic bias and data privacy, they're missing emerging dangers that regulations didn't anticipate and likely cannot prevent, from teenagers forming romantic relationships with chatbots to low-wage workers developing trauma from moderating AI training data.
What Harms Are AI Laws Actually Missing?
The problem isn't that AI regulation doesn't exist. The EU General Data Protection Regulation includes provisions for automated decision-making, the EU AI Act prohibits certain high-risk AI uses, and many US states have introduced AI-specific legislation. Yet these frameworks focus primarily on how AI systems are deployed, not on the hidden human and ethical costs embedded in their creation.
Consider the gap between law and reality. ChatGPT became one of the fastest-adopted consumer technologies in history, and while enterprises have invested heavily in generative AI tools, some have reportedly laid off up to 20 percent of their workforce due to AI-related costs. Meanwhile, regulations don't address the people who make AI possible: the data labelers and content moderators who work for low wages, often under non-disclosure agreements that bar them from discussing the traumatic content they've seen, even with mental health professionals.
The emerging use cases are particularly troubling. More than 55,000 people follow the "My Boyfriend Is AI" subreddit, where members explore romantic relationships with chatbots. Chatbots have been linked to suicides, including among teenagers. Deepfake technologies enable increasingly convincing scams. Yet legislation like the EU AI Act does not specifically address these scenarios, and it's unrealistic to expect regulations to anticipate every dangerous use case.
How Can Companies Assess AI Risks Before Deployment?
Because regulations cannot keep pace with technological advancement, enterprises developing or deploying AI systems must take proactive steps to identify and limit potential harms. The solution lies in systematic ethical impact assessment before systems go live.
- Privacy and Data Protection: Evaluate how personal information is collected, stored, and used within the AI system, including whether data is properly anonymized and whether users have meaningful consent.
- Transparency and Explainability: Assess whether the AI system's decisions can be understood by users and stakeholders, and whether the system discloses when it is making automated decisions that affect people.
- Cybersecurity and Auditability: Determine whether the system can be monitored for attacks, whether audit trails exist for decisions made, and whether the system can be shut down or corrected if problems emerge.
- Health and Safety Impacts: Consider whether the system could cause physical or psychological harm, including through addiction, manipulation, or exposure to harmful content.
- Hidden Labor Costs: Account for the human workers involved in training and maintaining the system, including their working conditions, mental health risks, and fair compensation.
- Anthropomorphism Risks: Evaluate whether the system presents itself as human-like in ways that could mislead users or create unhealthy dependencies, as with AI chatbots marketed as romantic partners.
ISACA, a professional organization for information systems professionals, released an AI Impact Assessment Tool designed to help enterprises evaluate these dimensions systematically. The tool contains guided questions that can be ranked by risk level (high, medium, or low) and automatically calculates risk scores across 14 dimensions, providing a benchmark to help organizations prioritize which areas need the most attention.
Enterprises whose AI systems comply with applicable laws and regulations should not assume those systems are therefore safe or incapable of causing harm. Legal compliance and ethical safety are not the same thing.
Why Does Market Pressure Matter More Than Regulation?
Recent events suggest that clear, public information about AI company policies may drive change faster than legislation. In late February 2026, reporting revealed that Anthropic had refused to allow the Pentagon to use its Claude AI model for mass domestic surveillance or fully autonomous weapons, while OpenAI signed a Pentagon deal within hours of Anthropic's designation as a "supply chain risk."
The difference was clarity. When the information was presented in a form that ordinary people could understand and act on, they acted. Claude reached the top of the US App Store within 48 hours, overtaking ChatGPT. Daily sign-ups hit record highs; free users had grown by more than 60 percent since January, and paid subscribers had more than doubled.
This market shift demonstrated that when AI policy information reaches people in an understandable form, they use it to make different choices about which tools they adopt. The AI Accountability Directory was built on this insight, making it easier for people to compare how 15 major AI companies handle user data, ethical limits, government contracts, and safety records.
What Is the US Government's New Approach to AI Regulation?
On March 20, 2026, the Trump administration released a "National Policy Framework for Artificial Intelligence" with seven categories of legislative proposals intended to balance innovation with protection from harm. The framework calls for Congressional action on several fronts, including:
- Protecting Vulnerable Groups: Parental controls and privacy protections for children, safeguards for elderly people against AI-enabled scams, and restrictions on deepfakes used for fraud.
- Protecting Infrastructure and Creators: Streamlining federal permitting for AI data centers, protecting creators from unauthorized use of their content, and protecting taxpayers from increased electricity costs associated with AI infrastructure.
- Enabling Innovation: Creating "regulatory sandboxes" for AI applications, providing resources to small businesses for AI deployment, and encouraging AI training in workforce programs.
Notably, the White House rejected the creation of a new federal rulemaking body to regulate AI. Instead, it called on Congress to support existing regulatory bodies in developing sector-specific national standards that would preempt conflicting state AI laws, avoiding a patchwork of state regulations.
Senator Marsha Blackburn released a comprehensive 291-page "Trump America AI Act" on March 18, 2026, that would establish federal AI product liability, require large employers to report AI-driven workforce changes, mandate federal safety evaluations of advanced AI, and introduce independent bias audits targeting viewpoint and political affiliation. The bill would preempt conflicting state laws, except for state legislation providing greater protections for minors.
However, neither the Commerce Department nor the Federal Trade Commission released its expected deliverables by the March 11, 2026 deadline, leaving states that have enacted AI regulations in a holding pattern as they prepare for possible legal challenges to federal preemption efforts.
How Are States and Companies Responding?
California and Colorado are taking independent action. On March 30, 2026, California Governor Gavin Newsom issued an executive order instructing state agencies to develop new safeguards for AI procurement within 120 days, requiring vendors to explain the policies and safeguards they use to protect public safety. The order also allows California to continue procuring from companies whose federal designation the state deems improper, a direct response to the Pentagon's labeling of Anthropic as a supply chain risk.
Colorado's AI Policy Workgroup published a revised framework on March 17, 2026, that would substantially rewrite the state's AI Act by shifting focus from bias reporting to transparency and making compliance less burdensome. The new legislation would take effect on January 1, 2027.
Connecticut's attorney general issued guidance on February 25, 2026, clarifying that existing state laws on civil rights, privacy, data security, and consumer protection apply to AI systems, and that algorithmic discrimination in employment, housing, lending, insurance, and public accommodations is prohibited regardless of whether discrimination is facilitated by AI or human decision-making.
The bottom line is clear: enterprises cannot wait for perfect regulation. They must assess the ethical impacts of their AI systems now, before deployment, using frameworks like ISACA's AI Impact Assessment Tool. The gap between what laws require and what ethics demands is widening, and companies that fail to bridge that gap will face market pressure, legal challenges, and reputational damage.