Federal regulators are increasingly catching companies making false or exaggerated claims about their artificial intelligence (AI) capabilities to investors—a deceptive practice now labeled "AI washing." As AI has become one of the most powerful marketing terms in the modern economy, the U.S. Securities and Exchange Commission (SEC) has brought multiple enforcement actions against firms that misled investors about their use of advanced AI systems. The challenge now is figuring out how to regulate these misleading claims without stifling innovation in a rapidly evolving field.

What Exactly Is AI Washing?

AI washing occurs when companies make materially false or misleading statements about their AI use in communications with investors. Think of it as the tech equivalent of "greenwashing"—when companies falsely claim their products are environmentally friendly. In the AI context, firms racing to signal that they are AI-powered often exaggerate, make unsubstantiated claims, or outright lie about their technological capabilities.

The problem is particularly acute because AI claims are difficult to verify. In a fast-moving digital environment where technical descriptions are hard to evaluate, hype can reward bold statements regardless of accuracy. Investors rely on truthful disclosures to make informed decisions about where to put their money, and when companies deceive them about AI capabilities, it undermines the entire market.

How Is AI Washing Currently Regulated?

AI washing is regulated under longstanding federal securities laws. The Securities Act of 1933 and the Securities Exchange Act of 1934 prohibit firms from making materially false or misleading statements in communications with investors. The SEC enforces these statutes through antifraud provisions, including Rule 10b-5, which bars deceptive statements made in connection with the purchase or sale of securities such as stocks and bonds.
The agency also requires public companies to periodically disclose accurate information about core business operations, financial risks, and strategic initiatives. However, applying these traditional antifraud tools to rapidly emerging technologies raises critical questions. Scholars debate how regulators should distinguish between permissible corporate optimism and genuinely deceptive technical claims, especially when companies adopt AI systems in early or experimental stages of development.

Are Companies Actually Disclosing AI Risks?

The answer is complicated. While more companies are acknowledging AI risks in their mandatory public disclosures, the warnings are often too vague to be meaningful.

Researchers from Maastricht University analyzed over 30,000 corporate filings and found that the percentage of companies mentioning AI risk in their disclosures increased dramatically, from 4% in 2020 to 43% in 2024. That's significant growth—but there's a catch. Most of these disclosures lack detailed plans to address AI risks. The researchers recommend that regulators push companies toward specific, actionable disclosures rather than generic warnings. Additionally, research from the Social Science Research Council found that roughly two-thirds of corporate AI disclosures emphasize benefits but omit significant risks such as systemic failures and service outages.

What Reforms Are Experts Proposing?

Regulatory experts have outlined several concrete steps to combat AI washing while protecting innovation:

- AI-Specific Guidance: The SEC should issue clear guidance clarifying how companies may appropriately describe their technologies without overstating capabilities, helping distinguish between permissible optimism and deceptive claims.
- Material Risk Framework: Regulators should require companies to disclose material AI risks using the same framework applied to cybersecurity risks in 2023, creating consistency across disclosure requirements.
- Incident Reporting Requirements: Companies should be required to report AI-related incidents on disclosure forms, similar to how they report cybersecurity breaches, making problems transparent to investors.
- AI Governance Sections: Annual corporate filings should include dedicated AI governance sections outlining how companies manage AI-related risks and decision-making processes.
- Active Enforcement: Regulators should actively enforce antifraud rules against AI washing, signaling that misleading claims carry real consequences.

"Enhanced regulatory scrutiny of AI-related corporate disclosures is necessary," explains Boyuan Li of the University of Florida, who analyzed corporate statements and employee data to distinguish companies' actual AI use from mere rhetoric. "The SEC has begun taking action against companies making misleading claims about their AI capabilities, reflecting growing regulatory concern about how businesses represent their AI use to the public."

What Are the Long-Term Consequences of AI Washing?

While AI washing can bring companies short-term benefits through inflated stock prices and investor enthusiasm, it carries significant long-term risks. Research from the University of Cincinnati compares AI washing to greenwashing and finds that both carry similar ethical and economic dangers. Companies engaging in AI washing risk damaging their reputations, eroding consumer trust, and misallocating digital resources. These consequences can reduce a company's market value and stakeholder confidence, and may trigger regulatory penalties.

Beyond corporate consequences, AI washing threatens the integrity of financial markets themselves. Researchers warn that AI-generated misinformation could be used to manipulate markets, with realistic AI deepfakes potentially undermining public confidence in financial systems.
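To make the disclosure research concrete, here is a minimal sketch of the kind of keyword-based screening that studies like the Maastricht analysis apply to filings at scale. Everything here is hypothetical for illustration: the ticker names, the excerpt texts, and the keyword lists are invented, and real studies use far more sophisticated methods than substring matching.

```python
# Hypothetical keyword lists -- a real study would use a much richer
# vocabulary and more careful text processing.
AI_RISK_TERMS = ("artificial intelligence", "machine learning", "ai model")
SPECIFICITY_TERMS = ("mitigation", "incident", "governance", "testing")


def mentions_ai_risk(risk_factor_text: str) -> bool:
    """True if the risk-factor excerpt mentions AI at all."""
    text = risk_factor_text.lower()
    return any(term in text for term in AI_RISK_TERMS)


def is_specific(risk_factor_text: str) -> bool:
    """Crude proxy: does the disclosure go beyond a generic warning?"""
    text = risk_factor_text.lower()
    return mentions_ai_risk(text) and any(t in text for t in SPECIFICITY_TERMS)


# Invented excerpts standing in for real risk-factor sections.
filings = {
    "ACME": "Our use of artificial intelligence may create risks.",
    "BETA": "We deploy machine learning models; our AI governance "
            "program includes incident response and model testing.",
    "GAMMA": "Competition in our industry is intense.",
}

mentioning = [t for t, text in filings.items() if mentions_ai_risk(text)]
specific = [t for t, text in filings.items() if is_specific(text)]
print(f"{len(mentioning)}/{len(filings)} filings mention AI risk")
print("with specific detail:", specific)
```

Even this toy version illustrates the article's central finding: a filing can count as "mentioning AI risk" (like ACME's one-line warning) while still failing any test of specificity, which is exactly the gap the researchers say regulators should close.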
How to Protect Yourself as an Investor

- Read the Fine Print: When evaluating AI-focused companies, look beyond marketing claims and examine the detailed risk disclosures in annual reports and SEC filings, which are subject to stricter accuracy requirements than marketing materials.
- Demand Specificity: Be skeptical of vague AI claims. Ask companies for specific details about how AI is actually being used, what problems it solves, and what risks it poses—generic statements are a red flag.
- Check for Third-Party Validation: Look for evidence that a company's AI claims have been independently verified or audited by external experts, rather than relying solely on company statements.
- Monitor Regulatory Actions: Stay informed about SEC enforcement actions against companies making misleading AI claims, as these cases often reveal patterns of deception in the industry.

What's Next for AI Regulation?

The regulatory landscape is still evolving. Some observers argue the SEC should issue AI-specific guidance to clarify appropriate descriptions of AI technology. Others caution that new mandates could discourage innovation by requiring firms to divulge rapidly evolving information that is difficult to evaluate precisely. The challenge is finding the right balance: protecting investors from fraud while allowing companies room to develop and deploy new technologies.

What's clear is that the status quo is unsustainable. As AI becomes increasingly central to business models across industries, regulators must develop frameworks that prevent deception without stifling legitimate innovation. The coming months and years will be critical in determining whether existing securities laws can adapt to this new technological reality, or whether entirely new regulatory approaches are needed.