New York's AI Safety Law Creates a Blueprint for State Regulation, But Federal Pushback Looms

New York has significantly tightened its AI governance rules, requiring developers of advanced AI models to disclose safety risks and report critical incidents within 72 hours. Governor Kathy Hochul signed amendments to the Responsible AI Safety and Education Act (RAISE Act) on March 27, 2026, addressing concerns about vague compliance obligations in the original December 2025 law. The updated framework, effective January 1, 2027, aligns New York's approach with California's Transparency in Frontier Artificial Intelligence Act (TFAIA) and grants broad enforcement authority to the New York Department of Financial Services (NYDFS).

What Exactly Is Covered by New York's New AI Law?

The amended RAISE Act applies to companies that develop what regulators call "frontier models": foundation models trained using more than 10^26 computational operations (FLOPs). This threshold represents an enormous amount of computing power; to put it in perspective, training a model at that scale typically costs roughly $100 million or more. The law distinguishes between two categories of regulated developers:

  • Frontier Developers: Any company that has trained or initiated the training of a frontier model, regardless of company size or revenue
  • Large Frontier Developers: Frontier developers with annual revenue of $500 million or greater, subject to additional requirements

Importantly, companies that only use, deploy, or build applications on top of AI models developed by others, including those accessing models through application programming interfaces (APIs), are not considered frontier developers and fall outside the law's scope.
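
To make the thresholds concrete, here is a minimal back-of-envelope sketch of how a developer might gauge whether a training run approaches the statutory trigger and which tier would apply. It uses the common "6 × parameters × training tokens" rule of thumb for dense-transformer training compute; that heuristic, the example numbers, and the function names are illustrative assumptions, not anything the RAISE Act prescribes.

```python
# Back-of-envelope check against the RAISE Act's two thresholds.
# The 6 * N * D training-compute heuristic is a common rule of thumb
# for dense transformers, not a statutory test; treat results as rough.

FLOP_THRESHOLD = 1e26          # frontier-model trigger (more than 10^26 operations)
LARGE_DEV_REVENUE = 500e6      # large-frontier-developer trigger ($500M annual revenue)

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training compute for a dense transformer: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def developer_tier(training_flops: float, annual_revenue: float) -> str:
    """Classify a developer under the law's two categories."""
    if training_flops <= FLOP_THRESHOLD:
        return "not a frontier developer (below compute threshold)"
    if annual_revenue >= LARGE_DEV_REVENUE:
        return "large frontier developer (additional obligations apply)"
    return "frontier developer"

# Hypothetical example: a 2-trillion-parameter model trained on 60T tokens.
flops = estimated_training_flops(parameters=2e12, training_tokens=6e13)
print(f"estimated compute: {flops:.2e} FLOPs")   # ~7.2e26, above the 1e26 trigger
print(developer_tier(flops, annual_revenue=750e6))
```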

What Must AI Companies Actually Do to Comply?

The law imposes several concrete obligations on frontier developers before they deploy new models or substantially modified versions of existing ones. All frontier developers must publish transparency reports containing specific information about their models, including the developer's website, a contact mechanism for public inquiries, the model's release date, supported languages, output capabilities, intended uses, and any restrictions on use.
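
Purely as an illustration, those disclosure elements could be captured internally in a simple structured record before publication. The field names below mirror the elements listed above but are informal labels of my own, not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    """Illustrative container for the disclosure elements the amended RAISE Act
    requires in a frontier model's transparency report. Field names are
    informal labels, not the statute's wording."""
    developer_website: str
    public_contact: str              # mechanism for public inquiries
    release_date: str                # ISO date the model was released
    supported_languages: list[str]
    output_capabilities: list[str]   # e.g. text, code, images
    intended_uses: list[str]
    use_restrictions: list[str]

# Hypothetical example values for a fictional developer.
report = TransparencyReport(
    developer_website="https://example-ai-lab.example",
    public_contact="safety-inquiries@example-ai-lab.example",
    release_date="2027-02-01",
    supported_languages=["en", "es"],
    output_capabilities=["text"],
    intended_uses=["general-purpose assistant"],
    use_restrictions=["no fully autonomous high-risk decision-making"],
)
```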

Beyond transparency, the law prohibits frontier developers from making materially false or misleading statements about "catastrophic risk," a term the law defines narrowly. Catastrophic risk means a foreseeable and material risk that a model will contribute to the death or serious injury of more than 50 people, or to more than $1 billion in property damage, from a single incident. The law specifies three scenarios that constitute catastrophic risk: providing expert-level assistance in creating chemical, biological, radiological, or nuclear weapons; conducting cyberattacks or serious crimes without meaningful human oversight; or evading the developer's control.
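
As an illustration of the numeric triggers only, the definition's quantitative portion reduces to a simple predicate; foreseeability, materiality, and the three scenario categories remain legal judgments that no check like this can capture.

```python
def meets_catastrophic_risk_thresholds(deaths_or_serious_injuries: int,
                                       property_damage_usd: float) -> bool:
    """Numeric triggers from the RAISE Act's catastrophic-risk definition:
    more than 50 deaths or serious injuries, or more than $1 billion in
    property damage, from a single incident. Illustrative only."""
    return deaths_or_serious_injuries > 50 or property_damage_usd > 1_000_000_000
```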

The most aggressive requirement is the critical safety incident reporting mandate. Frontier developers must report to NYDFS within 72 hours after determining that a critical safety incident has occurred. This 72-hour window is significantly shorter than California's 15-day reporting period. If an incident poses imminent risk of death or serious physical injury, developers must disclose it to law enforcement or public safety agencies within 24 hours.
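
A sketch of the deadline arithmetic, assuming the clock runs in plain wall-clock hours from the moment the developer determines an incident occurred; the statute's actual computation rules may differ, so this is a planning aid, not legal guidance.

```python
from datetime import datetime, timedelta, timezone

# Illustrative deadline arithmetic for the RAISE Act's reporting windows.
NYDFS_WINDOW = timedelta(hours=72)           # report to NYDFS
IMMINENT_HARM_WINDOW = timedelta(hours=24)   # notify law enforcement / public safety

def reporting_deadlines(determined_at: datetime,
                        imminent_physical_risk: bool) -> dict[str, datetime]:
    """Compute due-by timestamps from the incident-determination time."""
    deadlines = {"nydfs_report": determined_at + NYDFS_WINDOW}
    if imminent_physical_risk:
        deadlines["law_enforcement_notice"] = determined_at + IMMINENT_HARM_WINDOW
    return deadlines

# Hypothetical incident determined at 09:30 UTC on March 1, 2027.
determined = datetime(2027, 3, 1, 9, 30, tzinfo=timezone.utc)
for name, due in reporting_deadlines(determined, imminent_physical_risk=True).items():
    print(f"{name}: due by {due.isoformat()}")
```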

How Does This Fit Into the Broader AI Governance Landscape?

New York's approach reflects a growing trend of state-level AI regulation, but it also highlights a fundamental tension in American AI policy. California, Colorado, and other states have implemented their own AI rules targeting privacy, discrimination, and workforce concerns. However, this patchwork of state regulations creates compliance challenges, particularly for businesses operating across state lines. Small businesses face especially acute difficulties navigating conflicting rules.

The federal government has signaled strong opposition to state-level AI regulation. President Trump's Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence" directs federal agencies to challenge state AI laws deemed to impede a "minimally burdensome national standard" for AI regulation. Trump's National Policy Framework for AI similarly calls for preemption of state laws that regulate "AI development." Legal experts warn that both New York's RAISE Act and California's TFAIA could become prime targets for federal preemption challenges.

Steps for AI Developers to Prepare for Compliance

For frontier AI developers operating in or planning to serve the New York market, preparation for the January 2027 effective date requires systematic planning:

  • Audit Your Models: Determine whether your foundation models meet the computing threshold (more than 10^26 FLOPs) that triggers RAISE Act obligations, and identify which models will be deployed in New York
  • Establish Reporting Infrastructure: Create internal processes and systems to detect, document, and report critical safety incidents within the 72-hour window, including designation of responsible personnel and escalation procedures
  • Develop Transparency Documentation: Prepare comprehensive transparency reports for each frontier model that include all required elements, with clear policies for redacting trade secrets and cybersecurity information while documenting the nature of redactions (a minimal redaction sketch follows this list)
  • Implement Governance Frameworks: For large frontier developers with $500 million or greater annual revenue, establish a formal AI safety framework that documents how the company manages catastrophic risk and complies with the law's requirements
  • Monitor Federal Developments: Track ongoing federal preemption efforts and potential legal challenges to the RAISE Act, as the regulatory landscape may shift significantly before or after the law takes effect
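
On the redaction point above, here is one minimal way to strip sensitive values from a report while keeping a record of what was withheld and why. The field choices, redaction categories, and output format are assumptions for illustration, not a format the law prescribes.

```python
# Illustrative redaction pass: replace sensitive values in a report
# while logging the nature of each redaction. Fields and categories
# are hypothetical examples, not statutory requirements.

SENSITIVE_FIELDS = {
    "training_data_sources": "trade secret",
    "eval_harness_internals": "cybersecurity information",
}

def redact(report: dict) -> tuple[dict, list[dict]]:
    """Return a publishable copy of the report plus a redaction log."""
    published = dict(report)
    redaction_log = []
    for field_name, reason in SENSITIVE_FIELDS.items():
        if field_name in published:
            published[field_name] = "[REDACTED]"
            redaction_log.append({"field": field_name, "nature": reason})
    return published, redaction_log

public_version, log = redact({
    "developer_website": "https://example-ai-lab.example",
    "training_data_sources": "internal corpus manifest v7",
})
print(public_version)   # sensitive value replaced with "[REDACTED]"
print(log)              # [{'field': 'training_data_sources', 'nature': 'trade secret'}]
```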

The NYDFS, which has spent nearly a decade leading cybersecurity regulation for financial services, is expected to take a similarly aggressive enforcement posture in the AI space. The agency will create a new office dedicated to receiving critical safety incident reports and will produce annual reports to state leadership documenting incidents and recommending updates to the law.

Meanwhile, Congress is still gathering data on AI's broader economic impact before committing to federal legislation. The House Workforce Protections Subcommittee held hearings in April 2026 examining how AI affects workers and employers across industries. Lawmakers emphasized the need for better data before implementing legislative solutions, while also acknowledging that state regulations risk creating compliance burdens that could hinder innovation and competitiveness.

The tension between state innovation and federal uniformity will likely define AI governance for the next several years. New York's RAISE Act amendments represent one state's attempt to establish rigorous safety standards, but their ultimate durability depends on whether federal courts uphold them against preemption challenges or whether Congress acts to establish a national framework that supersedes state law.