New York City Public Schools has released comprehensive guidance that treats AI not as a classroom solution waiting to happen, but as a technology that requires strict guardrails before it touches student data. The framework, issued in March 2026, establishes a "traffic light" system that explicitly names what AI will never be allowed to do in schools, then carefully defines what it can do with human oversight. This approach reflects a fundamental belief: teaching and learning are human endeavors that technology should serve, not replace.

The guidance comes as schools nationwide grapple with how to integrate AI tools while protecting students from bias, privacy violations, and the erosion of critical thinking skills. NYC's approach is notable because it doesn't just say "use AI responsibly." Instead, it creates a clear evaluation process and identifies specific high-risk uses that are completely off-limits, no exceptions.

## What Exactly Is NYC Banning AI From Doing in Schools?

The "red light" category in NYC's framework identifies uses that pose the highest risk to students, families, and the fairness of the school system. These uses are completely prohibited, and the district frames these prohibitions as the most critical guardrails it is putting in place.

The concern driving these restrictions is concrete: AI systems can reflect biases present in their training data, producing outputs that misrepresent or exclude certain perspectives. Generative AI (GenAI), which creates new content such as text, images, or audio based on user instructions, can produce responses that sound confident but are factually wrong or entirely made up, a phenomenon sometimes called a "hallucination." This is why human review of AI outputs is always required.

The district's commitment to protecting students extends beyond banning certain uses.
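The logic of a traffic-light framework can be sketched in a few lines of code. This is purely an illustration: the tier names, example policy, and `may_use` helper below are assumptions for the sketch, not the district's actual category lists or rules.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical model of a traffic-light risk framework (illustrative only)."""
    RED = "prohibited"      # highest-risk uses: banned outright, no exceptions
    YELLOW = "restricted"   # permitted only with active human oversight
    GREEN = "approved"      # lower-risk tools that have passed review

def may_use(tier: RiskTier, human_reviewed: bool) -> bool:
    """Red-light uses are never allowed; everything else still requires
    human review of the AI output before it is relied upon."""
    if tier is RiskTier.RED:
        return False
    return human_reviewed

# A red-light use stays off-limits even with a human in the loop,
# while other tiers depend on review actually happening.
print(may_use(RiskTier.RED, human_reviewed=True))     # False
print(may_use(RiskTier.GREEN, human_reviewed=False))  # False
print(may_use(RiskTier.GREEN, human_reviewed=True))   # True
```

Note the design choice the sketch encodes: approval status and human oversight are separate conditions, so even an approved ("green") tool is unusable without review, mirroring the guidance's rule that AI outputs are never accepted at face value.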
NYC Schools emphasizes that students already encounter AI outside school walls. The question, according to the guidance, is whether they are equipped with critical thinking, ethical grounding, and creative agency, or left to navigate AI alone.

## How Does NYC's AI Approval Process Actually Work?

Before any AI tool can be used with student data in a New York City public school, it must pass through a rigorous review process called ERMA, the Enterprise Request Management Application. ERMA is not new; it has long been the district's established system for data privacy and security compliance. In December 2024, however, NYC Schools added AI-specific standards to this evaluation.

The ERMA process requires vendors to disclose exactly what AI capabilities their tool includes, bars them from using student data to train AI models, and holds them to transparency requirements so tools can be explained to families and students. The process currently reviews tools for data privacy and security compliance with federal and state laws, including FERPA (the Family Educational Rights and Privacy Act), New York State Education Law Section 2-d, and Chancellor's Regulation A-820 (Data Privacy and Security). However, the district acknowledges a gap: ERMA does not yet evaluate algorithmic bias, equity impact, or instructional effectiveness. NYC Schools is committed to building that expanded evaluation capacity, with the work reflected in a comprehensive Playbook planned for June 2026.

### Steps to Ensure AI Tools Meet NYC Schools' Standards

The district has established a 10-step process that every AI tool must complete before it can be used in any NYC public school. This structured approach ensures consistent evaluation and accountability:

- Identify the Need: Schools must explain the problem they want to solve, why the AI tool is the best solution, and what outcomes they expect from implementation.
- Submit an ERMA Request: Only authorized leaders such as principals or central executives can initiate the request; teachers cannot submit directly, which prevents circumvention of oversight.
- Vendor Agreement: The vendor signs a Data Processing Agreement that meets strict privacy and security laws and provides a detailed data protection plan.
- Security Check: The vendor completes a security questionnaire that NYCPS's security team reviews and approves before the process proceeds.
- Cloud Review: If the tool is cloud-based, the city's Office of Technology and Innovation checks data storage, security architecture, and compliance with city, state, and federal policies.
- Legal and Compliance Review: NYCPS teams review legal terms, privacy protections, security measures, instructional value, and AI-specific issues such as bias and transparency.
- Fix Issues: If problems are found during review, the vendor must correct them and resubmit for evaluation.
- Final Decision: The tool is designated as Approved (added to the official list), In Progress (cannot be used yet), or Denied (cannot be used).
- Implementation: Schools may begin using a tool only after ERMA approval; tools cannot be used while review is underway, and schools must never bypass ERMA.
- Ongoing Monitoring: NYCPS audits tools regularly, vendors must report changes and maintain compliance, and approval can be revoked for violations.

This multi-step approach reflects the district's understanding that approving an AI tool is not a one-time decision. Even after a tool receives approval, several requirements remain in place.

## What Rules Apply Even After an AI Tool Gets Approved?

ERMA approval confirms compliance with data privacy and security standards, but it is not the only requirement for tool use. Once a tool is approved, educators and schools must follow strict ongoing requirements that keep human judgment at the center of every AI-assisted decision.
All AI tools must be used with human oversight and review; AI supports educator decision-making but never replaces it. Personal information about students may never be entered into AI tools that have not completed ERMA review. Educators must know and apply tool-specific age restrictions and teach students and families where to find that information.

Above all, educators must critically evaluate all AI-generated output for accuracy, appropriateness, and potential bias. AI responses should never be accepted at face value, and educators teach students, as age-appropriate, to do the same.

This emphasis on human judgment reflects a broader philosophy embedded in NYC's guidance: educators, relationships, and professional judgment remain central to the district's mission. The guidance explicitly states that students do not need technology for its own sake. They need accurate instruction, meaningful practice, and adults who know them well enough to decide when AI belongs in their learning and when it does not.

## Why Is NYC Taking Such a Cautious Approach to AI in Schools?

The district's risk-based framework acknowledges that AI tools, especially generative AI, are fundamentally different from traditional school technology. Rather than following fixed instructions, AI generates outputs by finding patterns, making inferences, and adapting over time. Its outputs may be incomplete, uncertain, or shaped by design choices not visible to users.

The risks the district identifies are specific and serious:

- Risks to students include bias, privacy violations, loss of agency, developmental harms, exposure to unfair discipline, and erosion of the thinking, creativity, and problem-solving skills students must develop themselves.
- Risks to staff include over-reliance on automated outputs and unclear accountability for AI-assisted decisions.
- Risks to society include reinforcing inequity and reducing human judgment in civic institutions.
The guidance is particularly focused on protecting vulnerable student populations. It explicitly mentions the fourth grader whose reading score lags behind her curiosity and insight, the multilingual learner navigating two languages in a system that too often overlooks what he already knows, and the student with a disability whose need is clear but whose classroom still lacks the right tools to meet it. For these students, AI tools could either help close gaps or deepen existing inequities, depending on how they are designed and deployed.

NYC Schools is accepting feedback and comments on this guidance through May 8, 2026, indicating that the framework is not final. The district plans to release a comprehensive Playbook in June 2026 that will expand the evaluation process to include assessment of algorithmic bias, equity impact, and instructional effectiveness. This timeline suggests that NYC is treating AI governance in schools as an evolving challenge that requires ongoing refinement and community input.

The release of this guidance signals that major school districts are not waiting for federal AI regulation to establish their own standards. Instead, they are taking responsibility for protecting students from AI harms while remaining open to beneficial uses. For families and educators, the message is clear: AI in schools will be subject to human oversight, transparency requirements, and a presumption that teaching and learning are fundamentally human endeavors.