England's £23 Million AI Tutoring Bet: Why One Country's Cautionary Tale Should Matter to You
In January 2026, England's Department for Education announced a £23 million pilot deploying AI tutoring systems to 450,000 disadvantaged pupils, framing the initiative as a way to provide personalized, one-to-one learning support for children who cannot afford private tutors. However, a strikingly similar program in South Korea, launched with identical promises about personalization and equity, collapsed within a single academic year, raising urgent questions about whether the equity argument masks a fundamentally flawed approach to educational technology.
What Happened When South Korea Tried the Same Approach?
South Korea's national AI textbook program, announced in 2023 with the same promises about personalized learning and reduced teacher workload, became a cautionary tale. A civic organization called Political Mamas filed suit against the education minister before the program had reached a single classroom, arguing that mandatory deployment overlooked risks to children, lacked adequate data protection, and had been imposed without meaningful input from families or teachers.
When the textbooks finally launched, the problems were immediate and systemic. Teachers reported factual errors in the AI-generated content, regular technical failures, and monitoring interfaces that demanded more time than conventional teaching. A parents' petition gathered 56,605 signatures opposing the program. The opposition won the presidential election partly on a pledge to rescind the policy, and by August 2025 the National Assembly had stripped the AI textbooks of their official status entirely.
The financial fallout was staggering. Publishers, who had invested approximately 800 billion won (roughly $567 million) in expectation of mandatory adoption across South Korean schools, faced mass layoffs. Even with combined public and publisher investment in the program exceeding $1.4 billion, the people who set it in motion commissioned launch events rather than independent reviews.
Why Does the EdTech Industry Keep Making the Same Mistakes?
The pattern repeats across countries and technologies because the commercial incentives in the EdTech industry create what researchers call a "revealed preference" for rapid deployment over rigorous safety testing. The industry has no meaningful internal brake on surveillance, data extraction, or the displacement of children's development by engagement metrics.
This is not an accusation of bad faith directed at any individual actor. A ministry under political pressure to modernize, publishers under commercial pressure to recoup investment, school leaders under inspection pressure to demonstrate innovation, and academic researchers dependent on technology company funding streams are each making locally rational decisions within a framework that prices harm in the wrong direction, or does not price it at all.
The Department for Education announced the English pilot and simultaneously declared its safety standards already met. An institution does not commission an independent review of a decision it has already announced as safe. This creates a structural problem: restraint in the EdTech industry is a function of cost. When external pressure raises the cost of harm high enough, behavior changes, but by then children have already been in classrooms experiencing the failures.
How to Protect Students From Premature AI Deployment in Schools
- Demand Independent Review Before Launch: Require third-party safety assessments and measurable benefit thresholds before any AI system enters a classroom, not after deployment has already been announced as safe.
- Mandate Meaningful Stakeholder Consultation: Involve teachers, parents, and students in the design and testing phases, not just in post-launch feedback. Teacher unions and parent organizations should have veto power over deployment timelines.
- Establish Named Liabilities and Data Protections: Create explicit accountability mechanisms that name who is responsible if the system fails, and establish rigorous data protection standards that prevent surveillance, tracking, or behavioral monitoring beyond educational necessity.
- Set Measurable Success Metrics: Define what success actually looks like before launch, with clear thresholds for when a program should be paused or terminated if it fails to meet those metrics (a minimal sketch of this mechanism follows this list).
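To make that last recommendation concrete, here is a minimal sketch of what pre-registered success metrics could look like in code: thresholds declared before launch, with an automatic pause-or-terminate decision when measured results fall short. Every name and number below is an illustrative assumption, not a description of any actual pilot.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SuccessCriteria:
    """Hypothetical thresholds, agreed and published before deployment."""
    min_learning_gain: float     # minimum effect size vs. a control group
    max_error_rate: float        # maximum share of factually wrong AI responses
    max_downtime_rate: float     # maximum share of sessions lost to technical failure
    min_teacher_approval: float  # minimum share of teachers reporting reduced workload

@dataclass(frozen=True)
class PilotResults:
    """Measured outcomes from an independent, post-launch evaluation."""
    learning_gain: float
    error_rate: float
    downtime_rate: float
    teacher_approval: float

def evaluate(criteria: SuccessCriteria, results: PilotResults) -> str:
    """Compare results to pre-registered thresholds; return a go/no-go decision."""
    failures = [
        name for name, failed in [
            ("learning_gain", results.learning_gain < criteria.min_learning_gain),
            ("error_rate", results.error_rate > criteria.max_error_rate),
            ("downtime_rate", results.downtime_rate > criteria.max_downtime_rate),
            ("teacher_approval", results.teacher_approval < criteria.min_teacher_approval),
        ] if failed
    ]
    if not failures:
        return "continue"
    # One missed threshold pauses the rollout; more than one ends it.
    return "pause" if len(failures) == 1 else f"terminate ({', '.join(failures)})"

# Illustrative numbers only: thresholds fixed at sign-off, results measured later.
criteria = SuccessCriteria(min_learning_gain=0.2, max_error_rate=0.01,
                           max_downtime_rate=0.05, min_teacher_approval=0.6)
results = PilotResults(learning_gain=0.05, error_rate=0.04,
                       downtime_rate=0.12, teacher_approval=0.3)
print(evaluate(criteria, results))  # -> terminate (learning_gain, error_rate, ...)
```

The specific numbers matter less than the mechanism: once the thresholds exist in writing before launch, a ministry can no longer declare its own deployment safe after the fact.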
What Surveillance Concerns Should Parents Know About?
The EdTech industry's approach to data collection and surveillance became visible in China, where the commercial logic operated across different technologies and different provinces. In Hangzhou in 2018, schools began installing a system that scanned students' faces every thirty seconds, classifying each expression across seven emotional categories and tracking six distinct behaviors, including sleeping, reading, writing, and what it categorized as listening. The technology was supplied by Hikvision, a company simultaneously contracted to build surveillance infrastructure inside Xinjiang detention facilities.
In Zhejiang province, schools trialled neurological monitoring headbands developed by BrainCo, a company founded at Harvard and funded by American and international venture capital. The devices transmitted what the company described as attention data to a classroom dashboard in real time, color-coding children by focus level: blue for focused, red for distracted. Within days, a hashtag on Weibo had been viewed 220 million times, and education officials ordered the trial halted. BrainCo's founder had already told The Independent that the goal of the first 20,000 devices was to capture data from 1.2 million people.
In Guizhou province, eleven schools introduced GPS-enabled smart uniforms with microchips embedded in the shoulder pads, tracking movements and triggering alarms if a child strayed. The principal of one participating school told state media that although the school retained the ability to track students at all times, it chose not to use that ability after hours. That restraint was entirely at the school's discretion; no external constraint prevented it from being withdrawn.
Who Bears Responsibility When AI Education Programs Fail?
The democracies that produced these companies, funded their research, and trained their engineers did not simply fail to prevent what happened in those classrooms. They produced everything that filled them except the permission. The venture capital was American. The university research environment was American. The engineers were trained in open societies with functioning ethics review boards that would have stopped what those engineers went on to build and deploy in jurisdictions where no such boards existed.
Teacher unions eventually arrive at resistance, but only after months of classroom failure, public crisis, and political toxicity have made the cost of association higher than the cost of dissent. The Korean Teachers and Education Workers Union filed suit alongside Political Mamas, and that action was critical, but it came eighteen months after the harm had begun accumulating. Parents, whose children are in the classrooms while that calculation is being made, operate on a different timeline entirely.
The English pilot affecting 450,000 disadvantaged pupils represents a moment when the cost of external pressure can still be raised before the harm accumulates. The comparison the equity framing invites is between AI tutoring and a well-resourced teacher. The comparison it actually makes is between AI tutoring and nothing, and accepting nothing as the baseline is a political choice. Political choices, we must remember, can be unmade.