As artificial intelligence becomes ubiquitous in classrooms, a counterintuitive challenge has emerged: the easier AI makes it to produce correct answers, the harder it becomes for students to develop the irreplaceable skills that actually matter. Education experts are now rethinking how schools should integrate AI, arguing that the goal isn't to make learning faster or easier; it's to ensure students stay actively engaged in the hardest, most human parts of thinking.

Why Removing Support Can Actually Improve Learning

The metaphor comes from an unexpected place: children learning to ride bikes. For decades, training wheels were the standard; they provided support and prevented falls. But modern balance bikes take a different approach. By removing the pedals and training wheels entirely, children are forced to master the hardest skill first: balancing. Once that becomes embodied knowledge, pedals come later.

In classrooms, many traditional supports function like training wheels. Step-by-step worksheets, overly rigid rubrics, and model essays that students reproduce rather than understand can help produce superficially correct work. But they also prevent students from practicing the central challenge: managing uncertainty, making judgments, and adjusting course when things get messy. In a world where AI can generate polished text instantly, these scaffolding techniques become even more problematic.

"If you don't know where you're going, how will you know when you get there?" one educator asked a seventh-grade student. The response was telling: "When my mom pulls the car over and opens the door for me." It's not new for students to avoid responsibility for their own learning. But it's never been easier to do so, and it's never been more important to prevent it.

What Skills Can't Be Outsourced to AI?

As AI handles routine text production and surface-level problem-solving, the skills that remain irreplaceable are becoming clearer.
These are the abilities that define human thinking in an AI-enabled world:

- Choosing a direction: Defining the actual question or goal, rather than accepting the first prompt that comes to mind
- Maintaining effort: Deciding whether a challenge is worth pursuing when difficulty increases
- Managing uncertainty: Determining the next move when the path forward isn't clear
- Reading the terrain: Adjusting strategy based on feedback, evidence, or new constraints
- Working collaboratively: Listening deeply to create outcomes greater than individual contributions

These skills are not only irreplaceable; they become more valuable as AI handles routine work. Yet many current AI tools in education are designed to do the opposite: they reward students for accepting automated guidance rather than developing independent judgment.

How to Design Learning That Uses AI Without Replacing Thinking

- Shift focus from production to judgment: Instead of asking students to use AI to generate essays faster, ask them to evaluate AI-generated text, identify gaps, and make editorial decisions that require critical thinking
- Build in uncertainty as a feature: Design assignments where the "right answer" isn't predetermined, forcing students to navigate ambiguity and defend their reasoning
- Require human oversight of AI recommendations: When AI suggests a learning path or flags a student as struggling, have educators interpret that data rather than treating it as definitive
- Create moments where AI is off-limits: Recognize that some learning objectives require students to struggle without technological assistance, building resilience and deeper understanding

John Steinbeck captured this tension in "Travels with Charley," reflecting on how interstate highways made cross-country travel faster but allowed drivers to reach California "without seeing a single thing."
The same risk exists in education: if schools focus only on reaching learning outcomes, students may arrive at the destination without understanding the journey.

The Evidence Problem: Why Weak AI Tools Risk Student Development

While some AI systems show promise, a growing concern is that many educational AI tools lack rigorous scientific evidence. A recent white paper from UK tuition provider Explore Learning warns that the rapid expansion of AI in education could put student outcomes at risk if tools are implemented without strong pedagogical foundations.

The problem is particularly acute in how progress is measured. Many AI learning tools prioritize easily measurable performance metrics, like test scores, over deeper indicators of learning. This creates what researchers call "metric fixation," where educational progress is reduced to simplified outcomes rather than broader cognitive development.

"The research is clear: poorly designed tools risk a mirage of false mastery, where short-term gains disappear when the technology is removed," explains Lisa Haycox, CEO of Explore Learning. "Stronger evidence standards are needed as AI adoption expands across education."

However, carefully designed AI systems grounded in established learning theory show different results. In Brazil's state of Paraná, 750,000 students using an evidence-based AI platform improved their state English test scores by more than 32% in under two years. The difference lies in how these systems are built: they model not just what students know, but how they learn and the pace at which they develop, recalibrating support in real time.

Where AI Teaching Assistants Are Actually Working

The inflection point for AI in education appears to have arrived.
Large-scale pilots are now demonstrating that AI can approximate the responsiveness and personalization of one-on-one tutoring, something researchers have long known produces exceptional results. Bloom's "2 Sigma Problem" showed that one-on-one tutoring lifts average student performance to levels typically seen only among the very best. For decades, scaling that impact seemed impossible. Today, AI is bringing it within reach.

In India, companies like SigIQ use AI tutors to help candidates prepare for civil service exams. Estonia and Iceland have partnered with providers such as OpenAI and Anthropic to bring AI-powered tutors to every high school student. OpenAI's partnership with Khan Academy has deployed Khanmigo, an AI-powered tutor currently used by 65,000 American students, demonstrating around 20% higher-than-expected learning gains on standard growth assessments.

These successes share a common thread: they treat AI as a tool to amplify teacher effectiveness, not replace it. The technology handles the practice, personalization, and immediate feedback that students often lack, especially in large or mixed-ability classrooms. But human educators remain essential to interpret progress, manage the limitations of algorithmic systems, and guide students toward deeper understanding.

The stakes are particularly high for students with special educational needs and disabilities (SEND). Explore Learning reports a 35% increase in SEND students accessing its tuition services between 2024 and 2025, suggesting that evidence-based AI systems may help identify learning difficulties earlier and adjust tasks to match individual learning needs.

The Equity Question: Will AI Widen or Close Educational Gaps?

UNESCO emphasizes that the promise of "AI for all" must mean that everyone can access the benefits of this technological revolution.
However, without careful governance, AI could exacerbate inequalities, particularly if high-quality tools remain available only in well-resourced systems.

The convergence of improved AI architecture, richer pedagogical datasets, broader educator acceptance, and recognition that technology works best within structured teaching environments has created a genuine turning point. But realizing that potential requires discipline. Policymakers, education leaders, and technology providers must establish clear standards for evaluation, safeguard learner data, invest in teacher training, and ensure that AI systems reflect linguistic and cultural diversity rather than narrow assumptions.

The question is no longer whether AI will change education; it already is. The real question is whether schools will treat AI as a balance bike, removing the supports that prevent students from developing irreplaceable skills, or as a shortcut that lets students nap in the back seat while the technology does the driving.