DeepMind's Hassabis Says AI's Next Giant Leap Won't Come From Bigger Models Alone
DeepMind CEO Demis Hassabis believes the path to artificial general intelligence (AGI) depends less on making AI models larger and more on solving fundamental architectural problems that current systems can't handle. In recent interviews, Hassabis outlined a roadmap where the next wave of AI breakthroughs will come from systems that can learn continuously, remember information persistently, and understand how the physical world works, not simply from pouring more computing power into existing approaches.
What's Actually Holding AI Back Right Now?
While most people assume that bigger models and more data are the only ingredients needed for AGI, Hassabis and other leading AI researchers see a different bottleneck. The real limitations are architectural and algorithmic. Current large language models (LLMs), which are AI systems trained on vast amounts of text to predict and generate language, lack the ability to adapt in real time, maintain long-term memory across conversations, or reason about cause and effect the way humans do.
DeepMind allocates roughly half its research resources to scaling up compute power and half to what Hassabis calls "blue-sky algorithmic innovation." The 50-50 split mirrors his estimate that it is roughly a coin flip whether scaling alone reaches AGI or whether one or two additional major breakthroughs are needed.
The specific gaps Hassabis identifies include:
- Continual Learning: Systems that learn from new experiences after training without forgetting what they already know, a problem called catastrophic forgetting that humans solve effortlessly.
- Long-Term Memory: Persistent, hierarchical memory that goes beyond fixed context windows and supports reasoning across long time horizons and multiple sessions.
- World Models: Internal simulations that understand physics, causality, materials, and how objects behave, enabling planning and imagination rather than just predicting the next word.
- Advanced Reasoning: Consistent multi-step thinking that combines language models with search and planning techniques, similar to how AlphaZero plays chess.
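To make the long-term-memory gap concrete, here is a minimal sketch of a persistent store whose contents outlive any single session, with keyword-overlap retrieval standing in for the learned, embedding-based retrieval that real memory-augmented systems use. The `MemoryStore` class and its scoring scheme are invented for illustration; they are not based on Titans or any DeepMind system.

```python
from collections import Counter

class MemoryStore:
    """Toy persistent memory: notes survive across sessions, unlike a
    fixed context window that is discarded when a conversation ends."""

    def __init__(self):
        self.notes = []  # list of (session_id, text) pairs

    def remember(self, session_id, text):
        self.notes.append((session_id, text))

    def recall(self, query, top_k=2):
        """Return up to top_k notes ranked by word overlap with the query
        (a crude stand-in for embedding similarity search)."""
        query_words = Counter(query.lower().split())
        scored = []
        for _, text in self.notes:
            overlap = sum((query_words & Counter(text.lower().split())).values())
            if overlap > 0:
                scored.append((overlap, text))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [text for _, text in scored[:top_k]]

# Facts stored in one session remain recallable in a later one.
memory = MemoryStore()
memory.remember("session-1", "user prefers metric units")
memory.remember("session-1", "project deadline is in March")
memory.remember("session-2", "user is learning Rust")
print(memory.recall("what units does the user prefer"))
# → ['user prefers metric units', 'user is learning Rust']
```

The point of the toy is the interface, not the retrieval quality: nothing in the store is bounded by a context window, which is exactly the property Hassabis argues current LLMs lack.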
How Are Researchers Closing These Gaps in 2026?
The frontier of AI research is already moving in the directions Hassabis outlined. New architectures are delivering 4 to 17 times better performance than raw scaling alone in specific domains like memory and reasoning. Rather than waiting for bigger models, researchers are experimenting with hybrid systems that combine language models with planning algorithms, memory augmentation, and simulation-based learning.
Yann LeCun, Meta's chief AI scientist, is pushing world models through research like JEPA (Joint-Embedding Predictive Architecture), which trains AI to predict and understand the world rather than just process text. This represents a fundamental shift away from pure language modeling toward systems that grasp physical reality.
Hassabis expects 2026 to be a breakthrough year for two specific areas: reliable world models and continual learning prototypes. He predicts that interactive systems similar to Genie, which can simulate physics in real time, will emerge in robotics and agent applications. Memory-augmented systems inspired by architectures like Titans will become standard in agentic frameworks, and on-device persistent memory agents will start appearing in practical applications.
Steps to Understanding AI's Next Phase of Development
- Inference-Time Scaling: Instead of training ever-bigger models, researchers are letting models "think longer" at the moment a question is asked, similar to how OpenAI's o1 model works; this delivers significant performance gains without requiring larger models.
- Multimodal Training Data: Combining text, images, video, and robotics data to train systems that understand the world across multiple senses, moving beyond text-only learning.
- Algorithmic Multipliers: New techniques that deliver 4 to 17 times performance improvements in specific domains, representing the algorithmic breakthroughs Hassabis emphasizes as critical to AGI.
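The inference-time-scaling idea above can be sketched as self-consistency voting: draw several candidate answers and return the most common one, so quality is bought with extra samples at question time rather than with a bigger model. The deterministic `stub_samples` stream below is an invented stand-in for a model that answers correctly about 60 percent of the time; nothing here calls o1 or any real LLM.

```python
from collections import Counter

def stub_samples(n):
    """Invented stand-in for n samples drawn from a model that gives the
    right answer ("42") 60% of the time and a wrong one otherwise."""
    pattern = ["41", "42", "42", "43", "42"]
    return [pattern[i % len(pattern)] for i in range(n)]

def answer_with_votes(n_samples):
    """Self-consistency: spend more inference-time compute by taking
    n_samples candidate answers and returning the majority vote."""
    votes = Counter(stub_samples(n_samples))
    return votes.most_common(1)[0][0]

# A single sample can easily be wrong; more samples at inference time
# recover the model's most consistent answer with no change to the model.
print(answer_with_votes(1))   # → 41 (unlucky single draw)
print(answer_with_votes(25))  # → 42 (majority vote)
```

Real systems typically vote over full reasoning chains rather than bare answers, but the trade-off is the same: answer quality scales with compute spent at inference time, not with parameter count.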
The broader consensus among AI leaders aligns closely with Hassabis' vision. Sam Altman at OpenAI, Dario Amodei at Anthropic, Andrej Karpathy, and others echo the shift toward inference-time scaling, agentic loops that let AI systems take multiple steps to solve problems, memory augmentation, and closing the gap between simulation and reality.
"Scaling laws have not hit the limits. LLMs will not commoditize easily. Pushing scaling to the absolute maximum will be a key component of AGI. However, it's roughly 50-50 whether scaling alone suffices or if one to two more breakthroughs are needed."
Demis Hassabis, CEO at DeepMind
When Could These Breakthroughs Actually Arrive?
Hassabis views AGI as plausible within the next 5 to 10 years, roughly between 2030 and 2035, with probability concentrated toward the lower end of that range. This timeline assumes continued progress on models like Gemini alongside the algorithmic innovations in memory, world models, and continual learning.
The research roadmap extends beyond 2026. By 2027, Hassabis expects convergence on unified foundation world models with persistent and continual memory. Agentic loops should become robust enough to handle long-horizon tasks and self-correct. Robotics and scientific discovery are expected to accelerate through grounded simulation. By 2028 and beyond, if memory and world-model gaps close, systems could enter feedback loops of self-improvement, potentially reaching AGI-level consistency across reasoning, creativity, and real-world interaction.
The shift represents a fundamental change in how the AI field approaches the next frontier. Rather than betting everything on scale, the industry is diversifying its bets across multiple research directions. Progress will increasingly be measured by new evaluation benchmarks like long-horizon agent tasks and interactive tests, not just traditional performance metrics on static datasets.
For researchers, companies, and investors watching AI development, the message is clear: the era of scaling-only breakthroughs is transitioning to a more complex landscape where memory, simulation, and continuous learning matter as much as raw computing power.