Brett Adcock just declared the era of hand-coded robotics officially over by deleting 109,504 lines of C++ code from Figure's codebase, replacing it entirely with neural network weights that run "pixels-to-torque" control. This wasn't a demo or a roadmap slide. It was a quiet but seismic announcement that the robotics industry's fundamental architecture has shifted. The implications extend far beyond manufacturing efficiency; they signal a complete reframing of where the real value in humanoid robotics actually lives.

Adcock, who previously sold his recruitment startup Vettery to Adecco for $100 million and took Archer Aviation public as an eVTOL company, is not known for incremental progress. His latest move reveals a strategic insight that Wall Street is beginning to price in: hardware is no longer the constraint. The brain controlling that hardware is where the next industrial revolution will be won.

Why Is Figure AI Suddenly Deleting Code Instead of Writing It?

For decades, robotics relied on hand-coded control systems written in C++. Engineers wrote explicit instructions for every movement, every sensor reading, every decision point. It was precise but rigid. Figure 03 represents a fundamental departure from this approach. Instead of following pre-programmed instructions, the robot now uses end-to-end neural networks for full-body control, manipulation, and room-scale planning.

The practical difference is striking. Figure 03 uses palm cameras and onboard inference to run high-frequency torque control of 40 or more motors for complex two-handed tasks, replanning, and error recovery in dynamic environments. The robot can open and use a Keurig coffee maker not because engineers coded every step, but because neural networks learned the task from data. This shift from code to learned behavior creates a competitive advantage that traditional robotics companies cannot easily replicate.
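Figure has not published its architecture, but the "pixels-to-torque" idea can be made concrete: the classic perceive-plan-act pipeline collapses into one learned function from camera pixels to motor torques. The toy sketch below (all names, sizes, and the single linear layer are invented for illustration; a real policy is a deep network trained on robot data) shows the shape of such a control tick:

```python
import math
import random

NUM_MOTORS = 40     # Figure 03 reportedly drives 40 or more motors
IMG_PIXELS = 8 * 8  # toy resolution; real systems consume full camera frames

random.seed(0)

# A single linear layer stands in for the trained network. The key point:
# these numbers come from learning, not from hand-written C++ rules.
WEIGHTS = [[random.uniform(-0.01, 0.01) for _ in range(IMG_PIXELS)]
           for _ in range(NUM_MOTORS)]

def policy(pixels):
    """Map raw pixel intensities directly to per-motor torque commands."""
    torques = []
    for row in WEIGHTS:
        raw = sum(w * p for w, p in zip(row, pixels))
        torques.append(math.tanh(raw))  # bound each command to [-1, 1]
    return torques

def control_step(camera_frame):
    # One tick of the high-frequency loop: nothing hand-coded sits between
    # perception and actuation, just a forward pass through the network.
    return policy(camera_frame)

frame = [0.5] * IMG_PIXELS  # stand-in for a captured camera frame
torques = control_step(frame)
```

The contrast with the deleted C++ is that improving this controller means retraining the weights, not rewriting the loop.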
Data accumulation and neural network retraining allow rapid iteration and improvement. As diverse knowledge accumulates in larger pre-training datasets, the system exhibits emergent generalization: the robot can handle situations it has never explicitly encountered before.

What Makes Hark the Real Play in Adcock's Robotics Strategy?

While Figure AI gives robots a body, Hark, the independent AI lab Adcock seeded with $100 million of his own capital, is tasked with giving that body genuine cognition. This dual-track approach mirrors Tesla's architecture: Optimus as the embodied data engine, xAI as the intelligence layer, each reinforcing the other's competitive moat.

Adcock has been pointed in his critique of current large language models (LLMs), dismissing them as "advanced Google search engines" that lack the world-understanding necessary to avoid walking through glass walls or crushing delicate objects. General-purpose robotics, he argues, requires what he calls "embodied physics," something no cloud-hosted chatbot can supply.

The strategic positioning is clear: as hardware commoditization accelerates, controlling the model layer becomes the defensible position. Figure has been aggressively collecting real-world data to feed Helix, its vision-language-action model, as it targets 100,000 units shipped by 2029. While Adcock has not publicly committed to selling that data, launching a $100 million AI lab strongly signals an intention to monetize embodied robot training insights well beyond Figure's own fleet.

How to Understand Figure's Path to Mass Production and Deployment

- Production Timeline: Figure aims to produce a robot every 30 minutes at its BotQ manufacturing facility by 2026, with robots in the commercial workforce running 24 hours a day, seven days a week.
- Cost Reduction Strategy: Figure vertically integrates all hardware (actuators, sensors, compute) and software (neural nets, data) to achieve a 90 percent cost reduction in Figure 03 compared to earlier models, targeting a $20,000 price point for mass adoption.
- Fleet Learning Architecture: In 2026, Figure's ambition is for one robot to learn a task and every robot in the fleet to immediately inherit that capability, collapsing the distinction between a single deployment and a global workforce update.
- Facility Expansion: Figure's Grid facility opens in January 2026, expanding to hundreds of robots running 24/7 for both home and commercial workforce applications, with mission control monitoring their performance.

The economics of this approach are transformative. When one robot learns a task and the entire fleet inherits it instantly, the business model stops resembling manufacturing and starts resembling software. That is the leverage point the market is beginning to price into Figure's $39 billion valuation.

What Hardware Innovations Enable Figure's Neural Network Control?

Figure 03 is 30 pounds lighter than its predecessor and equipped with fingertip tactile sensors sensitive to three grams of pressure. The robot has 40 or more degrees of freedom and uses neural nets for high-level planning, enabling complex whole-body manipulation.

The compute architecture itself is revealing. A brain-like compute unit sits in the head, close to the sensors and well placed for heat dissipation, while the torso houses the majority of onboard computation. Critically, Figure runs fast, low-power inference fully onboard on inexpensive hardware rather than data-center GPUs like Nvidia's H100 or GB300. This enables real-time policy deployment without draining the robot's power supply. Battery life and charging represent practical constraints that Figure has addressed.
Figure robots carry 2 kilowatt-hour batteries lasting 4 to 5 hours per full charge, with one-hour wireless charging through their feet via thin charging mats that can be placed anywhere, enabling opportunistic charging while working. The robots maintain connectivity through three systems: Wi-Fi, a 5G SIM, and Bluetooth. They can also keep working offline, with enough onboard intelligence to avoid being disabled if connectivity drops.

When Will Humanoid Robots Actually Be Safe Enough for Homes?

Adcock has been candid about where the limits still lie. "Until I feel safe enough to have it there with free rein around all my kids, it's not ready for everyone," he told Peter Diamandis on the Moonshots podcast, describing how he still personally supervises the robot near his children. This admission is notable less for its caution than for what it implies about the benchmark. Adcock is not managing expectations downward; he is managing them toward a standard the entire industry will eventually have to meet.

Figure robots require a fault-tolerant, redundant real-time safety architecture and a proven safety track record before widespread deployment. The goal is a safety bar at which the robot can operate fully autonomously around children. Figure is developing intrinsically safe robots with superhuman perception and always-on computing, aiming to make them safer around people and pets than humans are. By 2026, Figure aims to field humanoid robots with surgical capabilities comparable to human surgeons, enabled by teleoperation and AI systems working at the highest performance level.

The home deployment timeline remains ambitious but grounded. "The home is coming," Adcock has said. "The home is like single-digit years away" from useful humanoid deployment. His engineers are stress-testing this claim daily in Figure's 300,000 to 400,000 square foot Sunnyvale facility, which houses hundreds of robots and a large team focused on neural net development.
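Figure has not described its safety stack in detail, but a "fault-tolerant, redundant real-time safety architecture" typically starts with two primitives: majority voting across redundant sensor channels, and a watchdog that cuts torque if the control loop stalls. The sketch below is a toy illustration of those two ideas only (the 20 ms timeout and all names are invented, not Figure's design):

```python
import statistics
import time

TIMEOUT_S = 0.02  # invented threshold: halt if no control tick within 20 ms

def vote(readings):
    """Median of redundant sensor readings tolerates one faulty channel."""
    return statistics.median(readings)

class Watchdog:
    """Commands zero torque unless the real-time loop keeps checking in."""
    def __init__(self, timeout_s=TIMEOUT_S):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self):
        # Called once per successful control tick.
        self.last_kick = time.monotonic()

    def safe(self):
        # False means the loop has stalled and actuators must be disabled.
        return (time.monotonic() - self.last_kick) < self.timeout_s

# Three redundant force readings; the third channel has failed high,
# yet the voted value stays sane.
voted = vote([2.9, 3.1, 250.0])

wd = Watchdog()
wd.kick()
loop_alive = wd.safe()
```

The point of the pattern is that no single sensor fault or software stall can leave the robot applying torque unchecked.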
What Does a $50 Trillion Market Actually Mean?

The $50 trillion market figure Adcock deploys in conversations with investors and policymakers is not a near-term revenue projection but a directional claim about what becomes possible when a single neural network can run full-body autonomy across millions of deployed units simultaneously. If humanoid robots can be manufactured at $20,000 per unit and deployed at scale, tens of billions of robots on Earth by 2035 to 2040 becomes mathematically plausible.

The market opportunity extends beyond manufacturing. Figure robots will enable aging in place, providing elder care and health monitoring to help people stay healthy. They can remember things, navigate homes the way a visitor would, and carry out tasks over days and weeks. They will learn new tasks without instruction manuals by researching the internet, using digital tools, reasoning, and talking to humans, with only the neural net weights updated while the hardware remains unchanged.

The competitive context sharpens Adcock's positioning considerably. The humanoid field in 2026 is no longer a two-horse race between Tesla's Optimus and Boston Dynamics' legacy platforms. China's Unitree has demonstrated that capable hardware can be manufactured at a price point that disrupts the economics of high-end deployment, while UBTech has rolled out its thousandth Walker S2 unit. In this environment, controlling the model layer, not just the mechanical platform, is the strategically rational position.

Adcock's dual-CEO structure, running both Figure and Hark simultaneously, will invite skepticism. But the logic is clear: Figure generates the embodied data that trains Hark's models, and Hark's models make Figure's robots more capable. Each reinforces the other. That flywheel, if it works, is where the $50 trillion thesis actually lives.