Figure AI's $39 Billion Valuation Masks a Growing Safety Controversy

Figure AI has become one of the most valuable robotics companies in the world, but behind its White House moment and billion-dollar funding lies a serious safety dispute that could reshape how the industry approaches robot development. Founded by tech entrepreneur Brett Adcock, the company has attracted massive investor enthusiasm for its humanoid robots powered by an in-house AI system called Helix. Yet a pending lawsuit from the company's former head of product safety reveals a stark disconnect between the company's public ambitions and internal concerns about whether these machines are safe enough for human environments.

The contrast is striking. In 2026, Figure AI's humanoid robot was featured at a White House event as part of the Fostering the Future Together Global Coalition Summit, where First Lady Melania Trump highlighted the potential for these robots to serve as educators. The company's Series C funding round, which raised over $1 billion, valued Figure AI at $39 billion, a testament to investor confidence in what's being called "physical AI." Yet this moment of triumph is shadowed by a lawsuit that challenges the very premise of deploying these machines in sensitive environments like schools and homes.

What's the Safety Concern Behind Figure AI's Humanoid Robots?

The lawsuit centers on allegations made by Robert Gruendel, Figure AI's former head of product safety. Gruendel alleged that the company's robots are powerful enough to cause serious harm, with the potential to fracture a human skull. This is not a minor technical dispute; it goes to the heart of whether these machines should be operating around children, elderly people, or other vulnerable populations. The lawsuit remains pending, meaning the allegations have not been resolved and the company has not had a final opportunity to defend itself in court.

The timing of these allegations raises important questions about the company's development process. If safety concerns were significant enough for the head of product safety to pursue legal action, it suggests that internal discussions about risk may not have been fully resolved before the company moved forward with high-profile endorsements and public demonstrations. The White House event, while symbolically important for the robotics industry, occurred against this backdrop of unresolved safety questions.

How Does Figure AI's Technology Actually Work?

To understand the stakes, it helps to know what makes Figure AI's robots different. The company has developed Helix, an in-house AI system that enables its humanoid robots to learn through observation and verbal commands. This approach focuses on three core capabilities: vision, language, and action. In practical terms, this means the robots can see their environment, understand instructions given in natural language, and execute physical tasks based on what they've learned.

This learning capability is genuinely innovative. Rather than being pre-programmed for every task, these robots can adapt to new situations by watching how humans perform tasks and following verbal guidance. However, this same flexibility and autonomy is precisely what makes safety oversight so critical. A robot that can learn and adapt in real-time requires robust safeguards to ensure it doesn't cause harm when something goes wrong or when it encounters an unexpected situation.
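To make the vision-language-action idea concrete, here is a minimal sketch of what one tick of such a control loop might look like. This is purely illustrative: every name and function below is a hypothetical placeholder, and Helix's actual architecture is not public at this level of detail.

```python
from dataclasses import dataclass

# Hypothetical sketch of a vision-language-action (VLA) control loop.
# All names here are invented for illustration; they do not describe
# Figure AI's Helix system.

@dataclass
class Observation:
    image: list       # camera frame (placeholder)
    instruction: str  # natural-language command

def perceive(frame: list) -> list:
    """Vision: encode the camera frame into features (stubbed)."""
    return [0.0] * 4

def interpret(instruction: str) -> str:
    """Language: map an instruction to a task label (stubbed)."""
    return instruction.strip().lower()

def act(features: list, task: str) -> str:
    """Action: choose a motor command from perception plus task (stubbed)."""
    return f"execute:{task}"

def control_step(obs: Observation) -> str:
    # One tick of the see -> understand -> act loop.
    features = perceive(obs.image)
    task = interpret(obs.instruction)
    return act(features, task)

print(control_step(Observation(image=[], instruction="Pick up the cup")))
# prints "execute:pick up the cup"
```

The point of the sketch is structural: perception, language understanding, and motor action are separate stages, and a safety failure in any one stage can propagate to the physical command the robot actually executes, which is why oversight of the whole loop matters.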

What Are the Broader Implications of This Safety Dispute?

  • Industry Standards Gap: The robotics industry currently lacks comprehensive federal safety standards for humanoid robots operating in human spaces, leaving companies to set their own safety thresholds and oversight mechanisms.
  • Investor vs. Safety Tension: The massive funding Figure AI has attracted reflects investor enthusiasm for the technology's commercial potential, but this financial pressure may create incentives to move quickly rather than thoroughly address safety concerns.
  • Regulatory Precedent: How this lawsuit is resolved could establish important precedents for how other robotics companies approach safety testing and disclosure, potentially influencing the entire industry's development practices.
  • Public Trust Factor: High-profile endorsements like the White House appearance can boost public acceptance of robotics technology, but they also raise expectations that the technology has been thoroughly vetted for safety.

Why Does Brett Adcock's Track Record Matter Here?

Understanding the context of Figure AI's leadership is important for evaluating the company's trajectory. Brett Adcock is not new to the world of venture-backed technology companies. He previously co-founded Archer Aviation, an electric air-taxi company. Notably, Archer Aviation saw its stock surge after President Trump signed an executive order promoting the integration of electric air taxis. This pattern suggests that Adcock has experience navigating the intersection of emerging technology, regulatory environments, and political support.

However, the Archer Aviation example also illustrates a potential concern. If a company can see rapid financial gains from political endorsement and regulatory tailwinds, there may be less incentive to slow down and address safety questions thoroughly. The robotics industry is watching to see whether Figure AI will prioritize safety validation as aggressively as it has pursued funding and public visibility.

What Happens Next in the Safety Dispute?

The pending lawsuit is the critical variable in Figure AI's near-term future. If Gruendel's allegations are substantiated, the company could face significant liability, regulatory scrutiny, and reputational damage. Conversely, if Figure AI successfully defends itself, it may clear a major hurdle to broader commercialization. Either way, the outcome will likely influence how other robotics companies approach safety testing and disclosure.

The broader implication is that the robotics industry is at an inflection point. Companies like Figure AI are racing to develop and deploy humanoid robots at scale, driven by massive investor interest and the potential for transformative applications. Yet this same rush creates the conditions for safety disputes to emerge. The question facing the industry is whether it will establish robust safety standards proactively, or whether those standards will be imposed reactively through lawsuits and regulatory action after incidents occur.

For now, Figure AI's $39 billion valuation and White House moment represent the optimistic vision of what humanoid robots could become. The pending lawsuit represents the harder questions that still need to be answered before that vision can be safely realized.
