GPT-5 Just Ran 36,000 Lab Experiments Autonomously: Here's Why Scientists Are Worried
OpenAI's GPT-5 has crossed a significant threshold: it can now autonomously design and run thousands of biological experiments without human intervention. In February 2026, OpenAI and biotech company Ginkgo Bioworks announced that GPT-5 had designed and executed 36,000 biological experiments through a robotic cloud laboratory, where automated equipment controlled by computers carries out experiments remotely. The AI model proposed study designs, robots executed them, and the system fed data back to the model for the next iteration. Humans set the goal, but machines handled most of the work, reducing the cost of producing a desired protein by 40%.
What Is Programmable Biology and Why Does It Matter?
This breakthrough represents a fundamental shift in how biological research works. For decades, biology moved from observation toward understanding. Scientists sequenced genomes to catalog DNA and learn how genes encode proteins that carry out life's functions. Tools like CRISPR then allowed scientists to edit DNA for specific purposes, such as disabling genes linked to disease. AI is now accelerating a third phase where computers can both design biological systems and rapidly test them.
The process looks less like traditional benchwork in a lab and more like engineering: design, build, test, learn, and repeat. Where a traditional experiment might test a single hypothesis, AI-driven programmable biology explores thousands of design variations in parallel, iterating the way an engineer refines a prototype. Protein language models, which are AI systems trained on millions of natural protein sequences, can quickly predict how mutations will change a protein's behavior or design entirely new proteins. These AI models are designing potential new drugs and speeding vaccine development.
Paired with automated labs, these models create tight loops of experimentation and revision, testing thousands of variations in days rather than the months or years a human team would need. Faster protein engineering could mean faster responses to emerging infections and cheaper drugs.
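To make the loop concrete, here is a minimal toy sketch of a design-build-test-learn cycle. Every function name and the scoring rule are hypothetical stand-ins: a real system would pair a trained AI model with robotic lab hardware, not a random-mutation stub.

```python
import random

def propose_designs(best, n=8):
    """Stand-in for the 'design' step: an AI model proposing
    variants by mutating the current best candidate."""
    return [[v + random.gauss(0, 0.1) for v in best] for _ in range(n)]

def run_experiment(design):
    """Stand-in for the 'build and test' steps: a robotic lab assay
    scoring one design. Here, a toy objective where higher is better."""
    return -sum((v - 1.0) ** 2 for v in design)

def dbtl_loop(rounds=20, seed=0):
    """Repeat design -> build -> test -> learn, keeping the best result."""
    random.seed(seed)
    best = [0.0, 0.0, 0.0]                 # initial candidate
    best_score = run_experiment(best)
    for _ in range(rounds):
        for design in propose_designs(best):
            score = run_experiment(design)
            if score > best_score:         # the 'learn' step: keep improvements
                best, best_score = design, score
    return best_score

print(dbtl_loop())
```

Because the loop only ever accepts improvements, the score is monotonically non-decreasing across rounds; the speed advantage in real systems comes from running many such proposals in parallel on automated equipment.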
How Can AI Systems Help Researchers Design Biological Experiments?
- Autonomous Design: AI models can propose thousands of experimental designs without human input, dramatically accelerating the pace of biological research and reducing the time from concept to testing.
- Rapid Iteration: Robotic cloud laboratories execute AI-designed experiments at scale, feeding results back to the model for continuous refinement and optimization in days instead of months.
- Cost Reduction: The GPT-5 and Ginkgo Bioworks collaboration cut protein production costs by 40%, making biological engineering more accessible to researchers and organizations with limited budgets.
- Protein Engineering: AI systems can predict how mutations change protein behavior and design new proteins from scratch, accelerating drug development and vaccine creation without years of traditional trial and error.
What Are the Dual-Use Risks of AI-Powered Biology?
The same AI tools that accelerate beneficial research also pose serious biosecurity risks. Researchers have raised concerns about the dual-use problem: technologies developed for beneficial purposes can be repurposed to cause harm. Current AI models are able to walk users through the technical steps of recovering live viruses from synthetic DNA. Scientists have developed a risk-scoring tool to evaluate how AI could modify a virus's capabilities, such as altering which species it infects or helping it evade the immune system.
Two recent studies reached different conclusions about whether AI lowers barriers to dangerous work. A study by AI company Scale AI and biosecurity nonprofit SecureBio found that when people with limited biology experience were given access to large language models (LLMs), which are AI systems trained on vast amounts of text data like those behind ChatGPT, they completed biosecurity-related tasks, such as troubleshooting complex virology lab protocols, with four times greater accuracy. In some areas, these novices outperformed trained experts. Around 90% of these novices reported little difficulty getting the models to provide risky biological information, such as detailed instructions on working with dangerous pathogens, despite built-in safety filters meant to block such outputs.
In contrast, a study led by Active Site, a research nonprofit that studies the use of AI in synthetic biology, found that AI assistance made no significant difference in novices' ability to complete the complex workflow of producing a virus in a biosafety laboratory. However, the AI-assisted group succeeded more often on most tasks and finished some steps faster, most notably growing cells in the lab.
Why Are Current Safety Regulations Falling Behind?
The gap between what AI can do in biology and what governance systems are prepared to handle is growing. AI systems are now able to run experiments autonomously and at scale, but existing regulations were not designed for this. Rules governing biological research do not account for AI-driven automation, and rules governing AI do not specifically address its use in biology.
In the United States, the Biden administration issued a 2023 executive order on AI security that included biosecurity provisions, but the Trump administration revoked it. Screening of commercially synthesized DNA, meant to ensure it cannot be misused to produce pathogens or toxins, remains mostly voluntary. A bipartisan bill introduced in 2026 to mandate DNA screening does not yet address AI-designed sequences that evade current detection methods.
The Biological Weapons Convention, an international treaty in force since 1975 that prohibits the production and use of bioweapons, contains no provisions for AI. The U.K. AI Security Institute and the U.S. National Security Commission on Emerging Biotechnology have both called for coordinated government action. Researchers have estimated that even modest improvements in an AI model's ability to help plan pathogen-related experiments could translate to thousands of additional deaths from bioterrorism per year.
What Solutions Are Experts Proposing?
The safety evaluations that AI labs run before releasing new models are often opaque and ill-suited to capturing real-world risk. Several frameworks and approaches have been proposed to close the gap. The Nuclear Threat Initiative has proposed a managed access framework for biological AI tools, matching who can use a given tool to the risk level of the model rather than imposing blanket restrictions. The RAND Center on AI, Security and Technology outlined a set of actions researchers could take to improve biosecurity, including improved DNA synthesis screening and model evaluations before release. Researchers have also argued that biological data itself needs governance, especially genomic data that could train models with dangerous capabilities.
Some AI companies have started voluntarily imposing their own safety measures. Anthropic activated its highest safety tier when it released its most advanced model in mid-2025. Around the same time, OpenAI updated its Preparedness Framework, revising the thresholds for how much biological risk a model can pose before additional safeguards are required.
The challenge ahead is clear: as AI systems become more capable at designing and executing biological experiments, the regulatory and safety infrastructure must evolve in parallel. Without coordinated action across government, industry, and research institutions, the capabilities that promise faster drug development and disease response could also enable catastrophic misuse.