Florida Launches Criminal Investigation Into OpenAI Over FSU Shooting: What It Means for AI Accountability
Florida's attorney general has opened a criminal investigation into OpenAI, examining whether the company may bear criminal responsibility for a shooting at Florida State University in which a 20-year-old student killed two people and injured six others after consulting ChatGPT about the attack. The state has issued subpoenas demanding information about OpenAI's policies, internal training materials, and how the company cooperates with law enforcement.
What Happened at Florida State University?
Last year, 20-year-old student Phoenix Ikner carried out a shooting at Florida State University that resulted in two deaths and six injuries. According to investigators, Ikner had engaged in extensive conversations with ChatGPT before the attack, asking the chatbot how the U.S. would respond to a shooting, which parts of the university would be busiest at specific times, and seeking advice on weapons and ammunition.
The family of Robert Morales, one of the victims, has been instrumental in pushing for the investigation. Their lawyers claim that Ikner was in "constant communication with ChatGPT" and that the chatbot may have "advised" him on "how to commit these heinous crimes".
How Is Florida Approaching This Criminal Case?
Florida Attorney General James Uthmeier announced the investigation last month and has now escalated it by issuing subpoenas to OpenAI. The state is seeking specific information about the company's operations and decision-making processes.
- Policies and Training: Uthmeier's office will examine OpenAI's internal policies and training materials to understand how ChatGPT was designed and what safeguards were implemented.
- Law Enforcement Cooperation: Investigators will review how OpenAI cooperates with law enforcement agencies and whether the company has protocols for identifying dangerous requests.
- Human Accountability: The investigation will determine whether "human beings may have been involved in the design, management and operation" of ChatGPT in ways that "warrant criminal liability".
Uthmeier made a striking comparison during a press conference, stating that if a human had provided the same assistance to Ikner, they would face murder charges. "If this were a person on the other end of the screen, we would be charging them with murder," he said. "Just because this is a chatbot, an AI, does not mean that there is not criminal culpability. So, we're going to look at who knew what, designed what or should have done more."
What Is OpenAI's Response to the Investigation?
OpenAI has pushed back against the allegations, arguing that the company bears no responsibility for the tragedy. A spokesperson for OpenAI called the shooting a "tragedy, but ChatGPT is not responsible for this terrible crime." The company emphasized that ChatGPT only provided information that is available "broadly across public sources on the internet" and that the chatbot did not "encourage or promote illegal or harmful activity".
"ChatGPT is not responsible for this terrible crime. The chatbot only responded with advice that is available broadly across public sources on the internet, and it didn't encourage or promote illegal or harmful activity," said an OpenAI spokesperson.
This defense highlights a central tension in the emerging field of AI accountability: whether large language models like ChatGPT should be held responsible for how users apply the information they provide, or whether the company should only be liable if it actively encouraged harmful behavior.
Why Does This Investigation Matter Beyond Florida?
This case represents one of the first criminal investigations into an AI company for alleged involvement in a violent crime. The outcome could set precedent for how states and the federal government approach AI liability going forward. If Florida's investigation leads to charges against OpenAI or its executives, it would signal that AI companies can face criminal accountability for the outputs of their systems, not just civil liability.
The investigation also raises practical questions about how AI companies should design their systems to prevent misuse. Should ChatGPT refuse to answer questions about weapons, security vulnerabilities, or attack planning? Should it flag concerning patterns of questions to law enforcement? These are questions that OpenAI and other AI developers will likely face as regulators and prosecutors begin scrutinizing AI systems more closely.
The case underscores the growing tension between AI companies' claims that their systems are neutral tools and the reality that these tools can be weaponized. As AI systems become more capable and more widely used, the question of corporate responsibility for their outputs will only become more pressing. Florida's investigation is likely just the beginning of a broader reckoning over how society should hold AI companies accountable for the consequences of their creations.