Florida Attorney General James Uthmeier has announced an investigation into OpenAI and its product ChatGPT. His announcement follows revelations that an accused campus shooter at Florida State University used the AI chatbot before the 2025 attack that killed two people.
In a video posted to X, Uthmeier said his office will issue subpoenas to OpenAI as part of an inquiry into the company’s potential role in the shooting. He also raised broader concerns, citing reports of ChatGPT prompts allegedly encouraging self-harm as well as questions about international data practices.
“We support innovation, but that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies or threaten our national security. Companies that do so will be held accountable to the fullest extent,” he said.
Uthmeier did not specify the scope of the subpoenas or what information the state is seeking.
According to court filings, the accused FSU shooter entered more than 200 prompts into an AI system ahead of the attack. Attorneys representing the wife of shooting victim Robert Morales say they plan to sue OpenAI, the company behind ChatGPT, in connection with those prompts.
Uthmeier is not the only Florida official reacting. Congressman Jimmy Patronis told WFSU he was alarmed when he learned of the alleged shooter’s AI interactions.
“I don't want to throw the baby out with the bath water, but I do think it's incredibly important that we consider the gravity of that content on a developing mind,” he said.
Patronis argued that the case strengthens the need for the SHIELD Act, legislation he is championing that would repeal Section 230 of the federal Communications Decency Act. Section 230 shields online platforms and users from liability for content posted by third parties — protections Patronis believes may discourage companies from adopting stronger safeguards for minors and the public.
“This is where you have crossed the line in Section 230 because its existence has allowed potentially for loss of life to take place that, in my opinion, could have been avoided,” he said.
Whether Section 230 applies to AI platforms remains an unsettled legal question and is currently being tested in courts across the country.