OpenAI supports state bill limiting liability for catastrophic AI harm

April 13, 2026

An Illinois bill that would limit when AI developers can be sued over large-scale harm has gained support from OpenAI, as states continue to shape policy in the absence of federal rules. The proposal ties liability protection to strict conditions, including proof that companies did not act intentionally or recklessly and that they publicly disclose safety and transparency reports.

Known as SB 3444, or the Artificial Intelligence Safety Act, the legislation defines “critical harms” as severe events such as the death or serious injury of 100 or more people, at least $1 billion in property damage, or the use of AI to develop chemical, biological, radiological, or nuclear weapons. The bill applies specifically to “frontier models,” defined as systems trained using more than $100 million in compute, a threshold that captures major developers across the industry.

OpenAI said it supports the approach because it focuses on high-impact risks without restricting broader deployment. “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses — small and big — of Illinois,” spokesperson Jamie Radice said.

The measure also reflects growing concern about fragmented regulation. In testimony supporting the bill, OpenAI’s Caitlin Niedermeyer warned against a “patchwork of inconsistent state requirements” and called for a unified federal framework. The bill itself includes a provision that would render it inactive if Congress passes overlapping national legislation.

In practice, the proposal attempts to balance two competing pressures. On one side is the need to hold developers accountable for catastrophic failures. On the other is the industry’s push to avoid open-ended liability that could slow development or deployment of advanced systems.

That tension is already playing out across the United States. States including California and New York have introduced or passed laws requiring AI companies to submit safety and transparency reports, while others are advancing their own frameworks. Without federal coordination, companies are navigating a growing set of regional compliance requirements.

At the same time, AI firms are increasing their presence in policy discussions. OpenAI, along with companies such as Meta, Alphabet, and Microsoft, spent a combined $50 million on federal lobbying in the first nine months of 2025, according to Issue One. OpenAI has also announced plans to open a Washington, D.C., office in 2026.

Despite the activity, no federal law has yet defined responsibility in the event of an AI-driven disaster. Congress has not established a clear liability framework, leaving states to fill the gap with varying approaches.


Jim Love

Jim is an author and podcast host with over 40 years in technology.
