On April 7, 2026, OpenAI released new policy recommendations aimed at ensuring artificial intelligence benefits society as systems approach superintelligence. At the same time, a detailed investigation by The New Yorker raised questions about whether CEO Sam Altman can be trusted to deliver on those commitments.
The company’s proposal outlines an “industrial policy for the intelligence age,” calling for safeguards as AI begins to outperform humans, including monitoring risks such as loss of human control and misuse by governments. OpenAI said it plans to remain transparent about these risks and advocate for outcomes that improve quality of life broadly.
The recommendations focus heavily on economic and social impact. OpenAI proposed ideas such as a public wealth fund to distribute gains from AI, taxes on automated labour to support programmes like Social Security and Medicaid, and pilot programmes for shorter work weeks without loss of pay. It also suggested retraining initiatives to move displaced workers into sectors like healthcare and caregiving, alongside efforts to recognise those roles as economically valuable.
At the same time, the company emphasised governance. It called for “common-sense” regulation and a public-private partnership model, along with stricter oversight for the most advanced AI systems. In scenarios involving high-risk capabilities, such as models that could enable chemical, biological or cyber threats, OpenAI said stronger controls and global coordination would be required.
The policy push comes as public concern about AI continues to grow. Surveys cited in the reporting show rising anxiety around energy use, job displacement and broader societal impact, while political debate is beginning to shape how quickly new infrastructure and systems can be deployed.
Published alongside these developments, The New Yorker’s investigation offers a contrasting view of OpenAI’s leadership. Based on interviews with more than 100 people and internal communications, the report describes concerns from former insiders, including Ilya Sutskever and Dario Amodei, about decision-making and trust within the company. Amodei wrote in one message that “The problem with OpenAI is Sam himself.”
Altman disputed or downplayed several claims and said some inconsistencies reflected the fast-changing nature of AI development. He also acknowledged being conflict-avoidant in the past. The report notes that his public messaging has recently shifted toward a more optimistic tone about AI’s benefits.
