Employees say OpenAI and Google DeepMind are hiding dangers from the public


An article in Time revealed that a group of current and former employees at leading AI companies OpenAI and Google DeepMind have published a letter warning against the dangers of advanced AI, alleging that companies prioritize financial gains while avoiding oversight. Thirteen employees, including eleven from OpenAI and two from Google DeepMind, signed the letter titled “A Right to Warn about Advanced Artificial Intelligence,” with six signatories remaining anonymous.

The coalition cautions that, without proper regulation, AI systems are powerful enough to cause serious harm, including entrenching existing inequalities, spreading misinformation, and potentially leading to human extinction through loss of control over autonomous AI systems. They assert that AI companies possess information about these risks but are not required to disclose it to governments, keeping the true capabilities of their systems secret. This leaves current and former employees as the only people able to hold the companies accountable, though many are constrained by confidentiality agreements.

Key points of the letter:

  • The letter demands AI companies stop forcing employees into agreements preventing them from criticizing their employer over risk-related concerns.
  • It calls for the creation of an anonymous process for employees to raise concerns to board members and relevant regulators.
  • It advocates for a culture of open criticism and protection against retaliation for employees who share risk-related confidential information.

Employee Protections and Public Concerns

Lawrence Lessig, the group’s pro bono lawyer, emphasized the importance of employees being able to speak freely without retribution as a line of safety defense. Research by the AI Policy Institute shows that 83% of Americans believe AI could accidentally cause a catastrophic event, and 82% do not trust tech executives to self-regulate the industry.

Governments worldwide are moving to regulate AI, but progress lags behind the rapid advancement of AI technology. The E.U. passed the world’s first comprehensive AI legislation earlier this year, and international cooperation has been pursued through AI Safety Summits and U.N. discussions. In October 2023, President Joe Biden signed an AI executive order requiring AI companies to disclose their development and safety testing plans to the Department of Commerce.

 
