
Former OpenAI employee alleges plan for AGI bidding war

In a recent interview, former OpenAI safety researcher Leopold Aschenbrenner made startling claims about his ex-employer’s strategy regarding artificial general intelligence (AGI).

Aschenbrenner claimed that he had been fired for raising concerns about security, despite OpenAI’s assertion that it did not penalize employees for speaking out.

Now, speaking with tech podcaster Dwarkesh Patel, Aschenbrenner said he believed OpenAI had once considered initiating a global bidding war for AGI among the United States, China, and Russia.

Aschenbrenner recounted hearing “from multiple people” within the company about a plan in which OpenAI leadership intended to fund and sell AGI by pitting these governments against one another. The idea was to create a competitive environment in which nations would outbid each other for access to AGI technology. This plan, he noted, included the possibility of selling AGI to China and Russia, which he found “surprising” and concerning.

“There’s also something that feels eerily familiar about starting this bidding war and then playing them off each other, saying, ‘well, if you don’t do this, China will do it,'” Aschenbrenner remarked during the interview.

The conversation took a personal turn when Aschenbrenner explained why he was fired from OpenAI earlier this year. According to him, the dismissal followed his circulation of a memo warning that the Chinese Communist Party could steal “key algorithmic secrets.” Human resources deemed the memo “racist” and “unconstructive,” prompting concerns within the company about his loyalty.

Aschenbrenner was ultimately fired for leaking information after OpenAI examined his computer and found a document he had shared with external researchers during a brainstorming session on “preparedness, safety, and security measures.” The document included a projection that AGI could arrive in 2027 or 2028, which HR considered confidential.

OpenAI has expressed its commitment to building safe AGI but disagrees with Aschenbrenner’s characterization of the company’s actions. OpenAI CEO Sam Altman has publicly discussed similar timelines, leading Aschenbrenner to believe that the information he shared was not sensitive.

The allegations raise significant questions about the ethical considerations and geopolitical implications of AGI development, highlighting the need for transparency and responsible handling of advanced AI technologies.
