
Top AI Labs Have Minimal Defense Against Espionage, Researchers Say

Some of the nation’s top artificial intelligence labs have insufficient security measures to protect against espionage, leaving potentially dangerous AI models exposed to theft, according to U.S. government-backed researchers.

Gladstone AI, a firm advising federal agencies on AI issues, conducted a sweeping probe into the security practices of leading AI outfits, including OpenAI, Google DeepMind, and Anthropic. The firm discovered that security measures were often lacking and that cavalier attitudes about safety were prevalent among AI professionals.

Jeremie Harris, CEO of Gladstone AI, said that security professionals would be alarmed by the practices they would observe inside these labs. One example he cited was AI researchers working on powerful models in public places such as a Starbucks, with no supervision, posing a significant security risk.

The investigation, conducted with the State Department, revealed minimal security measures and a lack of awareness about the threat of foreign espionage. Edouard Harris, Gladstone AI’s tech chief, shared an anecdote in which a security official dismissed concerns about Chinese tech theft on the grounds that no similar models had emerged in China, reasoning the researchers found perplexing.

The State Department acknowledged ongoing efforts to understand AI research and mitigate associated risks. While Gladstone AI’s findings feed into that broader assessment, they do not necessarily represent the U.S. government’s views.

Some AI labs, like Google DeepMind, have acknowledged security concerns. Google DeepMind has reconsidered how to publish and share its work due to fears of Chinese exploitation. The company stated it takes security seriously and follows AI principles to ensure responsible development.

“Our mission is to develop AI responsibly to benefit humanity — and safety has always been a core element of our work,” a company spokesperson said in a statement late last week. “We will continue to follow our AI principles and share our research best practices with others in the industry, as we advance our frontier AI models.”

The situation is even worse in smaller AI labs. Edouard Harris noted that their security measures fall significantly short of those at major companies like Google and Microsoft. As a result, he said, the U.S. is losing its AI leadership to espionage, with American developments routinely stolen.
