Top AI Labs Have Minimal Defense Against Espionage, Researchers Say


Some of the nation's top artificial intelligence labs have insufficient security measures to protect against espionage, leaving potentially dangerous AI models exposed to theft, according to U.S. government-backed researchers.

Gladstone AI, a firm advising federal agencies on AI issues, conducted a sweeping probe into the security practices of leading AI labs, including OpenAI, Google DeepMind, and Anthropic. The firm found that security measures were often lacking and that cavalier attitudes toward safety were prevalent among AI professionals.

Jeremie Harris, CEO of Gladstone AI, said security professionals would be alarmed by the practices observed in these labs. In one example, AI researchers worked on powerful models in public places such as Starbucks, without proper supervision, posing a significant security risk.

The investigation, conducted with the State Department, revealed minimal security measures and a lack of awareness about the threat of foreign espionage. Edouard Harris, Gladstone AI's tech chief, shared an anecdote in which a security official dismissed concerns about Chinese tech theft, stating no similar models had emerged in China, a response the researchers found perplexing.

The State Department acknowledged the ongoing efforts to understand AI research and mitigate associated risks. While Gladstone AI’s findings are part of the broader assessment, they do not explicitly represent the U.S. government’s views.

Some AI labs, like Google DeepMind, have acknowledged security concerns. Google DeepMind has reconsidered how to publish and share its work due to fears of Chinese exploitation. The company stated it takes security seriously and follows AI principles to ensure responsible development.

"Our mission is to develop AI responsibly to benefit humanity — and safety has always been a core element of our work," a company spokesperson said in a statement late last week. "We will continue to follow our AI principles and share our research best practices with others in the industry, as we advance our frontier AI models."

The situation is worse in smaller AI labs. Edouard Harris noted that security measures there fall significantly short of those at major companies like Google and Microsoft. As a result, he warned, the U.S. is losing its AI leadership to espionage, with American developments routinely stolen.
