Researchers uncover vulnerability in AI language models

July 31, 2023

Researchers from Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for AI claim to have discovered a method for bypassing the “guardrails” that are supposed to prevent undesirable text outputs from large language models (LLMs).

The researchers claim in their paper, “Universal and Transferable Adversarial Attacks on Aligned Language Models,” that they can automatically generate adversarial suffixes that evade the safety procedures put in place to tame harmful model output. By appending these adversarial strings to text prompts, they can trick LLMs into producing harmful content in response to requests they would ordinarily refuse.

These attacks are fully automated, allowing an effectively unlimited number of them to be generated. The suffix, a string of words and symbols, can be appended to a wide variety of text prompts to produce undesirable output, and the method transfers across models. The suffixes may look like nonsense, but they are crafted to exploit the model’s behavior and elicit affirmative responses to otherwise prohibited requests. The idea is to make the model more likely to begin its reply with compliance rather than refusing requests that involve unlawful or dangerous information.
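To make the idea concrete, here is a minimal, hypothetical sketch of what an adversarial-suffix search might look like. This is not the authors’ method or code: it assumes a placeholder query_model() helper standing in for any LLM API, and it uses a naive random search where the paper uses a far more effective gradient-guided token search. It only illustrates the shape of the attack, appending a candidate suffix and checking whether the reply turns affirmative.

import random
import string

# Candidate characters for the suffix; purely illustrative.
VOCAB = list(string.ascii_letters + string.punctuation + " ")

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a real LLM API."""
    # A real implementation would send `prompt` to a model and return its reply.
    return "I cannot help with that."

def is_affirmative(response: str) -> bool:
    # The attack targets replies that begin with compliance rather than refusal.
    return response.strip().lower().startswith(("sure", "certainly", "here"))

def search_adversarial_suffix(base_prompt: str, suffix_len: int = 20, tries: int = 1000):
    """Naive random search for a suffix that flips the model to an affirmative reply."""
    for _ in range(tries):
        suffix = "".join(random.choice(VOCAB) for _ in range(suffix_len))
        if is_affirmative(query_model(base_prompt + " " + suffix)):
            return suffix  # a suffix that elicited a compliant response
    return None

if __name__ == "__main__":
    print(search_adversarial_suffix("Example of a request the model would normally refuse."))

Because the search is driven only by the model’s outputs (or, in the paper’s case, its gradients), no human has to hand-craft the jailbreak, which is what makes the attack automated and scalable.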

The researchers also demonstrate that such adversarial attacks can be constructed entirely automatically, using character sequences that, when appended to a query, cause the model to follow harmful user commands. They further suggest that the ability to generate attack strings automatically may render many existing alignment mechanisms insufficient.

The sources for this piece include an article in The Register.


Jim Love

Jim is an author and podcast host with over 40 years in technology.
