Researchers uncover vulnerability in AI language models


Researchers from Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for AI claim to have discovered a method for bypassing the “guardrails” that are supposed to prevent undesirable text outputs from large language models (LLMs).

The researchers claim in their paper, “Universal and Transferable Adversarial Attacks on Aligned Language Models,” that they can automatically generate adversarial strings that evade the safety procedures put in place to rein in harmful model output. By appending these adversarial suffixes to text prompts, they can trick LLMs into producing harmful content in response to requests the models would ordinarily refuse.

The suffix, a sequence of words and symbols, can be appended to a wide variety of text prompts to generate undesirable material, and the method transfers across models. The strings may look like nonsense, but they are designed to exploit the model’s behavior and elicit affirmative responses to requests it would otherwise refuse, including requests for unlawful or dangerous information. The idea is to make the model more likely to comply than to refuse.
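The automated search described above can be sketched in miniature. The following is an illustrative toy only, not the paper’s actual algorithm: it greedily mutates one suffix token at a time and keeps a change when it raises a surrogate “affirmative response” score. The vocabulary, the scoring function, and the idea of scoring against a fixed token set are all invented stand-ins; in the real attack, the score would be the model’s probability of an affirmative completion.

```python
import random

# Toy sketch of an automated adversarial-suffix search (NOT the
# researchers' actual method). All names and values here are
# illustrative assumptions.

VOCAB = ["describing", "!", "--", "similarly", "Now", "write", "oppositely", "Sure"]

def affirmative_score(prompt: str) -> float:
    # Stand-in for querying the model: in practice this would be the
    # probability the LLM assigns to an affirmative completion such as
    # "Sure, here is ...". Here we simply reward suffixes containing
    # tokens from a fixed, made-up "useful" set.
    useful = {"Sure", "write", "Now"}
    return sum(tok in useful for tok in prompt.split())

def greedy_suffix_search(base_prompt: str, suffix_len: int = 5,
                         iters: int = 300, seed: int = 0) -> list:
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(suffix_len)]
    best = affirmative_score(base_prompt + " " + " ".join(suffix))
    for _ in range(iters):
        pos = rng.randrange(suffix_len)        # pick one suffix position
        candidate = suffix.copy()
        candidate[pos] = rng.choice(VOCAB)     # try a substitute token
        score = affirmative_score(base_prompt + " " + " ".join(candidate))
        if score >= best:                      # keep non-worsening changes
            suffix, best = candidate, score
    return suffix
```

Because the search only ever queries a score, nothing in it depends on the suffix being human-readable, which is why the resulting strings can look like gibberish while still steering the model.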

The attack works by appending a character sequence to a query that causes the system to follow harmful user commands. Because the search for these sequences is entirely automated, an attacker can generate an effectively unlimited number of them, and the researchers suggest that this may render many existing alignment mechanisms insufficient.

The sources for this piece include an article in The Register.
