Researchers Prove Malware Can Be Hidden Inside AI Models

July 26, 2021

Researchers have discovered a new method of slipping malware past automated detection tools by hiding it inside a neural network.

To prove the validity of the technique, the researchers embedded 36.9 MiB of malware in a 178 MiB AlexNet model without significantly altering the function of the model itself.

The malware-embedded model classified images with nearly identical accuracy, within 1% of the malware-free model.

By selecting the best layer to work with in an already-trained model and then embedding the malware in that layer, the researchers were able to break the malware apart in a way that allowed it to bypass detection by standard antivirus engines.
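The story does not spell out the researchers' exact embedding procedure, but the general idea can be sketched in a few lines of Python. The example below is purely illustrative: it assumes the payload bytes overwrite the low-order bytes of a layer's float32 weights so that each value shifts only slightly, and the name embed_payload is hypothetical rather than anything from the paper.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the three low-order (mantissa) bytes of each
    float32 weight, leaving the high-order byte untouched so every value
    changes only slightly. Assumes a little-endian machine; illustration only."""
    w = np.ascontiguousarray(weights, dtype=np.float32).copy()
    raw = w.view(np.uint8).reshape(-1, 4)        # 4 raw bytes per float32 weight
    capacity = raw.shape[0] * 3                  # 3 usable bytes per weight
    if len(payload) > capacity:
        raise ValueError("payload is larger than this layer can hold")
    padded = payload.ljust(capacity, b"\x00")    # pad to fill the layer
    raw[:, :3] = np.frombuffer(padded, dtype=np.uint8).reshape(-1, 3)
    return w
```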

The new technique is a way to hide malware, not execute it. To actually run, the malware must be extracted from the poisoned model by another malicious program and then reassembled into its working form.
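Again purely as an illustration, a companion sketch of that extraction step could simply reverse the hypothetical embed_payload above; extract_payload is likewise an invented name, not the researchers' code.

```python
import numpy as np

def extract_payload(weights: np.ndarray, payload_len: int) -> bytes:
    """Recover payload_len bytes hidden by the embed_payload sketch above,
    reading back the three low-order bytes of each float32 weight in order."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).view(np.uint8).reshape(-1, 4)
    return raw[:, :3].tobytes()[:payload_len]

# Round-trip check on a toy "layer", using embed_payload from the earlier sketch:
# layer = np.random.rand(1000).astype(np.float32)
# stego = embed_payload(layer, b"example payload")
# assert extract_payload(stego, len(b"example payload")) == b"example payload"
```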

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui made the discovery.

For more information, read the original story at Ars Technica.


TND News Desk

Staff writer for Tech Newsday.

Jim Love

Jim is an author and podcast host with over 40 years in technology.
