Researchers Prove Malware Can Be Hidden Inside AI Models

July 26, 2021

Researchers have discovered a new method of slipping malware past automated detection tools by hiding it in a neural network.

To prove the validity of the technique, the researchers embedded 36.9 MiB of malware in a 178 MiB AlexNet model without significantly altering the function of the model itself.

The malware-embedded model classified images with accuracy within 1% of the malware-free model.

By selecting the best layer to work with in an already trained model, and then embedding the malware in that layer, the researchers were able to break the malware up in a way that allowed it to bypass detection by standard antivirus engines.

The new technique is a way to hide malware, not execute it. To actually execute the malware, it must be extracted from the poisoned model by another malicious program and then reassembled into its working form.
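The general idea of hiding bytes in a model's parameters can be sketched as follows. This is an illustrative example only, not the researchers' actual method: it stashes payload bytes in the least-significant mantissa byte of float32 weights, which perturbs each value by less than about 10⁻⁵ relative to its magnitude, so the model's behavior is nearly unchanged. The function names and the use of the standard `struct` module are this sketch's own choices.

```python
import struct

def embed(weights, payload):
    """Hide payload bytes in the lowest mantissa byte of float32 weights.

    Each payload byte overwrites the least-significant byte of one
    weight's IEEE 754 single-precision representation (little-endian),
    leaving the sign and exponent untouched.
    """
    assert len(payload) <= len(weights), "payload too large for this layer"
    out = list(weights)
    for i, b in enumerate(payload):
        raw = bytearray(struct.pack("<f", out[i]))
        raw[0] = b  # byte 0 is the lowest mantissa byte in little-endian order
        out[i] = struct.unpack("<f", bytes(raw))[0]
    return out

def extract(weights, n):
    """Recover n payload bytes from the modified weights."""
    return bytes(struct.pack("<f", w)[0] for w in weights[:n])
```

A second program holding only the model and the payload length can call `extract` to reassemble the hidden bytes, which mirrors the extract-then-execute step described above.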

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui made the discovery.

For more information, read the original story in Ars Technica.


TND News Desk

Staff writer for Tech Newsday.