New tool protects open source AI from malware and code compromise

In the digital age, a new kind of Trojan horse has emerged in the form of AI models laced with malicious code. The AI community got a jolt from Protect AI’s revelation that a staggering 3,354 models on Hugging Face, a go-to AI model depot, contained potential malware or compromised code.

Worse, it also appeared that Hugging Face’s security scans missed the threats in a third of these compromised models.

In response, Protect AI has developed a scanner tailored to detect malware and compromised code in open source AI models.

Open source AI models are gaining in popularity given the costs associated with building and training a proprietary model.

This has made platforms like Hugging Face incredibly popular but, if Protect AI's numbers are correct, it has also made them a potential source of compromised AI code.

Protect AI’s scanning software is one potential tool to detect these issues and ensure the safety of open source AI models.
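The article does not describe how Protect AI's scanner works internally, but one widely known risk it addresses is that many open source model weights are distributed as Python pickle files, which can execute arbitrary code when loaded. As a minimal, hypothetical sketch of how a scanner can flag such payloads (not Protect AI's actual method), the snippet below walks a pickle's opcode stream with the standard library's `pickletools` and reports imports of dangerous modules, without ever unpickling the file:

```python
import io
import pickle
import pickletools

# Importing any of these from inside a pickle is a red flag: the pickle
# VM can call their functions during load (os.system, builtins.exec, ...).
# This allowlist-of-bad-modules is illustrative, not exhaustive.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "sys", "builtins"}

# Opcodes that push a text string onto the pickle VM stack.
_STRING_OPS = {"SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE"}

def scan_pickle(data: bytes) -> list[str]:
    """Return suspicious module names a pickle imports, WITHOUT loading it.

    pickletools.genops only parses the opcode stream, so any malicious
    payload is never executed during the scan.
    """
    findings, strings = [], []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in _STRING_OPS:
            strings.append(str(arg))
        elif opcode.name == "GLOBAL":
            # Protocols 0-3: arg is the space-joined "module name" pair.
            module = str(arg).split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(module)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+: module and attribute were pushed as strings.
            module = strings[-2]
            if module in SUSPICIOUS_MODULES:
                findings.append(module)
    return findings

class Evil:
    # __reduce__ lets a pickle smuggle in an arbitrary callable; here it
    # is a harmless print, but os.system would work exactly the same way.
    def __reduce__(self):
        return (print, ("compromised",))

# Plain data is clean; the __reduce__ payload is caught before loading.
assert scan_pickle(pickle.dumps({"weights": [1, 2, 3]})) == []
assert scan_pickle(pickle.dumps(Evil())) == ["builtins"]
```

Static opcode scanning like this is one reason the ecosystem has also been moving toward formats such as safetensors, which store raw tensor data and cannot carry executable payloads at all.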

How will Protect AI keep up to date on threats? The company has acquired Huntr, a bug bounty program aimed at AI models, which it hopes will provide continuing insight into new threats as they evolve.

Sources include: Axios


Jim Love

Jim is an author and podcast host with over 40 years in technology.