Google says Gemini targeted by large-scale AI cloning attempts

February 13, 2026

Google says its Gemini chatbot is facing a surge of attempts to reverse-engineer its technology, with some campaigns hitting the system tens of thousands of times in an apparent effort to copy how it works. The company disclosed on Feb. 12 that repeated probing, including one effort involving more than 100,000 prompts, appears aimed at extracting the model’s underlying logic.

In a new security report, Google described the activity as “distillation attacks,” a tactic where actors bombard an AI system with carefully crafted queries to infer how it reasons and generates answers. The company said the campaigns appear largely commercially motivated, with attackers likely seeking insights that could help train competing models or refine their own AI tools.

The technique, also known as model extraction, exploits a basic reality of modern AI: powerful systems are widely accessible through public interfaces. By systematically collecting outputs at scale, attackers may reconstruct approximations of the decision patterns behind a model, effectively learning from it without direct access to its code or training data.
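In general terms, distillation means training a smaller "student" model to imitate the outputs of a "teacher" it can only query. The sketch below is a toy illustration of that idea in PyTorch, with made-up stand-in models; it is not a description of the campaigns Google reported, and a real extraction attempt against a chatbot would gather text prompts and responses through a public API rather than tensors.

```python
# Minimal sketch of distillation: a small "student" learns to mimic a
# black-box "teacher" using only query-response pairs. Toy models only.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Black-box "teacher": it can be queried, but its weights are off-limits.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
for p in teacher.parameters():
    p.requires_grad_(False)  # no access to gradients or parameters

# Smaller "student" trained to approximate the teacher's behavior.
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(2000):
    queries = torch.randn(64, 16)  # crafted inputs sent to the black box
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(queries), dim=-1)  # observed outputs
    student_logits = student(queries)
    # Standard distillation loss: match the teacher's output distribution.
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final imitation loss: {loss.item():.4f}")
```

The defenses Google describes, detecting and limiting extraction attempts, work on the data-collection step of this loop: making it slow, costly, or conspicuous to harvest the large number of query-response pairs such training depends on.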

Google did not identify specific perpetrators but said the activity appears global. The company suggested private firms and researchers are the most likely sources, reflecting the intensifying race to build advanced AI systems. With billions of dollars invested in large language models, their internal workings are now among the most valuable intellectual property in tech.

Security analysts warn the implications extend beyond major platforms. As enterprises roll out custom AI models trained on proprietary data, similar probing techniques could expose sensitive knowledge embedded within those systems, from trading strategies to internal research or operational workflows.

Google framed Gemini as an early target largely because of its scale and visibility, warning that smaller AI deployments may soon face the same pressures. The company said it has mechanisms to detect and limit extraction attempts. At the same time, it acknowledged that fully preventing them is difficult while models remain publicly accessible.

The disclosure highlights a shifting front in AI security. Instead of hacking infrastructure, attackers are increasingly targeting the models themselves. The trend could complicate how companies protect competitive advantages in a field where innovation cycles are accelerating.


Mary Dada

Mary Dada is the associate editor for Tech Newsday, where she covers the latest innovations and happenings in the tech industry’s evolving landscape. Her writing ranges from analyses of emerging digital trends to the business side of innovation.

Jim Love

Jim is an author and podcast host with over 40 years in technology.
