February 13, 2026

Google says its Gemini chatbot is facing a surge of attempts to reverse-engineer its technology, with some campaigns hitting the system tens of thousands of times in an apparent effort to copy how it works. The company disclosed on Feb. 12 that repeated probing, including one effort involving more than 100,000 prompts, appears aimed at extracting the model's underlying logic.
In a new security report, Google described the activity as “distillation attacks,” a tactic in which actors bombard an AI system with carefully crafted queries to infer how it reasons and generates answers. The company said the campaigns appear largely commercially motivated, with attackers likely seeking insights that could help train competing models or refine their own AI tools.
The technique, also known as model extraction, exploits a basic reality of modern AI: powerful systems are widely accessible through public interfaces. By systematically collecting outputs at scale, attackers may reconstruct approximations of the decision patterns behind a model, effectively learning from it without direct access to the code or training data.
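To make that mechanism concrete, the sketch below imitates a model-extraction workflow in miniature using Python and scikit-learn. The "teacher" classifier stands in for a proprietary model reachable only through a query interface; the models, query counts, and variable names are illustrative assumptions, not details drawn from Google's report.

# Toy illustration of model extraction ("distillation"): a student model
# learns to mimic a teacher purely from the teacher's answers, without
# ever seeing its parameters or its original training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# The "proprietary" teacher, trained on data the attacker never sees.
X_private, y_private = make_classification(n_samples=2000, n_features=20, random_state=0)
teacher = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_private, y_private)

# The attacker sends a large batch of probe queries and records only the
# teacher's answers, which is all a public interface would return.
X_probe = rng.normal(size=(10000, 20))
y_probe = teacher.predict(X_probe)

# A student model is trained to reproduce those answers.
student = LogisticRegression(max_iter=1000).fit(X_probe, y_probe)

# How often does the student now agree with the teacher on fresh inputs?
X_fresh = rng.normal(size=(2000, 20))
agreement = accuracy_score(teacher.predict(X_fresh), student.predict(X_fresh))
print(f"student matches teacher on {agreement:.1%} of unseen queries")

Against a hosted chatbot, the same loop would use prompts and generated text in place of feature vectors, and the student would be a smaller language model fine-tuned on the collected pairs. The approach only works with a large volume of queries, which is why campaigns like the 100,000-prompt effort Google described stand out in traffic logs.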
Google did not identify specific perpetrators but said the activity appears global. The company suggested private firms and researchers are the most likely sources, reflecting the intensifying race to build advanced AI systems. With billions of dollars invested in large language models, their internal workings are now among the most valuable intellectual property in tech.
Security analysts warn the implications extend beyond major platforms. As enterprises roll out custom AI models trained on proprietary data, similar probing techniques could expose sensitive knowledge embedded within those systems, from trading strategies to internal research or operational workflows.
Google framed Gemini as an early target largely because of its scale and visibility, warning that smaller AI deployments may soon face the same pressures. The company said it has mechanisms to detect and limit extraction attempts. At the same time, it acknowledged that fully preventing them is difficult while models remain publicly accessible.
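Google has not detailed how its detection works. One generic building block for limiting extraction attempts, sketched below in Python, is per-client volume accounting over a sliding window; the class name, thresholds, and client identifier are assumptions made for illustration, not the company's actual defenses.

# Minimal sketch of a generic extraction-rate monitor: flag any caller whose
# query volume inside a sliding window looks automated rather than human.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600     # look-back window (assumed value)
FLAG_THRESHOLD = 5000     # queries per window that triggers throttling (assumed)

class ExtractionMonitor:
    def __init__(self):
        self._events = defaultdict(deque)   # client_id -> recent query timestamps

    def record_query(self, client_id, now=None):
        """Record one query; return True if the client should be throttled."""
        now = time.time() if now is None else now
        q = self._events[client_id]
        q.append(now)
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > FLAG_THRESHOLD

# Example: 6,000 queries from one caller in under an hour trips the flag.
monitor = ExtractionMonitor()
for i in range(6000):
    flagged = monitor.record_query("client-42", now=i * 0.5)
print("client-42 flagged:", flagged)

Checks of this kind illustrate the trade-off Google acknowledged: a patient attacker who spreads queries across accounts and over time can stay under any threshold loose enough to serve legitimate users, which is why the company says full prevention is difficult while models remain publicly accessible.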
The disclosure highlights a shifting front in AI security. Instead of hacking infrastructure, attackers are increasingly targeting the models themselves. The trend could complicate how companies protect competitive advantages in a field where innovation cycles are accelerating.
