Critical LangChain flaw exposes secrets across millions of AI agent deployments

December 29, 2025 - A critical security flaw has been found in LangChain, one of the most widely used frameworks for building AI agents, potentially exposing secrets across millions of production systems. According to a disclosure published this week by security firm Cyata, the vulnerability, dubbed “LangGrinch,” allows attackers to extract sensitive environment variables, including cloud credentials and API keys. It works by exploiting how LangChain handles serialized data.

LangChain, an open-source framework used to build chatbots, retrieval-augmented generation (RAG) systems and multi-step AI agents, relies heavily on serialization, the process of encoding data structures so they can be stored, transmitted and later reconstructed. According to Cyata, LangGrinch exploits a flaw in this process, allowing malicious instructions to be hidden inside data that appears legitimate.
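
To make that round trip concrete, here is a minimal sketch using the public dumps/loads helpers in langchain_core.load; the serialized output shown in the comment is illustrative.

```python
# Minimal sketch of LangChain serialization, assuming the public dumps/loads
# helpers in langchain_core.load. A message object is encoded to JSON for
# storage or transport, then reconstructed later.
from langchain_core.load import dumps, loads
from langchain_core.messages import HumanMessage

original = HumanMessage(content="Summarize this document.")

# Serialize: the object becomes JSON tagged with LangChain's internal marker
# key, which tells the loader which class to rebuild.
encoded = dumps(original)
print(encoded)  # e.g. {"lc": 1, "type": "constructor", "id": [...], "kwargs": {...}}

# Deserialize: the JSON is turned back into a HumanMessage instance.
restored = loads(encoded)
assert restored.content == original.content
```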

At the core of the issue is a serialization and deserialization injection bug in langchain-core’s built-in helper functions. An attacker can persuade an AI agent, through normal prompt interaction, to generate a specially crafted data structure containing LangChain’s internal marker key. Because that marker is not properly escaped during serialization, the structure can later be misinterpreted during deserialization as a trusted LangChain object rather than as user-supplied data.
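
Cyata has not published LangChain’s internal code, but the class of bug is easy to illustrate. The sketch below uses a deliberately simplified, hypothetical serializer and loader (not langchain-core’s actual implementation) to show how an unescaped marker key lets user data masquerade as a framework object.

```python
# Hypothetical, simplified sketch of the flaw class -- not langchain-core's
# real code. Plain user data is serialized without escaping the framework's
# marker key, so a payload that already carries that key is later treated as
# a trusted framework object instead of ordinary data.
import json

MARKER_KEY = "lc"  # the marker key used by LangChain's serialization format

def naive_serialize(user_value) -> str:
    # Vulnerable pattern: user-supplied data is emitted as-is, with no
    # escaping of marker keys smuggled into it.
    return json.dumps(user_value)

def naive_deserialize(blob: str):
    data = json.loads(blob)
    if isinstance(data, dict) and data.get(MARKER_KEY):
        # Trusted path: the loader assumes this dict describes one of its
        # own objects and reconstructs it accordingly.
        return f"reconstructed framework object of type {data.get('type')!r}"
    return data  # ordinary data path

# A structure an attacker could coax the model into emitting:
payload = {"lc": 1, "type": "constructor", "id": ["some", "trusted", "class"]}

blob = naive_serialize(payload)   # marker key survives unescaped
print(naive_deserialize(blob))    # interpreted as trusted, not as user data
```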

Once triggered, the vulnerability can lead to secret exfiltration, with attackers able to leak all environment variables via outbound HTTP requests. These variables often contain highly sensitive information such as database passwords, vector database credentials and large language model API keys. Cyata warned that in some scenarios, the issue could escalate further toward remote code execution.
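
The disclosure does not reproduce the exact payload, but LangChain’s serialization format includes a “secret” node type that the loader resolves to a live credential (from a secrets map or, in some configurations, the process environment). The hypothetical resolver below, which is not the framework’s actual loader, shows why an injected node of that shape is enough to pull an environment variable into data the agent can then send outward.

```python
# Hedged, hypothetical sketch -- not LangChain's real loader. It models why an
# injected "secret"-style node is dangerous: if the deserializer resolves such
# nodes from the process environment, the attacker-crafted entry becomes the
# live credential, which the agent can then include in an outbound request.
import os

def resolve_secret_node(node: dict) -> str | None:
    # Shape modeled on LangChain's serialized secret nodes:
    # {"lc": 1, "type": "secret", "id": ["ENV_VAR_NAME"]}
    if node.get("lc") and node.get("type") == "secret":
        env_name = (node.get("id") or [None])[0]
        return os.environ.get(env_name) if env_name else None
    return None

# Attacker-influenced data that survived serialization unescaped:
injected = {"lc": 1, "type": "secret", "id": ["DATABASE_PASSWORD"]}

leaked = resolve_secret_node(injected)
print("value now visible to the agent:", leaked)
```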

“What makes this finding unusual is that the vulnerability lives in the serialization path, not the deserialization path,” said Yarden Porat, a security researcher at Cyata who discovered the issue. Since agentic systems routinely serialize and reconstruct structured outputs generated by models, the attack surface can be reached through everyday operations rather than explicit file uploads or plugins.

Unlike many previous LangChain-related security issues, LangGrinch does not depend on third-party tools or integrations. Cyata said the flaw exists in langchain-core itself.

The maintainers of LangChain have released patches, with fixes available in versions 1.2.5 and 0.3.81. Cyata said the LangChain team moved quickly to address the problem and implemented additional hardening measures beyond the immediate fix.

Organizations running LangChain-based systems are being urged to update immediately and review how secrets are exposed to AI agents, particularly those that serialize and persist model-generated outputs. 
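
A quick way for teams to check where they stand is to compare the installed build against the patched releases named above. Here is a minimal sketch, assuming the relevant package is installed under the name “langchain-core” and that the advisory’s version numbers apply to it, using the standard importlib.metadata and packaging libraries.

```python
# Minimal version check, assuming the relevant package is "langchain-core" and
# that the patched releases are the ones cited in the advisory (0.3.81 / 1.2.5).
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

try:
    installed = Version(version("langchain-core"))
except PackageNotFoundError:
    print("langchain-core is not installed in this environment")
else:
    # The 0.3.x line needs at least 0.3.81; the 1.x line needs at least 1.2.5.
    needed = Version("0.3.81") if installed < Version("1.0") else Version("1.2.5")
    status = "patched" if installed >= needed else "vulnerable - upgrade required"
    print(f"langchain-core {installed}: {status}")
```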


Jim Love

Jim is an author and podcast host with over 40 years in technology.
