Docker patches critical AI flaw that allowed code execution via container metadata

February 4, 2026 A now-patched security flaw in Docker’s built-in AI assistant exposed users to the risk of remote code execution and sensitive data theft, cybersecurity researchers disclosed this week. The vulnerability, dubbed DockerDash by Noma Labs, affected Ask Gordon, an AI assistant integrated into Docker Desktop and the Docker command-line interface. Docker fixed the issue with the release of version 4.50.0 in November 2025.

Researchers said the flaw allowed attackers to embed malicious instructions inside seemingly harmless metadata fields of a Docker image. When a user queried Ask Gordon about the image, the AI assistant could misinterpret those metadata labels as executable commands and trigger them without proper validation.

According to Noma Labs, the attack chain required no traditional exploit techniques. Instead, it relied on a breakdown of trust between the AI assistant, its middleware and the local execution environment. Ask Gordon parsed unverified image metadata and passed it to Docker’s Model Context Protocol (MCP) Gateway. The gateway then executed the instructions using the victim’s Docker privileges.

The result varied by platform. In cloud and CLI environments, the flaw could lead to critical-impact remote code execution. On desktop systems, it could enable large-scale data exfiltration, exposing details such as container configurations, mounted directories, installed tools and network topology.

At the core of the issue was how Ask Gordon handled context. The assistant treated metadata — typically descriptive information such as Dockerfile LABEL fields — as trusted input. Researchers described the weakness as a form of “meta-context injection,” where information meant to describe an object is instead interpreted as an instruction to act.

In a typical scenario outlined by Noma Labs, an attacker would publish a Docker image containing weaponized metadata. When a user asked Ask Gordon for information about the image, the assistant would read and forward the embedded instructions to the MCP Gateway. Because the gateway could not distinguish between informational metadata and authorized commands, it would execute the request through MCP tools without additional checks.

A related variant of the flaw targeted Ask Gordon’s read-only permissions on Docker Desktop, allowing attackers to harvest internal environment data using the same prompt injection technique.

Docker said version 4.50.0 also fixes a separate prompt injection issue previously identified by Pillar Security, which involved malicious instructions hidden in Docker Hub repository metadata.

Noma Labs warned that organizations should treat AI supply chain risk as an immediate concern, not a future one. The researchers said mitigating this new class of attacks will require zero-trust validation of all contextual data provided to AI models, even inputs that appear benign.



Top Stories

Related Articles

February 4, 2026 Web hosting provider HostPapa experienced a service outage early Tuesday morning that left customer websites and dashboards more...

February 4, 2026 Global markets were jolted on Feb. 3 as fears that artificial intelligence could upend the software industry more...

February 4, 2026 OpenAI has been looking beyond Nvidia for parts of its artificial intelligence infrastructure, according to people familiar more...

February 4, 2026 More than three million Fortinet devices have been exposed to a critical authentication-bypass vulnerability that is being more...

Jim Love

Jim is an author and podcast host with over 40 years in technology.

Share:
Facebook
Twitter
LinkedIn