December 12, 2025 Google is rolling out fully managed MCP servers so AI agents can plug directly into services like Maps and BigQuery without custom connectors. The rollout comes as Anthropic donates MCP to the Linux Foundation, formalizing the protocol that leading AI companies are using to connect agents to real-world tools and data.
The managed servers are designed to replace the fragile systems developers currently use to link agents with external tools. Early offerings include MCP servers for Maps, BigQuery, Compute Engine and Kubernetes Engine, with real-time access governed through Cloud IAM permissions, Model Armor protections and audit logging.
Google says the servers are available at no additional cost to enterprise customers already paying for Google Cloud, and that support will expand to storage, databases, logging, monitoring and security services in the coming months. The managed model is intended to give agents dependable access to live information such as location data or analytics queries, reducing integration work for teams deploying agentic workflows.
According to Steren Giannini, product management director at Google Cloud, permission controls through Cloud IAM and protections from Model Armor give organizations clearer guardrails over what agents are allowed to do.
The launch lands just as MCP solidifies its position as the industry’s preferred standard. Anthropic’s donation of MCP to the Linux Foundation places the protocol under neutral governance and brings companies like Google, Microsoft, AWS, OpenAI and Cloudflare into a new Agentic AI Foundation focused on open standards. MCP has gained rapid adoption across the industry, enabling agents from different companies to authenticate, discover tools and execute tasks across services without custom APIs or one-off integrations.
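The "authenticate, discover tools and execute tasks" flow that MCP standardizes boils down to a small set of JSON-RPC 2.0 methods. A rough sketch of the three core messages a client sends (the tool name and arguments here are hypothetical, and real agents would use an MCP SDK rather than hand-building envelopes):

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the wire format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# 1. Handshake: the client announces a protocol revision and identifies itself.
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2025-06-18",  # a published MCP spec revision
    "capabilities": {},
    "clientInfo": {"name": "example-agent", "version": "0.1"},
})

# 2. Discovery: ask the server which tools it exposes, instead of
#    hard-coding a custom API integration per service.
list_tools = jsonrpc_request(2, "tools/list")

# 3. Execution: invoke a discovered tool by name with structured arguments.
#    "geocode_address" and its argument are illustrative, not a real
#    tool from Google's Maps MCP server.
call = jsonrpc_request(3, "tools/call", {
    "name": "geocode_address",
    "arguments": {"address": "1600 Amphitheatre Parkway"},
})

for msg in (init, list_tools, call):
    print(json.dumps(msg))
```

Because every server speaks this same method set, an agent that can drive one MCP server can drive any of them; a managed offering adds the transport, hosting, and access controls around it.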
With a shared standard now governed independently and adopted across the major AI labs, Google is positioning its cloud services to plug directly into that ecosystem, with MCP support planned for additional services beyond the initial set of servers.
As more enterprises experiment with agent workloads, standardized connections to tools and data could make it easier to deploy systems that rely on live information rather than model-only reasoning, and give AI developers a clearer path to building agents on consistent, real-time connections.
