Agentic AI Security Is Broken: Token Security on Identity, Intent & Guardrails for Autonomous Agents

Jim Love discusses how the rapid adoption of agentic AI is repeating a familiar industry pattern: shipping technology first and securing it later. He cites examples such as vulnerabilities in Anthropic's Model Context Protocol (MCP) and insecure open-source agent tools. He interviews Ido Shlomo, co-founder and CTO of Token Security, who argues that AI agents are fundamentally hard to secure because they are non-deterministic, have an effectively infinite input/output space, and often require broad permissions to be useful.

Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack (wired, wireless, and cellular) in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst

Shlomo proposes focusing security on access, identity, attribution, least privilege, and auditability rather than trying to filter prompts and outputs. He describes Token's "intent-based permission management" approach, which maps agents and sub-agents as non-human identities tied to their purpose and allowed actions.

The conversation covers real-world risks: developer tools like Claude Code running with extensive access, widespread over-provisioning of admin permissions and API keys, exposure of unencrypted local token files, and misconfigurations that leak data publicly. Shlomo recommends that organizations build governance processes for agents, including discovery and inventory, boundary setting, continuous monitoring, and secure decommissioning, and argues that AI is needed to help police AI. He also highlights emerging trends such as agent teams and multi-day autonomous tasks, and notes that Token Security is a top-10 finalist in the RSA Innovation Sandbox 2026, where it plans to present an intent-and-access-focused security model for AI agents.

00:00 Sponsor: Meter’s integrated networking stack
00:19 Why agentic AI security is breaking (MCP & open-source chaos)
02:53 Meet Token Security: practical guardrails for AI agents
04:57 Why you can’t just ban agents at work (shadow AI reality)
06:24 Tel Aviv’s cybersecurity pipeline: gaming, military, and startups
08:57 Why AI/agents are fundamentally hard to secure (new OS + ‘human spirit’)
13:44 Trust, autonomy, and permissions: managing the blast radius
18:17 Real-world exposure: Claude Code and the developer identity attack surface
20:16 A workable approach: treat agents as untrusted processes with identity + least privilege
22:33 Zero trust for agents: access ≠ permission to act
23:27 Token's "intent-based permission management" explained
25:29 Building the identity map: tracing what agents touch
26:52 The secret sauce: using AI to secure AI in real time
28:10 Real-world case: 1,500 agents and wildly over-provisioned access
30:57 Computer-use agents (CUA): exciting, personal… and terrifying
34:44 Secure-by-default and sandboxing: fixing 'always allow' dark patterns
35:36 What security teams should do now: inventory, boundaries, governance
37:59 What's next: agent teams and multi-day autonomous work
40:10 Tony Stark vision: agents that improve the human experience
41:02 RSA Innovation Sandbox: Token's big bet on intent + access
43:01 Wrap-up, audience Q&A, and sponsor message

Jim Love

Jim is an author and podcast host with over 40 years in technology.
