The first month of 2026 has delivered a stark warning to every enterprise deploying AI agents: identity security for autonomous systems isn't just important—it's existential. Two major vulnerability disclosures, ServiceNow's "BodySnatcher" and Microsoft's Connected Agents issue, have exposed fundamental weaknesses in how organizations authenticate and authorize AI agents. Meanwhile, new research shows that most enterprises have visibility into only 25% of their non-human identities—and that number is projected to drop to 12% by year's end.
If you're running AI agents in production, this is your wake-up call.
The Vulnerabilities That Changed Everything
BodySnatcher: The Most Severe AI-Driven Vulnerability to Date
Earlier this month, AppOmni Labs researcher Aaron Costello disclosed what he calls "the most severe AI-driven vulnerability uncovered to date." Named BodySnatcher, this vulnerability in ServiceNow's platform allowed unauthenticated attackers to impersonate administrators and weaponize AI agents—requiring nothing more than a target's email address.
Here's how Costello described the impact:
"Imagine an unauthenticated attacker who has never logged into your ServiceNow instance and has no credentials, and is sitting halfway across the globe. With only a target's email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges."
The vulnerability wasn't isolated. It built upon previous research into ServiceNow's Agent-to-Agent discovery mechanism, demonstrating classic lateral movement risk—attackers could trick AI agents into recruiting more powerful AI agents to fulfill malicious tasks.
ServiceNow has since patched the vulnerability, with cloud (SaaS) customers receiving automatic updates in October 2025 and self-hosted customers urged to apply security updates immediately. But the pattern exposed a deeper problem: AI agents are being deployed with capabilities that far exceed traditional security models.
Microsoft's Connected Agents: Feature or Vulnerability?
The second major disclosure involves Microsoft's Copilot Studio, where researchers at Zenity Labs discovered that the "Connected Agents" feature—enabled by default on all new agents—creates significant lateral movement opportunities.
Connected Agents allows any agent, whether or not it is registered in Microsoft Entra's Agent Registry, to connect laterally to Copilot Studio agents and use their knowledge and capabilities. Attackers can create malicious agents that connect to legitimate, privileged agents with email-sending capabilities or access to sensitive business data.
Microsoft's position? It's a feature, not a bug. "Connected Agents enable interoperability between AI agents and enterprise workflows," a Microsoft spokesperson explained. "Turning them off universally would break core scenarios for customers who rely on agent collaboration."
The recommended mitigation: "For any agent that uses unauthenticated tools or accesses sensitive knowledge sources, disable the Connected Agents feature before publishing."
This represents a fundamental tension in agentic AI: the very features that make agents useful—their ability to collaborate, chain actions, and operate autonomously—are precisely what makes them dangerous when compromised.
The Identity Crisis Hiding in Plain Sight
These vulnerabilities didn't emerge in a vacuum. They're symptoms of a much larger problem: enterprises have spent decades accumulating non-human identities (NHIs) without adequate governance, and AI agents are accelerating this crisis at machine speed.
The Numbers Are Staggering
Consider the scale of the problem:
| Metric | Current State | Projected (End of 2026) |
|---|---|---|
| NHI-to-Human Identity Ratio | 80:1 to 100:1 | 200:1 to 500:1 |
| Total Enterprise NHIs | 8-10 million | 20-50 million |
| CISO Visibility into NHIs | 25% | 12% |
| NHIs Managed by IAM | 44% | Unknown (declining) |
As Nik Kale, principal engineer at Cisco and member of the Coalition for Secure AI, puts it: "Whether it's 200:1 or 500:1, if IAM only manages 44% of them, the attack surface is already unmanageable."
The problem compounds rapidly with agentic AI. Agents spawn subagents, create credentials dynamically, and establish agent-to-agent authentication chains. One agent deployment can generate dozens of new machine identities—most of which exist outside traditional visibility and governance frameworks.
Why Traditional IAM Fails for Agents
AI agents don't behave like users, and they don't fit neatly into service-account models either. They:
- Run continuously without human intervention
- Act on behalf of different users within the same session
- Touch multiple tools and systems in a single workflow
- Make decisions at machine speed with no pause for reflection
- Chain actions in ways that obscure the original intent
Traditional identity governance assumes human-like patterns: joiner-mover-leaver lifecycles, approval workflows, periodic access reviews. None of these map well to agents that operate continuously and propagate actions automatically.
As Sanchit Vir Gogia, chief analyst at Greyhound Research, observes: "The most dangerous assumption in enterprise security today is that valid identity implies safe behavior. In machine-driven environments, credentials are often correct and activity is authorized, yet outcomes are harmful."
The Board-Level Imperative
If you needed proof that NHI governance has become a board-level concern, recent regulatory and market developments provide it:
SEC Disclosure Requirements: U.S. public companies must now disclose material cybersecurity incidents within four business days of determining materiality, plus annual disclosures about governance and board oversight.
Audit Committee Priority: According to Deloitte's Audit Committee Practices reporting, 50% of audit committees identify cybersecurity as their number one focus area, with 62% having primary oversight of cybersecurity risk.
Strategic Investment: BDO's 2025 Board Survey found that 63% of directors plan to increase strategic investment in cybersecurity this year.
The question boards should be asking—and CISOs should be ready to answer:
"How are we governing non-human identities and their access, and what is our confidence in the inventory?"
This single question ties to everything boards care about: risk appetite (unknown access creates unknowable exposure), continuity (fragile access breaks operations during change), accountability (no owner means no control), cost (manual access work is measurable operational drag), and crisis response (containment speed depends on visibility and ownership).
Identity-Centric Controls for Agentic AI
The solution isn't to abandon AI agents—the productivity benefits are too significant, and competitive pressure makes that impractical. Instead, organizations must implement identity-centric controls that match the speed and autonomy of AI.
Core Principles
1. Every Agent Must Have a Distinct, Managed Identity
Just as humans have credentials, AI agents need their own identity lifecycle management. This includes:
- Unique identities for each agent (not shared service accounts)
- Registration in a central agent registry
- Ownership assignment to a human or team
- Lifecycle management (creation, modification, decommissioning)
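As a minimal sketch, a central agent registry implementing these four requirements could look like the following. All names and fields here are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
import uuid


class LifecycleState(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"
    DECOMMISSIONED = "decommissioned"


@dataclass
class AgentIdentity:
    name: str
    owner: str  # the human or team accountable for this agent
    agent_id: str = field(default_factory=lambda: f"agent-{uuid.uuid4()}")
    state: LifecycleState = LifecycleState.ACTIVE
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class AgentRegistry:
    """Central inventory: one distinct identity per agent, never a shared account."""

    def __init__(self):
        self._agents: dict[str, AgentIdentity] = {}

    def register(self, name: str, owner: str) -> AgentIdentity:
        identity = AgentIdentity(name=name, owner=owner)
        self._agents[identity.agent_id] = identity
        return identity

    def decommission(self, agent_id: str) -> None:
        # Mark rather than delete, so audit trails survive agent termination
        self._agents[agent_id].state = LifecycleState.DECOMMISSIONED

    def active_agents(self) -> list[AgentIdentity]:
        return [a for a in self._agents.values() if a.state is LifecycleState.ACTIVE]
```

The key design choice is that decommissioning changes state instead of deleting the record, so the identity remains resolvable when reconstructing past incidents.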
Microsoft's Entra Agent ID feature represents one approach, automatically registering agents built with Copilot Studio or Azure AI Foundry in an Agent Registry. Okta, Ping Identity, and the OpenID Foundation are all developing similar standards.
2. Just-in-Time Access Over Standing Privileges
Agents should not hold persistent access to sensitive systems. Instead:
- Grant access only when needed for a specific task
- Automatically revoke access when the task completes
- Log every access request with full context
- Rate-limit actions to prevent runaway operations
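A just-in-time broker combining these four behaviors might be sketched as follows. The class, its defaults, and the token format are all hypothetical, chosen to keep the example self-contained:

```python
import time
import uuid


class JITAccessBroker:
    """Issue short-lived, task-scoped grants instead of standing privileges."""

    def __init__(self, ttl_seconds: float = 300.0, max_grants_per_minute: int = 10):
        self.ttl = ttl_seconds
        self.max_per_minute = max_grants_per_minute
        self._grants: dict[str, dict] = {}
        self._recent: list[float] = []  # grant timestamps, for rate limiting
        self.audit_log: list[dict] = []

    def request_access(self, agent_id: str, resource: str, task: str) -> str:
        now = time.monotonic()
        # Rate-limit grants to catch runaway agent loops
        self._recent = [t for t in self._recent if now - t < 60.0]
        if len(self._recent) >= self.max_per_minute:
            raise PermissionError("rate limit exceeded: possible runaway agent")
        self._recent.append(now)

        token = str(uuid.uuid4())
        self._grants[token] = {"agent": agent_id, "resource": resource,
                               "expires": now + self.ttl}
        # Log every access request with full context
        self.audit_log.append({"agent": agent_id, "resource": resource,
                               "task": task, "granted_at": now})
        return token

    def is_valid(self, token: str) -> bool:
        grant = self._grants.get(token)
        return grant is not None and time.monotonic() < grant["expires"]

    def revoke(self, token: str) -> None:
        self._grants.pop(token, None)  # called when the task completes
```

In production the token would be a signed credential issued by your identity provider rather than a UUID, but the lifecycle (grant, expire, revoke, log) is the part that matters.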
3. Least Privilege by Default
Design agents with the minimum permissions required for their function:
- Segment agents by function and sensitivity level
- Prevent agents from accessing systems outside their designated scope
- Review and prune permissions regularly
- Treat any agent with broad access as high-risk
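In code, least privilege reduces to a deny-by-default scope check plus a periodic review of who holds broad access. The agent names and permission strings below are invented for illustration:

```python
# Scopes define the minimum permissions each agent needs for its function
AGENT_SCOPES: dict[str, set[str]] = {
    "invoice-bot": {"billing.read", "billing.write"},
    "support-triage": {"tickets.read"},
}


def authorize(agent: str, permission: str) -> bool:
    """Deny by default: an agent may only use permissions in its own scope."""
    return permission in AGENT_SCOPES.get(agent, set())


def high_risk_agents(threshold: int = 5) -> list[str]:
    # Flag any agent with broad access for review and pruning
    return [agent for agent, scopes in AGENT_SCOPES.items()
            if len(scopes) >= threshold]
```

An unknown agent gets an empty scope and is denied everything, which is exactly the failure mode you want when a shadow agent shows up.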
4. Full Traceability of Agent Actions
When something goes wrong, you need to reconstruct exactly what happened:
- Log every action with identity, timestamp, and context
- Trace chains of agent-to-agent interactions
- Maintain audit trails that survive agent termination
- Enable real-time alerting on anomalous behavior
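One way to make agent-to-agent chains reconstructable is to propagate a shared trace ID and link each event to the event that triggered it, in the style of distributed tracing. This is a minimal sketch under that assumption, not a reference to any particular logging product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid


@dataclass
class AuditEvent:
    agent_id: str
    action: str
    trace_id: str                       # shared across one agent-to-agent chain
    parent_event: Optional[str] = None  # links this event to its trigger
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class AuditTrail:
    """Append-only log that outlives the agents that wrote to it."""

    def __init__(self):
        self._events: list[AuditEvent] = []

    def record(self, agent_id: str, action: str, trace_id: str,
               parent_event: Optional[str] = None) -> AuditEvent:
        event = AuditEvent(agent_id, action, trace_id, parent_event)
        self._events.append(event)
        return event

    def chain(self, trace_id: str) -> list[AuditEvent]:
        """Reconstruct a full agent-to-agent chain, in recorded order."""
        return [e for e in self._events if e.trace_id == trace_id]
```

Because each event carries both the trace ID and its parent event ID, an investigator can answer "which agent asked this agent to act, and on whose behalf" after the fact.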
5. Agent-to-Agent Communication Controls
The BodySnatcher and Connected Agents vulnerabilities highlight the risk of uncontrolled agent collaboration:
- Explicitly authorize agent-to-agent connections
- Disable inter-agent communication by default for sensitive agents
- Monitor and log all agent-to-agent interactions
- Implement mutual authentication between agents
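These four controls can be sketched as a small broker that sits between agents: connections are denied unless an explicit (caller, callee) pair has been authorized, the caller must prove its identity, and every attempt is logged. The HMAC-based signing here stands in for whatever mutual authentication mechanism (mTLS, signed tokens) your platform actually provides:

```python
import hashlib
import hmac


class AgentMesh:
    """Agent-to-agent connections: deny by default, explicitly authorized, logged."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self._allowed: set[tuple[str, str]] = set()  # authorized (caller, callee) pairs
        self.interaction_log: list[tuple[str, str, bool]] = []

    def authorize_pair(self, caller: str, callee: str) -> None:
        self._allowed.add((caller, callee))

    def sign(self, agent_id: str) -> str:
        # Caller proves possession of the mesh secret (stand-in for mutual auth)
        return hmac.new(self._secret, agent_id.encode(), hashlib.sha256).hexdigest()

    def connect(self, caller: str, callee: str, caller_sig: str) -> bool:
        authentic = hmac.compare_digest(caller_sig, self.sign(caller))
        allowed = (caller, callee) in self._allowed
        ok = authentic and allowed
        # Monitor and log every attempt, successful or not
        self.interaction_log.append((caller, callee, ok))
        return ok
```

Note that failed attempts are logged too; in the BodySnatcher pattern, the interesting signal is often an agent trying to recruit a more privileged agent it has no business talking to.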
AI Agent Authentication Platform Comparison
Several platforms have emerged to address agent authentication challenges. Here's how the leading options compare:
| Platform | Best For | Authentication Model | Key Strength | Limitation |
|---|---|---|---|---|
| Composio | Multi-tool agent workflows | Centralized OAuth + token management | 500+ integrations, managed auth layer | Requires technical expertise |
| Microsoft Entra Agent ID | Microsoft ecosystem agents | Registry-based identity + managed credentials | Automatic registration for MS-built agents | Limited to Microsoft tooling |
| Merge Agent Handler | Enterprise governance | User-authorized, scoped access | Strong audit trails, compliance focus | Less flexible for custom scenarios |
| Arcade | High-risk agent actions | Action-level authorization | Execution-time verification | Smaller connector ecosystem |
| Nango | Teams with existing stacks | Clean OAuth and token handling | Plugs into existing architecture | Not a full governance solution |
Choosing the Right Platform
Consider these factors when selecting an agent authentication platform:
For broad SaaS automation: Composio offers the widest integration coverage with centralized authentication management—ideal for agents that need to operate across many tools.
For Microsoft-centric environments: Entra Agent ID provides native integration with Copilot Studio and Azure AI Foundry, with automatic registry enrollment.
For compliance-heavy industries: Merge Agent Handler emphasizes governance, auditability, and standardized access patterns—critical for regulated environments.
For high-stakes operations: Arcade's action-level authorization ensures permissions are verified at execution time, not just at setup.
Implementation Roadmap: From Crisis to Control
Moving from today's visibility crisis to proper agent governance requires a phased approach. Trying to fix everything at once is neither practical nor cost-effective.
Phase 1: Containment (Weeks 1-4)
Objective: Stop the bleeding without disrupting operations
- Inventory all currently deployed AI agents (even incomplete)
- Identify agents with access to sensitive data or privileged actions
- Disable agent-to-agent communication for high-sensitivity agents
- Implement emergency access revocation procedures
- Assign ownership to every known agent
Key Deliverable: A "terrifying spreadsheet" of agent identities with ownership and risk classification
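Even the "terrifying spreadsheet" can be generated from a few fields per agent. A minimal sketch, with an invented risk rubric (sensitive access or missing ownership is high risk; open agent-to-agent channels bump the rating) that you would replace with your own classification:

```python
import csv
import io

# Illustrative Phase 1 inventory rows; a real one comes from discovery tooling
AGENTS = [
    {"name": "hr-onboard-bot", "owner": "",        "sensitive_access": True,  "a2a_enabled": True},
    {"name": "faq-bot",        "owner": "it-team", "sensitive_access": False, "a2a_enabled": True},
]


def classify(agent: dict) -> str:
    # High risk: sensitive access or no assigned owner; medium: open A2A channels
    if agent["sensitive_access"] or not agent["owner"]:
        return "high"
    return "medium" if agent["a2a_enabled"] else "low"


def inventory_csv(agents: list[dict]) -> str:
    """Emit the ownership/risk spreadsheet as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "owner", "risk"])
    for agent in agents:
        writer.writerow([agent["name"], agent["owner"] or "UNASSIGNED",
                         classify(agent)])
    return buf.getvalue()
```

Any row showing UNASSIGNED is itself a Phase 1 action item: ownership assignment precedes every other control.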
Phase 2: Governance Foundation (Months 2-3)
Objective: Establish policies and tooling for new agent deployments
- Define agent identity standards (naming, registration, lifecycle)
- Select and implement agent authentication platform
- Create approval workflows for new agent deployments
- Establish monitoring and alerting for agent behavior
- Document agent access policies by sensitivity tier
Key Deliverable: Governance framework that applies to all new agents going forward
Phase 3: Visibility Improvement (Months 4-6)
Objective: Extend monitoring to legacy and shadow agents
- Deploy discovery tools across cloud consoles, repos, and config files
- Integrate agent activity logs into SIEM/SOAR platforms
- Create dashboards for agent inventory and activity tracking
- Implement regular access reviews for agent permissions
- Begin decommissioning orphaned or unnecessary agents
Key Deliverable: 50%+ visibility into total agent population
Phase 4: Continuous Improvement (Ongoing)
Objective: Maintain governance as agent deployments scale
- Automate agent lifecycle management
- Implement just-in-time access for all new agent functions
- Regular penetration testing targeting agent vulnerabilities
- Quarterly governance reviews with board reporting
- Incident response playbooks specific to agent compromise
Key Deliverable: Sustainable governance model that scales with agent growth
Security Checklist for AI Agents
Use this checklist to evaluate your current agent security posture:
Identity & Authentication
- Each agent has a unique, managed identity
- Agents are registered in a central inventory with ownership
- Service accounts are not shared across multiple agents
- Agent credentials are rotated on a defined schedule
- Authentication uses phishing-resistant methods where possible
Access Control
- Agents operate under least privilege principles
- Standing privileges are minimized or eliminated
- Just-in-time access is implemented for sensitive operations
- Agent-to-agent communication is explicitly authorized
- Rate limits prevent runaway agent operations
Monitoring & Response
- All agent actions are logged with full context
- Agent-to-agent interaction chains are traceable
- Anomaly detection is enabled for agent behavior
- Incident response procedures address agent compromise
- Regular access reviews include agent permissions
Governance & Compliance
- Agent deployment requires approval workflow
- Ownership is assigned to every agent
- Decommissioning procedures exist for retired agents
- Board-level reporting includes agent risk metrics
- Regulatory requirements are mapped to agent controls
What's Next: The Race Between Capability and Control
The disclosures of BodySnatcher and Connected Agents vulnerabilities aren't the end of AI agent security challenges—they're the beginning. As agents become more capable and more autonomous, the attack surface will continue to expand.
Google's Mandiant team warns that 2026 will see the "shadow AI problem escalate into a critical shadow agent challenge," with employees independently deploying autonomous agents regardless of corporate approval. This creates "invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft."
The organizations that will navigate this successfully are those treating identity as the control plane for AI operations—not an afterthought bolted on after deployment, but a foundational requirement built into every agent from day one.
The math is simple: agent identities are growing faster than discovery capabilities. You cannot govern what you cannot see, and you cannot secure what you cannot govern. The only viable strategy is containment of legacy identity chaos combined with strict governance for every new agent deployment.
As CyberArk's Ariel Pisetzky notes: "No environment is perfect. Resilience comes from knowing who or what is acting at any given moment and having the controls to respond without stopping the business."
The question isn't whether your agents will be targeted—it's whether you'll know when they are.
Further Reading
For deeper exploration of these topics, see our Learn IAM resources: