The first month of 2026 delivered a clear warning for every enterprise deploying AI agents: identity security for autonomous systems isn’t “an AI problem” — it’s an IAM problem.
Two disclosures in particular (ServiceNow “BodySnatcher” and Microsoft “Connected Agents”) are useful because they highlight the same underlying failure mode:
When an agent can be impersonated, rebound to a different principal, or tricked into acting with broad permissions, you don’t have an “AI issue.” You have an identity boundary failure.
This post is a practical breakdown of what to change in your IAM design so agents aren’t your newest Tier-0 threat.
## Executive summary (what to do next)
If you only do four things:
- Give every agent its own identity (never share “agent-service-account” across agents).
- Stop using long-lived secrets for agents. Move to short-lived tokens via workload identity federation (OIDC), SPIFFE/SPIRE, or cloud-native identities.
- Put an authorization gate in front of tools, not just prompts. (Tool allowlists + least privilege + time bounds.)
- Log agent intent + tool calls (prompt hash / policy decision / tool invocation) so investigations aren’t blind.
## Why these disclosures matter (even if the details change)
Most “agent vulnerabilities” rhyme:
- Agents are glued together from LLMs + tools + identity credentials.
- The LLM is not deterministic.
- The tools are usually authenticated with powerful credentials.
- The authorization boundary is often implicit (“the agent knows what it should do”), not enforced.
So when a vuln lets an attacker:
- hijack an agent session,
- swap the identity context,
- inject instructions that cause privileged tool calls,
…it’s effectively the same outcome as stealing OAuth tokens or abusing an over-privileged integration.
Related Learn IAM reading:
- https://learn-iam.com/blog/oauth-tokens-are-the-new-keys
- https://learn-iam.com/topic/specifications/oauth-token-security-revocation-rotation-incident-response
## The agent identity stack (where things go wrong)
Think of an “agent” as three layers:
| Layer | What it is | Typical failure |
|---|---|---|
| Identity | Credential used to call tools/APIs | Static secrets; shared identities; no revocation story |
| Authorization | What calls are allowed | “Everything the API key can do” (no policy gate) |
| Orchestration | Prompt/tool chain | Prompt injection or untrusted data steering actions |
The critical realization: tool calls are the real privileged actions. The LLM is just the decision engine.
Learn IAM topic: https://learn-iam.com/topic/identity-for-ai/ai-tool-authorization
## What to change in IAM for agents (patterns that actually work)
### 1) Separate “agent identity” from “human identity”
Don’t let agents reuse human browser sessions or human OAuth grants by default.
Preferred model:
- Human authenticates (SSO, MFA, device posture)
- Human delegates to agent with explicit, scoped consent
- Agent acts with a separate principal
Learn IAM topic: https://learn-iam.com/topic/identity-for-ai/delegation-impersonation
#### Practical implementation
- Okta: Use OAuth client credentials or service apps for agent identities; keep admin scopes tightly controlled.
- Microsoft Entra ID: Use separate app registrations / managed identities for agents. Track owners + permissions.
Reference (general guidance):
- Use a dedicated workload/app identity per agent, with explicit ownership + least-privilege permissions.
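The delegation step above can be sketched as an OAuth 2.0 Token Exchange (RFC 8693) request: the human's token goes in as the subject, and a narrowly scoped token comes out for the agent's own principal. A minimal sketch; the audience, scopes, and client names are hypothetical, and support for this grant depends on your IdP.

```python
# Sketch: the form body for an OAuth 2.0 Token Exchange (RFC 8693) that
# turns a human's session into a narrowly scoped token for a separate
# agent principal. Audience, scopes, and client names are hypothetical.

def token_exchange_body(human_access_token: str) -> dict:
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": human_access_token,           # the delegating human
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": "ticket-tool-api",                 # one downstream tool, not "*"
        "scope": "tickets:read tickets:comment",       # explicit, scoped consent
    }

# POST this body to your IdP's token endpoint, authenticating as the
# agent's own client (its separate principal), e.g.:
#   requests.post(token_url, data=token_exchange_body(tok),
#                 auth=(agent_client_id, agent_client_secret))
```

The key property: the resulting token names the agent as the actor, so revoking or disabling the agent never touches the human's session.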
### 2) Prefer short-lived credentials (minutes), not long-lived secrets (months)
Agents are high-frequency, automated actors. That means:
- leaks happen (logs, repos, prompt context)
- abuse is fast
- blast radius is huge
A useful baseline:
| Credential type | Bad default | Better target |
|---|---|---|
| API keys / client secrets | 90 days–never | avoid entirely |
| Access tokens | 1–12 hours | 5–60 minutes |
| Refresh tokens | weeks/months | rotation + reuse detection |
If you’re on cloud:
- AWS: IRSA (EKS), IAM Roles Anywhere
- GCP: Workload Identity Federation
- Azure: Managed Identities + federation
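One way to enforce the ceiling from the table above is a guardrail that inspects a token's `iat`/`exp` claims and rejects anything valid longer than an hour. A minimal sketch, assuming standard JWT access tokens (claims only; signature verification is a separate step not shown here):

```python
import base64
import json

MAX_TTL_SECONDS = 60 * 60  # upper end of the 5-60 minute target above

def token_ttl_seconds(jwt_token: str) -> int:
    """Return exp - iat from a JWT's payload (no signature check)."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return int(claims["exp"]) - int(claims["iat"])

def assert_short_lived(jwt_token: str, max_ttl: int = MAX_TTL_SECONDS) -> None:
    """Refuse to hand a long-lived credential to an agent."""
    ttl = token_ttl_seconds(jwt_token)
    if ttl > max_ttl:
        raise ValueError(f"token lifetime {ttl}s exceeds {max_ttl}s ceiling")
```

Running this check where the agent receives its credentials turns the lifetime table from a policy document into an enforced invariant.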
### 3) Put policy enforcement in front of tools
Agents shouldn’t be able to call arbitrary tools with arbitrary parameters.
Minimum viable control:
- allowlist tools per agent
- allowlist actions per tool (read vs write vs admin)
- require step-up (human approval) for destructive actions
- apply time bounds (policy expires after task)
This is the same model as PAM — just applied to tool calls.
Learn IAM topic: https://learn-iam.com/topic/identity-for-ai/ai-agent-identity-and-access-controls
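A minimum viable gate can be a few dozen lines sitting between the orchestrator and the tool layer. The agent name, tools, and action tiers below are illustrative, not taken from any specific product:

```python
import time

# Illustrative per-agent policy: tool allowlists, action tiers, and a
# time bound that expires with the task. All names are hypothetical.
POLICIES = {
    "ticket-triage-agent": {
        "tools": {
            "jira": {"read", "write", "delete"},
            "slack": {"read"},
        },
        "expires_at": time.time() + 3600,   # policy dies after the task window
    },
}
DESTRUCTIVE_ACTIONS = {"delete", "admin"}   # require human step-up

def authorize(agent: str, tool: str, action: str,
              human_approved: bool = False) -> tuple:
    """Decide a tool call the way PAM decides a privileged session."""
    policy = POLICIES.get(agent)
    if policy is None:
        return False, "unknown agent: no policy"
    if time.time() > policy["expires_at"]:
        return False, "policy expired (time bound)"
    if action not in policy["tools"].get(tool, set()):
        return False, f"'{action}' on '{tool}' is not allowlisted"
    if action in DESTRUCTIVE_ACTIONS and not human_approved:
        return False, "destructive action requires step-up approval"
    return True, "allowed"
```

Every tool invocation goes through `authorize(...)` before it executes; the deny reason is exactly what you want feeding the audit log in the next section.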
### 4) Audit: capture “why” not just “what”
Classic audit logs tell you:
- who did what, when
Agent investigations also need:
- which prompt / instruction triggered the action (hash is fine)
- which tools were invoked in what order
- which policy decision allowed the action
If you can’t answer “why did the agent do that?”, you can’t prove impact.
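Concretely, each tool invocation can emit one structured record that links the triggering instruction (as a hash), the policy decision, and the call itself. A sketch with hypothetical field names:

```python
import hashlib
import json
import time

def audit_record(agent: str, prompt: str, tool: str, action: str,
                 decision: str, reason: str) -> str:
    """One JSON line per tool call: the 'what' plus the 'why'."""
    record = {
        "ts": time.time(),
        "agent": agent,
        # Hash, not raw text: enough to correlate an investigation
        # without storing sensitive prompt content in the log.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "tool": tool,
        "action": action,
        "policy_decision": decision,   # "allow" / "deny"
        "policy_reason": reason,       # which rule produced the decision
    }
    return json.dumps(record)
```

With this in place, “why did the agent do that?” becomes a log query: find the tool call, follow the prompt hash and policy reason back to the instruction that triggered it.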
## Incident response: what you revoke when an agent is compromised
When a human is compromised, you reset the password and revoke sessions.
When an agent is compromised, you need:
- Revoke the agent’s tokens/sessions
- Disable the agent’s principal (app/service account/managed identity)
- Rotate downstream credentials (if any)
- Pull logs of tool invocations (not just logins)
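Those four steps are order-sensitive (revoke before you rotate; collect logs last), so it helps to encode them as a runbook that stops on the first failure and reports exactly how far containment got. The step callables below stand in for real IdP/cloud API calls and are hypothetical:

```python
def run_containment(agent_id: str, steps) -> tuple:
    """Run containment steps in order; stop on the first failure so the
    responder knows the exact state. Returns (completed_steps, status)."""
    completed = []
    for name, fn in steps:
        try:
            fn(agent_id)
        except Exception as exc:
            return completed, f"stopped at '{name}': {exc}"
        completed.append(name)
    return completed, "contained"

# Example wiring (each function is a placeholder for a real API call):
# steps = [
#     ("revoke_tokens",     idp.revoke_all_tokens),
#     ("disable_principal", idp.disable_app),
#     ("rotate_downstream", vault.rotate_credentials),
#     ("pull_tool_logs",    siem.export_tool_invocations),
# ]
```

A partial result like `(["revoke_tokens"], "stopped at 'disable_principal': …")` tells the responder what is already contained and what still needs manual action.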
## Bottom line
Agentic systems move fast. Your identity controls must move faster.
The lesson from BodySnatcher/Connected Agents isn’t “AI is scary.” It’s:
If you don’t design identity boundaries for agents, you will accidentally build privileged automation you can’t govern.