2026-02-04

AI Agent Identity: Why BodySnatcher and Connected Agents Vulnerabilities Should Change How You Think About Agentic Security

Two major vulnerability disclosures—ServiceNow's BodySnatcher and Microsoft's Connected Agents—expose fundamental weaknesses in AI agent authentication. Organizations must implement identity-centric controls that match the speed and autonomy of agents.

The first month of 2026 delivered a clear warning for every enterprise deploying AI agents: identity security for autonomous systems isn’t “an AI problem” — it’s an IAM problem.

Two disclosures in particular (ServiceNow “BodySnatcher” and Microsoft “Connected Agents”) are useful because they highlight the same underlying failure mode:

When an agent can be impersonated, rebound to a different principal, or tricked into acting with broad permissions, you don’t have an “AI issue.” You have an identity boundary failure.

This post is a practical breakdown of what to change in your IAM design so agents aren’t your newest Tier-0 threat.


Executive summary (what to do next)

If you only do four things:

  1. Give every agent its own identity (never share “agent-service-account” across agents).
  2. Stop using long-lived secrets for agents. Move to short-lived tokens via workload identity federation (OIDC), SPIFFE/SPIRE, or cloud-native identities.
  3. Put an authorization gate in front of tools, not just prompts. (Tool allowlists + least privilege + time bounds.)
  4. Log agent intent + tool calls (prompt hash / policy decision / tool invocation) so investigations aren’t blind.

Why these disclosures matter (even if the details change)

Most “agent vulnerabilities” rhyme:

  • Agents are glued together from LLMs + tools + identity credentials.
  • The LLM is not deterministic.
  • The tools are usually authenticated with powerful credentials.
  • The authorization boundary is often implicit (“the agent knows what it should do”), not enforced.

So when a vuln lets an attacker:

  • hijack an agent session,
  • swap the identity context,
  • inject instructions that cause privileged tool calls,

…it’s effectively the same outcome as stealing OAuth tokens or abusing an over-privileged integration.

The agent identity stack (where things go wrong)

Think of an “agent” as three layers:

| Layer | What it is | Typical failure |
| --- | --- | --- |
| Identity | Credential used to call tools/APIs | Static secrets; shared identities; no revocation story |
| Authorization | What calls are allowed | "Everything the API key can do" (no policy gate) |
| Orchestration | Prompt/tool chain | Prompt injection or untrusted data steering actions |

The critical realization: tool calls are the real privileged actions. The LLM is just the decision engine.

Learn IAM topic: https://learn-iam.com/topic/identity-for-ai/ai-tool-authorization


What to change in IAM for agents (patterns that actually work)

1) Separate “agent identity” from “human identity”

Don’t let agents reuse human browser sessions or human OAuth grants by default.

Preferred model:

  • Human authenticates (SSO, MFA, device posture)
  • Human delegates to agent with explicit, scoped consent
  • Agent acts with a separate principal

Learn IAM topic: https://learn-iam.com/topic/identity-for-ai/delegation-impersonation
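One concrete way to implement "human delegates to agent with scoped consent" is OAuth 2.0 Token Exchange (RFC 8693): the human's token becomes the `subject_token`, and the agent presents its own credential as the `actor_token`, so the resulting token records delegation rather than impersonation. A minimal sketch of building the request (the tokens and scope below are hypothetical placeholders):

```python
# Sketch: building an RFC 8693 token-exchange request so an agent
# acts *for* a human (subject) while keeping its own identity (actor).
# Token values and scope names are hypothetical placeholders.

def build_token_exchange_request(human_access_token: str,
                                 agent_client_assertion: str,
                                 scope: str) -> dict:
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The delegating human's token:
        "subject_token": human_access_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # The agent's own credential, so the AS can record delegation:
        "actor_token": agent_client_assertion,
        "actor_token_type": "urn:ietf:params:oauth:token-type:jwt",
        # Request only what this task needs:
        "scope": scope,
    }

params = build_token_exchange_request("eyJ...human", "eyJ...agent", "tickets:read")
# POST `params` to your authorization server's token endpoint.
```

The authorization server can then place the agent in the `act` claim of the issued token, which gives audit logs a clean "agent X acting for human Y" record.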

Practical implementation

  • Okta: Use OAuth client credentials or service apps for agent identities; keep admin scopes tightly controlled.
  • Microsoft Entra ID: Use separate app registrations / managed identities for agents. Track owners + permissions.

Reference (general guidance):

  • Use a dedicated workload/app identity per agent, with explicit ownership + least-privilege permissions.
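To make "one identity per agent, with explicit ownership and least privilege" enforceable rather than aspirational, it helps to keep an internal registry of agent principals. A small sketch, assuming illustrative field names (not any vendor's schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """One record per agent principal (app registration / service app)."""
    agent_id: str          # maps to the IdP client_id / app id
    owner: str             # accountable human or team
    scopes: frozenset      # least-privilege grants, reviewed periodically
    environment: str       # e.g. "prod" or "dev" -- never shared across envs

REGISTRY: dict[str, AgentIdentity] = {}

def register(identity: AgentIdentity) -> None:
    # Reject shared identities and ownerless agents at registration time.
    if identity.agent_id in REGISTRY:
        raise ValueError(f"duplicate agent identity: {identity.agent_id}")
    if not identity.owner:
        raise ValueError("every agent identity needs an owner")
    REGISTRY[identity.agent_id] = identity

register(AgentIdentity("ticket-triage-bot", "it-ops@example.com",
                       frozenset({"tickets:read"}), "prod"))
```

Even this much gives you a single place to answer "who owns this agent, and what can it do?" during an incident.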

2) Prefer short-lived credentials (minutes), not long-lived secrets (months)

Agents are high-frequency, automated actors. That means:

  • leaks happen (logs, repos, prompt context)
  • abuse is fast
  • blast radius is huge

A useful baseline:

| Credential type | Bad default | Better target |
| --- | --- | --- |
| API keys / client secrets | 90 days–never | avoid entirely |
| Access tokens | 1–12 hours | 5–60 minutes |
| Refresh tokens | weeks/months | rotation + reuse detection |

If you’re on cloud:

  • AWS: IRSA (EKS), IAM Roles Anywhere
  • GCP: Workload Identity Federation
  • Azure: Managed Identities + federation
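Whichever issuer you use, the agent side needs a small cache that refreshes tokens well before expiry instead of storing a long-lived secret. A minimal sketch, where the fetch function is a stand-in for your real STS or token-endpoint call:

```python
import time
from typing import Callable

class ShortLivedToken:
    """Caches a short-lived token and refreshes it before expiry."""

    def __init__(self, fetch: Callable[[], tuple[str, int]],
                 refresh_margin_s: int = 60):
        self._fetch = fetch            # returns (token, lifetime_seconds)
        self._margin = refresh_margin_s
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh inside the margin so callers never see an expired token.
        if self._token is None or time.time() >= self._expires_at - self._margin:
            self._token, lifetime = self._fetch()
            self._expires_at = time.time() + lifetime
        return self._token

# Stand-in for e.g. an STS / OIDC token-endpoint call:
calls = []
def fake_sts() -> tuple[str, int]:
    calls.append(1)
    return (f"token-{len(calls)}", 300)   # 5-minute token

creds = ShortLivedToken(fake_sts)
creds.get()
```

The design point: the agent never persists a credential; it only holds a token that dies on its own within minutes, so a leak into logs or prompt context has a bounded blast radius.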

3) Put policy enforcement in front of tools

Agents shouldn’t be able to call arbitrary tools with arbitrary parameters.

Minimum viable control:

  • allowlist tools per agent
  • allowlist actions per tool (read vs write vs admin)
  • require step-up (human approval) for destructive actions
  • apply time bounds (policy expires after task)

This is the same model as PAM — just applied to tool calls.
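The four controls above collapse into one gate that every tool call must pass through before execution. A minimal sketch (the tool names, action levels, and approval flag are illustrative):

```python
import time

# Per-agent policy: tool -> allowed actions.
POLICY = {
    "tickets": {"read", "write"},
    "wiki": {"read"},
}
DESTRUCTIVE = {"write", "delete", "admin"}      # require human step-up
POLICY_EXPIRES_AT = time.time() + 15 * 60       # time-bound: 15-minute task window

def authorize(tool: str, action: str, human_approved: bool = False) -> bool:
    if time.time() >= POLICY_EXPIRES_AT:
        return False                            # policy lapsed with the task
    if action not in POLICY.get(tool, set()):
        return False                            # tool/action not allowlisted
    if action in DESTRUCTIVE and not human_approved:
        return False                            # step-up required for writes
    return True
```

The key property is that the gate sits between the LLM and the tool, so a prompt-injected "decision" still cannot become a privileged action without passing policy.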

Learn IAM topic: https://learn-iam.com/topic/identity-for-ai/ai-agent-identity-and-access-controls


4) Audit: capture “why”, not just “what”

Classic audit logs tell you:

  • who did what, when

Agent investigations also need:

  • which prompt / instruction triggered the action (hash is fine)
  • which tools were invoked in what order
  • which policy decision allowed the action

If you can’t answer “why did the agent do that?”, you can’t prove impact.
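Concretely, an agent audit event can carry the prompt hash, the ordered tool calls, and the policy decision alongside the usual who/what/when. This record shape is illustrative, not a standard:

```python
import hashlib
import json
import time

def audit_event(agent_id: str, prompt: str, tool_calls: list[dict],
                policy_decision: str) -> str:
    """Emit one JSON audit line that answers 'why', not just 'what'."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        # Hash, don't store, the triggering instruction:
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        # Ordered tool invocations (name + action, not full payloads):
        "tool_calls": tool_calls,
        "policy_decision": policy_decision,
    }
    return json.dumps(record)

line = audit_event(
    "ticket-triage-bot",
    "Summarize open P1 tickets",
    [{"tool": "tickets", "action": "read"}],
    "allow:tickets:read@policy-v3",
)
```

Hashing the prompt lets investigators correlate an action back to its triggering instruction without persisting potentially sensitive prompt content in the log pipeline.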


Incident response: what you revoke when an agent is compromised

When a human is compromised, you reset password + revoke sessions.

When an agent is compromised, you need:

  1. Revoke the agent’s tokens/sessions
  2. Disable the agent’s principal (app/service account/managed identity)
  3. Rotate downstream credentials (if any)
  4. Pull logs of tool invocations (not just logins)

Related: https://learn-iam.com/topic/specifications/oauth-token-security-revocation-rotation-incident-response
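The four revocation steps translate directly into a runbook whose ordering matters. The client functions below are hypothetical stand-ins for your IdP and cloud APIs, shown only to make the sequence explicit:

```python
# Hypothetical stand-ins for real IdP / cloud API calls.
actions = []

def revoke_tokens(agent_id):      actions.append(("revoke_tokens", agent_id))
def disable_principal(agent_id):  actions.append(("disable_principal", agent_id))
def rotate_downstream(agent_id):  actions.append(("rotate_downstream", agent_id))
def pull_tool_logs(agent_id):     actions.append(("pull_tool_logs", agent_id))

def contain_compromised_agent(agent_id: str) -> list:
    # Order matters: cut live access first, then block re-issuance,
    # then rotate anything the agent could have read, then collect evidence.
    revoke_tokens(agent_id)        # 1. kill active tokens/sessions
    disable_principal(agent_id)    # 2. prevent new token issuance
    rotate_downstream(agent_id)    # 3. credentials the agent could reach
    pull_tool_logs(agent_id)       # 4. tool invocations, not just logins
    return actions

steps = contain_compromised_agent("ticket-triage-bot")
```

If you only revoke tokens but leave the principal enabled, a short-lived-token agent simply mints a new credential and keeps going, which is why step 2 is not optional.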


Bottom line

Agentic systems move fast. Your identity controls must move faster.

The lesson from BodySnatcher/Connected Agents isn’t “AI is scary.” It’s:

If you don’t design identity boundaries for agents, you will accidentally build privileged automation you can’t govern.


Where to go next