AI agents turn “a user clicked a button” into “a model executed a tool call.”
That changes authorization in two ways:
- the actor may be an agent acting on a user's behalf, not the human directly
- the agent may chain tools and services, expanding the blast radius of a mistake or compromise
This topic focuses on keeping AI automation least-privileged, auditable, and revocable.
Define the actor(s)
An “agent action” often involves multiple principals:
- the end user (who requested the work)
- the agent identity (the runtime doing the work)
- tool/service identities (connectors)
Your system should be able to answer: “Which principal caused this write?”
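One way to make that question answerable is to record every principal in the chain on each audited write. A minimal sketch (the field and identifier names are illustrative, not from any specific product):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionRecord:
    """One audited write, attributing every principal in the chain."""
    end_user: str   # who requested the work
    agent: str      # the agent runtime that executed it
    tool: str       # the connector that performed the write
    action: str
    resource: str

def attribute(record: ActionRecord) -> str:
    """Answer 'which principal caused this write?' for an audit query."""
    return (f"{record.action} on {record.resource}: "
            f"user={record.end_user} via agent={record.agent} "
            f"tool={record.tool}")

rec = ActionRecord("alice@example.com", "agent-7", "crm-connector",
                   "update", "crm/contact/42")
print(attribute(rec))
```

Storing all three identities up front is what lets you later distinguish "the user asked for this" from "the agent decided this on its own."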
Practical patterns
Tool-scoped tokens
Instead of giving the agent a broad user token:
- mint a token scoped to one tool
- scope it to one tenant/org
- expire it quickly
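The three constraints above can be sketched with an in-memory token store. A real system would use signed tokens (e.g. JWTs) or a token-exchange service; the claim names and TTL here are illustrative assumptions:

```python
import secrets
import time

# Illustrative in-memory store; production systems would use signed
# tokens or a security token service instead.
_tokens: dict[str, dict] = {}

def mint_tool_token(user: str, tool: str, tenant: str,
                    ttl_s: int = 300) -> str:
    """Mint a short-lived token scoped to one tool and one tenant."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = {"sub": user, "tool": tool, "tenant": tenant,
                      "exp": time.time() + ttl_s}
    return token

def check_token(token: str, tool: str, tenant: str) -> bool:
    """Reject tokens that are unknown, expired, or scoped elsewhere."""
    claims = _tokens.get(token)
    return (claims is not None
            and claims["exp"] > time.time()
            and claims["tool"] == tool
            and claims["tenant"] == tenant)

t = mint_tool_token("alice", tool="calendar", tenant="acme")
print(check_token(t, "calendar", "acme"))  # in scope -> True
print(check_token(t, "email", "acme"))     # wrong tool -> False
```

The key design point: the check is default-deny, so a token leaked from one connector is useless against every other tool and tenant, and it dies on its own within minutes.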
Approval gates
For high-risk actions:
- require explicit approval (human-in-the-loop)
- record the approval as part of the audit trail
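Both requirements can be enforced at one choke point: the function that executes actions. A sketch, assuming a hypothetical risk list and audit log (the action names are made up for illustration):

```python
from typing import Optional

RISKY_ACTIONS = {"delete", "transfer", "publish"}  # illustrative policy
audit_log: list[dict] = []

def execute(action: str, resource: str,
            approver: Optional[str] = None) -> bool:
    """Run an action; high-risk actions require a recorded approval."""
    if action in RISKY_ACTIONS and approver is None:
        audit_log.append({"action": action, "resource": resource,
                          "status": "blocked: approval required"})
        return False
    audit_log.append({"action": action, "resource": resource,
                      "status": "executed", "approved_by": approver})
    return True

print(execute("read", "doc/1"))                    # low-risk: runs
print(execute("delete", "doc/1"))                  # blocked
print(execute("delete", "doc/1", approver="bob"))  # approved: runs
```

Note that the approval is written into the same audit trail as the action itself, so "who signed off" is answerable from one log.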
Capability allowlists
Give agents a small menu of allowed actions:
- read-only by default
- narrow write permissions with safeguards
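An allowlist is just a default-deny lookup per agent identity. A minimal sketch with hypothetical agent and action names:

```python
# Per-agent capability menu; anything not listed is denied.
# Agent and action names are illustrative.
CAPABILITIES: dict[str, set[str]] = {
    "summarizer-agent": {"read"},                  # read-only default
    "ticket-agent": {"read", "create_ticket"},     # one narrow write
}

def allowed(agent: str, action: str) -> bool:
    """Default-deny: only actions on the agent's allowlist pass."""
    return action in CAPABILITIES.get(agent, set())

print(allowed("summarizer-agent", "read"))           # True
print(allowed("summarizer-agent", "create_ticket"))  # False
print(allowed("unknown-agent", "read"))              # False
```

Unknown agents fall through to the empty set, so a new or misconfigured agent can do nothing until someone deliberately grants it a menu.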
Pitfalls
- Treating the agent as a “super-admin” with one broad credential for every task.
- Not logging the prompt/tool context needed to explain actions.
- No “kill switch” (revoke agent credentials, disable tools, invalidate tokens).
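The kill switch in the last pitfall can be a single function that both revokes existing credentials and blocks new ones. A sketch over an in-memory credential store (agent and token names are illustrative):

```python
# Illustrative credential store; production systems would revoke at
# the identity provider or token service.
active_credentials: dict[str, set[str]] = {
    "agent-7": {"tok-a", "tok-b"},
    "agent-9": {"tok-c"},
}
disabled_agents: set[str] = set()

def kill_agent(agent: str) -> None:
    """Revoke every credential for the agent and block new issuance."""
    active_credentials.pop(agent, None)
    disabled_agents.add(agent)

def is_valid(agent: str, token: str) -> bool:
    """A token is valid only for a live agent that still holds it."""
    return (agent not in disabled_agents
            and token in active_credentials.get(agent, set()))

kill_agent("agent-7")
print(is_valid("agent-7", "tok-a"))  # False: revoked
print(is_valid("agent-9", "tok-c"))  # True: unaffected
```

The disabled-agents set matters as much as the revocation itself: without it, a compromised workflow could simply mint fresh credentials after the purge.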
Where to go next
- /category/identity-for-ai
- /category/access-management
- IDPro Book of Knowledge (reference): https://bok.idpro.org/
