Overview
AI Tool Authorization controls the "hands" of the AI. When an LLM decides to call a function (e.g., send_email, query_db, restart_server), that call must be authorized. The AI model itself is not a trusted caller. It generates intent, which the application runtime must validate against a strict authorization policy before execution.
This is the firewall between the AI's "brain" and the "world". Without it, a confused or hijacked AI has unrestricted access to your infrastructure.
Architecture
Tool execution follows the "Interceptor" pattern: every tool call the LLM emits is intercepted by the runtime and checked against policy before it reaches any backend.
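A minimal sketch of the interceptor in Python. The names here (ToolCall, POLICY, TOOLS, execute) are illustrative, not a real framework API; the point is that the LLM's intent is untrusted input and authorization happens before dispatch.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    """The LLM's *intent*: a tool name plus arguments. Untrusted."""
    tool: str
    args: dict = field(default_factory=dict)

# Policy: which caller roles may invoke which tools (illustrative).
POLICY = {
    "get_order_status": {"user", "admin"},
    "restart_server": {"admin"},
}

# Tool implementations, registered by name (illustrative stubs).
TOOLS = {
    "get_order_status": lambda order_id: f"order {order_id}: shipped",
    "restart_server": lambda: "restarting...",
}

def execute(call: ToolCall, role: str) -> str:
    # Interceptor: authorize the call before any side effect occurs.
    allowed = POLICY.get(call.tool, set())
    if role not in allowed:
        raise PermissionError(f"role {role!r} may not call {call.tool!r}")
    return TOOLS[call.tool](**call.args)
```

An unknown tool name falls through to an empty permission set, so unregistered tools are denied by default.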
Key Decisions
- Least Privilege: Tools should be scoped narrowly. Instead of a generic sql_query tool (dangerous!), expose specific intent-based tools like get_order_status(order_id).
- On-Behalf-Of (OBO) Flow: The tool should execute using the user's credentials, not the agent's service account, whenever possible. This ensures the user cannot escalate privileges via the agent.
- Confirmation Flows: Identify "consequential" tools (e.g., transfer_money) that always require explicit human approval before execution.
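The confirmation-flow decision can be sketched as a gate in the runtime: consequential tools return a pending state instead of executing. CONSEQUENTIAL, requires_approval, and run_tool are hypothetical names for illustration.

```python
# Tools whose effects are irreversible or high-impact (illustrative set).
CONSEQUENTIAL = {"transfer_money", "delete_account"}

def requires_approval(tool_name: str) -> bool:
    return tool_name in CONSEQUENTIAL

def run_tool(tool_name, impl, approved: bool = False, **args):
    """Execute a tool, pausing for human approval when it is consequential."""
    if requires_approval(tool_name) and not approved:
        # Surface the pending call to the user instead of executing it.
        return {"status": "pending_approval", "tool": tool_name, "args": args}
    return {"status": "done", "result": impl(**args)}
```

The agent loop would persist the pending call, show it to the user, and re-invoke with approved=True only after an explicit confirmation.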
Implementation
OAuth 2.0 On-Behalf-Of
When the agent needs to call the GitHub API:
- The user authenticates to the Agent App via OAuth.
- The Agent App stores the user's Access Token.
- When the LLM calls list_repos, the runtime retrieves the user's token and calls the GitHub API.
- Result: The agent can only see repos the user can see.
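The token-lookup step above can be sketched as follows. TOKEN_STORE is an in-memory stand-in for your secret store, and the function only builds the request (no network call); the GitHub endpoint and Bearer header are the real REST API convention.

```python
# user -> OAuth access token obtained when the user authenticated to the
# Agent App. In production this lives in an encrypted secret store.
TOKEN_STORE = {"alice": "gho_user_token_for_alice"}  # illustrative value

def build_list_repos_request(user_id: str) -> dict:
    # OBO: use the *user's* token, never the agent's service account.
    token = TOKEN_STORE[user_id]
    return {
        "method": "GET",
        "url": "https://api.github.com/user/repos",
        "headers": {"Authorization": f"Bearer {token}"},
    }
```

Because the token is scoped to the user, GitHub enforces the visibility check; the agent cannot list repos the user could not list themselves.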
Tool Schemas & Validation
Use strict Pydantic/JSON schemas for tool definitions. Validate arguments before execution.
If a tool expects an integer ID, reject arguments like 1 OR 1=1 (SQL injection payloads the LLM may generate).
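A minimal stdlib sketch of that check (Pydantic gives you the same thing declaratively via typed models); validate_order_id is a hypothetical helper.

```python
def validate_order_id(raw) -> int:
    """Accept only genuine integers; reject injection strings like '1 OR 1=1'."""
    if isinstance(raw, int) and not isinstance(raw, bool):
        return raw
    # Tolerate clean numeric strings from the LLM, nothing else.
    if isinstance(raw, str) and raw.strip().lstrip("-").isdigit():
        return int(raw)
    raise ValueError(f"order_id must be an integer, got {raw!r}")
```

Validation runs in the interceptor, before the argument ever reaches a query or shell, so a malformed payload fails closed rather than being passed through.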
Risks
- Confused Deputy: The agent is tricked into performing an action (using its service account) that the requesting user should not be allowed to do.
- Argument Injection: The LLM generates malicious payloads in tool arguments (e.g., shell injection in a run_script tool).
- Excessive Agency: Giving an agent a "browser" tool or "terminal" tool effectively gives it full control over that environment.
