When AI Acts for You: Getting “On Behalf Of” Right

I just got back from the RSA Conference in San Francisco, where I binged more sessions on AI agents than is probably healthy. I came for architecture diagrams and real-world case studies; I left with a notebook full of ideas — and a healthy dose of fear about what happens when we wire autonomous software into everything and hope the identity and access management (IAM) team ‘has it covered.’
Much of this thinking was shaped by Aaron Turner and Rich Mogull’s “Multi-MCP and Multi-Agent Security Reference Architectures” session and Sriram Santhanam’s “Cloudy with a Chance of AI” session on identity-first AI architectures. Those talks drew a clear line from identity sprawl and non-human identity explosions to the way agents should use “on-behalf-of” tokens and delegated authority.
There is a clear takeaway from these presentations: The most dangerous AI systems aren’t the ones that hallucinate — they are the ones that act.
As enterprises wire AI agents into ticketing systems, code repositories, CRMs, and data lakes, a single design choice quietly determines whether those agents behave like trusted assistants or runaway insiders: how an agent uses identity and authorization.
That’s where OAuth “on-behalf-of” (OBO) comes in. In this blog post, I define what OBO is, why it’s essential for enterprise use of agentic AI, and how it helps organizations achieve a Zero Trust AI architecture.
The Moment AI Starts Using Your Permissions
In a classic web app, OAuth is straightforward: a user signs in, gets a token, and the app calls APIs using that token. The app is essentially a messenger. It doesn’t have its own long-term power; it just passes along the user’s authority.
AI agents break that mental model.
Agents don’t just pass requests along. They interpret goals, choose tools, and run multi-step workflows. That means they will call downstream APIs and tools without the user being directly involved. If all an agent has is a broad service-account token, it has effectively been granted permanent admin rights, and the user can only hope for the best-case scenario.
OBO is how you avoid that.
With OBO, the chain looks more like this:
- The user authenticates and gets a token that represents their identity and base permissions.
- The agent requests a new token, explicitly marked as acting on behalf of that user and scoped to the specific task or resource it needs.
- Downstream APIs see not just “an agent” but “this agent, acting for this user, with these narrow permissions, for this limited time.”
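The delegation steps above can be sketched as an OAuth 2.0 Token Exchange request (RFC 8693), which is the standard mechanism behind most OBO implementations. This is a minimal sketch: the endpoint, client ID, audience, and scope names are illustrative assumptions, not real values.

```python
from urllib.parse import urlencode

# Hypothetical token endpoint -- replace with your identity provider's.
TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"

def build_obo_request(user_access_token: str, agent_client_id: str,
                      audience: str, scopes: list[str]) -> dict:
    """Build the form body for an RFC 8693 OAuth 2.0 Token Exchange.

    The agent presents the user's token as the subject_token and asks
    for a new, narrowly scoped token for one downstream audience.
    """
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "client_id": agent_client_id,
        "subject_token": user_access_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,          # the downstream API this token is for
        "scope": " ".join(scopes),     # only what this task needs
    }

body = build_obo_request(
    user_access_token="eyJ...user",   # the signed-in user's token (truncated)
    agent_client_id="crm-followup-agent",
    audience="https://crm.example.com/api",
    scopes=["opportunities.read"],
)
encoded = urlencode(body)  # POST this to TOKEN_ENDPOINT over HTTPS
```

The key point of the shape: the agent authenticates as itself (`client_id`), but the authority it receives is derived from the user's `subject_token` and deliberately narrowed by `audience` and `scope`.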
Instead of hard-coding secrets into agents or giving them standing access, delegated, short-lived OBO tokens are issued for each hop. The agent never owns the keys; it temporarily borrows just enough authority to do a well-defined job.
That’s the difference between “this agent can do anything in the CRM” versus “this agent can read opportunities for this salesperson for the next 10 minutes to draft follow-up emails.” OAuth OBO helps enforce Zero Trust by issuing short-lived, scoped tokens that let each backend verify and limit what an app or agent can do while still acting explicitly in the context of a specific user.
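Concretely, the delegated token behind that CRM example might decode to claims like the following. The `act` (actor) claim is how RFC 8693 expresses “this agent, acting for this user”; the identities, audience, and scope values here are invented for illustration.

```python
import time

now = int(time.time())

# Hypothetical decoded claims of a delegated OBO access token.
# `sub` is the human principal; `act` identifies the agent acting for
# them (the RFC 8693 actor claim); a near `exp` keeps it short-lived.
obo_token_claims = {
    "sub": "user-y@example.com",            # whose authority this is
    "act": {"sub": "crm-followup-agent"},   # which agent is acting
    "aud": "https://crm.example.com/api",   # only valid at this API
    "scope": "opportunities.read",          # only this narrow permission
    "iat": now,
    "exp": now + 600,                       # ten minutes, then it dies
}

lifetime_minutes = (obo_token_claims["exp"] - obo_token_claims["iat"]) / 60
```

A downstream API that validates `aud`, `scope`, and `exp` on every call is enforcing exactly the “this agent, for this user, with these permissions, for this long” contract described above.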
Every Agent Needs a Human Shadow
Treating agents as first-class identities is now table stakes. They should be registered, authenticated, and governed like any other application. But they must never be allowed to exist and operate without being anchored to a specific, accountable human identity.
An AI agent is not a person. It doesn’t own risk, sign contracts, or go to court. It is an execution proxy. Every meaningful action it takes should be traceable back to a human principal: Who was the AI acting for?
Linking agents to humans through OBO tokens makes that connection concrete:
- Each agent has its own client identity, but every privileged call it makes carries the user’s identity in the delegation chain.
- Logs and audit trails show: “Agent X, acting on behalf of User Y, called API Z with scopes S at time T.”
- If something goes wrong, the incident can be traced to the user whose authority was used, the agent that misbehaved, and the systems that were touched.
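A sketch of how an API gateway might turn those token claims into the audit line described above. The claim names (`sub`, `act`, `scope`) follow RFC 8693; the identities and API path are made up.

```python
from datetime import datetime, timezone

def audit_line(claims: dict, api: str) -> str:
    """Render: 'Agent X, acting on behalf of User Y, called API Z with scopes S at time T'."""
    agent = claims.get("act", {}).get("sub", "unknown-agent")
    user = claims.get("sub", "unknown-user")
    scopes = claims.get("scope", "")
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return (f"Agent {agent}, acting on behalf of {user}, "
            f"called {api} with scopes [{scopes}] at {ts}")

line = audit_line(
    {"sub": "user-y@example.com",
     "act": {"sub": "crm-followup-agent"},
     "scope": "opportunities.read"},
    api="/crm/opportunities",
)
```

Because the delegation chain lives in the token itself, the log entry needs no side-channel lookup to answer “who was the AI acting for?”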
That linkage isn’t just for forensics. It’s how cybersecurity teams can attenuate risk in real time. If an agent is spotted doing something suspicious, OBO ensures that the admin or security architect can revoke the user’s access, which expires all related tokens. The security team can then disable that agent’s identity and tighten its policies — all without decommissioning other necessary agents or redesigning the agent itself.
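The revocation cascade can be shown with a toy in-memory registry: because every delegated token records the user it was minted for, revoking that user invalidates every related token in one step, without touching other users or agents. A real identity provider tracks this server-side; the data structures here are invented for illustration.

```python
# Toy token registry -- in practice the IdP or token service owns this.
issued_tokens = {
    "tok-1": {"user": "user-y", "agent": "crm-followup-agent"},
    "tok-2": {"user": "user-y", "agent": "ticket-triage-agent"},
    "tok-3": {"user": "user-q", "agent": "crm-followup-agent"},
}
revoked_users: set[str] = set()

def revoke_user(user: str) -> None:
    """Revoking the human principal expires every token delegated from them."""
    revoked_users.add(user)

def token_is_valid(token_id: str) -> bool:
    meta = issued_tokens.get(token_id)
    return meta is not None and meta["user"] not in revoked_users

# Revoking user-y kills both of user-y's delegated tokens at once;
# user-q's agent keeps working.
revoke_user("user-y")
still_valid = [t for t in issued_tokens if token_is_valid(t)]
```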
Without that clear relationship — the OBO parameter tying tokens back to real people — IAM and security teams are left with a fog of autonomous systems making changes no one can fully explain or stop.
You Can’t Govern Agents on a Disorganized, Disconnected Identity Layer
All of this assumes something uncomfortable: that your human identity layer is in good shape. In many organizations, it isn’t.
Years of cloud adoption have left behind overlapping directories, stale accounts, over-permissioned roles, and long-lived secrets. The result is identity sprawl: multiple “versions” of the same person, shadow admin access, and service accounts no one really owns.
If OBO-based agents are dropped into that environment, existing problems simply multiply:
- If humans are over-permissioned, their agents inherit that excess by design.
- If you have duplicate or shared accounts, logs that say “acting on behalf of user X” don’t actually tell you which human that is.
- If secrets and roles are messy, it’s hard to scope OBO tokens meaningfully — everything looks like “full access.”
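The inheritance problem in the first bullet can be made concrete: a well-behaved token issuer grants an agent at most the intersection of what the agent requests and what the delegating human actually holds, so over-permissioned users mechanically produce over-permissioned agents. A minimal sketch, with made-up scope names:

```python
def attenuated_scopes(requested: set[str], user_granted: set[str]) -> set[str]:
    """An OBO token can never exceed the delegating user's own permissions."""
    return requested & user_granted

# An over-permissioned salesperson...
user_scopes = {"opportunities.read", "opportunities.write", "contacts.delete"}

# ...means even a modest agent request yields real power,
agent_gets = attenuated_scopes({"opportunities.read", "contacts.delete"},
                               user_scopes)

# ...while a least-privileged user bounds the agent automatically.
lean_gets = attenuated_scopes({"opportunities.read", "contacts.delete"},
                              {"opportunities.read"})
```

The attenuation logic is doing its job in both cases; only the second case produces a safe agent, which is why right-sizing human permissions has to come first.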
This is why, before an enterprise gets clever with agent design, IAM teams need to do the unglamorous groundwork:
- Consolidate human identities. Aim for one canonical identity per person across major systems. Kill duplicate and shared accounts.
- Right-size human permissions. Strip back roles to least privilege. If a salesperson doesn’t need edit access to every customer, their agent probably shouldn’t either.
- Enforce strong auth and ownership. MFA, SSO, clear account owners, and regular reviews are now prerequisites — not nice-to-haves.
Once your enterprise’s human identity fabric is clean and reasonably least-privileged, OBO becomes a precision tool instead of a blunt instrument. Agents can safely inherit just enough authority from humans, and your logs and policies actually reflect reality. When an action appears in the logs as “Agent X acting on behalf of User Y with Scope Z,” you can trust that it is true, and you can investigate incidents, enforce rules, and make decisions on that basis instead of guessing who really did what or what access they actually had.

