Everyone’s shipping AI agents right now. Connect your agent to Slack, let it read your email, have it make PRs on your behalf. This can be via MCP, no-code AI tools like n8n, or directly via APIs. However, we’re in an awkward middle ground where the tooling to use agents is miles ahead of the tooling to secure them. The fundamental problem isn’t new; it’s the same challenge we’ve had with any system that acts on behalf of a user, but agents amplify it in ways that existing access control patterns weren’t built for.
OAuth Wasn’t Built For This
When you connect an AI agent to a SaaS tool today, in most cases you’re going through an OAuth flow. The agent requests access, you click “Allow,” and it gets a token. Simple enough on the surface. The problem is that OAuth scopes in most SaaS products are incredibly broad. You don’t get “read the last 5 emails about Project X.” You get “read access to your entire inbox.” Forever. Or at least until someone remembers to revoke it.
This is the same issue we’ve had with third-party app integrations for years, but it’s worse with agents for a few reasons. Agents are non-deterministic. A traditional integration does roughly the same thing every time. An agent might decide it needs to read your calendar, then your email, then your Drive, then Slack, all in one workflow. Each of those connections is a separate broad OAuth grant, and the agent has access to far more data than it actually needs for any single task.

The Standards Race
There are currently multiple efforts to solve this, and like most things in the standards world, they don’t all agree with each other.
The biggest push right now is Cross App Access (XAA), which Okta has been driving alongside partners like AWS, Box, and Glean. XAA is built on the Identity Assertion Authorization Grant (ID-JAG), an OAuth extension being standardised through the IETF. The core idea is solid: instead of the agent doing an OAuth dance directly with each SaaS tool, the enterprise IdP sits in the middle. The IdP issues short-lived, scoped tokens for each specific app interaction, and enterprise admins get centralised visibility and policy control over what agents can access. Plus, XAA was recently incorporated into the November 2025 MCP spec update as an authorization extension.
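Concretely, ID-JAG is a profile of OAuth 2.0 Token Exchange (RFC 8693): the agent trades the user’s ID token at the enterprise IdP for a short-lived identity assertion (the ID-JAG), then presents that assertion to the downstream app’s own token endpoint to get a scoped access token. Here’s a rough sketch of the two request bodies; the URLs and token values are placeholders, and the exact parameter names follow my reading of the draft, which may still change:

```python
# Sketch of the two ID-JAG requests (draft Identity Assertion Authorization
# Grant, built on RFC 8693 Token Exchange). Endpoint URLs and token values
# are placeholders; parameter names reflect the draft and may change.

TOKEN_EXCHANGE = "urn:ietf:params:oauth:grant-type:token-exchange"
JWT_BEARER = "urn:ietf:params:oauth:grant-type:jwt-bearer"
ID_JAG_TYPE = "urn:ietf:params:oauth:token-type:id-jag"
ID_TOKEN_TYPE = "urn:ietf:params:oauth:token-type:id_token"

def build_idp_exchange(user_id_token: str, resource: str, scope: str) -> dict:
    """Step 1: the agent asks the enterprise IdP to mint a short-lived
    identity assertion (ID-JAG) for one specific downstream app."""
    return {
        "grant_type": TOKEN_EXCHANGE,
        "requested_token_type": ID_JAG_TYPE,
        "subject_token": user_id_token,      # proves who the agent acts for
        "subject_token_type": ID_TOKEN_TYPE,
        "resource": resource,                # the downstream app's identifier
        "scope": scope,                      # narrow, task-specific scope
    }

def build_app_token_request(id_jag: str) -> dict:
    """Step 2: the agent presents the ID-JAG to the downstream app's
    token endpoint to get a short-lived, scoped access token."""
    return {"grant_type": JWT_BEARER, "assertion": id_jag}

idp_req = build_idp_exchange("eyJ...user", "https://chat.example.com", "chat:read")
app_req = build_app_token_request("eyJ...id-jag")
```

The important property is that the IdP sits in the middle of every grant, so it can apply policy and log each one centrally, rather than each SaaS tool handing out its own long-lived token directly.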
The challenge is that XAA is still very much Okta-led, and while the underlying ID-JAG spec is open, the practical implementation today requires Okta as the IdP. If you’re an Okta shop, great. If you’re not, your IdP is likely doing something different. Other identity vendors like Descope, WorkOS, and Scalekit are building their own implementations of ID-JAG, but it’s early days, and fragmentation is a real risk. We’ve seen this movie before with SSO and SCIM, where it took more than a decade to get reasonable adoption, and even now coverage is patchy. Just look at the recent discourse around MCP authorization more broadly and you can see much of the same disagreement playing out. Not to mention Microsoft is, as usual, doing its own thing with Microsoft Entra Agent ID.
Who Did What?
There’s a subtler problem that doesn’t get talked about enough: attribution. When an agent acts on behalf of a user, it typically uses that user’s token. From the perspective of every downstream system, it looks like the user took the action. The user sent that email. The user deleted that file. The user approved that pull request.
This isn’t necessarily a security problem in the traditional sense, but it creates real issues for audit trails and incident response. When something goes wrong, you need to be able to answer a very specific question: did the user intentionally perform that action, did the user instruct the agent to do it, or did the agent hallucinate and do it on its own?
These are three very different scenarios with very different implications. If an agent sends an email containing sensitive data to the wrong person, was that the user being malicious and using the agent as a proxy? Or was it a hallucination? Your response is completely different depending on the answer, and today most systems give you no way to tell without considerable analysis. The agent’s actions and the user’s actions are indistinguishable in the logs.
XAA helps here somewhat by making cross-app interactions visible to the IdP, but within a single application, you’re still mostly relying on whatever logging the agent platform provides, which varies wildly in quality.
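OAuth token exchange already has a primitive for exactly this: the `act` (actor) claim from RFC 8693, which records who performed an action separately from who it’s attributed to. You can apply the same idea to your own audit logs today. A sketch of what such a record might look like, where the field names and trigger categories are my own invention rather than any standard:

```python
from dataclasses import dataclass, asdict

@dataclass
class AgentAuditEvent:
    """One log record per agent action. The key idea, borrowed from the
    `act` (actor) claim in RFC 8693: record who the action is attributed
    to (subject) separately from what performed it (actor), plus what
    triggered it. Field names here are illustrative, not a standard."""
    subject: str   # the user the action is attributed to
    actor: str     # the agent that actually performed it
    trigger: str   # "user_explicit" | "user_instruction" | "agent_autonomous"
    action: str    # e.g. "gmail.send"
    resource: str  # e.g. "message:draft-123"

def log_event(ev: AgentAuditEvent) -> dict:
    # A reviewer can now answer "did the user do this, ask for it,
    # or did the agent decide on its own?" straight from the log.
    return asdict(ev)

ev = AgentAuditEvent(
    subject="alice@example.com",
    actor="support-agent@agents.example.com",
    trigger="agent_autonomous",
    action="gmail.send",
    resource="message:draft-123",
)
```

Even if downstream systems never see this distinction, keeping it in your own middleware logs turns the “who did what” question from forensics into a lookup.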
There have been real DLP incidents involving AI agents too. I’ve seen a case where an AI agent failed to upload an image to a GitHub PR, so it uploaded the image publicly to Imgur and linked to that instead. These events are really hard to prevent, just as insider threats are hard to prevent with people; it mostly comes down to response activities and having good data to support those investigations (unless you go full sandboxing, which adds friction).
The Ideal: Least Privilege, Just-In-Time
The access model we actually want is conceptually straightforward. Think of it as three overlapping circles: the user’s permissions, the agent’s total allowed permissions (like a service account boundary), and the permissions required for the specific task at hand. The ideal token sits at the intersection of all three, granting only what’s needed, only for the duration of the task, and nothing more.
This means short-lived tokens scoped to the exact operation. Not “read all of Gmail,” but “read emails from person X in the last 24 hours about topic Y.” Not “write to any Jira project,” but “create a single ticket in project Z with these specific fields.” The token expires the moment the task is done.
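The three-circle model reduces to a set intersection. A toy sketch with permissions represented as scope strings (the scope names are made up for illustration):

```python
def task_token_scopes(user_perms: set[str],
                      agent_boundary: set[str],
                      task_needs: set[str]) -> set[str]:
    """The ideal task token: the intersection of what the user may do,
    what the agent is ever allowed to do, and what this task requires.
    Anything the task 'needs' but the other circles don't permit is
    simply not granted (and could be surfaced as a policy denial)."""
    return user_perms & agent_boundary & task_needs

user = {"gmail.read", "gmail.send", "jira.write"}   # the user's permissions
agent = {"gmail.read", "jira.write", "drive.read"}  # service-account boundary
task = {"gmail.read", "gmail.send"}                 # what this task asks for

scopes = task_token_scopes(user, agent, task)
# Only "gmail.read" survives: the agent's boundary excludes gmail.send,
# so even though the user could send email, this token can't.
```

The hard part, of course, isn’t the intersection; it’s that real SaaS scopes are too coarse for the third circle to be anything narrower than “all of Gmail”.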
This is essentially what XAA and ID-JAG are trying to achieve at the protocol level, with the IdP issuing scoped, short-lived tokens for each cross-app interaction. But even with a perfect protocol, you still need the downstream applications to actually support fine-grained scopes, and most SaaS products simply don’t. Gmail doesn’t offer a scope for “read emails from one person about one topic.” It offers “read all email” or “no access.” Until SaaS providers build more granular scopes, the protocol improvements only get you so far. This is probably the biggest blocker: like SAML and SCIM, it only works once every SaaS tool supports it, and that kind of adoption takes years.
Fine-Grained Access Is Great, If You Own The System
This is where tools like SpiceDB and Oso come in. These are purpose-built for fine-grained authorization, and they fix a lot of the issues I’ve explained above.
These tools draw on Google’s Zanzibar paper, using relationship-based access control (ReBAC) alongside attribute-based access control (ABAC) to model permissions as relationships between users and resources. Both let you express incredibly granular policies: this agent, acting on behalf of this user, can read only these specific documents, and only while they’re in draft status, for example. At Canva we use these to power our customer consent pipeline: when a customer consents to their data being accessed, an attribute is recorded that gates further access to just that customer’s data.
For internal tooling, this is a near-perfect solution. Building a customer support agent that needs to look up order history? You can model exactly what data it can access. Building a code review agent? You can scope it to specific repositories with read-only access. You control the application, you control the data model, and you can wire fine-grained checks into every operation.
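Zanzibar-style systems model this as relationship tuples of the form (resource, relation, subject), and a permission check walks those relations. A minimal in-memory sketch of the code-review-agent example; this illustrates the model, not the actual SpiceDB or Oso APIs:

```python
# Minimal Zanzibar-style ReBAC sketch: permissions are stored as
# (resource, relation, subject) tuples, and a check tests whether any
# relation implying the requested permission connects subject to resource.
# Illustration of the model only, not the SpiceDB or Oso API.

Tuple3 = tuple[str, str, str]

RELATIONS: set[Tuple3] = {
    ("repo:payments", "reader", "agent:code-reviewer"),
    ("repo:payments", "writer", "user:alice"),
}

# "writer implies reader", the way a Zanzibar schema would union relations
IMPLIED = {"reader": {"reader", "writer"}}

def check(resource: str, permission: str, subject: str) -> bool:
    allowed = IMPLIED.get(permission, {permission})
    return any((resource, rel, subject) in RELATIONS for rel in allowed)

# The review agent can read the one repo it was scoped to, nothing else.
assert check("repo:payments", "reader", "agent:code-reviewer")
assert not check("repo:payments", "writer", "agent:code-reviewer")
assert not check("repo:billing", "reader", "agent:code-reviewer")
```

Note that the agent appears as its own subject (`agent:code-reviewer`) rather than borrowing `user:alice`’s identity, which is what makes both least privilege and attribution possible in the same model.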
The problem is that this falls apart the moment you need to connect to something you don’t control.
So you end up with a split world. For internal systems and tools you build yourself, fine-grained access control is achievable and works well. For third-party integrations, you’re stuck with whatever granularity the vendor offers, which is usually broad OAuth scopes and not much else. This is a frustrating gap, and it’s not one that any single tool can fix. It requires SaaS providers to invest in more granular permission models, and historically, that hasn’t been a priority unless enterprise customers are actively demanding it.
Where Does This Leave Us?
In practice, most organisations shipping agents today are doing one of a few things: using broad OAuth tokens and hoping for the best, building custom middleware that tries to filter and log agent actions before they hit downstream APIs, or restricting agents to internal systems where they have more control.
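The middleware option can be as simple as an allowlist gate that sits between the agent and downstream APIs, logging every attempted action and blocking anything outside policy. A toy sketch; the policy shape is invented for illustration, and real middleware would also scope by resource and time:

```python
# Toy agent-action gate: an allowlist of (service, action) pairs per agent,
# checked before any downstream API call is made. Every attempt is logged,
# allowed or not, which is what makes incident response tractable later.
# The policy format here is invented for illustration.

POLICY = {
    "support-agent": {("zendesk", "ticket.read"), ("zendesk", "ticket.reply")},
}

AUDIT_LOG: list[dict] = []

def gate(agent: str, service: str, action: str) -> bool:
    allowed = (service, action) in POLICY.get(agent, set())
    AUDIT_LOG.append({"agent": agent, "service": service,
                      "action": action, "allowed": allowed})
    return allowed

assert gate("support-agent", "zendesk", "ticket.read")       # permitted
assert not gate("support-agent", "gmail", "message.send")    # blocked, logged
```

This doesn’t fix the underlying broad-OAuth problem (the middleware still holds the broad token) but it narrows what the agent can actually do with it, and gives you the audit trail the SaaS tools won’t.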
None of these are great long-term answers. Here’s what I’d actually recommend if you’re building or deploying agents today:
If you’re okay with waiting, consider vendor-built approaches like Okta’s XAA and Microsoft Entra Agent ID. They will take significant time to reach widespread adoption, and there are still blockers that could leave them dead in the water, but they’re probably the best approach as it stands today.
For internal tools and systems you build, adopt a fine-grained authorization tool like SpiceDB or Oso from the start. Don’t bolt it on later. Model your agent’s permissions separately from your user’s permissions so you can distinguish between the two.
Also, push your SaaS vendors for more granular OAuth scopes and for support of the new protocols as they’re released. Every time you’re on a call with your GitHub, Atlassian, or Salesforce account team, ask them about fine-grained agent scopes. It won’t change overnight, but vendor roadmaps are influenced by customer demand.
Looking Forward
I think we’re about 2-3 years away from access control being a genuinely solved problem for AI agents and it being widespread across the core tooling that people use. The pieces are falling into place, but it’s going to take significant time and customer demand to get there.
The messy part will be the transition. We’ll have some apps that support fine-grained agent access, some that only support broad OAuth, and some that have no agent-specific access model at all. Security teams will need to build different strategies for each category, which is painful but manageable.
If you’re using agents now, you’re locked into OAuth and broad-based access, which is likely going to open you up to additional risk and potentially block AI agent usage in certain contexts. So you’ll either have to risk-accept, build custom tooling, or simply wait until further releases happen.
We’ve pushed up against what’s possible without giving agents significant access to take actions. Most people I know are toying with simple ideas: refunding accounts, basic customer support, auto-raising PRs to bump versions, and so on. If we want truly autonomous background agents, we need to get both the access control and logging pieces right before we can expand wider, or we’ll end up with AI agents that give 100% discounts to customers and are significantly prone to exposing company secrets.
If you’re building agent workflows, build with the assumption that access control will get better. Focus on the parts that won’t change first. Invest in attribution, logging, data governance, and least-privilege patterns and you’ll be in a much better position when the standards mature.