We’ve spent the last decade building zero trust architectures. Device posture checks, tiered application access, network segmentation, conditional access policies. All designed around a core principle: don’t implicitly trust anything, verify everything, and limit the blast radius when something goes wrong.

AI tools are quietly undoing a lot of that work.

We’re connecting AI tools to everything, and we’re making them controllable from anywhere. That combination is a fundamental challenge to how we’ve been thinking about defense in depth.

The Remote Control Problem

The major AI tool providers are all shipping remote control capabilities. Anthropic launched Claude Code Remote Control in February 2026, followed by Claude Code Channels, which lets you message Claude Code from Telegram or Discord. OpenClaw sold out every Mac in SF doing much the same thing via WhatsApp and iMessage. And then there are AI connector apps for your communication tooling, like Claude via Slack or Gmail integrations, which are usually accessible on the go from any device. The barrier to remote control is getting lower by the week.

The pitch is compelling: start a task at your desk, review the PR while walking the dog, deploy the code from the toilet. Developers love the flexibility. Leadership loves it because people are working more.

You now have a tool that is connected to your most sensitive enterprise applications (via MCP and API tokens), and that tool is controllable from any device, from any location, over the internet. The AI tool has become, in effect, a bridge between an unmanaged personal device and your most locked-down enterprise systems.

If this sounds familiar, it should. These features are functionally equivalent to tools like ngrok and Cloudflare Tunnel: an outbound connection initiated from a corporate device that allows a remote client to operate the local session. Basically SSH, except the scope is limited to the AI tool and everything connected to it, which in most setups is a lot. Many security teams already block ngrok and similar tunnelling tools on managed devices via MDM and EDR controls. AI remote control features are the same risk with a different label.
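The tunnelling pattern described above can be sketched in a few lines. This is illustrative only: real remote control features use the vendor's cloud service as the relay, which is stood in for here by a local queue. The point is the shape of the traffic, not the implementation.

```python
import queue
import subprocess

# Illustrative sketch of the outbound "remote control" pattern: the
# corporate device initiates the connection and pulls commands from a
# relay, so no inbound firewall rule is ever needed. A local queue
# stands in for the vendor's relay service.

relay = queue.Queue()  # stands in for the vendor's cloud relay
relay.put("echo hello from the managed device")


def local_agent(relay: queue.Queue) -> list[str]:
    """Runs on the managed device; executes whatever the remote client sent."""
    results = []
    while not relay.empty():
        cmd = relay.get()  # command originated from an unmanaged phone
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        results.append(out.stdout.strip())
    return results


# Commands execute locally, so the resulting API calls and network
# traffic all appear to come from the corporate device.
print(local_agent(relay))
```

This is also why the ngrok comparison holds: from the network's perspective, both are a long-lived outbound session carrying inbound intent.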

Everything Connected to Everything

If you’re using AI tools in any meaningful capacity today, you’re probably connecting them to a lot of things. MCP servers give Claude, Cursor, and others direct access to your Slack, GitHub, Jira, Google Drive, Salesforce, and whatever else you’ve wired up. Even without MCP, people are handing API tokens to AI tools, setting up OAuth grants, or using no-code platforms like n8n to pipe data between systems.

I’ve been using these features recently, and honestly, they’re really good. Being able to pick up a fix remotely on the go without needing a fully specced MacBook and a proper IDE is genuinely great. You’re no longer tethered to your desk to do meaningful work. Using these tools is actually what inspired me to write this. The implications of giving a single tool broad access to your most sensitive systems are significant, and I don’t think most organisations are adequately thinking through the aggregate risk.

This is the same centralisation problem I outlined in my enterprise AI search threat model, but turned up to the max. Enterprise search tools were read-only for the most part. AI agents connected via MCP and APIs can read and write. They can send Slack messages, create pull requests, modify Jira tickets, and interact with your systems in ways that enterprise search never could.

How Zero Trust Tiering Works

Zero trust architectures typically rely on a tiered approach to application access. The most common implementation I’ve seen, and one well documented in Okta’s levels of assurance guide, works like this:

  • Tier 1 (High sensitivity): Only accessible from corporate-managed devices with strong EDR posture, phishing-resistant MFA, and managed network requirements. Think Salesforce, GitHub, AWS, production databases.
  • Tier 2 (Medium sensitivity): Accessible from registered and known devices (including personal phones enrolled in Okta Verify or similar). Think collaboration tools, project management, internal wikis.
  • Tier 3 (Low sensitivity): Accessible from any device with basic authentication. Think learning platforms, low-risk internal tools.

This model works because it creates friction proportional to sensitivity. Your most sensitive applications are the hardest to reach, and they’re only reachable from devices you trust.
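The tiering above can be sketched as a simple policy check. The app names, tier assignments, and device classes are illustrative, loosely modelled on the Okta-style levels of assurance described earlier, not any real policy API.

```python
# Lower tier number = more sensitive. Assignments are examples only.
TIERS = {
    "salesforce": 1, "github": 1, "aws": 1,  # high sensitivity
    "jira": 2, "confluence": 2,              # medium sensitivity
    "learning-portal": 3,                    # low sensitivity
}

# The most sensitive tier each device class is trusted to reach.
DEVICE_MAX_TIER = {
    "managed-laptop": 1,    # corporate-managed, EDR, phishing-resistant MFA
    "registered-phone": 2,  # enrolled personal device
    "unknown-device": 3,    # basic authentication only
}


def allowed(app: str, device: str) -> bool:
    # The device's trust ceiling must reach the app's tier.
    return TIERS[app] >= DEVICE_MAX_TIER[device]


print(allowed("github", "managed-laptop"))    # True: Tier 1 app, Tier 1 device
print(allowed("github", "registered-phone"))  # False: Tier 1 app, Tier 2 device
```

The friction-proportional-to-sensitivity property falls directly out of this check: the ceiling only rises as device guarantees get stronger.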

The device posture piece is important here. The managed device requirement confirms that the device is owned by your company and is in a known good security state: disk encryption, an up-to-date OS, EDR running, no jailbreak. Personal phones are generally pretty secure these days, but standard personal PCs usually aren’t. There have been plenty of incidents where an employee’s personal device has been compromised and used to pivot into corporate data. The LastPass breach a few years back is a good example: an attacker targeted a developer’s personal home computer to eventually access corporate vault data. You can still apply good zero trust principles like strong MFA everywhere, but limiting access to a set of known devices that meet specific criteria helps cut down on these sorts of issues, including insider threat scenarios.

How AI Tools Break This

AI tools break this model in a way that’s hard to fix with existing controls. Your AI tool is connected to Tier 1 applications via MCP servers and API tokens. Those connections don’t go through your IdP for each interaction. Once the OAuth grant is established or the API token is configured, the AI tool has standing access. Now, that same AI tool is remotely controllable from a personal phone or home PC, which, in your tiered model, should only have access to Tier 2 or Tier 3 applications at best.

The result is that an unmanaged device now has indirect access to Tier 1 applications. Not through the front door, where your conditional access policies would block it, but through the AI tool as a proxy. Your zero trust policies are still enforced at the application layer, but they’re being bypassed at the agent layer.
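The bypass in the two paragraphs above can be made concrete with a sketch. Both functions and the token string are hypothetical; the contrast is between a per-login check at the application and a standing credential held by the agent.

```python
def front_door_login(app_tier: int, device_trusted: bool) -> bool:
    """What the IdP enforces at login: Tier 1 apps require a trusted device."""
    return device_trusted or app_tier > 1


def agent_api_call(token: str) -> bool:
    """What the agent's connection checks: only the standing token.

    Device posture of the remote-control session driving the agent
    is never consulted.
    """
    return token == "standing-oauth-grant"  # hypothetical grant value


# Direct access from a personal phone is blocked at the front door...
print(front_door_login(app_tier=1, device_trusted=False))  # False
# ...but the same phone, driving the agent remotely, sails through.
print(agent_api_call("standing-oauth-grant"))              # True
```

Same user, same phone, same Tier 1 data: one path is evaluated by your conditional access policies, the other isn't.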

This is made worse by the fact that leadership across most organisations is actively pushing people to use AI more and squeeze out every productivity gain they can. If you tell the sales team they can’t connect Salesforce to their AI tool, you’re going to get pushback from the sales team and almost certainly from leadership as well. Security teams are in a tough spot here because the pressure to enable these integrations is coming from both leadership and the users.

To be fair, this has always been somewhat of an issue with API tokens. Any developer with a personal access token for GitHub could use it from any device, regardless of your zero trust posture checks. Some of the more stringent security teams mitigated this with IP allowlisting on API tokens, restricting usage to known corporate IP ranges or VPN egress points. That was the exception rather than the norm, however, and typically only developers held such tokens. Now the pattern has hit the mainstream and its popularity has exploded.

What You Can Actually Do

It’s worth noting that the AI tool vendors themselves could fix a lot of this. I could totally see something like dynamic connection selection based on access context, where your AI tool detects you’re connecting via remote control from an unmanaged device and automatically limits which MCP integrations are available in that session. You’d get access to your low-sensitivity tools like docs and calendar, but Salesforce and GitHub get gated behind a managed device session. The problem is working out how the AI tool would actually know what you can and can’t access. It would need to tie into your existing tiering system and IdP somehow, pulling context about device posture and user risk from Okta or Entra ID in real time. Otherwise you’re just recreating all of your conditional access rules inside yet another tool, which is a maintenance nightmare and a recipe for drift. It’s a solvable problem, but it requires the AI vendors and the identity vendors to work together closely, and we’re not there yet.
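What dynamic connection selection might look like, as a sketch. To be clear, no vendor ships this today: the function, posture labels, and integration names are all invented for illustration, and the hard part, pulling real posture from the IdP, is reduced to a single string argument here.

```python
# Hypothetical sketch of "dynamic connection selection": the AI tool
# gates which MCP integrations a session may use based on device
# posture context pulled from the IdP. Everything here is invented.

SENSITIVE = {"salesforce", "github"}
LOW_RISK = {"docs", "calendar"}


def integrations_for_session(device_posture: str) -> set[str]:
    """Return the integrations available to a session.

    A remote-control session from an unmanaged device only sees
    low-sensitivity integrations; a managed-device session sees all.
    """
    if device_posture == "managed":
        return SENSITIVE | LOW_RISK
    return set(LOW_RISK)


print(sorted(integrations_for_session("managed")))
print(sorted(integrations_for_session("unmanaged-remote")))  # docs/calendar only
```

The sketch also shows why the IdP integration matters: without a trustworthy `device_posture` signal fed in from Okta or Entra ID, this check is just a second copy of your conditional access rules waiting to drift.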

In the meantime, here are your options, roughly ordered from most secure to least secure:

IP allowlist your highest tier apps. Restrict API and application access to known corporate IP ranges. This gives you strong guarantees that connections are coming from a managed device, as long as you also block remote tunnelling tools like SSH, ngrok, and similar channels that could proxy traffic from the device. Claude Code Remote Control and similar features work from the device itself, so the API calls still originate from an allowed IP, but if you block the remote control tooling at the endpoint level, you close that gap. The downside is significant friction for your staff and real overhead to maintain the allowlists. It works, but it’s heavy.

Don’t connect AI tools to your highest tier apps. Take an all-or-nothing approach: AI tools get access to lower sensitivity systems, but your most sensitive apps are simply off limits. This avoids the remote access bypass entirely since there’s nothing sensitive to reach. The friction here can be even worse, though, because there’s currently no way to limit AI tool connections to managed devices only. And given that leadership is pushing everyone to adopt AI tooling, outright blocking integrations can have unintended consequences. People will work around it by handing AI tools personal API tokens or using unofficial integrations, which is less secure than the official setup you’re trying to protect. This approach really requires you to think carefully about data flows and the sensitivity of data in each connected system. You could allow remote control scenarios in this model, but people wouldn’t be able to do anything useful with sensitive systems that they can’t already do from their phone.

Invest in agent detection and governance. Use tooling to get visibility into what agents people are running, what systems those agents are connecting to, and what commands are being executed. This doesn’t prevent the bypass, but at least you can see it happening and respond. It’s a detection-and-response approach that accepts the model is broken but gives you the data to manage the risk. This is probably the most pragmatic starting point for most organisations.

Eliminate the managed device tier entirely. Accept that the managed device boundary has too many holes to be meaningful and collapse your tiers. Move to a model where everything requires a registered device (BYOD-style) with strong authentication. This isn’t a terrible approach for some organisations, especially as we move toward agent-based workflows where the device running the agent matters less than the identity and permissions behind it. You lose some of the posture guarantees, but you gain a model that’s at least honest about what it can and can’t enforce.

Skip device posture, focus on the most important parts. If you’re not going to enforce device posture, at minimum implement the core controls that provide the most value regardless of device state. Strong MFA everywhere, phishing-resistant authenticators, short session lifetimes, and robust logging. Device posture is a useful signal, but it was never the whole story, and strong authentication alone blocks the vast majority of credential-based attacks.

Looking Forward

Many years ago it was unheard of to allow BYO devices into the enterprise, but then email, Slack, and other collaboration tools became more widely accepted on personal devices and the boundaries shifted. Tools like GitHub, CI/CD pipelines, and now AI agents will go the same way. Some more regulated organisations will push back on security grounds and others will open things up. It’ll be up to enterprise security teams to decide where they land, and they will be pushed by leadership and company culture one way or another.

The reality is that defense in depth still matters, but the boundaries have shifted. We need to start thinking about AI tools as a distinct layer in our security architecture, not just another SaaS app to throw behind Okta. They are aggregation points for access, and they need to be treated with the same care we give to our IdP, our HRIS, and our production infrastructure.

If you’re building or deploying AI agent workflows right now, the most important thing you can do is ensure your security architecture accounts for the fact that these tools are becoming the single most connected piece of software in your environment. Build your access controls, logging, and incident response plans around that reality, not around the comfortable fiction that your tiered zero trust model is still doing what you designed it to do.