Over the past few weeks, my feed has been filled with a juxtaposition: security people warning about the risks of AI browsers and SF tech bros saying they have 10x’d their performance by using an AI browser to order lunch from DoorDash.

A lot of people are crying about prompt injection, but that isn’t the only risk posed by AI browsers. The guidance seems to be very much “just block them” at this stage, and while that’s a pragmatic short-term solution, I don’t think it’ll work for very long.

Below is a short dive into the security risks, where I think AI browsers are headed, and ultimately, why hard blocking will only work for a while.

AI Browser Evolution

Today, many people are simply banning AI browsers, but what even is an AI browser at this point? You can just install extensions into regular browsers to get some of the same functionality. I’ve seen people using https://browsermcp.io/ for a while now to do much the same things, but it’s also hugely insecure and something most people aren’t banning because it’s simply not as popular.

My view is that within 6 months, you’ll be forced to use an AI browser even if you don’t want to. What I mean by this is that the following is likely to happen:

  • AI companies will merge their desktop apps and browser products. A browser is the natural evolution of the OpenAI app. There’s no point in these companies offering different apps for the same thing. Getting you in the browser increases the time you spend in the product and, ultimately, what you can do with it.
  • Traditional browsers, like Chrome and Edge, are major players in AI, and they won’t want to lose their market dominance. Yes, they’ll probably have ways for enterprises to turn it off, but it’ll almost certainly come baked in as the default for normal use. At the moment, these are mostly locked to summarization rather than actions, but it won’t stay like that forever.

Enterprise browser companies will likely try to use this as a selling point. I think the gap between enterprise browsers and consumer browsers will grow even more than it is today. I’ve historically been a hater of enterprise browsers. People don’t want to use some knockoff of Chrome; they want to just use the browsers they know, but I can see a market potentially opening up if traditional browsers force AI on enterprises.

There will likely be holdouts like Brave and Firefox that aim to preserve privacy, and this is a good thing. Enterprises will have ways to turn the AI features off, but average users probably won’t, so having consumer alternatives is key.

Really, the only way I see this not happening is if enough people stop using and stop paying for these products. I suspect there will be some backlash amongst some of the user base, but if they don’t change, then the companies won’t change course. Companies have invested in AI products and need people to use AI features; this is the next logical step, and they don’t want to lose their market-dominant positions.

AI Browser Risk

People point to prompt injection as the primary issue, and rightly so, because it’s the worst of the issues, and highly likely we can’t fix it. It takes us back to the days of the old internet, where things like Flash Player existed, and just visiting a URL could get you owned via a 0-click vulnerability. Something that is extremely rare and hard to pull off in today’s world of mostly secure internet browsing.

Other than prompt injection, most of the risks aren’t unique by any means; all AI browser risk is just the same as agent risk, no different from MCP. It does make it easier, though. Not everyone knows how to use an API, and not every website has one, but most people can use a browser.

Diving deeper, I think we have a few risks here:

  • General Agent Risks - Deleting prod databases, sending data to the wrong email, and accidentally ordering you overpriced caviar on DoorDash. No different from a human here, but much more likely in AI contexts.
  • Prompt Injection - AI agents taking action on behalf of a command baked into a website. Really unfixable at this point and unlikely to get much better. Lots of variations of this, including image and video prompt injection, scripts baked into the sites, and more.
  • Lack of Understanding- All the AI browsers say that “you should just monitor its activity in agent mode, yet few of them provide a realistic way to do that. You can just watch the agent take actions and probably miss something, or you can dive deep into every request, at which point you may as well do it manually. I think the task block logging from Google’s new Antigravity release will probably make a feature here in the next few months. Realistically, though, are people going to monitor their logs? Probably not.
  • Employee Misuse - I’ve heard of some great examples here. Simulating basic activity, then slacking off, or automating those boring yearly security compliance training by having AI do it. Any sort of clickbox action can be easily automated, which really limits their effectiveness as a control.
  • Lack of Enterprise Controls - Not unique to AI by any means, but many of the new browsers don’t have things like audit logs, SOC2, etc. This’ll come with time, though.
  • Shadow IT Risk - Employees using AI browsers even though you don’t want them to, or using browsers you don’t have an enterprise agreement with, meaning they likely train on the input data in most cases.
  • Fraudulent Web Content - It would be really easy to put up some junk, overpriced products on a webstore and pay for ads to be top of the list. If someone asks an AI agent to buy some Lego on Amazon, maybe it picks the $ 1,000 single-brick fraudulent ad at the top.

Reducing Risk Today

Really, you only have a few options available to you:

  • Blocklisting - Just block the browsers. It’s the easy option for most via MDM or allowlisting tools, and it’s what even Gartner is recommending right now, which is surprising, knowing how vendor-focused they are. You can tie this into your Zero Trust setup so that AI browsers can’t access sensitive apps if a full block is too much, but this still comes with considerable risk.
  • Review Actions - Watch every action the agent takes and review all the logs. The UX to do this is awful today, and nobody is actually going to do it. Not to mention, there are almost no enterprise settings to collect these logs to your SIEM and perform detections on them.
  • Browser Security Extensions - You could install Push Security, LayerX, etc., to at least get some telemetry and logs, with new security features coming in the future as AI browsers add functionality.
  • Prompt Guards - Put in some prompt injection guards, but it’s analogous to a WAF. It helps prevent issues, but isn’t foolproof. You can bypass WAFs and prompt guards. It reduces your likelihood of being breached from 5% to 4%, and if an attacker really wants in, it’s unlikely to stop them; it just requires more effort on their part.
  • Logged Out Mode - OpenAI Atlas has a pretty good logged-out mode. If you aren’t logged into anything, it’s hard to actually have security issues. This significantly reduces the effectiveness of AI browsers, though, as you can’t do much if you aren’t logged in.

While all of the above are options available to you, the following is what I would do:

  1. Monitor AI browser usage to see if people are actually using these tools, as you would with any shadow IT.
  2. Build public guidance on how to use them correctly and securely in your company.
  3. Consider gating access behind a training requirement. Optional for the folks who consider this high risk.
  4. Limit the blast radius, just pick one browser from the lot if you can.
  5. Stop AI browsers from accessing sensitive data stores if you have a zero trust setup. Do not let them connect to enterprise apps with customer or sensitive data. (This is often a user agent string checks, so it can be bypassed)
  6. Work with that AI company to get enterprise settings, like logs to your SIEM etc.

AI Browser Usage

From initial reviews, most people don’t like these browsers right now. They’ve had a relatively lackluster release, and anecdotally, most people I know who were excited for them don’t actually use them as a daily driver, maybe once a week for a one-off task.

I personally don’t trust them enough to say, “Buy me some Warhammer on Amazon,” and have it buy exactly what I want at a reasonable price, and many others share the same sentiment. At most, I trust them to summarize content or take basic actions to fill in a form. I think, similar to agents, we’ll see a very slow uptake of real-world applications. At least for AI agents, the slow adoption is because we’re building out access-control patterns via features like cross-app access to support these use cases.

With AI browsers, I don’t see any immediate plans to reduce the risk, and so unless vendors force it upon people to speed up adoption, I don’t see it taking off any time soon, except for niche tasks. You can totally block them today, and it’s probably recommended, but have a plan for when it isn’t an option anymore, and start building this into your roadmaps for 2026.