This is one of those posts where I talk about big ideas, and big ideas are unfortunately cringe. You’ll have to forgive me. But for a moment, I ask that you allow yourself to be cringe too. Read this like no one is watching. Step away from the details of your day to day work, and think with me about where we are headed and what that means for the work that needs to be done.
Let’s start here: The future is AI. It’s the technology that will shape the rest of our careers. I don’t know exactly what will happen, but the use of AI is going to grow. It is going to do more impressive things, it’s going to find new use cases, and it’s going to transform industries.
But AI has a cybersecurity problem. The entire model of the internet is built around human actors and mindless automated systems. It doesn’t account for a world of AI agents, non-deterministic software, and the seamless interweaving of human and AI decision making.
That’s going to be a huge problem, and it will hold AI back. Not at first, no. But when the value of AI starts to be realized, so will the exploits. The biggest impediment to AI progress won’t be the models or the tooling, it will be the security.
So, where are all of the security people?
AI’s Cybersecurity Problem
Before answering that, I want to discuss why I think cybersecurity, not models or tooling, will be AI’s great limiting factor if left unaddressed.
I don’t think anyone seriously doubts the existence of security problems in this space. However, people often view security with a nihilism of sorts; they think that even if security is a huge problem, no one will care. Leadership teams have, at least according to some, left security understaffed and underfunded for years, so why would anything change now?
What’s different is that we are entering a world where the known vulnerabilities don’t have patches. AI agents are vulnerable software that we don’t know how to fix without limiting their powers. And as the agents become more powerful, their access grows, and the value of exploiting the vulnerabilities increases.
Maybe a few executives won’t quite understand that and will land on the “find out” side of the FAFO equation, but after a few early sacrifices it will become clear that this doesn’t work.
Hence, I’m convinced that within the next few years the challenge won’t so much be what an AI agent is capable of doing, but what an AI agent is capable of doing securely. And then what? Do we give up on the full potential of AI because we can’t solve the security problem? That would be a tragedy. This is an area that desperately needs solutions before it grinds progress to a halt.
Cybersecurity’s AI Problem
There’s a notable absence of cybersecurity people tackling the challenges of AI. I recently had the misfortune of listening to just about every cybersecurity podcast that exists, and I probably listened to half a dozen episodes about AI. I have to admit that the quality of discussion on the topic was poor. Cybersecurity professionals seem to be stuck in a GPT-3-era understanding of AI.
This applies both to the use of AI in cybersecurity and to securing AI systems. The former is still extremely primitive and wildly underexplored; the tooling frankly hasn’t evolved in any dramatic way over the last few years, the way it has in software development. And the latter is hardly a discussion topic at all in most cybersecurity circles. It’s a niche subtopic at best.
But it is also hard to blame them. Unlike software development, the tooling in the cybersecurity space is far more opaque. Developers get to “feel” the AI, whereas cybersecurity professionals experience the output from a black box. Vendors claim that they’re using AI behind the scenes, and that is where the state of the art is supposedly happening. You can understand the skepticism, especially when the products used in 2020 are still in use today, just with some AI copy slapped on top. There’s certainly no Cursor or Claude Code but for cybersecurity. Some SOCs have AI tooling that helps them triage alerts, but there’s nothing revolutionary about that.
Plus, threat actors really haven’t moved much in this space either. You get these silly reports, like the one by Anthropic, trying to claim otherwise. But there’s just not been much happening. Nothing transformative at least. If you work in cybersecurity and ignored AI entirely, you’re probably doing just fine.
This has created an environment that makes it easy to sleep on AI. Not much has happened. And that attitude almost certainly explains why there’s so little interest in finding solutions to secure AI too, or at least discussing whether solutions might exist.
The Unsolved Problem
There are cybersecurity people reading this who are likely shaking their heads in disagreement because they believe this is a solved problem.
In one camp, you have people who believe that securing AI is the same as securing any application: limit permissions and ensure visibility. I think that view falls very short, because in practice it amounts to saying “don’t do that”. No, don’t connect your AI to production. Don’t connect it to your email, and don’t let it make bank transactions. And while I agree with that advice today, it does little to help us reach the world we want to get to. If we can’t do those things, we’ve lost much of the future promise of AI. So that doesn’t solve our problem.
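To make the “limit permissions” camp concrete: in practice it usually looks like a hard allowlist wrapped around every tool call an agent makes. Here is a minimal sketch of that idea; all names (`ALLOWED_TOOLS`, `gated_tool_call`, `run_tool`) are hypothetical illustrations, not any real agent framework’s API.

```python
# Minimal sketch of permission-gating an AI agent's tool calls.
# The policy fails closed: anything not explicitly allowed is refused.
# All identifiers here are made up for illustration.

ALLOWED_TOOLS = {"search_docs", "read_ticket"}  # note what's absent: email, prod, payments

def run_tool(tool_name: str, args: dict) -> str:
    """Stand-in for real tool execution."""
    return f"ran {tool_name} with {args}"

def gated_tool_call(tool_name: str, args: dict) -> str:
    """Refuse any tool the policy does not explicitly permit."""
    if tool_name not in ALLOWED_TOOLS:
        # Surface the attempt for visibility, then deny.
        return f"DENIED: agent attempted disallowed tool '{tool_name}'"
    return run_tool(tool_name, args)
```

Notice that the gate only provides safety by subtraction: the agent is secure precisely to the extent that email, production, and payments stay off the list, which is exactly the limitation described above.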
In another camp there are people who want to treat AI the same as human risk. There’s a sort of appeal to that, I’ll concede. But the accountability mechanism that applies to people is totally missing. AI doesn’t go to prison when it commits a crime. AI doesn’t lose its job when it screws up (or at least it doesn’t care if it does). AI isn’t human, and treating it as one doesn’t get us very far.
That leaves us still without a solution. Even if AI could do everything a human can do, the cybersecurity problem is what stops that from happening. That’s the problem to solve.
Aligning Capabilities and Use
Much of what defines modern cybersecurity is the product of work done by people who just really love computers. They were deeply interested in technology and helped push the field forward, often more as a hobby than a job. A whole industry was born from that work.
AI security isn’t quite as greenfield as that. I don’t think a bunch of gen Z AI kids are about to solve this puzzle for us. If we are going to make progress, we need the people who understand cybersecurity today to contribute.
Maybe it isn’t possible. Maybe granting god-like powers to a non-deterministic entity simply can’t be achieved without introducing catastrophic tail risk that no rational system will accept. But I don’t believe that myself. I’m hoping others will choose not to believe it either, and through that conviction find a way to make this world possible.
In other words, if cybersecurity proves to be the bottleneck blocking AI agents, I hope it isn’t for a lack of trying.