An inconspicuous moment in artificial intelligence history arrived this week when Anthropic's Claude Opus 4.6 examined code written in 1986 and identified security problems nobody had caught in four decades. The software in question was a utility for the Apple II personal computer, written in pure 6502 machine language. But this wasn't nostalgia. It was a warning.

The real concern here is not that 40-year-old hobby software has bugs. It is that AI can now decompile machine code and reason about its security properties the way an experienced human researcher would. That capability, when applied at scale, exposes a vast blind spot in global infrastructure.
Billions of microcontrollers are embedded in everything from industrial sensors to medical devices to IoT networks. Many of these devices run firmware written decades ago, often by contractors or in-house teams who had no formal security training. The code was rarely subjected to rigorous auditing. It was simply burnt into chips and forgotten.
Now imagine what happens when a determined adversary points an AI tool at that firmware. Anthropic has already demonstrated that Claude found more than 500 zero-day vulnerabilities in well-tested open source projects, some of which had gone undetected for years despite millions of hours of automated testing. Legacy microcontroller code has had no such scrutiny.
The Dual-Use Problem
This is the uncomfortable part. The very capability that allows defenders to harden their systems before attackers find the flaws can just as easily be weaponised. An attacker with access to Claude can use it to systematically hunt vulnerabilities in legacy firmware faster than any human researcher. And unlike modern software, which gets security patches and updates, many embedded devices cannot be patched at all. Once deployed, they are frozen in time.
Anthropic and others in the security industry are aware of this risk. The company has implemented detection systems called "probes" that monitor Claude's internal activity for signs of malicious use. It has also restricted access to Claude Code Security to authorised users. But these are reactive controls in a fundamentally asymmetric arms race. Defenders must find and fix every flaw. Attackers only need to find one.
There is a legitimate counterargument. Many security researchers argue that accelerated vulnerability discovery is, on balance, beneficial: if AI can find bugs faster than humans, then defenders who move quickly gain the advantage. Mozilla's collaboration with Anthropic identified and patched 22 high-severity Firefox vulnerabilities in just two weeks, a powerful demonstration of AI-assisted defense in action.
But the Firefox case is almost the ideal scenario. Firefox is modern, widely deployed, maintained by a dedicated security team, and running on systems that can be updated over the internet. The vast majority of legacy microcontroller firmware has none of these properties.
The Scale of Neglect
A crucial detail often overlooked in the recent hype around AI vulnerability discovery is that this technology does not solve the fundamental problem facing security teams: they are drowning in alerts. AI-generated security reports often carry a high false positive rate and place an enormous burden on maintainers who are already stretched thin. For open source projects maintained by volunteers, a deluge of AI-discovered vulnerabilities can overwhelm rather than help.
For legacy embedded systems, the problem is worse. Many of these devices have no development team to speak of. The original engineers have moved on. The hardware manufacturer may no longer exist. Finding a vulnerability in a device's firmware is almost meaningless if there is no path to patching it.
The practical implication is sobering. We are entering a period in which AI can find security flaws faster than the world can fix them. That creates a window of vulnerability that grows wider each month as AI models improve. Standard coordinated disclosure timelines, typically 90 days from private report to public disclosure, may become obsolete. The security community will need to move much faster.
Microsoft Azure CTO Mark Russinovich, who first shared the Apple II example publicly, framed the challenge clearly: "We are entering an era of automated, AI-accelerated vulnerability discovery that will be leveraged by both defenders and attackers." That is the situation. The question now is whether defenders can move fast enough to stay ahead.