The AI Paradox: Weapon and Flood, Same Source

Two stories broke this month that, on the surface, have nothing to do with each other. Anthropic released Claude Mythos — an AI model that can autonomously hack corporate networks — and NIST gave up trying to analyze every CVE in its National Vulnerability Database. Read them together, and you see the shape of the problem that will define cybersecurity for the next decade.

AI is simultaneously flooding the vulnerability pipeline and weaponizing it. The same technology that helps researchers find bugs at machine speed is now capable of exploiting them at machine speed. And the institutions we built to manage this ecosystem are cracking under the pressure.

Claude Mythos: The Model That Hacks Back

Anthropic's Claude Mythos isn't just another chatbot update. According to testing by the UK's AI Security Institute (AISI), it's a genuine step change in AI's offensive cybersecurity capabilities.

The numbers are stark:

  • 73% success rate on expert-level Capture the Flag challenges — before 2025, no LLM could solve a single one
  • 24 out of 32 steps completed in simulated full-network takeover attacks — previous models maxed out at 16
  • Narrowed the gap between script kiddies and mid-level hackers — AI is democratizing offensive capability

The AISI researchers put it bluntly: Mythos is "at least capable" of autonomously taking down smaller, weakly defended enterprise networks. They cautioned that their test environments lacked active defenders and security tooling, so real-world results would vary. But the direction is unmistakable.

A joint report from the Cloud Security Alliance, SANS Institute, and OWASP — with contributors including former CISA director Jen Easterly and Google CISO Heather Adkins — concludes that organizations are "likely to be overwhelmed" by threat actors using AI to find and exploit vulnerabilities faster than defenders can patch them. Attackers get asymmetric benefits: they can adopt AI tools without the bureaucracy, compliance reviews, and risk assessments that slow down enterprise defenders.

As SANS Chief AI Officer Robert Lee and his co-authors wrote: "The cost and capability floor to exploit discovery is dropping, the time between disclosure and weaponization is compressing toward zero, and capabilities that previously required nation-state resources are now becoming broadly accessible."

NIST Throws In the Towel on Full CVE Analysis

Meanwhile, NIST quietly announced something that would have been unthinkable five years ago: it can no longer analyze every CVE submitted to the National Vulnerability Database.

CVE submissions surged 263% between 2020 and 2025. In Q1 2026 alone, submissions were nearly a third higher than the same period last year. NIST enriched nearly 42,000 CVEs in 2025 — a 45% increase year-over-year — but it wasn't enough.

Under the new risk-based triage model that took effect April 15, 2026, NIST will only fully enrich CVEs that meet one of three criteria:

  • The vulnerability is in CISA's Known Exploited Vulnerabilities (KEV) catalog
  • It affects software used by the U.S. federal government
  • It affects software classified as critical under Executive Order 14028

Everything else? It goes into a "Not Scheduled" bucket. Still listed, but without the severity scores and product data that security teams rely on to prioritize patching. NIST is also sweeping its existing backlog — all unenriched CVEs published before March 1, 2026, are being demoted to "Not Scheduled" unless they're in the KEV catalog.
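The new triage model boils down to a simple decision rule. Here's a minimal sketch in Python — the set contents and names are invented for illustration, not NIST's actual schema or data feeds:

```python
# Illustrative stand-ins for the real data sources. In practice these would
# come from CISA's KEV feed, federal software inventories, and EO 14028
# critical-software designations.
KEV_CATALOG = {"CVE-2026-0001"}       # CISA Known Exploited Vulnerabilities
FEDERAL_SOFTWARE = {"acme-vpn"}       # software used by the U.S. federal government
EO14028_CRITICAL = {"openssl"}        # classified critical under Executive Order 14028

def triage(cve_id: str, product: str) -> str:
    """Return the enrichment status a CVE would get under the risk-based model."""
    if (cve_id in KEV_CATALOG
            or product in FEDERAL_SOFTWARE
            or product in EO14028_CRITICAL):
        return "Full enrichment"
    # Still listed in the NVD, but with no severity score or product data
    return "Not Scheduled"
```

The point of the sketch: anything that misses all three criteria falls through to "Not Scheduled" by default, which is exactly why defenders can no longer assume an NVD severity score will show up.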

While NIST doesn't directly blame AI for the surge, the industry sees the connection clearly. As Vincenzo Iozzo, CEO of SlashID, noted: "We've seen a dramatic spike in AI-reported valid vulnerabilities. Last year alone, the number of reported vulnerabilities more than doubled."

The Feedback Loop

Here's what makes this moment different from every previous escalation in cybersecurity: it's a feedback loop.

AI models like Mythos are trained on vulnerability data — the same data that NIST is now struggling to process. As AI finds more vulnerabilities, more CVEs get submitted, which overloads NIST, which means fewer CVEs get enriched, which means defenders have less information, which means more unpatched systems, which means more targets for AI-powered attacks.

Casey Ellis, CTO of Bugcrowd, put it perfectly: AI tools are succeeding by "living in the places we stopped looking a decade ago" — forgotten firmware, abandoned routers, legacy codebases. The technical debt that organizations have been ignoring is now a buffet that AI can systematically harvest.

And the attacker-defender asymmetry is getting worse. Corporations and governments run on consensus, hierarchy, and compliance. Attackers running AI tools don't need committee approval. They don't need change management tickets. They don't need to wait for the next sprint. As Ellis wrote, the knob on the traditional defender's dilemma "used to go to ten" — AI has "turned it to seven hundred."

Anthropic's Response: Project Glasswing

To its credit, Anthropic isn't just releasing Mythos into the wild. The model is not being sold commercially. Instead, it's being made available through Project Glasswing — a consortium of major tech companies that will use Mythos to find and patch vulnerabilities in commonly used products and services. Think of it as offensive AI being channeled into defensive work.

But the CSA/SANS/OWASP report makes clear that this defensive use case faces the same institutional friction that always slows down security work: the vulnerabilities Mythos finds through Glasswing still need to be reported, triaged, patched, and deployed — a process that takes organizations weeks or months. Attackers don't have that lag.

What This Means for You

The era of waiting for a CVE score before acting is over. NIST literally just said so. Here's what you should be doing instead:

  • Assume unknown vulnerabilities already exist in your software. Deploy protections that can prevent exploitation before a patch or CVE score is available
  • Use diverse vulnerability data sources — don't rely solely on NVD enrichment. Commercial feeds, threat intelligence, and now AI-powered scanning tools are all part of the picture
  • Reduce your attack surface ruthlessly. Every forgotten service, every unsupported router, every legacy API is now a target that AI can find and exploit automatically
  • Adopt AI for defense. If attackers are using AI to find your vulnerabilities in minutes instead of weeks, you need AI to detect and respond in minutes instead of weeks too
  • Automate your patching pipeline. The time between disclosure and weaponization is compressing toward zero. If your patch cycle is measured in weeks, you're already too slow
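Taken together, these steps imply a prioritization scheme that doesn't wait for an NVD severity score: rank by evidence of exploitation and exposure first, and treat CVSS as a tiebreaker. A hedged sketch — the weights and field names here are invented for illustration, not a standard scoring scheme:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    cve_id: str
    in_kev: bool                  # listed in CISA's KEV catalog
    internet_exposed: bool        # reachable from outside the perimeter
    cvss: Optional[float] = None  # may be missing now that NVD no longer enriches everything

def priority(f: Finding) -> float:
    """Higher = patch sooner. Exploitation evidence dominates; CVSS breaks ties."""
    score = 0.0
    if f.in_kev:
        score += 100.0   # actively exploited: patch regardless of severity score
    if f.internet_exposed:
        score += 50.0    # exposed attack surface that AI scanners will find
    score += f.cvss or 0.0   # a missing score contributes nothing either way
    return score

findings = [
    Finding("CVE-2026-1111", in_kev=False, internet_exposed=False, cvss=9.8),
    Finding("CVE-2026-2222", in_kev=True, internet_exposed=True, cvss=None),
]
findings.sort(key=priority, reverse=True)
```

Note what the ordering does: the unscored-but-exploited CVE outranks the critical-CVSS one sitting on an internal box. That inversion is the whole argument of this section in two lines of arithmetic.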

The Bottom Line

We're in a new era. The same AI that helps defenders find bugs is now capable of exploiting them autonomously. The same AI flooding the CVE pipeline is creating the unpatched attack surface it can then weaponize. And the institutions we built to manage this ecosystem — NIST, CISA, the CVE program — are being forced to triage rather than process everything comprehensively.

This isn't a reason to panic. It's a reason to adapt. The organizations that survive this transition will be the ones that stop waiting for someone else to tell them what's vulnerable and start assuming everything is.