Experts warn the cyber threat is already here

Banks, tech giants and governments around the world were sent scrambling last month to contain the Mythos vulnerability, an Anthropic model said to be so powerful that it has discovered thousands of previously unknown vulnerabilities in the world’s software infrastructure.
There’s one problem: the ability they’re worried about has arrived.
Cybersecurity experts and artificial intelligence researchers told CNBC that the software vulnerabilities exposed by Mythos can be detected using existing models, including those from Anthropic and OpenAI.
“What we’re seeing across the industry now is that people are able to reproduce the vulnerabilities found by Mythos through a clever combination of existing models and get very similar results,” said Ben Harris, CEO of cybersecurity firm watchTowr.
Mythos has shaken executives and policymakers alike out of concern that a dangerous new era of AI-powered cybercrime may be approaching. Anthropic has limited its release to a few American companies, including Apple, Amazon, JPMorgan Chase and Palo Alto Networks, to reduce the risk that bad actors get access to it.
Even with that precaution, the release has prompted the Trump administration to consider new government oversight of future models.
It’s the latest in a series of high-profile launches from Anthropic that have intensified its rivalry with OpenAI as the two AI giants move closer to their highly anticipated public offerings. A few weeks after the arrival of Mythos, OpenAI CEO Sam Altman announced GPT-5.5-Cyber, a model designed specifically for cybersecurity.
OpenAI on Thursday opened limited access to GPT-5.5-Cyber for vetted cybersecurity teams.
The controlled release of Mythos, part of a security initiative called Project Glasswing, was intended to give companies worldwide time to brace their cyber defenses against future attacks from criminal groups and rival nations.
“The risk is a huge increase in the number of attacks, in the number of breaches, in the financial damage caused by ransomware against schools, hospitals, not to mention banks,” Anthropic CEO Dario Amodei said this week at a company event.
‘Scary enough’
But for those fighting in the trenches of cyber warfare, one of the key capabilities Anthropic has touted — detecting software vulnerabilities at scale — has been around since last year.
“The models we have right now are powerful enough to detect zero days on a large scale, and this is scary enough,” Klaudia Kloc, CEO of cybersecurity firm Vidoc, told CNBC.
That’s been going on for “several months, if not a year,” she said.
The term “zero-day” refers to a previously unknown software flaw for which no patch yet exists, giving attackers a window to exploit it before defenders can react.
Researchers at Vidoc relied on a technique called “orchestration” to test whether they could find the same vulnerabilities Mythos found. As the name suggests, the process involves building a workflow that divides the code into small pieces, chains different tools or models together, and evaluates the results.
“We ran the old models against the same code base to see if we could find the same vulnerability,” Kloc said. “We did, with both the old OpenAI and Anthropic models.”
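In rough terms, such an orchestration workflow might look like the Python sketch below. A trivial pattern-matcher stands in for the model call; in a real pipeline that function would wrap an LLM API. All names here are illustrative, not Vidoc’s actual code:

```python
# Sketch of an "orchestration" workflow: split a codebase into small chunks,
# run each chunk through a scanner, and merge the findings. scan_chunk is a
# stand-in for a call to a model; all names are hypothetical.

def chunk_source(source: str, max_lines: int = 20) -> list[str]:
    """Split source code into windows small enough for one scanner call."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines]) for i in range(0, len(lines), max_lines)]

RISKY_CALLS = ("eval(", "os.system(", "pickle.loads(")  # toy stand-in "model"

def scan_chunk(chunk: str) -> list[str]:
    """Placeholder for a model call: flag lines containing risky patterns."""
    return [line.strip() for line in chunk.splitlines()
            if any(call in line for call in RISKY_CALLS)]

def scan_codebase(source: str) -> list[str]:
    """Chain the steps: chunk the code, scan each chunk, collect findings."""
    findings = []
    for chunk in chunk_source(source):
        findings.extend(scan_chunk(chunk))
    return findings

sample = "import os\n\ndef run(cmd):\n    os.system(cmd)  # shell injection risk\n"
print(scan_codebase(sample))  # flags the os.system call
```

The point of the structure, per the researchers, is that no single step needs a frontier model; the chunking and chaining do much of the work.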
Another cybersecurity company, Aisle, found that many of Mythos’s reported findings could be reproduced using cheaper models working in parallel – suggesting that scale and orchestration matter more than having the latest model.
“A thousand decent searchers looking everywhere will find more bugs than one brilliant searcher who has to guess where to look,” Aisle founder Stanislav Fort wrote in a blog post.
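Fort’s “breadth over brains” observation can be sketched as fanning many cheap scanner calls out in parallel across a codebase, rather than pointing one expensive model at a guessed hotspot. The pattern-matching scanner below stands in for a call to a small, inexpensive model; all names are hypothetical:

```python
# Many cheap scanners in parallel: each worker scans one file, and the
# findings are merged. scan_file is a stand-in for a cheap model call.
from concurrent.futures import ThreadPoolExecutor

def scan_file(name_and_source: tuple[str, str]) -> list[str]:
    name, source = name_and_source
    # Toy stand-in for a cheap model: flag obviously dangerous C patterns.
    return [f"{name}: {line.strip()}" for line in source.splitlines()
            if "strcpy(" in line or "gets(" in line]

files = {
    "auth.c": "char buf[8];\ngets(buf);\n",
    "net.c":  "strcpy(dst, src);\n",
    "ui.c":   'printf("hello");\n',
}

# Fan the per-file scans out across a worker pool, preserving input order.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = pool.map(scan_file, files.items())

findings = [f for per_file in results for f in per_file]
print(findings)
```

With real model calls (network-bound rather than CPU-bound), a thread pool like this is the idiomatic way to multiply coverage without a stronger model.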
In comments to CNBC, Anthropic did not deny that earlier models were able to detect software vulnerabilities.
In fact, a company spokesperson said, Anthropic has been warning for months that AI’s cyber capabilities are rapidly advancing. The spokesperson pointed to a February blog post showing that Claude Opus 4.6, its widely available model, found more than 500 “high severity” vulnerabilities in open-source software.
At the Anthropic event this week, Amodei echoed this point, saying that although Mythos detects software vulnerabilities at a higher rate than previous models, the trend itself is not new.
“The risks are very real. That’s why we took the steps we did,” Amodei said. “But also, in a way, that’s not surprising. … We’ve been seeing warnings of this for a while.”
Hysteria and panic
What sets Mythos apart is its ability to take the next step: developing working exploits with little or no human input, automating a process that previously required skilled researchers, an Anthropic spokesperson said.
But criminals working for gangs and rival nations already have these skills, security researchers say. Hackers in North Korea, China and Russia “know how to do this, with or without Anthropic,” Kloc said.
The threat of AI-enabled hacking has companies and government regulators worried about protecting critical systems from a new wave of ransomware and other types of attacks, according to Harris.
He described conversations with banks, insurance companies and regulators in recent weeks as “hysteria.”

Even before the advent of generative AI, companies faced the problem of skilled hackers exploiting newly discovered vulnerabilities within hours, while patching code often took days or weeks. Some patches require critical systems to be taken offline, which complicates matters.
“The industry is panicking about the volume of risk it is facing now,” Harris said. “But even before Mythos arrived, it couldn’t fix the weaknesses fast enough.”
Previously, only a small number of experts around the world had the skill and time to find obscure vulnerabilities in software and exploit them, according to Harris. Now, using the AI models currently available, the barrier to entry for mounting cyberattacks has been lowered.
That means banks and other targets will see more attacks, and that software programs that used to attract less attention from hackers will now face threats, Harris said.
Advantage: offense
While Anthropic, OpenAI and others are working to develop defensive capabilities to match the problems they’ve identified, the advantage for now lies with offense, not defense, the researchers said.
JPMorgan’s Jamie Dimon made a similar point last month, saying that while AI tools may eventually help companies protect themselves from cyberattacks, for now they make them more vulnerable.
“You have a significant increase in the volume of vulnerabilities discovered, but they don’t seem to have released a tool to help fix them,” said Justin Herring, a partner at law firm Mayer Brown and former deputy chief of cybersecurity at New York’s financial regulator.
“Risk management is the biggest Sisyphean task of cybersecurity,” Herring said.
The limited group included in Mythos’s first release got a head start on patching vulnerabilities, but everyone else was left out. AI researchers are not being given access to Mythos to independently verify Anthropic’s claims or begin building defenses against it.
Some say that has prevented the wider internet community from being part of the solution.
It has created “haves and have-nots,” which can slow the pace of cybersecurity innovation, said Pavel Gurvich, CEO of cybersecurity startup Tenzai, which uses Anthropic models.
Many cybersecurity startups are working on solutions that can help businesses in this new era of AI, he said.
“They’re trying to figure out the best way to fix the world before this hits the world,” said Ben Seri, co-founder of cybersecurity startup Zafran Security. “It’s a chicken-and-egg situation; some eggs are going to get broken. It’s inevitable.”




