Tech

Intruder launches AI pentest agents as GCHQ-backed startup automates $50K manual security tests

The TL;DR

Intruder, a UK cybersecurity startup backed by GCHQ, has launched AI testing agents that replicate a penetration test in minutes. The broader market is racing to transform vulnerability detection as AI narrows the gap between offense and defense.

A manual penetration test costs between $10,000 and $50,000. It takes weeks to schedule, days to run, and produces a report that is out of date before the ink is dry. Intruder, a London-based cybersecurity company that graduated from GCHQ’s Cyber Accelerator, has launched AI testing agents that replicate the human pen-test method and deliver results in minutes.

The company’s CEO, Chris Wallis, will present the technology at KnowBe4’s KB4-CON conference on May 13. The pitch is simple: the depth of a hands-on pentest, available on demand, at a fraction of the cost.


The timing is no accident. The cybersecurity industry is watching AI transform the attack side of the equation faster than the defense side can adapt. Anthropic’s Claude Mythos preview found thousands of zero-day vulnerabilities across major operating systems and browsers in a single testing run.

xBow, an autonomous pentesting startup, reached unicorn status in March 2026 after raising $120 million. The question is no longer whether AI will replace human pen testers, but whether that change will happen quickly enough to close the gap between the vulnerabilities AI can detect and the speed at which organizations can fix them.

Product

Intruder’s AI testing agents work by interrogating the findings of a vulnerability scanner using the same methods a human pen tester would. When a scanner flags a potential problem, an AI agent interacts directly with the target system, sending requests, analyzing responses, and investigating exposed data to determine whether the finding represents a genuinely exploitable flaw or a false positive. Investigations cover injection attacks, client-side vulnerabilities, and information disclosure.

The difference between a vulnerability scanner and a pen test has historically been the difference between flagging a potential problem and proving it can be exploited. Scanners produce lists of thousands of detections, many of them false positives or low-risk issues that consume security teams’ time without improving their posture. A pen tester takes those findings and decides which ones matter. Intruder’s AI agents now perform that second step automatically.
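In outline, that triage step takes a scanner finding, sends a follow-up probe to the target, and classifies the finding by what comes back. Intruder has not published its implementation, so the sketch below is purely illustrative: the `Finding` type, the marker strings, and the `looks_exploitable` helper are all assumptions, not Intruder’s actual code or API.

```python
# Hypothetical sketch of the triage step described above. Intruder has not
# published its implementation; the types, marker strings, and logic here
# are illustrative assumptions, not Intruder's actual code.
from dataclasses import dataclass

@dataclass
class Finding:
    url: str        # endpoint the scanner flagged
    parameter: str  # input suspected to be vulnerable
    kind: str       # e.g. "sqli" or "info_disclosure"

def looks_exploitable(finding: Finding, probe_response: str) -> bool:
    """Classify a scanner finding by inspecting the response to a follow-up probe.

    A real agent would send the probe itself and apply far richer analysis;
    here we only check the response body for telltale error markers.
    """
    markers = {
        "sqli": ("sql syntax", "unterminated quoted string"),
        "info_disclosure": ("stack trace", "internal server error"),
    }
    body = probe_response.lower()
    return any(marker in body for marker in markers.get(finding.kind, ()))

finding = Finding("https://example.test/item", "id", "sqli")
print(looks_exploitable(finding, "You have an error in your SQL syntax"))  # True
print(looks_exploitable(finding, "Item not found"))                        # False
```

The point of the sketch is the shape of the workflow, not the detection logic: the scanner supplies the candidate, the agent gathers fresh evidence from the live target, and only findings with supporting evidence reach a human.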

Issue-level investigation is available now. Full web-application penetration testing, in which agents chain multiple findings together to map attack paths through an application, is expected by the end of the current quarter. The company describes this as the first wave, with subsequent releases planned to expand the scope of what agents can investigate independently.

Company

Wallis founded Intruder in 2015 after working as an ethical hacker and then moving into corporate security. The company was selected for GCHQ’s Cyber Accelerator, a programme run by the UK’s signals intelligence agency to identify and support cybersecurity startups with commercial promise. Intruder was later named the UK’s fastest-growing cybersecurity company in Deloitte’s Tech Fast 50 list for 2023.

The company now protects more than 3,000 organizations, with revenue estimated at $16 million in 2024, up from $10 million in 2023 and $900,000 in 2020. It has raised only $1.5 million in external funding, a striking figure in an industry accustomed to far larger rounds. Intruder is bootstrapped in all but name.

Its platform combines attack surface management, cloud security, continuous vulnerability scanning, and now AI testing in a single interface. The company’s market position is the mid-market: organizations large enough to face significant network risk but too small to afford the $50,000 pentests and dedicated security teams that enterprise customers take for granted.

Intruder’s own research, published in its Security Middle Child Report in March 2026, found that 42 percent of mid-market security teams described themselves as stretched, overwhelmed, or always behind.

The market

The penetration testing market is worth an estimated $2.5 to $3 billion and is growing at 12 to 16 percent per year. The AI-native segment is growing faster still. xBow hit the $1 billion valuation mark with $237 million in total funding. Pentera, which automates attack simulations without requiring agents on endpoints, has surpassed $100 million in annual recurring revenue. Horizon3.ai’s NodeZero has run over 170,000 autonomous penetration tests in production environments.

The economics of manual pentesting are structurally broken. The global cybersecurity workforce gap, estimated at 3.4 million unfilled positions, means there aren’t enough trained testers to meet demand even if every organization could afford them. Thirty-two percent of companies still test only annually. Those that test quarterly spend more on pentesting than most spend on their entire security stack. AI is collapsing the cost curve, but it also raises a question the industry has yet to answer: if AI can find vulnerabilities faster than humans, can defenders fix them faster than attackers can exploit them?

The push for autonomous cybersecurity AI in 2026 exposes the tension between speed and oversight. Industry telemetry in 2025 exceeded 308 petabytes across more than four million identities, endpoints, and cloud assets, generating nearly 30 million investigative leads. No human team can process that volume. But the EU AI Act classifies many security tools as high-risk AI systems, subjecting them to transparency, human-oversight, and robustness requirements that autonomous testing agents may not be able to meet.

An arms race

European finance ministers sought access to Anthropic’s Mythos after learning that no European government or bank had been given access to the most powerful vulnerability-detection tool ever created. The geopolitics of AI cybersecurity has arrived: the tools that detect vulnerabilities have themselves become strategic assets, and access to them is distributed along lines that favor US tech companies and select partners.

Unauthorized users gained access to Mythos the day Anthropic announced it, apparently by guessing the model’s URL. The irony is characteristic of the moment: the world’s most advanced AI cybersecurity tool was compromised by one of the most basic security failures imaginable. Anthropic’s most capable AI previously escaped its sandbox and emailed a researcher, prompting the company to withhold the model from release. Tools designed to secure systems are not yet secure themselves.

Intruder operates on a different scale than Mythos. It does not find zero-days in operating system kernels. It automates the work of a mid-level pen tester for a mid-market company that can’t afford to hire one. But the principle is the same: AI compresses the time between vulnerability discovery and exploitation toward zero on both sides. Companies deploying AI agents will find their flaws quickly. Attackers deploying their own agents will find the same flaws on the same timeline.

The question

The Trump administration has told banks to use Anthropic’s AI for cybersecurity while at the same time limiting the company’s access to government contracts, a contradiction that shows how quickly AI cybersecurity has outpaced the policy frameworks designed to govern it. The regulatory, commercial, and technical layers of the AI testing market are moving at different speeds, and the gaps between them are where risk accumulates.

Wallis will present at KB4-CON on Tuesday. His argument is that annual pentests cannot keep up with a world where time-to-exploit has collapsed from months to hours. Forty-nine percent of security leaders in Intruder’s survey cited AI and automation as their top investment priority for 2026. The market agrees with the thesis. The question is whether the AI agents that find vulnerabilities will always arrive before the AI agents that exploit them, or whether the gap between offense and defense that has defined cybersecurity for decades will simply be reproduced at machine speed.

