Attack of the killer script kiddies

By News Room | 28 April 2026

Last August, some of the best cybersecurity teams in the business gathered in Las Vegas to demonstrate the strength of their AI bug-finding systems at DARPA’s Artificial Intelligence Cyber Challenge (AIxCC). The tools had scanned 54 million lines of actual software code that DARPA had injected with artificial flaws. The teams were capable enough to identify most of the artificial bugs, but their automated tools went beyond that — they found more than a dozen bugs that DARPA hadn’t inserted at all.

Even before the security earthquake that Anthropic delivered this month with Claude Mythos, the new AI model that seems to find vulnerabilities in every piece of software it's pointed at, automated systems were growing increasingly capable of finding coding flaws. And fears are growing that AI can not only detect these flaws but also be used to exploit them, putting hacking skills into the hands of everyone on the planet.

This isn’t an empty threat. For decades, this type of no-skill hacker, known as a script kiddie, has wreaked havoc by running scripts ripped from the internet or copied from exploit tool kits. Script kiddies didn’t fully understand these scripts, nor did they have the technical know-how to write them themselves. And yet they were still able to deface websites and propagate viruses.

What’s happening now represents a major escalation, where people without technical backgrounds are able to use AI to enhance their capabilities in a way that wasn’t possible with simple scripts. It is likely to have far more wide-reaching repercussions.

“There’s a tidal wave coming. You can see it. We can all see it,” said Dan Guido, CEO and cofounder of cybersecurity firm Trail of Bits, which was a runner-up in the challenge. “Are you going to lay down and die, or are you going to do something about it?”

Image: Joseph Rogers / The Verge

Even beyond Project Glasswing, Anthropic is trying to prevent the misuse of its software by criminals. A week after announcing Mythos, the company released Claude Opus 4.7, which for the first time built in safeguards meant to block malicious cybersecurity requests. (Security professionals who want to use the model defensively can apply to the company’s Cyber Verification Program.)

Anthropic’s announcement of Mythos sent shockwaves throughout the industry, but there were warning signs of AI’s cybersecurity prowess prior to it. In June 2025, the autonomous offensive security platform XBOW beat out human hackers to top the leaderboard of HackerOne, a bug bounty platform, indicating big leaps in the ability of AI models to find bugs.

By the time AIxCC rolled around, “there were already 10 to 20 different bug-finding systems that could find orders of magnitude more bugs than we could patch,” Guido said. “This is actually not a new problem.”

“2026 is the year when all security debt comes due… 2026 is the make-it-or-break-it year.”

AI is great at pattern matching, and it’s becoming easier and easier for people to find variants of bugs that are already known and ones that have not yet been discovered. And writing exploits is becoming easier as well.
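
Pre-AI scanners already did a crude version of this variant hunting through static pattern matching; a toy sketch, in which the pattern list and sample code are purely illustrative, gives a sense of the idea that AI models now perform at a far more semantic level:

```python
import re

# Illustrative regexes for classically dangerous C calls; real scanners
# (and AI models) match far richer semantic patterns than this.
KNOWN_BUG_PATTERNS = {
    "unbounded string copy": re.compile(r"\bstrcpy\s*\("),
    "format string risk":    re.compile(r"\bprintf\s*\(\s*[a-zA-Z_]"),
    "unchecked gets":        re.compile(r"\bgets\s*\("),
}

def find_variants(source: str) -> list[tuple[int, str]]:
    """Return (line_number, bug_name) for every known-pattern hit."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in KNOWN_BUG_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

sample = "strcpy(dst, src);\nprintf(user_input);\n"
print(find_variants(sample))
# → [(1, 'unbounded string copy'), (2, 'format string risk')]
```

The leap the article describes is from brittle syntactic rules like these to models that can recognize a vulnerability's underlying shape across different codebases.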

“You can use AI tools and, with very minimal human guidance (in some cases no human guidance), find a zero day in widely used software,” said Tim Becker, senior security researcher at Theori, which was also a finalist in the competition.

The concern is palpable across the industry, and improvements to models — along with improved understanding of their capabilities — are happening at lightning speed.

Open-weight models, or models whose trained parameters (also known as weights) are publicly available, also pose a risk. In fact, sophisticated threat actors would be far more likely to run their own deployments to prevent their exploits from being exposed on Anthropic or OpenAI servers, Becker said, as Anthropic may retain data to monitor abuse. And the industry is bracing for what may come next. Other model creators may not be as cautious as Anthropic, potentially unleashing their powerful new tools straight to the public.

“Mythos or not, this is coming,” Guido says.

Mythos represents a step up in writing exploits, but current models are capable, too. Security researchers are already using more widely available models to report vulnerabilities to vendors before they’re exploited in the wild. That also means malicious actors could use the same models for ill purposes, such as building exploits for oppressive regimes or stealing sensitive data.

Industry experts predict that the advancement in AI security capabilities is going to lead to a lot more exploits. Bad actors could direct AI to find bugs in uncommon pieces of software that no one previously would have put in the effort to exploit.

“Now, because effort is cheap, you can do things that are lower down the food chain. You can write exploits for software that only one company has. You can write exploits for software that exists in only one configuration that one company has. And you can do it on the fly. So during the middle of an intrusion into some hospital and there’s a wall standing between you and what you want, you can just point an LLM at that wall and say, ‘Figure out a flaw here,’ and it can grind until it’s successful. And it’ll find some vulnerability, it can find some configuration, it’ll run an exploit, for a weakness that no one ever has before, and it’ll do it with almost no effort on the part of the user… the hacker… the script kiddie,” said Guido.

This supercharges script kiddies, he says, because they’ll be able to operate on the fly, no longer constrained by having to memorize the weaknesses in random UNIX utilities; they can simply fall back on the pretraining of the tool they’re using. They’ll be able to iterate through exploits targeting weaknesses at machine speed, something that no human, let alone a script kiddie, can do.

It’s hard to determine exactly how much this is improving attacker capabilities, though there definitely seems to be a correlation. Security researchers can help us try to wrap our heads around the scale of bugs being discovered.

Before Becker started working on automatic bug finding with AI, he worked on vulnerability research, finding zero days and reporting them to maintainers. He said it used to take him weeks or months to find a high-impact vulnerability in a brand-new codebase, and now it only takes hours.

“I just drop the code into our AI bug-finding tool and in a couple hours I get a report with a bunch of candidate vulnerabilities, and most of them end up checking out and being real issues,” he said. “The bar to diving into a new million-line codebase and finding a bug is so much lower than it used to be.”

Every release of an automated tool has led to some level of panic about how it might be exploited, whether that’s text-to-image generators or open-source tools like the exploit development and delivery system Metasploit. The panic even goes back to 1995, when a free software vulnerability scanner named SATAN (an acronym for Security Administrator Tool for Analyzing Networks) was released.

Often, automated tools don’t lead to the level of mayhem that was predicted, thanks to prevention measures put in place, low adoption rates among attackers, or other factors.

Joshua Saxe, CTO and cofounder of Security Superintelligence Labs, wrote in a blog post that exploits themselves don’t cause cyberattacks, and that adoption of AI vulnerability research tools has been incremental.

“There seems to be an implicit mental model where some new adversarial tool becomes available… and therefore we will immediately see criminal behavior with those tools. It’s a kind of mental model where you don’t even have to think about or do any empirical inquiry into what the humans are actually doing,” he told The Verge.

Saxe points out that various attacker constituencies may face friction adopting these tools within their existing workflows and organizational cultures. “There’s a whole human and organizational element here,” he said.

“It may be that there are certain attacker constituencies that are going to jump on these new tools, or it might be that the adoption curve is quite slow.” Some may keep breaking into networks by phishing or using exploits they already have, while others might begin developing new exploits using these tools.

Image: Joseph Rogers / The Verge

While the rate of adoption is impossible to predict, there are steps companies can take to prepare for the coming onslaught of vulnerability reports.

Katie Moussouris, founder and CEO of Luta Security, coined the term “Vulnapalooza” in a blog post complete with a concert poster and festival survival guide for security teams, explaining that this is the moment for companies to secure their weaker points. The advice for companies is no different from standard best practices: segmentation, working on identity and access management, using memory-safe code, and using phishing-resistant authentication and up-to-date software.

The Cloud Security Alliance released an expedited strategy briefing on developing a “Mythos-ready” security plan detailing many of these concepts. The report also emphasized the need to not only patch vulnerabilities but also to identify which ones to prioritize. But the need to match machine-speed threats is new, and the number of bug reports is already skyrocketing; organizations will have to prepare for more incidents and mitigate and contain them at a faster rate.

Moussouris says that many people in cybersecurity roles have been laid off because of AI’s efficiencies, even though those efficiencies are exactly why more humans need to remain in the mix. Companies will need human threat hunters, threat intelligence officers, and incident responders to deal with the onslaught of new exploits. And they’ll need people to decide which patches to prioritize and implement.

“We don’t have the AI defensive equivalent to automate all of those tasks, and I think we’re going to need to staff up and hire a lot of people,” she said. And organizations will need to build out secure software and secure architecture for networks to avoid ending up in an endless cycle of patching. “You have to build more secure software in the first place. We can’t incident respond our way to resilience.”

Organizations that aren’t ready to hire people could at least streamline their vendor onboarding processes to make it easier to bring on people or services as needed. “You don’t want to be stuck in a four-month procurement process for a vendor when you’re under fire and can’t keep up with the patch rollout,” Moussouris said.

While many are concerned about vulnerabilities, Moussouris believes the so-called “vulnpocalypse” will actually manifest as a “patchpocalypse.”

“The model has already identified thousands of vulnerabilities, and that patch tsunami that’s about to come from this coordination effort, that’s going to be the first major pain point,” she said.

Organizations that are slow to patch their systems may have a rude awakening. Waiting too long risks active attacks on services that target vulnerabilities found by AI, perhaps even using exploits written by the models.

“From the time a vulnerability is announced to the time where there is exploit code available has now shrunk to pretty much zero, and that is a major adjustment that I think people will have to take into account in their risk assessments and how long they can take to do things and how many resources they are applying towards this problem,” she explained.

There is an opportunity to use AI to at least speed up the remediation or mitigation process. Becker says that Theori is building a commercial tool called Xint that it’s been running on open-source codebases, manually reporting high-severity findings to maintainers by sending detailed reports along with remediation suggestions on its own dime, both as a community hardening project and to demonstrate the tool’s capabilities. Xint’s current version was able to find all the bugs Mythos did when scanning the same codebases. It also found 12 additional zero-day vulnerabilities that were not part of Anthropic’s announcement.

But mitigating these bugs will not be as quick as finding them because it requires engineers who are extremely familiar with the codebase to determine whether the patches are the best way to fix the issues found or whether they may make the code less maintainable or harder to understand in the future. Sometimes a patch represents a way to fix a problem, but not the best way, so it’ll take human time and effort to get the solutions to the finish line.

The huge surge in bugs being reported can lead to a long queue of things to patch, especially for open-source maintainers, who may be unable to keep up with the load.

While not all bugs are useful in an attacker’s tool kit, sorting through the pile to determine which ones are a priority to fix can be almost as difficult as fixing them.

“A lot of the prioritization needs to be contextual,” Moussouris said. For example, a very bad bug running internally that would be hard for an outsider to access might be lower priority than a less critical bug that is exposed on the company’s perimeter.
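
Moussouris’s point about context can be sketched as a toy scoring rule; the weights and fields here are invented for illustration and are not any real triage standard:

```python
def patch_priority(severity: float, exposure: float) -> float:
    """Toy priority score: raw severity (0-10, CVSS-like) scaled by how
    reachable the vulnerable service is to an outside attacker
    (0.0 = internal only, 1.0 = internet-facing)."""
    return severity * exposure

# A critical bug buried on an internal-only service...
internal_critical = patch_priority(severity=9.8, exposure=0.1)
# ...can score below a moderate bug sitting on the network perimeter.
perimeter_moderate = patch_priority(severity=6.5, exposure=1.0)

print(internal_critical < perimeter_moderate)  # → True
```

Real triage would weigh many more factors (exploit availability, business impact, compensating controls), but the shape of the decision is the same: raw severity alone doesn’t determine what gets patched first.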

Beyond prioritization of bugs, organizations will also need to decide when to apply patches that restrict functionality and may even lead to downtime, and when to wait. The fewer security controls they have in place, the more time they will need for patching.

Simply putting out a patch makes it easier for attackers to reverse engineer the fix and exploit vulnerabilities they may otherwise have been unaware of on devices that have not yet been updated. That means consumers, too, will need to get used to updating their software as critical security fixes increase dramatically. And organizations will want to invest in secure architecture to minimize the number of patches they need to manage in the first place.

“The thing is, it’s now or never. There’s a tidal wave coming.”

But as Moussouris frames it, it doesn’t have to be a reason to despair. “You don’t have to treat it like this is going to be the worst thing that ever happened,” she told The Verge. “You can treat it like, this is our opportunity to shore up some defenses and get some budget to do things we’ve been putting off.”

Whatever attitude organizations take, they need to be prepared. The stakes are higher, and even script kiddies have a lot more opportunities to find and exploit vulnerabilities. Companies need a plan to deal with this new threat of AI-enabled attacks.

“2026 is the make-it-or-break-it year,” Guido said. Companies need to secure their systems now, while they still have time to get ahead. “And if they don’t do that, we’re going to end 2026 with everything on fire.”
