Hack and Slash

Google Alarmed by Formidable AI-Powered Zero-Day Cyberattack

"It's a taste of what's to come."
Frank Landymore
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

Google was rattled by a cyberattack that used AI to unearth a major software flaw that the program's own developers had no idea about.

The attack, which the New York Times reports was ultimately thwarted, was revealed by researchers at the tech giant on Monday. Their report didn’t specify who was behind it or when it occurred, but it was clear about the cutting-edge technology at the heart of it.

“We have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” reads the report.

Google said the hackers used AI to identify what’s known as a zero-day vulnerability: a flaw in a piece of software that wasn’t previously known to its developers. When exploited, such flaws leave developers on the back foot, as the hackers are free to wreak havoc until the white hats figure out how to plug the hole.

In this case, the zero-day bug would’ve allowed the hackers to bypass two-factor authentication on an unspecified “popular open-source, web-based system administration tool,” though only if the attackers already knew a victim’s username and password. That caveat offers little comfort: two-factor authentication is the last meaningful line of defense for most users, whose passwords are often weak or already leaked online, so the ability to sidestep it could’ve been catastrophic.

“The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use,” the report stated.

The researchers said this was the first known instance of hackers exploiting a zero-day vulnerability that was discovered with the help of AI.

“It’s a taste of what’s to come,” John Hultquist, the chief analyst at Google Threat Intelligence Group, which published the report, told the NYT. “We believe this is the tip of the iceberg. This problem is probably much bigger; this is just the first tangible evidence that we can see.”

The attack will add to the atmosphere of unease around AI’s implications for cybersecurity, particularly with the release of Anthropic’s Claude Mythos model last month. Anthropic claimed that the AI system could find zero-day vulnerabilities “in every major operating system and every major web browser when directed by a user to do so,” a capability so potentially devastating that the company made a show of only sharing the model with a select group of companies and government agencies. Its rollout has drawn alarm from government leaders and security experts alike.

AI’s cybersecurity threat derives from its much-touted and ever-improving ability to write and parse code, a capability being rapidly embraced by businesses across the tech and financial sectors. Like AI prose, AI code bears its own hallmarks, albeit subtler ones. The Google researchers found that the hackers’ malware contained an abundance of explanatory annotations known as docstrings, some hallucinated text, and “a structured, textbook Pythonic format highly characteristic of LLMs’ training data.”
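To picture what those tells look like, here’s a purely hypothetical sketch written for this article, not taken from the actual malware, which Google hasn’t published: an exhaustive docstring, tidy type hints, and textbook structure on even a trivial function.

    # Hypothetical illustration of LLM-style "tells"; this is NOT the
    # actual malware, which Google has not released.
    from pathlib import Path

    def read_config(path: str) -> dict[str, str]:
        """Read a simple key=value configuration file.

        Args:
            path: Filesystem path to the configuration file.

        Returns:
            A dictionary mapping configuration keys to their values.
        """
        config: dict[str, str] = {}
        for line in Path(path).read_text().splitlines():
            # Skip blank lines and comments, another textbook habit.
            stripped = line.strip()
            if not stripped or stripped.startswith("#"):
                continue
            key, _, value = stripped.partition("=")
            config[key.strip()] = value.strip()
        return config

None of these traits is suspicious on its own; it’s the combination of exhaustive documentation, textbook structure, and the occasional hallucinated detail that researchers treat as a signature of machine-written code.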

More on AI: Vibe Coded Apps Are Spilling Users’ Personal Information Directly Into the Maw of Greedy Hackers

Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.
