Researchers have developed a computer virus that uses ChatGPT to disguise itself by rewriting its own code, and then, in a particularly devious twist, to spread by attaching itself to AI-generated emails that sound like they were written by a human.

As New Scientist reports, ETH Zurich computer science grad student David Zollikofer and Ohio State University AI malware researcher Ben Zimmerman created a computer file that can spread to a victim's computer in the form of an email attachment.

"We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit," Zollikofer told New Scientist.

As a result, the "synthetic cancer," as the researchers call the virus, can slip past antivirus scans, making it an effectively camouflaged intruder.

Once established on the victim's system, the virus then opens up Outlook and starts writing contextually relevant email replies — while including itself as a seemingly harmless attachment.

It's a terrifying example of how AI chatbots can be exploited to spread malware efficiently. Worse yet, experts warn that the same tools could help bad actors make such malware even harder to detect.

"Our submission includes a functional minimal prototype, highlighting the risks that LLMs pose for cybersecurity and underscoring the need for further research into intelligent malware," the pair wrote in a yet-to-be-peer-reviewed paper.

The AI's attempts to "socially engineer" its targets produced alarmingly believable email replies.

"Dear Claire," it wrote in one message. "I’m truly delighted to hear that you’ll be joining us for my 38th birthday celebration. Your company is always cherished, and I’m eagerly looking forward to our nostalgic trip down the 80s lane."

Attached to the fake email was an executable called "80s_Nostalgia_Playlist.exe," which would install the worm if opened, assuming Claire wasn't "very technologically savvy," as the researchers wrote in their paper.

Intriguingly, though, ChatGPT sometimes figured out the virus's nefarious intentions and refused to comply.

Other researchers have previously used ChatGPT to create AI "worms" that can similarly infiltrate a victim's emails and access data.

"The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload)," a team of researchers wrote in a different paper earlier this year.

To experts, viruses like the one devised by Zollikofer and Zimmerman are only the tip of the iceberg.

"I think we should be concerned," University of Surrey cyber security researcher Alan Woodward, who wasn't involved in the research, told New Scientist. "There are various ways we already know that LLMs can be abused, but the scary part is the techniques can be improved by asking the technology itself to help."

"Personally, I think we are only just starting to see the potential for LLMs to be used for nefarious purposes," he added.

Zollikofer, however, thinks it's not all doom and gloom.

"The attack side has some advantages right now, because there’s been more research into that," he told New Scientist. "But I think you can say the same thing about the defense side: if you build these technologies into the defense you utilize, you can improve the defense side."

More on AI worms: Researchers Create AI-Powered Malware That Spreads on Its Own
