Watch out!

Prompt L'oeil

Experts have been warning that large language models such as OpenAI's ChatGPT can be put to nefarious ends, like cranking out phishing emails at incredible scale.

Now, the barrier to entry has dropped even lower with the arrival of a ChatGPT-like artificial intelligence bot that can easily be prompted to create sophisticated malware, according to a blog post from cybersecurity outfit SlashNext.

The system, with the incredible name WormGPT, has apparently been trained specifically on malware data — and, notably, has no safety guardrails, unlike ChatGPT and Google's Bard. As an example of its prowess, it can easily be prompted to write malware in Python, as seen in screenshots from PCMag.

It's a bleak sign of the times. Cybersecurity is already a difficult task, but the advent of AI is pushing the sector into new, dangerous territory. Even if WormGPT isn't going to hack the planet any time soon, at the very least it could be an ominous sign of things to come.

Forecast: Chaos

SlashNext employees found out about WormGPT on a hacker forum, where the developer has been selling access to the bot since March and boasting that it can do "all sorts of illegal stuff."

WormGPT is apparently built on an older large language model from 2021, GPT-J, which was created by EleutherAI, a non-profit group that has developed open source AI programs.

PCMag reports that the hacker behind WormGPT is selling access to the program at the equivalent of $67.44 per month.

SlashNext staffers marveled at the program's ability to generate well-written phishing emails.

"The results were unsettling," SlashNext wrote in its blog post. "WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks."

But PCMag reports that at least one user dinged WormGPT for subpar output, saying it wasn't worth buying.

Despite that middling review, the bot could serve as a glimpse into a perilous future of AI-driven cybercrime that could make it harder than ever to safeguard our money and data — and potentially create even more headwinds for the nascent AI industry.

More on large language models: FTC Investigating ChatGPT for Saying Harmful Things About People

