"The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case."

Keep an Eye Out

You might want to be extra vigilant when checking your inbox for spam and phishing emails, because those comically bad grammatical errors that once gave the game away? They're about to become a thing of the past, thanks to AI.

Case in point: Europol, the European Union's law enforcement agency, has issued a warning about the potential abuse of ChatGPT and other large language model AIs by cybercriminals and scammers.

"As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook," Europol said, as quoted by Reuters, noting that its "ability to draft highly realistic text makes it a useful tool for phishing purposes."

Artificial Authority

And the experts are in agreement.

"The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case," Corey Thomas, CEO of US cybersecurity firm Rapid7, told The Guardian.

According to the newspaper, data from Darktrace, one of the UK's most prominent cybersecurity firms, seems to indicate that more and more phishing emails are being written by chatbots. That's not good, as these LLMs tend to synthesize convincing-sounding prose in an authoritative style — a perfect fit for the corporate and official emails they're trying to imitate.

Specifically, Darktrace's data shows that the apparent volume of scam emails has dropped overall. Meanwhile, the linguistic complexity of the scam emails it has detected has risen dramatically.

But don't be fooled into thinking the drop in numbers means the scammers have relented. In reality, it likely means that a significant number of them are now using LLMs like ChatGPT to compose scam emails sophisticated enough to slip past detection.

Nothing Personal

Those findings might be just the tip of the spear. According to Darktrace CEO Max Heinemeyer, AIs will also make it easier to perpetrate a type of socially engineered scam called "spear-phishing" that's personalized to target a specific person.

Executing these typically requires some planning and research to gather details about a target that make the scam more convincing. Until now, that extra effort has kept spear-phishing from becoming ubiquitous. But an AI could potentially automate the process almost entirely.

"I can just crawl your social media and put it to GPT, and it creates a super-believable tailored email," Heinemeyer told The Guardian. "Even if I'm not super knowledgeable of the English language, I can craft something that's indistinguishable from human."

More on AI: BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes

