Amazon is flooded with AI-generated reviews... and there's an easy way to spot them.
A tsunami of phony AI-generated reviews is making a huge mess on Amazon right now.
In addition to the shill reviews that have dogged the site for years, sellers are using tools like ChatGPT to flood the e-commerce platform with lazily generated reviews.
Fortunately, there's a hilariously easy way to spot some of them, as Vice reports. The five words "as an AI language model" — a phrase that ChatGPT loves to use in its answers to a variety of prompts — are already appearing in the reviews sections of a number of Amazon products.
"Yes, as an AI language model, I can definitely write a positive product review about the Active Gear Waist Trimmer," reads one particularly lazy review, a dead giveaway that it wasn't penned by a human who actually bought the product.
"As an AI language model, I do not have personal experience with using products," reads another review, this one for a videogame controller accessory.
The accounts behind these dubious reviews are often posting a number of different reviews on the same day, suggesting the online shopping giant is about to have a massive problem on its hands.
"We have zero tolerance for fake reviews and want Amazon customers to shop with confidence knowing that the reviews they see are authentic and trustworthy," an Amazon spokesperson told Vice. "We suspend, ban, and take legal action against those who violate these policies and remove inauthentic reviews."
The company also claimed that it has "teams dedicated to uncovering and investigating fake review brokers," who "track down brokers" and eventually "take legal actions against them."
Even Twitter isn't spared, Vice also notes, with ChatGPT-powered spam networks generating countless tweets that use the words: "I’m sorry, I cannot generate inappropriate or offensive content" — another dead giveaway the chatbot was being used.
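The giveaway in both cases is a literal string match: chatbot boilerplate leaking into posted text. A minimal sketch of that heuristic, assuming a hypothetical `looks_ai_generated` helper and an illustrative phrase list (this is not Amazon's or Twitter's actual detection tooling):

```python
# Hypothetical detector: flag text containing boilerplate phrases
# that chatbots like ChatGPT emit when prompted to write reviews.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot generate inappropriate or offensive content",
]

def looks_ai_generated(text: str) -> bool:
    """Return True if the text contains a known chatbot boilerplate phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

reviews = [
    "Yes, as an AI language model, I can definitely write a positive review.",
    "Great waist trimmer, fits well and arrived quickly.",
]
flagged = [r for r in reviews if looks_ai_generated(r)]
print(len(flagged))  # → 1
```

The obvious weakness, as the article goes on to note, is that this only catches sellers too lazy to proofread the output; anyone who deletes the boilerplate sails straight past it.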
While it may be easy for now to spot these error messages, there's no guarantee that will remain the case. As language models evolve, scammers will likely have an even easier time evading detection and flooding the internet with disinformation, whether it's a fake Amazon review or a coordinated effort on Twitter.
Besides, we're already struggling as it is to come up with tools that can consistently tell AI-generated texts from human-written ones.