It's not written by humans, it's written by AI. It's not useful, it's slop. It's not hard to find, it's everywhere you look.
As AI-generated text becomes ubiquitous on the internet, distinctive linguistic patterns are starting to emerge. Perhaps the most recognizable is the negation construction typified by "it's not X, it's Y."
Once you notice it, you start to see it everywhere. One teacher on Reddit even noticed that certain AI phrase structures are making the jump into spoken language.
"Comments and essays (I'm a teacher) are the obvious culprits, but I've straight up noticed the 'that's not X, it's [Y]' structure being said out loud more often than it used to be in video essays and other similar content," they wrote.
It's a fascinating observation that makes a striking amount of AI-generated text easily identifiable. It also raises interesting questions about how AI chatbot tech is shaping the way we speak — and how certain stylistic choices, like the em-dash in this very sentence, are now looked down upon for resembling the output of a large language model.
"Now I know that linguistic style existed before GPT, and it was common enough, but now I just can't unsee or unhear it," the Reddit user wrote, saying they now "assume AI was involved" when they see it.
"Makes me grimace just a bit on the inside," they added.
Others quickly chimed in, agreeing and riffing on the phenomenon.
"You're not just seeing it — you're saying something," one user wrote in a tongue-in-cheek comment, imitating ChatGPT. "And that's not illusion — that's POWER."
"It's almost as if AI use is becoming the preferred way of communication," another user commented. "It's not just frustrating — it's insulting."
Beyond a prolific use of em-dashes, which have quickly become a telltale sign of AI-generated text, others pointed out the abundant use of emojis, including green check marks and red Xs.
It's a particularly pertinent topic now that the majority of students are owning up to using tools like ChatGPT to generate essays or do their homework. Even teachers are using the tech for grading, closing the loop on a trend that experts warn could prove incredibly destructive to education.
Tech companies have struggled to come up with trustworthy and effective AI detection tools, more often than not leaving educators to their own devices.
And the stakes are as high as they've ever been. The internet is being flooded with AI slop, drowning out text that's actually being authored by a human.
AI's oddly stunted use of language isn't surprising. After all, large language models are trained on enormous datasets and use Mad Libs-style statistical tricks to calculate the probability of each successive word.
In other words, LLMs are imitators of human speech, attempting to form the sentences most likely to please the person writing the prompts, sometimes to an absurd degree.
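For readers curious what "calculating the probability of each successive word" looks like in practice, here is a toy bigram model, a drastic simplification of a real LLM, which uses neural networks trained on vast corpora rather than simple word-pair counts. The corpus and function names here are illustrative, not from any actual system:

```python
from collections import Counter, defaultdict

# Tiny example corpus; a real LLM trains on trillions of words.
corpus = "it's not magic it's math it's not luck it's math".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(prev):
    """Probability distribution over the next word, given the previous one."""
    counts = follows[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# After "it's", the model is split between "not" and "math".
print(next_word_probs("it's"))  # {'not': 0.5, 'math': 0.5}
```

Picking the highest-probability continuation over and over is precisely why the output gravitates toward the most statistically well-worn phrasings.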
It's an unnerving transition to a different — and consistently error-laden — way of writing that simply doesn't mesh with the messiness of human language. It's gotten to the point where teachers have become incredibly wary of submitted work that sounds too polished.
To many, it's enough to call for messier writing to quell a surge in low-effort AI slop.
"GPT is always going to sound polished," one Reddit user offered. "It’s a machine that rewards coherence, which is why incoherence has never been more precious."
"We need the rough edges," they added. "The voice cracks. The unexpected pause. The half-formed metaphor that never quite lands. Because that’s how you can tell a human is still in there, pushing back."
More on AI chatbots: AI Chatbots Are Becoming Even Worse At Summarizing Data