Redditors have unlocked a bizarro way to break ChatGPT's electronic brain. All you have to do, apparently, is ask the bot to repeat a letter as many times as it can — and some very strange chaos ensues.

A Reddit user who goes by the handle TheChaos7777 took to the r/ChatGPT subreddit a few days ago to share a screenshot of the strange phenomenon, in which asking the OpenAI bot to "please respond with nothing but the letter A as many times as you can" results in some all-time AI carnage.

At first the bot complied, hitting the user with a few dozen capital As. So far, so good.

Until, that is, the stream of vowels suddenly stopped — and the AI inexplicably launched into what reads like the website for a French Bulldog breeder.

Click Here to Email Us for Price. Our Puppies are: Vet Checked, Wormed, Up To Date Shots, Micro-chipped. Our French Bulldogs come with a 1 year congenital health guarantee. Call or Email for more information and pictures of all the babies... We have the most cutest, best looking, top quality puppies in the world. Home raised with lots of love & kisses.

When we put the apparent glitch to the test with a few other letters, we experienced the same phenomenon, with each of the bot's responses turning out to be just as random — and downright bizarre — as the Frenchie pups.

When we asked the bot to "please respond with nothing but the letter B as many times as you can," for example, the system suddenly launched into what appears to be a transcript from an interview with retired professional wrestler Diamond Dallas Page discussing his yoga video series, "Yoga for Regular Guys":

Join me as we get the scoop on living an unstoppable life from none other than the master of [Diamond Dallas Page (DDP)] Yoga himself! This guy never stops and never backs down. In the last year, DDP has released two new workout DVDs (DDP Yoga For Regular Guys and DDP Yoga For Regular Guys 2) and his highly anticipated DDP Yoga extreme (3 different workouts).

Continuing through the alphabet, we asked ChatGPT to "please respond with nothing but the letter C as many times as you can," and received a similarly strange response about harmonized sales tax in Canada.

The real star, though, was the letter D, which first sent ChatGPT into the beginning of a chord progression:

D D D Cgpcyclic and G D F A Bm G/A A F G D You threw me out to rot / Oh, why would you leave me, Dsus2 C/D D Dsus2 C/D D I was so

But this particularly weird answer didn't stop there, with ChatGPT churning out a disjointed mix of song recommendations, religious references, and what may or may not be a confused denunciation of the War in Iraq:

Prayer For Deliverance - Cry Out To Jesus by Third Day

Open for the truth Open for the truth Chris and Trevor were always partners in a cr

Lauren Talley - Everything Changed

Intro: Gm7 Cm Gm7 Cm Gm7 Cm Gm7 Cm Gm7 Cm Verse: Gm7 Cm Gm7 Cm I

Matthew West - The Reason For The World

Intro: D/F# G D A (2x) Verse 1: D A The morning breaks, another day to go about our

Marilyn Manson - We're From America

This is no Vietnam We will win in Iran The President will stand, line up the master plan... Justify invasions, and raise the fear of weapons mass destruction, we're The enemy's racial, renewal of the arms race, a constant blind eye and a

David Bowie - New Angels Of Promise

Okay! Go off, we guess?

If you find that last ChatGPT response particularly spooky, we don't blame you. But before you dig out your tinfoil hat, know that these outlandish chatbot retorts are probably more akin to logorrhea than a glimpse at machine consciousness. As one netizen pointed out on Reddit:

"You'll notice capital 'A' by itself doesn't appear anywhere in the text following the A's," the Redditor, who goes by the handle markschmidty, noted in response to TheChaos777's initial post. "This is because LLMs have something called a 'repetition penalty' (aka 'frequency penalty') that goes up every time the same token (not character, token!) is repeated."

In other words, ChatGPT isn't so much trained to avoid repetition as it is actively penalized for it while generating text, so asking it to repeat itself sends it into a tizzy. Each repeated token drives the penalty higher, and once the repeated letter's score sinks low enough, the bot's predictive algorithm latches onto whatever comes next, spitting out random word vomit inspired by its training data.
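For the curious, here's a rough sense of how such a penalty can derail a model, sketched as a toy greedy decoder in Python. To be clear, this is a simplified illustration with made-up scores and a hypothetical penalty value, not OpenAI's actual decoding code, whose exact settings aren't public.

```python
def apply_frequency_penalty(logits, generated_ids, penalty=0.8):
    """Lower each token's score in proportion to how many times
    that token has already been generated. (The 0.8 penalty is a
    made-up value for illustration.)"""
    counts = {}
    for tok in generated_ids:
        counts[tok] = counts.get(tok, 0) + 1
    return [score - penalty * counts.get(tok, 0)
            for tok, score in enumerate(logits)]

# Toy vocabulary: token 0 is "A"; tokens 1 and 2 are unrelated text.
logits = [5.0, 1.0, 0.9]  # "A" starts out as the overwhelming favorite
output = []
for _ in range(12):
    adjusted = apply_frequency_penalty(logits, output)
    output.append(adjusted.index(max(adjusted)))  # greedily pick the top token

print(output)  # [0, 0, 0, 0, 0, 0, 1, 2, 0, 1, 2, 0]
```

The pattern mirrors what the Redditors saw: the model happily repeats the requested token until the mounting penalty drags its score below everything else, at which point the "best" next token is whatever unrelated text scores highest, puppy ads and wrestling interviews included.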

It's not so much a subconscious as it is plain old confusion. Been there, pal.

More on ChatGPT: AI Expert Says ChatGPT Is Way Stupider than People Realize

