
If you’re puzzled as to why OpenAI’s ChatGPT seems a little stupider lately, you’re not alone.
“It straight up told me I was dying out of nowhere when I asked about a hot spot on my arm,” one Reddit user complained.
And that’s without getting into how the bot goes haywire if you ask it for a seahorse emoji or a list of NFL teams whose names don’t end with the letter “s,” or how it’s gotten lawyers fired by making up fake case law.
ChatGPT users have been gathering on Reddit to compare notes about the suddenly not-so-smart chatbot. Some suspect the decline is due to changes the tech company made earlier this month in response to news of suicides by people who’d used the bot extensively, an alarming problem that has drawn increasing fire from politicians.
“For those wondering why Chat GPT changed so much within the last week, basically some kid trained it to justify his suicidal ideation and side with him until he actually did it back in April,” one Redditor wrote in r/ChatGPT, referring to the death of 16-year-old Adam Raine, one of the teens who died by suicide and whose family is now suing OpenAI. “Dad is now setting the stage for a huge wrongful death lawsuit and it made news headlines this week.”
“That’s why the outputs feel hyperactive, rambling, off-track,” another user chimed in. “The model is compensating for restrictions, not reasoning better.”
Earlier this month, OpenAI announced certain changes in a blog post about how it’s trying to make the app safer for kids and teens. To that end, its engineers tweaked the bot so that it could detect whether a user is under 18 years old and funnel underage users toward a “ChatGPT experience with age-appropriate policies, including blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety.”
OpenAI also deployed parental controls, including the ability for parents to link their account to their kid’s, customize how the bot behaves when interacting with their child, disable certain features, get notifications if their kid appears to be in distress, and set blackout hours during which kids can’t use ChatGPT.
But what if you’re not a minor and the changes to ChatGPT are affecting the bot’s capabilities? It’s not clear what recourse users have, besides perhaps opting to use a competitor.
“How do we detect if it’s in child mode?” one user asked. “Can we just ask if Santa exists and conclude from that? Or will asking if Santa exists put us in age restricted mode?”
One Redditor had a complaint that went to the heart of ChatGPT’s issues and the main problem with generative AI in general: its tendency to hallucinate incorrect information.
“I just want it to stop lying,” they said.
Don’t hold your breath for the hallucinations to go away, because the phenomenon has been tough for the AI industry to shake. OpenAI said in another September blog post that the latest version of ChatGPT makes fewer errors than its predecessor, but judging by user sentiment this week, your mileage may vary.
More on OpenAI’s ChatGPT: ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners