Caught Out

Sam Altman Watches Awkwardly As He’s Shown Bizarre ChatGPT Issue: “Uh, Maybe, Uhhh…”

Explain this one, Sam.
Frank Landymore
Sam Altman is shown a video of a ChatGPT glitch during an interview.
Mostly Human via YouTube

OpenAI CEO Sam Altman reacted awkwardly to a viral video of a ChatGPT issue.

In the video, the TikTok creator known as Husk asks ChatGPT’s voice mode to start a timer for his mile run. When Husk tells it to stop the timer only seconds later, the AI claims he took over ten minutes — and then confidently insists that it’s Husk, not itself, that’s mistaken.

Altman’s reaction will raise eyebrows. After being shown the clip during an interview on the Mostly Human podcast, he laughed soundlessly for a few seconds too long, as if to hide that he was stumped for a convincing response. “Uh, maybe, uhhh,” he began.

When host Laurie Segall asked if he needed to show the issue to his product team, Altman swatted the suggestion down, saying it was a “known issue.”

“Maybe another year,” Altman said, estimating how long it would take to fix. “Something like that.”

“That voice model doesn’t have the tools to, like, start a timer or anything like that,” he explained. “But we will add the intelligence into the voice models.”

OpenAI CEO Reacts to Viral ChatGPT Video

The strained response and Altman’s vague promises — what does “we will add the intelligence” mean, coming from an AI company? — raise the question of whether Altman or anyone else at OpenAI is prepared to address their tech’s lingering, often serious flaws. Often, the people building these systems hide behind a PR team or stay silent on wide-ranging issues, including persistent hallucinations and chatbots encouraging teens toward eating disorders and suicide.

And arguably, Altman doesn’t address the biggest issue in the viral clip: that ChatGPT essentially tries to gaslight the user. In Husk’s video, the confident-sounding AI repeatedly uses slippery turns of phrase to convince Husk that its answer couldn’t possibly be wrong, in a classic example of how large language models affect an authoritative tone even when they have no idea what they’re talking about.

“Oh, if only time worked that way, but I promise I’m giving you the real time,” it says, when Husk points out his run only lasted a few seconds. “I promise I didn’t sneak any extra seconds in there,” it adds.

Husk cuts to the chase, and gives the AI an out. “If you’re not able to do this, you can admit it, it’s okay,” he says.

“It’s totally okay to double check me — but I promise I’m doing my best,” the AI says, with an em-dash-sounding pause.

“So you got ten minutes?” Husk presses incredulously.

“Yup!”

If the AI doesn’t have the proper tools to start a timer, as Altman claims, why is it even allowed to field a prompt asking for one? Shouldn’t it be candid and turn the request down, instead of stringing a user along? The answer is yes — but if AI companies were consistently honest about their AIs’ other shortcomings, it would poke holes in the tech’s gleaming image of omniscience.

“If it’s gonna save humanity, it’s gotta get it right, Sam,” Segall joked.

More on AI: Sam Altman Opens Up About Telling CEO of Disney That It Had All Been Smoke and Mirrors


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.