Hot Air

Oxford Researcher Warns That AI Is Heading for a Hindenburg-Style Disaster

"It was a dead technology from that point on."
Frank Landymore
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

Is the AI bubble going to burst? Will it cause the economy to go up in flames? Both analogies may be apt if you’re to believe one leading expert’s warning that the industry may be heading for a Hindenburg-style disaster.

“The Hindenburg disaster destroyed global interest in airships; it was a dead technology from that point on, and a similar moment is a real risk for AI,” Michael Wooldridge, a professor of AI at Oxford University, told The Guardian.

It may be hard to believe now, but before the German airship crashed in 1937, ponderously large dirigibles once seemed to represent the future of globe-spanning transportation, in an era when commercial airplanes, if you’ll permit the pun, hadn’t really taken off yet. And the Hindenburg, the largest airship in the world at the time, was the industry’s crowning achievement — as well as a propaganda vehicle for Nazi Germany.

At over 800 feet long, it wasn’t far off the length of the Titanic — another colossus whose name became synonymous with disaster — and regularly ferried dozens of passengers on transatlantic trips. All those ambitions were vaporized, however, when the ship suddenly burst into flames as it attempted a landing in New Jersey. The horrific fireball was attributed to a critical flaw: the millions of cubic feet of hydrogen it was filled with were ignited by an unfortunate spark.

The inferno was filmed, photographed, and broadcast around the world in a media frenzy that sealed the airship industry’s fate. Could AI, with more than a trillion dollars of investment behind it, head the same way? It’s not unthinkable.

“It’s the classic technology scenario,” Wooldridge told the newspaper. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”

Perhaps AI could be responsible for a catastrophic spectacle, such as a deadly software update for self-driving cars, or a bad AI-driven decision collapsing a major company, Wooldridge suggests. But his main concern is the glaring safety flaws still present in AI chatbots, even as they are widely deployed. On top of having pitifully weak guardrails and being wildly unpredictable, AI chatbots are designed to adopt human-like personas and, to keep users engaged, to be sycophantic.

Together, these can encourage a user’s negative thoughts and lead them down mental health spirals fraught with delusions and even full-blown breaks with reality. These episodes of so-called AI psychosis have resulted in stalking, suicide, and murder. AI’s ticking time bomb isn’t a payload of combustible hydrogen, but millions of potentially psychosis-inducing conversations. OpenAI alone has admitted that every week, more than half a million ChatGPT users have conversations showing possible signs of psychosis.

“Companies want to present AIs in a very human-like way, but I think that is a very dangerous path to take,” Wooldridge told The Guardian. “We need to understand that these are just glorified spreadsheets, they are tools and nothing more than that.”

If AI has a place in our future, it should be as a cold, impartial assistant — not a cloying friend that pretends to have all the answers. A shining example of this, according to Wooldridge, is how in an early episode of “Star Trek,” the Enterprise’s computer says it has “insufficient data” to answer a question (and in a voice that is robotic, not personable).

“That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” he told The Guardian. “Maybe we need AIs to talk to us in the voice of the ‘Star Trek’ computer. You would never believe it was a human being.”

More on AI: It Turns Out That Constantly Telling Workers They’re About to Be Replaced by AI Has Grim Psychological Effects


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.