Mile High Magic

Man Confused by AI-Generated Reports That He’s Dead

"Just doesn't make sense. I always thought, like — usually you see that happen to high-profile celebrities."
A Denver Broncos beat writer was confused when a popular Facebook page claimed he had passed away due to a "domestic violence incident."

You know the old adage “don’t believe everything you see on the internet”? In the age of AI, it’s never been more relevant.

Late in December, Denver Broncos beat writer Cody Roark was surprised to learn he was dead, leaving behind a 5-year-old child. As far as he knew, he had never fathered a child, and was very much still alive.

Yet that’s not what the post on Facebook claimed. There, an image shared by the page “Wild Horse Warriors” featured an AI-generated image of the sports journalist holding a child, with a big “RIP” stitched across it.

Though the Facebook page has since been taken down, the Denver Post reports that it described Roark as a “Denver Broncos analyst” who’d “dedicated over a decade to protecting the team,” before passing away due to a “heartbreaking domestic violence incident.”

Of course, the whole thing was AI-generated, as Roark would later conclude.

“It was just one of those things you hate seeing,” he later told the DP during an interview. “Just doesn’t make sense. I always thought, like — usually you see that happen to, like, high-profile celebrities.”

“For that to happen to me was just really weird. Very, very weird,” Roark said.

The account in question, Wild Horse Warriors, had snared some 6,200 followers over the last few months. The DP reports that, prior to it being taken down by Facebook, it averaged about four completely hallucinated Denver Broncos stories every day. Many of them contained content that could damage people’s reputations, like a false claim that Broncos wide receiver Courtland Sutton refused to wear an armband in support of LGBTQ people during a game.

Though Roark seems none the worse for wear, the incident follows a pattern emerging from other AI systems. In December, for example, Google’s AI Overview — the summary that now appears at the top of many Google searches — falsely claimed a Canadian folk musician was a convicted sex offender.

That claim cost the musician at least one gig, and untold reputational harm that can be difficult to repair — just one more sign that through AI, tech corporations have provided a potent new tool for anyone trafficking in misinformation.

More on misinformation: China Planning Crackdown on AI That Harms Mental Health of Users


Joe Wilkins

Correspondent

I’m a tech and transit correspondent for Futurism, where my beat includes transportation, infrastructure, and the role of emerging technologies in governance, surveillance, and labor.