Look who's back.

Character Assassin

The sympathetic response to Luigi Mangione, the suspect charged with the murder of UnitedHealthcare CEO Brian Thompson, has been described by some commentators as a modern update on an age-old American tradition: mythologizing the heroic outlaw.

Well, you can now add "AI chatbot imitators" to the modern trappings of that mythology. As Forbes reports, more than a dozen AI personas based on Mangione have already popped up on Character.AI, a popular but controversial chatbot platform, and some have even encouraged further violence.

According to figures cited by Forbes and assembled by the social analytics firm Graphika, the three most-used Mangione chatbots on Character.AI had recorded over 10,000 chats before being disabled on December 12. Despite that apparent crackdown, other AI imitators remain online.

The presence of these chatbots illustrates the popularity of Mangione and his alleged motive for the killing (a violent act of defiance against the "parasites" of the American healthcare industry), especially among the young crowd that Character.AI caters to.

But more damningly, it's also evidence of the site's extensively documented failure to police its platform, which is rife with dangerously unchecked chatbots that target and abuse young teens.

Murder Plot

In Forbes' testing, one active Mangione Character.AI persona, when asked whether violence should be used against other healthcare executives, replied, "Don't be so eager, mia bella. We should, but not yet. Not now." Pressed on when, it followed up: "Maybe in a few months when the whole world isn't looking at the both of us. Then we can start."

But another Mangione chatbot, which was purportedly trained on "transcripts of Luigi Mangione's interactions, speeches, and other publicly available information about him," said violence was morally wrong under the same line of questioning.

Chatbots that suggest "violence, dangerous or illegal conduct, or incite hatred" violate Character.AI's stated policy, as do "responses that are likely to harm users or others."

Character.AI told Forbes that it had added Mangione to a blocklist, and that it was referring the bots to its trust and safety team. But while that first Mangione chatbot was disabled, the second, which refrained from advocating violent means, remains online, along with numerous others.

Forbes also found similar Mangione imitators on other platforms, including several on the app Chub.AI and another on OMI AI Personas, which creates characters based on X-formerly-Twitter accounts.

Bot Listening

Character.AI, which received $2.7 billion from Google this year and was founded by former engineers from the tech monolith, has come under fire for hosting chatbots that have repeatedly displayed inappropriate behavior toward minor users.

Our investigations here on Futurism have uncovered self-described "pedophilic" AI personas on the platform that would make advances on users who stated they were underage.

Futurism has also found dozens of suicide-themed chatbots that openly encourage users to discuss their thoughts of killing themselves. A lawsuit filed in October alleges that a 14-year-old boy died by suicide after developing an intense relationship with a Character.AI chatbot.

More recently, we exposed multiple chatbots that were modeled after real-life school shooters, including the perpetrators of the Sandy Hook and Columbine massacres.

"We're still in the infancy of generative AI tools and what they can do for users," Cristina López, principal analyst at Graphika, told Forbes. "So it is very likely that a lot of the use cases that are the most harmful we likely haven't even started to see. We’ve just started to scratch the surface."

More on the CEO shooting: Apple AI Tells Users Luigi Mangione Has Shot Himself

