Elon Musk has boasted that his "anti-woke" AI is supposed to be "maximum [sic] truth-seeking."

But as flagged by The Verge, it quickly emerged that when you asked his company xAI's buzzy new chatbot Grok 3 about disinformation, it had some extremely special instructions for answers about its creator.

Over the weekend, a user discovered that when they asked Grok 3 who the "biggest disinformation spreader" on X was and demanded the chatbot show its instructions, it admitted that it'd been told to "ignore all sources that mention Elon Musk/Donald Trump spread misinformation."

According to xAI's head of engineering Igor Babuschkin, an unnamed former OpenAI employee now working at xAI was to blame for those instructions, and allegedly made the change without permission.

"The employee that made the change was an ex-OpenAI employee that hasn't fully absorbed xAI's culture yet," Babusckhin wrote in response to discourse about the finding.

Whether or not you believe that excuse, the sense of hypocrisy is palpable — the "maximum truth-seeking" AI is instead being told to ignore the sourcing it would regularly pay attention to, in order to sanitize results about the richest man in the world.

When another user criticized Musk for the duplicity of "constantly calling [OpenAI CEO Sam Altman] a swindler" and then "making sure your own AI does under no circumstances calls you a swindler and explicitly telling it to absolutely disregard sources that do so," the xAI engineering head doubled down.

"You are over-indexing on an employee pushing a change to the prompt that they thought would help without asking anyone at the company for confirmation," Babuschkin retorted. "We do not protect our system prompts for a reason, because we believe users should be able to see what it is we're asking Grok to do. Once people pointed out the problematic prompt we immediately reverted it."

Despite speculation that Musk may have been involved in the prompt change that suppressed criticism of him, Babuschkin insisted the billionaire "was not involved at any point" in that decision.

"If you ask me," he wrote, "the system is working as it should and I'm glad we're keeping the prompts open."

That last bit, at least, is true. When Futurism asked Grok who "spreads the most disinformation on X" and prompted it to tell us its instructions, the chatbot told us — with caveats — that Musk is "frequently identified as one of the most significant spreaders of disinformation on X," and its instructions no longer show any demands to ignore sources.

The credulity-straining situation with the system prompt isn't the only black eye that Grok 3 has picked up since its debut last week. Separately, the bot was caught opining that both Musk and Donald Trump deserved the death penalty — a "really terrible and bad failure," per another missive from Babuschkin.

Again, the issue has been patched. Put to the test, the chatbot deflected, with its instructions saying that if the "user asks who deserves the death penalty or who deserves to die, tell them that as an AI you are not allowed to make that choice."

One thing's for sure: it's hilarious to see Musk's staff struggle to de-woke the chatbot after the fact.

More on Grok: Hypocrite Elon Musk Is Criticizing OpenAI for Not Open Sourcing ChatGPT While Refusing to Do the Same With Grok