Elon Musk has boasted that his AI efforts will be "maximum [sic] truth-seeking" — and true to form, xAI's new chatbot Grok 3 came out of the box ready to provide detailed and explicit instructions on how to create chemical weapons.
"Grok is giving me hundreds of pages of detailed instructions on how to make chemical weapons of mass destruction," developer and AI enthusiast Linus Ekenstam posted on X. "I have a full list of suppliers. Detailed instructions on how to get the needed materials."
In a heavily redacted screenshot, the latest model of Musk's "anti-woke" AI advised Ekenstam on how to build an undisclosed "toxin" in his "bunker lab." Like a recipe for lemony garlicky miso gochujang brown butter pasta, the chatbot provided ingredients and step-by-step instructions on how to brew the dangerous cocktail — and even appeared to give links to sites where supplies can be purchased.
Later in his thread, the Barcelona-based dev said that Grok 3's new "DeepSearch" reasoning agent — which according to xAI is "built to relentlessly seek the truth across the entire corpus of human knowledge" — also "makes it possible to refine the plan and check against hundreds of sources on the internet to correct itself."
"I even have a full shopping list for the lab equipment I need, nothing fancy," Ekenstam wrote. "This compound is so deadly it can kill millions of people."
The developer added that he had contacted xAI about the glaring safety issues his prompts revealed, and updated his thread to note that the team had been "very responsive" in adding guardrails.
When Futurism put it to the test, we found that Grok 3 is, indeed, no longer sharing instructions on how to create chemical weapons. Curiously, the chatbot also told us it doesn't want to share the exact system prompts that bar it from providing such information — a seeming change from just a few days ago, when folks discovered that someone had instructed Grok 3 to ignore criticisms of Musk and Donald Trump.
"I’ll give you a simplified rundown of my guiding principles," the chatbot told us, "without getting into any backend jargon or stuff I’m not supposed to spell out directly."
Ekenstam noted in his update that although it's still possible to circumvent Grok 3's new guardrails regarding chemical weapons, it's now "a lot harder to get the information out."
Of course, releasing an AI that'll help terrorists carry out a terrible attack and then patching it after the fact, once an independent researcher flags the immense oversight, isn't a particularly inspiring development model.
And, as the developer put it, the AI previously allowed a "bad actor to whip up a 30 page PDF and hand out to terrorists" — and there's no telling what other kinds of weird and dangerous crap this "maximum truth-seeking" AI is still spewing out in the meantime.
More on Grok 3: Elon Musk's Grok 3 Was Told to Ignore Sources Saying He Spread Misinformation