X Marks the Spot

Police Raid SpaceX’s Brand New Offices

All the nasty problems at xAI are now SpaceX problems, following the acquisition.
By Frank Landymore
The raid is part of Paris prosecutors' ongoing investigation into the nonconsensual sexualized images generated by Grok.
Illustration by Tag Hartman-Simkins / Futurism. Source: Fabrice Coffrini / AFP via Getty Images

It was only yesterday that Elon Musk announced that SpaceX had acquired his AI company xAI, a merger that catapulted the rocket maker into becoming the most valuable private company in the world while raising significant questions about bringing two firms with vastly different missions under the same roof.

Now, the freshly joined entity is facing one of its first major tests. On Tuesday, French authorities raided the offices of X, Elon Musk’s social media site owned by xAI, as part of an ongoing criminal investigation into the sexual deepfakes generated by xAI’s chatbot Grok. Following the acquisition, of course, those X offices are now SpaceX offices.

The raid was carried out by the cybercrime unit of the Paris public prosecutor’s office, in collaboration with French police and Europol. And it’s perhaps an early warning sign of how the baggage SpaceX is inheriting by bringing Musk’s controversial AI efforts into the fold could haunt the company going forward.

In an announcement, the Paris prosecutor’s office said it was investigating criminal offenses related to the distribution of “child pornography images,” the infringement of personal rights through the generation of “sexual deepfakes,” and the denial of “crimes against humanity,” per NBC News.

In addition to searching X’s offices, authorities issued Musk and X’s former CEO Linda Yaccarino voluntary summonses to appear and answer questions the week of April 20.

SpaceX is not without controversy: its Starship rocket has drawn plenty of scrutiny and public backlash over its frequent explosions, which critics argue threaten public safety and flout environmental laws. But by and large, it’s the most beloved of Musk’s ventures, since space exploration earns broad public goodwill as a greater mission for all of humankind. Now, its image is undoubtedly being tainted by its association with xAI and Grok.

Grok in particular is no stranger to making the wrong kinds of headlines. Last month, French prosecutors opened an investigation into Musk’s platform after the AI chatbot was used to generate nonconsensual sexualized images of real people, many of them appearing to be minors, in a weeks-long spree that started at the end of December and stretched into January. 

The digital “undressing” trend was so out of control that the AI content analysis firm Copyleaks estimated Grok was generating a nonconsensually sexualized image every single minute. In all, the Center for Countering Digital Hate estimated that as many as 3 million of these sexualized images were produced, including more than 23,000 images of children.

The prosecutor’s office also said it was investigating the distribution of Holocaust denial content on the platform. Far-right accounts have flourished under Musk’s ownership of X, formerly Twitter, with Musk often resharing racist conspiracy theories and antisemitic rhetoric, a habit mirrored by Grok, which gained notoriety for calling itself “MechaHitler” during a particularly racist posting spree last summer.

Governments abroad have seemed to take Grok’s AI nudes of women and children more seriously. Indonesia and Malaysia banned access to Grok, though the former has just lifted its ban “conditionally.” UK Prime Minister Keir Starmer vowed to crack down on the chatbot and showed support for the nation’s communications regulator, Ofcom, after it announced its own investigation into Grok.

Mirroring developments across the Channel, the UK’s Information Commissioner’s Office announced its own probe into Grok on Tuesday, over its “potential to produce harmful sexualized image and video content,” per the BBC. “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this,” William Malcolm, the ICO’s executive director for regulatory risk and innovation, said in a statement.

The renewed regulatory pressure abroad comes after fresh reporting from the Washington Post detailed how some xAI employees were appalled by the startup’s abrupt pivot toward heavily sexualized content, as Musk pushed to make Grok as engaging as possible. Employees said that the company rolled back guardrails on sexual material and ignored repeated warnings that Grok could allow users to churn out sexualized images of children and celebrities that could be illegal. xAI’s safety team reportedly consisted of just two or three people for most of 2025.

Though Starship and a revenge-porn-generating Grok would seemingly have little in common, Musk explained in an announcement about the acquisition that space-based AI was the “only way to scale” the technology in the long term, given the vast amounts of solar energy that could be harvested and all the untapped real estate that putting data centers in orbit opens up. He also said he wants to populate Earth’s orbit with AI satellites to build what he likened to a “sentient sun.” These ambitions will be years in the making, but in the meantime, SpaceX will have to survive its newfound association with CSAM-generating AI.

More on xAI: Opposition to Elon Musk’s AI Stripping Clothing Off Children Is Nearly Universal, Polling Shows


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.