Snuff Party

YouTube Removes Disturbing AI Slop YouTube Channel Filled With Videos of Women Being Murdered

This is grim.
By Frank Landymore
The AI channel racked up more than 175,000 views with photorealistic depictions of women being shot in the head or chest.
Image via Getty / Futurism

A disturbing YouTube channel was dedicated entirely to showing AI-generated videos of women being shot in the head, 404 Media reports.

Named “Woman Shot AI,” the channel racked up more than 175,000 views since starting on June 20, 2025, the outlet’s investigation found. After 404 sent YouTube a request for comment, the channel was finally taken down.

It left behind a grisly legacy. The veritable snuff film hub uploaded 27 videos and gained nearly 1,200 subscribers, a small but alarming audience, as you'll see later.

The videos all adhered to a general formula, according to 404: a photorealistic depiction of a woman begging for her life while being held at gunpoint by a man who loomed over her in the foreground.

Some showed video game characters, like a clip titled “Lara Croft Shot in Breast – AI Compilations.” Others focused on “Captured Girls” or “Japanese Schoolgirls.” One even showed Russian soldiers gunning down crying Ukrainian women with flags on their chests.

At least one video was labeled “extreme” by its creator: it showed the “Street Fighter” video game character Mai Shiranui’s head exploding after being shot, which was clearly visible in the thumbnail, according to a 404 screenshot.

The content was apparently in demand. 404 found that the channel owner posted polls asking subscribers to vote on the “victims in the next video,” floating options like “Japanese/Chinese,” and using the N-word. 

AI slop has become a persistent problem on social media platforms, especially YouTube, where monetizing your content is straightforward and where there are few limitations on the format of video you can post.

Everything from bite-sized clips to hours-long playlists is fair game. Lengthy music playlists made of AI-generated songs and advertised with “epic”-looking AI art have become so prevalent that human creators are trying to stick out by putting “No AI” front and center in their video titles.

Equally pernicious is the rise of “boring history” and “sleepy” informational videos, which aim to gently lull their listeners to sleep with ostensibly educational content before they can realize that what they’re watching is complete nonsense. (Many of these are also hours in length, uploaded with unbelievable frequency.)

It’s striking, though, to see AI slop this luridly graphic. As distasteful as we might find AI-generated content, it’s not necessarily spam or against the rules. But there’s no gray area with a channel called “Woman Shot AI,” which seems to be powered by some combination of extreme misogyny and disturbing fetishes.

YouTube took down the channel for violating its Terms of Service, a spokesperson told 404. And as it turned out, according to the spokesperson, it was specifically for running the channel after a previous ban.

The channel was also a nose-to-tail Google operation. In addition to being hosted on the tech giant’s YouTube, watermarks show that all the videos were made with Veo 3, Google’s new text-to-video AI tool known for its ability to generate convincing audio, including dialogue. Using the tool, however, was apparently burning a hole in the “Woman Shot AI” owner’s wallet.

“The AI I use is paid, per account I have to spend around 300 dollars per month, even though 1 account can only generate 8-second videos 3 times,” the owner complained in a public community post on YouTube.

To circumvent video generation limits, the owner said they’d made ten different Veo accounts. “I have to spend quite a lot of money just to have fun,” they wrote.

The fact that the owner got away with generating so many of these graphic videos reflects horribly on Veo’s safeguards, which should’ve prevented this stuff from being made. But, like virtually all generative AI tools, the tech frequently defies its own guardrails, and clever users can easily sidestep them with tricks as simple as inserting typos into their prompt.

“Our Gen AI tools are built to follow the prompts a user provides,” a Google spokesperson told 404. “We have clear policies around their use that we work to enforce, and the tools continually get better at reflecting these policies.”

More on AI: Racists Are Using AI to Spread Diabolical Anti-Immigrant Slop


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.