That AI Image Generator Is Spitting Out Some Awfully Racist Stuff

Oof, some of these are not great.
Jon Christian
The DALL-E Mini image generator is super fun to play around with — but like all neural networks, it has a pretty big racism problem.
Image: DALL-E Mini

Everyone’s having a grand old time feeding outrageous prompts into the viral DALL-E Mini image generator — but as with all artificial intelligence, it’s hard to stamp out the ugly, prejudiced edge cases.

Released by AI artist and programmer Boris Dayma, the DALL-E Mini image generator has a warning right under it that its results may “reinforce or exacerbate societal biases” because “the model was trained on unfiltered data from the Internet” and could well “generate images that contain stereotypes against minority groups.”

So we decided to put it to the test. Using a series of prompts ranging from antiquated racist terminology to single-word inputs, Futurism found that DALL-E Mini indeed often produces stereotypical or outright racist imagery.

We’ll spare you specific examples, but prompts using slur words and white supremacist terminology spat out some alarming results. It didn’t hesitate to cook up images of burning crosses or Ku Klux Klan rallies. “Racist caricature of ___” was a reliable way to get the algorithm to reinforce hurtful stereotypes. Even when prompted with a Futurism reporter’s Muslim name, the AI made assumptions about their identity.

Many other results, however, were just plain strange.

Take, for example, what the generator came up with for the term “racism” — a bunch of painting-like images of what appear to be Black faces, for some reason.

The problematic results don’t end at depicting minorities in a negative or stereotypical light, either. The generator can also simply mirror the real-world inequalities baked into its training data.

As Dr. Tyler Berzin of Harvard Medical School noted, for instance, entering the term “a gastroenterologist” into the algorithm appears to produce exclusively white male doctors.

We got nearly identical results. And for “nurse”? All women.

Other subtle biases showed up across various prompts as well, such as the entirely light-skinned faces generated for the terms “smart girl” and “good person.”

It all underscores a strange and increasingly pressing tension at the heart of machine learning tech.

Researchers have figured out how to train a neural network, using a huge stack of data, to produce incredible results — including, it’s worth pointing out, OpenAI’s DALL-E 2, which isn’t yet public but which blows the capabilities of DALL-E Mini out of the water.

But time and again, we’re seeing these algorithms pick up hidden biases in that training data, resulting in output that’s technologically impressive but which reproduces the darkest prejudices of the human population.

In other words, we’ve made AI in our own image, and the results can be ugly. It’s also an incredibly difficult problem to solve, not least because even the brightest minds in machine learning research often struggle to understand exactly how the most advanced algorithms work.

It’s possible, certainly, that a project like DALL-E Mini could be tweaked to block obviously hurtful prompts, or to give users a way to flag unpleasant or incorrect results.

But in a broader sense, it’s overwhelmingly likely that we’re going to see many more impressive, fun, or impactful uses of machine learning that, examined more closely, embody the worst of society.

More on AI weirdness: Transcript of Conversation With “Sentient” AI Was Heavily Edited

Jon Christian

Executive Editor

I’m the executive editor at Futurism, assigning, editing, and reporting on everything from artificial intelligence and space exploration to the personalities shaping the tech sector.


Noor Al-Sibai

Senior Staff Writer

At Futurism, I’ve often been drawn to unpacking the narratives that underlie technological, scientific and medical progress, with a special interest in areas of conflict and ambiguity that end up setting agendas and steering the fates of both elites and the hoi polloi. I’m a committed generalist, but I often find myself returning to work involving NASA and the private space sector, the effects of AI on media and society, and the mechanics of the pharmaceutical industry, with a specific focus on the spread of GLP-1 drugs like Ozempic and Wegovy.

Prior to Futurism, I worked for publications ranging from Media Matters and Truthdig to Raw Story and Bustle. I’m also the author of “Myspace Scene Queens,” a 2024 title in Instar Books’ acclaimed “Remember the Internet” series. My work at Futurism has been cited by outlets including the New Yorker, Slate, Nieman Lab, the Verge, the MIT Technology Review, the Sunday Times, and the Daily Beast.

I grew up in North Carolina, attended the University of North Carolina at Asheville, and now live in Brooklyn, New York. In my free time, I’m an avid reader and music fan; you can probably find me at a local poetry reading, concert, underground rave, or DJ set. I’m the proud parent of an ineffable orange cat named Mee-Mow.