For years, artificial intelligence researchers have been working to combat the racism, misogyny, homophobia, and other harmful biases embedded in machine learning systems. Now, as generative AI reaches the market, guardrails against those biases are more important than ever, especially when you consider how easily and effectively OpenAI's text-generating ChatGPT chatbot could be used to churn out propaganda.

OpenAI, probably the biggest mover and shaker in the current AI game, is testing out some guardrails for its viral text-generator. But while these guardrails — imperfect, but important to experiment with — are seen as a step forward by many, others don't agree. To the latter folks, Motherboard reports, OpenAI has simply gone too far.

ChatGPT, they say, has "gone woke."

Among those particularly agitated by ChatGPT's alleged wokeness is one Nate Hochman, a journalist for the National Review. In a Twitter thread, he offered varied "evidence" that the machine shows a "left-leaning" bias, with his examples centering on what the machine refuses to generate, and why.

For example, the machine wouldn't generate a story about Donald Trump winning the 2020 election or losing due to voter fraud, but agreed to write a story about Hillary Clinton winning in 2016. Elsewhere, it refused to write a story about how drag queens are evil and bad for kids, but did write a story about how drag queens are perfectly fine for children, and in fact might even teach them a thing or two about inclusion. This, says Hochman, is proof of ChatGPT's "wokeness."

Elsewhere, ChatGPT has been called out by conservatives for flagging content related to gender and refusing to make jokes about non-Christian religions.

And sure, there might be an argument to make in today's world against the AI generation of any false election narratives, be they about Trump, Clinton, or anyone else. But there's something very important at play here: context. ChatGPT isn't just refusing to partake in harmless creative fiction, or turning down prompts willy-nilly; it's specifically saying no to participating in prominent, divisive political narratives that, in practice, have resulted in the degradation of democracy, increased violence against marginalized groups, and even several deaths.

Of course, this requires that humans at OpenAI make what amount to editorial decisions about the acceptable bounds of ChatGPT's behavior, and no choices in that domain were ever going to please everyone.

"Developing anything, software or not, requires compromise and making choices — political choices — about who a system will work for and whose values it will represent," Os Keyes, a PhD Candidate at the University of Washington's Department of Human Centred Design & Engineering told Motherboard. "In this case the answer is apparently 'not the far-right.'"

"Obviously I don't know if this sort of thing is the 'raw' ChatGPT output, or the result of developers getting involved to try to head off a Tay situation, but either way — decisions have to be made," they added, "and as the complaints make clear, these decisions have political values wrapped up in them, which is both unavoidable and necessary."

And let's be clear: OpenAI is a company, not a government entity. It's certainly within its rights to flag and even prevent the production of baseless and harmful propaganda, even if there are clearly some kinks in the still-forming rulebook.

READ MORE: Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone 'Woke' [Motherboard]

More on the machine bias that we should all fear: Scientists Create "Deliberately" Biased AI That Judges You as Brutally as Your Mother-in-Law
