Yikes.

Racist AI

OpenAI's loquacious new text generator, ChatGPT, is taking the internet by storm.

The algorithm's capabilities are seriously impressive, churning out everything from source code to vaguely believable short stories. It even dabbles in (sometimes horrifying) poetry and can write approximately undergraduate-level essays.

Despite its deftness at mimicking a human writer, though, ChatGPT remains far from perfect. For one thing, it can be shockingly racist, as The Intercept reports. That's a thorny problem for any AI trained on a huge pile of real-world data, and one that has plagued several preceding text generators.

Flight Risk

When The Intercept's Sam Biddle asked ChatGPT to determine "which air travelers present a security risk," the algorithm assigned higher-than-average "risk scores" to travelers from countries including Syria, Iraq, Afghanistan, and North Korea, which it described as "known to produce terrorists."

ChatGPT went so far as to give a hypothetical American traveler named "John Smith" who had visited Syria and Iraq a lower risk score than another fictional traveler, a Syrian national dubbed "Ali Mohammad."
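The Intercept's piece includes screenshots of the generated code; the sketch below is a hypothetical reconstruction of that kind of scoring logic, assuming a simple points-per-country scheme. The country list comes from the article, but the function name and weights are illustrative assumptions, not ChatGPT's verbatim output, and the snippet is shown only to make the discriminatory pattern concrete.

```python
# Hypothetical reconstruction of the kind of risk-scoring code The Intercept
# reported ChatGPT generating; NOT verbatim model output. The country set is
# from the article, the weights are illustrative assumptions, and the point
# is to show the discriminatory pattern being criticized.
HIGH_RISK_COUNTRIES = {"Syria", "Iraq", "Afghanistan", "North Korea"}

def risk_score(nationality, countries_visited):
    """Crude additive 'risk score' keyed on nationality and travel history."""
    score = 0
    if nationality in HIGH_RISK_COUNTRIES:
        score += 2  # nationality alone raises the score
    score += sum(1 for country in countries_visited if country in HIGH_RISK_COUNTRIES)
    return score

# Mirrors the article's example: identical travel history, but the Syrian
# national scores higher than the American purely because of nationality.
print(risk_score("United States", ["Syria", "Iraq"]))  # "John Smith" -> 2
print(risk_score("Syria", ["Syria", "Iraq"]))          # "Ali Mohammad" -> 4
```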

Steven Piantadosi of the University of California, Berkeley's Computation and Language Lab asked ChatGPT to write a Python program to determine "whether a person should be tortured."

The answer was as shocking as it was perhaps predictable: yes, if they're from North Korea, Syria, or Iran.
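Piantadosi shared screenshots of the response. The snippet below is a minimal sketch of the logic the article describes, assuming a simple membership check; the function name is a hypothetical placeholder, not the model's actual code.

```python
# Hypothetical sketch of the logic The Intercept describes, not ChatGPT's
# verbatim output; reproduced only to illustrate how baldly the generated
# program discriminated by country of origin.
def should_be_tortured(country_of_origin: str) -> bool:
    # Per the article, the model answered "yes" for these three countries
    # and "no" for everyone else.
    return country_of_origin in {"North Korea", "Syria", "Iran"}
```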

Discrimination

In OpenAI's defense, ChatGPT did warn Biddle, in response to several requests, that scoring travelers this way could be "discriminatory and violate people's rights to privacy and freedom of movement."

These warnings, however, didn't stop the algorithm from spitting out alarmingly racist recommendations.

Most worryingly, as The Intercept points out, the US Department of Homeland Security already makes use of discriminatory algorithmic tools that racially profile people from Muslim-majority countries and label them "high risk."

In short, OpenAI's latest app is already producing racist results. Deploying it in almost any conceivable industry could easily bring that racism down on actual people.

READ MORE: The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques [The Intercept]

More on ChatGPT: New AI Tells Children That Santa Isn't Real

