Can AI Make Us Sexist?

While supercomputer engineers are pushing the boundaries of computational intelligence, they may also be reinforcing outdated gender norms through the anthropomorphism of their creations. In driving technological progress, they risk doing so at the expense of socio-cultural progress.

Most pertinent is the fact that IBM's Watson, arguably the world's most famous AI system, has a man's name and voice, while nearly all virtual assistants have female names and voices: Siri, Alexa, Cortana. This has most frequently been attributed to marketing strategy.

Jason Mars, CEO of the financial software company Clinc, told the New York Times that he gave his artificial intelligence (AI) assistant a “helpful, young Caucasian female” voice because, in order to be successful commercially, “there’s a kind of pressure to conform to the prejudices of the world.” A slightly different argument came from Clifford Nass, a Stanford University professor, who told CNN in a 2011 interview that “it’s a well-established phenomenon that the human brain is developed to like female voices.”

This, though, is not an adequate argument: habit or conditioning is no indicator of what is ethically responsible. A racial equivalent would be giving AI assistants a Mexican- or Black-sounding voice because these groups, to the great detriment of human history, have traditionally been relegated to subservient jobs rather than powerful ones.


Judith Butler, a seminal feminist philosopher, argues that there is no biological causality between having a penis or a vagina and behaving in a certain way. Gender norms, such as women wearing dresses and men wearing suits, have been built up over thousands of years of societal conditioning until they seem normal.

In his book Wired for Speech, Nass asserts that male voices are generally perceived as authoritative and dominating, while female ones are associated with kindly helping people reach conclusions themselves. On Butler's view, though, this perception is the product of the patriarchy's long dominance galvanizing a coincidence into an apparent truth.

A Subtle Conditioning

The fundamental issue is that technology is becoming more ubiquitous in our lives and is therefore playing an increasingly significant role in our conditioning. When subservient AI assistants use women's voices, they play into the archaic stereotype that women are inferior.

While it may mean a product sells better, or that people find it more comfortable, that does not mean it is right. In order to challenge archaic power structures, we must challenge the ways they manifest themselves. Changing the names and voices of AI assistants is pivotal to preventing the reinforcement of the notion that women are assistants by default.

Perhaps even worse is the possibility that these AIs are not just coming in sexist packaging, but are programmed to be sexist as well. Programming is language, and like any language it requires definitions. Those definitions are ingrained with prejudices, weights, and subtle power structures that even the individual programmer may not be aware of. Therefore, if an AI is programmed with discriminatory definitions, whether intentional or not, the AI itself will become discriminatory.
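To make this concrete, here is a minimal sketch of how that can happen in a word-embedding-style model. The words and every vector value below are invented for this illustration; a real system would learn such vectors from a text corpus, absorbing whatever associations that corpus carries.

```python
import numpy as np

# Hand-made 2-D "word vectors" (hypothetical values, invented for this
# illustration; a real model would learn them from text). Axis 0 loosely
# encodes a gender association, axis 1 a job/role association.
VECTORS = {
    "man":       np.array([ 1.0, 0.0]),
    "woman":     np.array([-1.0, 0.0]),
    "executive": np.array([ 1.0, 0.9]),
    "assistant": np.array([-1.0, 0.9]),
    "pilot":     np.array([ 0.9, 0.4]),
    "secretary": np.array([-0.5, 0.5]),
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Answer "a is to b as c is to ?" by vector arithmetic."""
    target = VECTORS[b] - VECTORS[a] + VECTORS[c]
    candidates = {w: v for w, v in VECTORS.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

# Because the toy vectors encode the stereotyped associations of their
# (imagined) training data, the model completes the analogy along them:
print(analogy("man", "executive", "woman"))  # -> "assistant"
```

This toy mirrors a widely reported real result: researchers have shown that word embeddings trained on news text complete "man is to computer programmer as woman is to..." with "homemaker."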

Kate Crawford summarized a similar phenomenon in AI image recognition, writing in the New York Times that:

Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.

A real-world example of this is Google's photo app labeling photos of Black people as gorillas. This is concerning because the majority of Silicon Valley, and of the tech world as a whole, consists of white, middle-class men. At the more horrific end of the spectrum, this could lead to an oppressive and regressive AI; at the less severe end, it could mean that AIs are narrowed by the lack of diversity among their creators. This must be rectified soon by employing more female futurists, technicians, and researchers.
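As a minimal sketch of the dynamic Crawford describes, consider the toy experiment below. Everything in it is synthetic: the two "demographic groups" are just Gaussian clusters whose features are shifted relative to one another, and the classifier is a plain logistic regression. The point is only that a model trained on data dominated by one group performs markedly worse on an underrepresented one.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic stand-in for one demographic group: two Gaussian clusters
    (label 0 and label 1) whose feature locations differ by a group shift."""
    x0 = rng.normal([0.0 + shift, 0.0], 1.0, size=(n // 2, 2))
    x1 = rng.normal([3.0 + shift, 3.0], 1.0, size=(n // 2, 2))
    return np.vstack([x0, x1]), np.repeat([0, 1], n // 2)

def augment(X):
    # Append a constant column so the model can learn a bias term.
    return np.hstack([X, np.ones((len(X), 1))])

def accuracy(w, X, y):
    return float(np.mean((augment(X) @ w > 0) == y))

# Training data: overwhelmingly group A, with group B barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=4.0)
A = augment(np.vstack([Xa, Xb]))
y = np.concatenate([ya, yb])

# Plain logistic regression, fit by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.clip(A @ w, -30, 30)))
    w -= 0.1 * A.T @ (p - y) / len(y)

# Evaluate on fresh samples from each group.
print("group A accuracy:", accuracy(w, *make_group(500, shift=0.0)))  # near 1.0
print("group B accuracy:", accuracy(w, *make_group(500, shift=4.0)))  # far lower
```

Nothing in the classifier itself is "prejudiced"; the accuracy gap comes entirely from who was, and was not, in the training set, which is exactly why the composition of datasets, and of the teams that assemble them, matters.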

We have given a lot of power to our technology, granting it a remarkable capability to color our perception. The price of this is that we must be extremely careful to understand the potential consequences of anthropomorphic and programming decisions, because they have the ability to subtly condition us.

We must be careful that the use of female-voiced assistants in technology is not the first step down a slope that leads to further stereotyping and oppression. Companies may be wise to follow the example of the Google Assistant, which takes a step towards gender-neutrality by forgoing a human name. Further, by moving away from humanizing AI assistants, programmers would ensure that these design choices shape only how we interact with technology, rather than jeopardizing how we interact with one another.

