Don't be evil.

Since 2000, this phrase has served as a motto of sorts for Google. It was even the first sentence in Google's Code of Conduct – emphasis on was. On Friday, Gizmodo reported that, sometime in late April or early May, Google removed "don't be evil" almost entirely from its Code of Conduct. The phrase now appears only once, in the very last sentence.

The omission might seem small, but it can be read as the latest sign that Google may be shifting its moral priorities. And given that the company now provides far more than just answers to our most important (or silliest) questions, that shift could spell trouble for the future of humanity.

Now that Google is in this position of influence over our lives, will the company use its power primarily for good or for evil? Here are three signs pointing toward the latter.

Project Maven

Good and evil are subjective, of course, and one person's "evil" may be another's morally gray area. Still, helping drones kill people seems pretty firmly on the darker side of the ethical spectrum, and it's one of Google's recent side projects.

In March, news broke that Google was helping the U.S. Department of Defense with Project Maven, an initiative to build artificially intelligent drones for use in war. Thousands of Google employees signed a petition protesting the partnership, and last week some even resigned over it.

Still, Google is forging ahead with Project Maven, and as employees told Engadget, the company seems increasingly interested in military work and less interested in what its employees think.

Deceptive AI

Lying is another moral gray area, and one Google seems to be getting more comfortable navigating. Employees told Engadget that Google isn't as transparent with them as it used to be, and the same appears to be true of the company's relationship with the public.

Earlier this month, the company demoed its Google Assistant's new Duplex feature – an AI that can make tedious calls on a user's behalf. Allegations that the demo might have been "faked" have since cropped up, but even if it wasn't, some say the tech is deceptive, as the AI never identifies itself as non-human during the demo call.

“Google’s experiments do appear to have been designed to deceive,” Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, told TechCrunch in reference to the demo. "[E]ven if they don’t intend it to deceive, you can say they’ve been negligent in not making sure it doesn’t deceive."

Yossi Matias, Google’s vice president of engineering, did tell CNET post-demo that the AI would “likely” let people know it’s an AI once the software rolls out. We'll believe it when we see it.

The Selfish Ledger

Last week, The Verge obtained access to a video circulated internally at Google in 2016. Titled “The Selfish Ledger,” the video depicts a future in which Google doesn't just collect data on users but actively uses that data, ultimately with the goal of controlling people's behavior.

When The Verge reached out to Google for comment, a spokesperson didn't deny the creepy factor but asserted that the whole thing was merely theoretical:

We understand if this is disturbing – it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.

Still, it's not hard to see how Google could technically create the Selfish Ledger. Additionally, the video notes that the goal would be to align people's actions with "Google's values," such as environmental sustainability. If Google's values changed, though, so could the direction in which the Selfish Ledger guides users.

Of course, Google still has a lot going for it on the "good" side of the equation: the company donates money and resources to projects that help the environment, provide children with educational opportunities, and support disadvantaged communities.

However, Google is so powerful that if it decides "don't be evil" is no longer a goal worth pursuing, who knows how much damage it could do? For now, let's just hope that as the company continues to grow, its conscience doesn't shrink.

