Google pulled a headfake.

Let's catch you up real quick: Google partnered with the Department of Defense for Project Maven, in which artificial intelligence would analyze military drone footage. Google employees made it clear they weren't happy to be working on the project. And last week, it looked like the company was going to meet their demands: Google announced that it would not renew its contract with the military when it expires next year.

Well, it turns out that sweet, sweet military dough is too good to pass up. On Thursday, Google CEO Sundar Pichai revealed new internal guidelines for how Google plans to conduct itself in the future. And we can expect the company's military deals to continue, as WIRED reported (is it a coincidence that, last month, the company apparently removed its longtime motto "don't be evil" from its code of conduct? You decide).

The updated guidelines, which Google laid out in a blog post, do say that Google will have no part in building weapons or “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” and they also rule out surveillance technologies like those sold by Amazon.

You may be thinking, “But that’s the same stance Google had at the beginning of this whole Project Maven mess!” And, dear reader, you would be right. At least as far as military involvement goes, Google’s stance seems to be something along the lines of: “Hey, we hear you. But we’re gonna keep doing what we want anyway. Trust us, we're gonna be, like, really really ethical."

In response, many are calling for Google to establish an independent ethics committee to oversee its military involvement, according to WIRED. Because, strangely enough, this headfake may have shaken people’s trust in the unavoidable, omnipresent provider of critical online services such as email, driving directions, and late-night paranoid internet searches.

In all fairness, other tenets of Google’s new guidelines could be crowd-pleasers. They call for technology developed with particular care toward racial minorities, members of the LGBTQIA community, and other marginalized groups. This is likely a response to the fact that many AI systems are biased, having been inadvertently trained in ways that lead them to treat people unfairly.

It’s not yet clear how Google and the Department of Defense will work together in the future. And, as many have pointed out, the Department of Defense certainly won’t stop developing artificial intelligence tools and weapons just because Google isn’t going to help. But Google employees, and the public, will likely keep up the pressure to ensure the company abides by its own guidelines and stays out of the weapons game.

This article has been updated to include Google's blog post about the new guidelines.
