An AI Was Taught to Hunt and Kill Humans In Video Games: Here’s Why This Matters

Violent video games? Try violent AI.

9. 27. 16 by Dom Galeon

Way too cool, or way too much?

Okay, here’s what happened.

A pair of programmers at Carnegie Mellon developed an artificial intelligence (AI) that can play a version of the video game Doom. Using deep reinforcement learning, Guillaume Lample and Devendra Singh Chaplot made an AI that plays the game the way humans do: it hunts and kills anything that moves.

The reinforcement came from the AI earning points for picking up items, moving around, and scoring kills, while it was penalized for taking hits and dying. In other words, it learned to play much as a human player would, which sets it apart from the game's pre-programmed bots.
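The reward scheme described above can be sketched in a few lines of code. This is a minimal illustration, not the researchers' actual implementation: the event names and point values here are hypothetical, chosen only to show how positive and negative signals shape the agent's behavior.

```python
# Hypothetical reward shaping for a Doom-playing agent.
# Event names and weights are illustrative assumptions, not the paper's values.
def compute_reward(event):
    """Map a single in-game event to a scalar reinforcement signal."""
    rewards = {
        "item_pickup": 0.5,   # picking up items is rewarded
        "movement": 0.01,     # small bonus for exploring the map
        "kill": 1.0,          # scoring a kill earns the largest reward
        "hit_taken": -0.5,    # taking damage is penalized
        "death": -1.0,        # dying earns the largest penalty
    }
    return rewards.get(event, 0.0)

# Summing rewards over a short sample episode:
episode = ["movement", "item_pickup", "kill", "hit_taken", "death"]
total = sum(compute_reward(e) for e in episode)
```

Over many episodes, the agent adjusts its policy to maximize this running total, which is why behavior that racks up kills while avoiding damage emerges without anyone scripting it directly.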

It’s just a video game; it’s not real. Right?


Here’s the thing, though. The AI is as real as it gets. While it may only have been operating in an environment of pixels, it raises questions about AI development in the real world.

Keeping tabs on AI development

Some of the current applications of AI have been controversial. Most, however, have been technological breakthroughs with very useful consequences in fields such as medicine, space technology, and transportation, to name a few.

While we do not want to buy into AI hysteria, developing clear and sound policies for AI research, development, and applications is still worth considering.

Proponents of clear AI policy argue that it makes sense to establish such policies even now, at a relatively early stage of the technology. Miles Brundage, AI policy research fellow at the University of Oxford and fellow at Arizona State University, believes the “key question related to AI policy […] is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory.”


Sound AI policy, ideally, would not obstruct the technology's development. Rather, it would protect the technology from those who might abuse or misuse it.

Sounds noble enough.



