Okay, here’s what happened.
A pair of programmers at Carnegie Mellon developed an artificial intelligence (AI) that can play a version of the video game Doom. Using what they call "deep reinforcement learning," Guillaume Lample and Devendra Singh Chaplot made an AI that plays the game the way humans would: hunting and killing anything that moves.
The reinforcement came from the AI earning points for picking up items, moving about the map, and scoring kills, while being penalized for taking hits and dying. In other words, it learned much as a human player does, which sets it apart from the game's scripted bots.
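The reward scheme described above can be sketched as a simple shaping function. The event names and point values below are illustrative assumptions, not the values Lample and Chaplot actually used:

```python
# Illustrative reward shaping for a Doom-playing RL agent.
# Event names and point values are assumptions for the sake of example,
# not the authors' actual implementation.

REWARDS = {
    "item_pickup": 4,    # positive: picking up items
    "movement": 1,       # small positive: moving about
    "kill": 100,         # large positive: scoring a kill
    "hit_taken": -10,    # negative: taking damage
    "death": -100,       # large negative: dying
}

def step_reward(events):
    """Sum the shaped reward for all events in one game tick."""
    return sum(REWARDS[e] for e in events)

# Example: a tick where the agent moved, killed an enemy, and took a hit.
print(step_reward(["movement", "kill", "hit_taken"]))  # 91
```

In training, a reward signal like this is all the agent sees; over many episodes it learns which actions tend to raise the total, which is why play that maximizes kills while avoiding damage emerges without anyone scripting it.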
It’s just a video game, it’s not real. Right?
Here’s the thing, though. The AI is as real as it gets. While it may only be operating inside an environment of pixels, it raises questions about AI development in the real world.
Some of the current applications of AI have been controversial. Most, however, have been technological breakthroughs with very useful consequences in fields such as medicine, space technology, and transportation, to name a few.
Those who advocate clear policies on AI development argue that now, at this relatively early stage of the technology, is the time to set them. Miles Brundage, AI policy research fellow at the University of Oxford and fellow at Arizona State University, believes the “key question related to AI policy […] is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory.”
Sound AI policy, ideally, would not obstruct the technology's development. Rather, it should protect the technology from those who might try to abuse or misuse it.
Sounds noble enough.