If you were to go by recent headlines, you might understandably conclude that artificial intelligence is out to get us. The open letter that has been all over the news forces us to confront the question head-on: do we actually need to worry about AI?
Despite the recent attention garnered by alarmists, not everyone has issued warnings as dire as those of Elon Musk and Stephen Hawking. Mustafa Suleyman, a co-founder of the Google-acquired AI firm DeepMind, has responded with excellent points about taking a more tempered, pragmatic approach to AI. He should know: he and his team have been behind some of the most impressive advances the field has seen to date. A calm voice in a storm of hyperbole, he proposes focusing on the benefits AI offers us as a controllable tool.
It is precisely this approach that can provide some much-needed grounding amid the present frenzy of fear. While AI is frequently compared to nuclear weapons, we ought to remember that nuclear fission has also transformed modern society for the better. Though the tendency to weaponize new technology unsurprisingly continues, it is not worth stalling growth and investment in AI on the basis of unfounded concerns.
While the open letter specifically addresses lethal AI in military settings, accepting an overly broad definition of what counts as harmful would be enough to stall otherwise useful advances in the field. AI-driven decision support systems in military communications, for instance, could drastically improve situational awareness and save hundreds of lives. Though the letter calls for a ban on “offensive autonomous weapons”, the actual definitions may become murky in negotiations. Should a UN ban ensue, even more benign applications may come under fire.
Suleyman proposes that artificial general intelligence will be “a cheap and abundant resource to solve our toughest global problems”. Viewed in this light, it is clear that focusing AI development in civilian and humanitarian circles can position the technology for the greatest positive impact. Today we have tools such as open source software and international policy frameworks that can help regulate and guide it.
If AI were to be developed primarily within the confines of the world’s military complexes, then we would have cause to worry. However, any ban on military use is plainly unenforceable; one need only look at the difficulties of regulating nuclear weapons. Perhaps counterintuitively, the best way to ensure that AI is put to good use is simply to put it to good use, and, most importantly, to do so openly, with a broad international community. Out in the open, reasoned dialog can take place.
Though real dangers from AI do exist, worries about its misuse can best be allayed by proactively pursuing open, civilian development.