It's been — presumably — a truly bizarre week in The Hague, where, backdropped by the ongoing circus of Microsoft's Bing AI publicly melting down into a monstrous, homewrecking, Pinocchio-role-playing chaos machine, military leaders from 50 countries gathered to discuss "responsible" use of artificial intelligence in the military.
The timing? Absolutely impeccable. The substance of the Dutch-hosted summit, though? Worrying.
According to Reuters, leaders have certainly gathered, and they've certainly said some things. They've also reportedly signed an agreement to abide by "international legal obligations" in "a way that does not undermine international security, stability and accountability," which sounds good on the surface. But that already-modest agreement was reportedly non-binding, and human rights advocates, per Reuters, warn that it lacked any specific language regarding weaponry "like AI-guided drones, 'slaughterbots' that could kill with no human intervention, or the risk that an AI could escalate a military conflict."
In other words, while the whole point of the summit has been for leaders to establish some firm ground rules, any norms that may have been established lack any firm specifics — seemingly rendering them impossible to coherently implement. Even so, the US cordially asks you to please abide by them, thank you very much!
"We invite all states to join us in implementing international norms," US Under Secretary of State for Arms Control Bonnie Jenkins put forth in a Feb 16 declaration, according to Reuters, "as it pertains to military development and use of AI" and autonomous weapons.
"We want to emphasize," added Jenkins, "that we are open to engagement with any country that is interested in joining us."
If the US offered anything more concrete, per Reuters, it was Jenkins' assertion that "human accountability" and "appropriate levels of human judgment" should be leveraged in order to responsibly incorporate artificial intelligence into military operations. Which, sure, is all good.
But both human accountability and human judgment are the baseline expectations for any weapon. AI systems don't get dropped off by storks on military doorsteps; at the end of the day, even if military-AI integration means that human beings won't be literally pulling as many triggers as they do today, humans are both building and unleashing the AI systems that will do the killing. Humans, no matter what, are accountable for the outcomes of AI systems, whether the machines function as planned or go completely off the rails.
And again, without actually defining any clear rights and wrongs — especially when it comes to the use of specific and lethal AI-powered weapons — broader statements regarding sound judgment and accountability are ultimately hollow.
Perhaps unsurprisingly, we're not the only ones with questions.
To be fair, the US Department of Defense does have some written guidelines regarding the American military's use of AI.
But countries can break and remake their own rules. Actually establishing strong, distinct international guardrails and expectations, as was seemingly the goal of this summit, may well be the best way of ensuring AI responsibility, especially considering how poorly understood these much-hyped systems really are. Researchers have also warned that an international AI arms race could pretty easily destroy civilization, so there's that. (Reuters reports that Chinese representative Jian Tan urged international leaders to "oppose seeking absolute military advantage and hegemony through AI," adding that the United Nations should play a big role in facilitating AI development.)
In the most optimistic reading, the summit was a first step, albeit a baby one.
On that note, for everyone's sake, let's hope next year's meeting has a bit — or a lot — more juice. If there's ever a time for pinky promise-level agreements, deciding how to go about building war robots isn't it.