Reinforcing AI Systems
When artificial intelligence (AI) is discussed today, most people are referring to machine learning algorithms or deep learning systems. While AI has advanced significantly over the years, the principle behind these technologies remains the same: someone trains a system on example data and asks it to produce a specified outcome, and it's up to the machine to develop its own way of reaching that outcome.
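To make that principle concrete, here is a minimal sketch of supervised learning, assuming scikit-learn; the toy data and the choice of a logistic regression model are purely illustrative, not anything specific to the systems discussed in this article.

```python
# A toy supervised-learning example: hand the system example inputs and the
# outcomes we want, and let it fit its own decision rule.
from sklearn.linear_model import LogisticRegression

X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]  # example inputs
y = [1, 0, 1, 0]                                      # desired outcomes

model = LogisticRegression().fit(X, y)  # the machine fits its own rule
print(model.predict([[0.15, 0.85]]))    # predicts class 1 for a similar input
```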
Alas, while we've been able to create some very smart systems, they are not foolproof. Yet.
Data science competition platform Kaggle wants to prepare AI systems for super-smart cyberattacks, and it's doing so by pitting AI against AI in a contest dubbed the Competition on Adversarial Attacks and Defenses. The battle is organized by Google Brain and will be part of the Neural Information Processing Systems (NIPS) Foundation's 2017 competition track later this year.
This AI fight club will feature three adversarial challenges. The first (non-targeted adversarial attack) involves subtly altering inputs so that a machine learning system misclassifies them; any wrong answer counts. The second (targeted adversarial attack) requires forcing another system to assign a specific incorrect label of the attacker's choosing. The third challenge (defense against adversarial attacks) focuses on building classifiers that hold up against both kinds of manipulation.
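To give a sense of what such an attack can look like in code, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known technique from the research literature for crafting adversarial examples; the contest itself doesn't prescribe any particular method. PyTorch is assumed, and the model, labels, and epsilon value are placeholders.

```python
# A minimal FGSM sketch: nudge each pixel in the direction that most
# increases (or, for a targeted attack, decreases) the model's loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03, target=None):
    """Perturb `image` so `model` misclassifies it.

    Non-targeted (target=None): step *up* the loss gradient for the true label.
    Targeted: step *down* the loss gradient toward the chosen target label.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    if target is not None:
        loss = F.cross_entropy(logits, target)  # pull toward the target class
        step = -epsilon                         # descend the loss
    else:
        loss = F.cross_entropy(logits, label)   # push away from the true class
        step = epsilon                          # ascend the loss
    loss.backward()
    adv = image + step * image.grad.sign()      # shift each pixel by +/- epsilon
    return adv.clamp(0, 1).detach()             # keep pixels in the valid range
```

The same helper covers both attack tracks: omit `target` for a non-targeted attack, or pass the label you want the victim model to output for a targeted one.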
“It’s a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled,” Jeff Clune, a University of Wyoming assistant professor whose own work involves studying the limits of machine learning systems, told the MIT Technology Review.
Responsible AI Development
AI is more pervasive than most people think, and as computer systems have become more advanced, machine learning algorithms have become more common. The problem is that the same smart technology can be used to undermine these systems.
“Computer security is definitely moving toward machine learning,” Google Brain researcher Ian Goodfellow told the MIT Technology Review. “The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend.”
Training AI to fight malicious AI is the best way to prepare for these attacks, but that's easier said than done. “Adversarial machine learning is more difficult to study than conventional machine learning,” explained Goodfellow. “It’s hard to tell if your attack is strong or if your defense is actually weak.”
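One defense explored in the research literature (the contest doesn't mandate any particular approach) is adversarial training: generating adversarial examples during training and teaching the model to classify them correctly. A minimal sketch, reusing the hypothetical fgsm_attack helper from earlier:

```python
# A minimal adversarial-training sketch: mix adversarial versions of each
# batch into the loss so the model learns to resist the perturbations.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # Craft adversarial examples against the model's current weights.
    adv_images = fgsm_attack(model, images, labels, epsilon)

    optimizer.zero_grad()
    # Train on clean and adversarial inputs together.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Goodfellow's point about evaluation applies here: a model that resists FGSM may still fall to a stronger attack, which is exactly the ambiguity the contest is designed to probe.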
The unpredictability of AI is one of the reasons some, including serial entrepreneur Elon Musk, are concerned that the tech may prove malicious in the future. They suggest that AI development be carefully monitored and regulated, but ultimately, it's the people behind these systems and not the systems themselves that present the true threat.
In an effort to get ahead of the problem, the Institute of Electrical and Electronics Engineers has created guidelines for ethical AI, and groups like the Partnership on AI have also set up standards. Kaggle's contest could illuminate new AI vulnerabilities that must be accounted for in future regulations, and by continuing to approach AI development cautiously, we can do more to ensure that the tech isn't put to nefarious ends.