The Future of Hacking
There are many benefits that can come with the adoption and implementation of artificial intelligence (AI), but experts believe such widespread acceptance will also lead to more effective and more dangerous cyberattacks, and that those attacks will happen soon.
At this year’s Black Hat cybersecurity conference, 100 attendees were polled on various aspects of artificial intelligence, and 62 percent said they firmly believe hackers will be using AI within the next twelve months. Although AI may also be the best defense against such attacks, its increasing availability will likely lead to more advanced hacking techniques.
Hackers have already broken into huge institutions and disrupted the lives of many. Just last November, hospitals in the UK were the targets of a cyberattack that shut down three facilities and forced the cancellation of hundreds of scheduled surgeries. Ukraine’s power grid has been attacked multiple times, prompting the U.S. to examine the systems behind its own power grid and eliminate any potential weaknesses.
As the Internet of Things adds new devices every day, the potential pool of targets for AI-driven hacks is also growing exponentially. SpaceX and Tesla CEO Elon Musk has spoken about the dangers of AI on multiple occasions, calling it the biggest threat to our society and urging world leaders to impose regulations before it’s too late.
Jeremy Straub, Associate Director of the NDSU Institute for Cyber Security Education and Research, explained at The Conversation how the use of AI could improve cyberattacks. Unlike humans, who need food, sleep, and other things that impose limitations, an AI can act at any time and never needs to take a break. AI is also capable of processing large amounts of data quickly, making attacks on databases faster and easier to accomplish.
Even when met with opposition, or when programmed for vulnerabilities that have since been patched, an AI can adapt more quickly and effectively than any human, and do so without human input. Human defenders will be outmatched, unable to keep up with the speed at which such an AI operates.
Straub posits the start of an AI arms race between hackers and cybersecurity experts, with each side attempting to build AI capable of outperforming the competition. This, of course, could lead to larger attacks, and to the possibility of attacks that spiral out of control.
Despite the potential dangers AI poses, it should be noted that not everyone is standing idly by and waiting for the worst to happen. A number of people have also spoken out against Elon Musk’s warnings, saying the CEO’s statements focus on the wrong scenarios and make it harder to have an open conversation and debate about our future with AI.
Beyond these discussions, there’s also technology being developed that can aid against potential threats. Quantum computers are said to be one of the tools we can use against cyberattacks, and researchers around the world are working to bring us closer to a reality in which quantum computers are widely used.
Companies like Google and IBM have also taken steps to strengthen our cybersecurity. Google has created an “AI Fight Club” to train systems to more effectively combat harmful AI, while IBM’s new IBM Z mainframe system can run more than 12 billion encrypted transactions per day to prevent the theft of financial data.
It’s inevitable that artificial intelligence will soon become a huge influence on our lives. While it’s easy to dwell on the negative aspects of its advancement, we also need to dedicate an equal amount of time and effort to its benefits, lest we find ourselves incapable of dealing with either.