Machines vs Hackers: Has Cyber-Security Exceeded the Limitations of Human Intellect?

Written by Maxalan Vickers and Charles Mbaruguru

Remember, back in the day, when corporations would put a handful of guys in a room with a bunch of computers and hope they were ensuring cybersecurity? And then the inevitable outrage: “we’ve got a whole team looking at this! How could there have been a breach?” Well, the game has officially changed. In 2014, companies reported 42.8 million detected cyber-attacks worldwide, a 48% increase over 2013*. The forceful push of companies, governments, and ordinary people onto digital infrastructure is driving unprecedented cyber security risk. This year, annual federal government spending on cyber security will reach $13.3 billion, earmarked to combat cyber-attacks, which have increased a whopping 445% since 2006. Over the last year alone, federal agencies have seen 78% growth in cyber incidents*. The genie, as they say, is out of the bottle.

Cyber security is a major problem. Consider this: it takes ten times as much effort to defend against a cyber-attack as it does to create one. Bad odds for the good guys. Add to that the fact that government experts are now uncovering 110,000 cyber security breaches every hour, and the numbers get scary fast: that’s roughly 30 breaches every second! US IT spending on security was about $60 billion in 2012, but to keep pace with the threat, that growth rate will need to quadruple to 24% per year, which, extrapolated over the next decade, puts security spend at roughly $639 billion by 2023, more than a tenfold increase. But will even that be enough? Global corporations are hiring cyber security professionals faster than universities can graduate them, but at some point we must ask ourselves: has cyber-security exceeded the limitations of human intellect? Are we at the point of needing artificial intelligence (AI) to fight cyber crime?
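The extrapolation above is easy to verify with a few lines of arithmetic (a quick sketch using the article’s own figures: a $60 billion baseline in 2012 compounding at 24% per year):

```python
# Compound the 2012 US security IT spend at 24% per year through 2023.
spend = 60.0  # billions of USD, 2012 baseline (figure from the article)
for year in range(2012, 2023):  # eleven years of growth: 2012 -> 2023
    spend *= 1.24

print(round(spend))  # ≈ 639 (billion USD)
```

Eleven consecutive years of 24% growth multiplies the baseline by about 10.7x, which is where the “$639 billion by 2023” and “tenfold increase” figures come from.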

Threats are mounting - spyware, malware, spoofing, phishing, botnets, data leakage, identity theft - it’s almost too much for the human mind to take in. Threat complexity keeps leaping forward, with malicious code and viruses becoming more intelligent every day and increasingly capable of adapting to legacy defenses. Could introducing machine learning into the ever-expanding war against cyber crime be our knight in shining armor, come to save the day?

Machine learning is a branch of artificial intelligence (AI) focused on enabling machines to understand information: its intent and its context. In a world where technology grows at exponential rates, human intelligence alone cannot keep up with its capabilities. In security, the hackers are winning the battle being waged in cyberspace. For instance, the U.S. Army has 21,000 security analysts tasked with protecting its Cyber Command unit, yet by some estimates they are still outnumbered 100:1 by hackers looking to tamper with the system. Other estimates run as high as 1,000:1*! All we know for sure is that this problem needs new thinking, fast, and our machines may be best equipped to provide it.

Machine learning could be the answer. It allows machines to learn from the patterns, relationships, and associations among all the bytes of data in a system. The more data the system is fed, the more connections the machine can make, learning much the way humans learn from birth: by associating stimuli with patterns. Programming in the cyber security space is behind the times and relies too heavily on human programmers, who make mistakes and are limited in the knowledge they can pass on to the computer. A new school of thought suggests that instead of teaching machines to think for us, we should be teaching machines to think for themselves. No human can predict every type of attack that could be launched against a system, so programmers face an uphill battle that grows steeper by the day. Machine learning offers defenders a first-mover advantage because machines can learn patterns humans cannot foresee, and so be prepared for attacks no one anticipated.
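To make the idea of “learning what normal looks like” concrete, here is a minimal sketch of baseline-based anomaly detection. All of the names, numbers, and the z-score technique are illustrative, not any particular vendor’s method: the system learns the normal range of a traffic metric, then flags values that stray too far from it.

```python
from statistics import mean, stdev

def train_baseline(samples):
    """Learn the normal range of a traffic metric (e.g. requests/sec)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical training data: requests/sec observed during normal operation.
normal_traffic = [98, 102, 97, 105, 99, 101, 103, 100, 96, 104]
baseline = train_baseline(normal_traffic)

print(is_anomalous(100, baseline))   # typical load: False
print(is_anomalous(5000, baseline))  # possible botnet flood: True
```

The point is that nobody had to enumerate attack signatures in advance: feed the system more observations and its notion of “normal” sharpens, which is exactly the association-from-data learning the paragraph above describes.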

Several cyber security companies have woken up to the vast potential of using AI to fight cybercrime. Two are Rippleshot and Norse. Rippleshot built a machine learning system to fight credit card account takeovers and says it lets them fight fraud at the speed of data. Their system uses millions of past takeovers as blueprints for defending against future attempts, and can now detect data breaches within hours or minutes instead of the weeks or months it took in cases like the Target breach. Norse fights attacks from botnets and other compromised hosts. The firm deploys millions of sensors and agents, called honeypots, across the internet to detect attacks. The honeypots feed a risk score for websites, and sites exceeding a certain score are blocked. Norse also blocks risky outgoing links to protect against phishing attacks.
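The risk-score gate described above can be sketched in a few lines. This is a toy model, not Norse’s actual system; the scoring formula, threshold, and host names are all invented for illustration:

```python
RISK_THRESHOLD = 70  # hypothetical cutoff on a 0-100 risk scale

def risk_score(reports):
    """Aggregate honeypot reports (counts of malicious events seen
    from a host) into a single 0-100 risk score. Toy scaling."""
    return min(100, sum(reports) * 10)

def should_block(host, reports):
    """Block the host if its aggregate risk score exceeds the threshold."""
    return risk_score(reports) > RISK_THRESHOLD

print(should_block("benign.example.com", [0, 1, 0]))    # low score: False
print(should_block("botnet.example.net", [12, 30, 8]))  # high score: True
```

The interesting part is upstream of this snippet: the scores themselves come from machine-observed behavior across millions of honeypots, not from a human-maintained blacklist.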

In his book Abundance, Peter Diamandis predicts that every computer will be as smart as the human brain by 2016 and that, by 2020, every computer will be as smart as all the human brains in the world combined. Now is the time to use, and trust, technology to do our heavy lifting in the war against cyber crime. Machine learning is the next step in protecting ourselves, and our clients, from the hatching swarms of hackers. Rather than defending against hackers with human intellect alone, we have no option but to use machines that can learn far faster than humans ever could. Machines are winning at Jeopardy. Machines are driving our cars. Why wouldn’t we want machines protecting us from cyber crime? Sorry guys, when it comes to cyber-security, my bet’s on Watson*.


