Goodbye, CAPTCHAs?

If you've ever been browsing the internet and spotted something interesting, like a video or an article you wanted to read, you may have discovered that before you can view it, the website asks you to prove that you're human. You're usually instructed to complete a simple task, such as typing out a word hidden under a bunch of squiggly lines or identifying which image matches a certain description. While such tasks can be annoying, the test is a necessary component of cybersecurity in the modern age, at least in terms of telling human users from bots. But advances in machine learning may mean that's soon to change.

That test is called a Completely Automated Public Turing test to tell Computers and Humans Apart, better known as a CAPTCHA. New research from artificial intelligence (AI) company Vicarious has found that while these tests have become a cybersecurity staple, it may only be a matter of time before bots can outsmart them. Vicarious developed a machine learning system designed to mimic the human brain: a computer vision model the company dubbed the Recursive Cortical Network (RCN), which simulates what we call "common sense."

“For common sense to be effective it needs to be amenable to answer a variety of hypotheticals — a faculty that we call imagination,” according to a blog post by Vicarious. Details of the AI model have been published in the journal Science. Essentially, Vicarious' RCN uses techniques derived from human reasoning to parse text, such as recognizing the letter A, by building its own version of a neural network.
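To give a rough sense of the general idea, here is a deliberately simplified Python sketch of recognizing a distorted letter by scoring its pixels against stored letter templates. This is only an illustrative toy under our own assumptions (tiny hand-made 5x5 templates, simple pixel matching), not Vicarious' actual RCN, which is a far richer generative model.

```python
# Toy sketch (not Vicarious' RCN): recognize a distorted letter by scoring
# noisy pixel evidence against stored templates. The templates and the
# matching rule are invented for illustration.
import numpy as np

# Hypothetical 5x5 binary templates for the letters "A" and "T".
TEMPLATES = {
    "A": np.array([[0, 1, 1, 1, 0],
                   [1, 0, 0, 0, 1],
                   [1, 1, 1, 1, 1],
                   [1, 0, 0, 0, 1],
                   [1, 0, 0, 0, 1]]),
    "T": np.array([[1, 1, 1, 1, 1],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0]]),
}

def recognize(image):
    """Return the letter whose template best explains the observed pixels."""
    scores = {letter: (image == tmpl).mean() for letter, tmpl in TEMPLATES.items()}
    return max(scores, key=scores.get)

# A distorted "A": two pixels flipped, the way a squiggly CAPTCHA might render it.
noisy_a = TEMPLATES["A"].copy()
noisy_a[0, 0] = 1
noisy_a[4, 2] = 1
print(recognize(noisy_a))  # -> "A", despite the distortion
```

A real CAPTCHA solver must handle overlapping, warped, and cluttered characters, which is exactly where the RCN's more flexible, brain-inspired approach reportedly outperforms this kind of rigid matching.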


Blurring the Lines?

Vicarious' RCN was able to solve CAPTCHAs from the BotDetect system with 57 percent accuracy, and it can parse CAPTCHAs faster than other deep learning algorithms. While the goal of this research wasn't to break CAPTCHAs, that's exactly what it did. That raises the question: does this mean computer systems are more vulnerable to cybersecurity threats? If machines can crack CAPTCHAs the way humans do, such cybersecurity measures would be rendered ineffective and obsolete.
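A quick back-of-the-envelope calculation shows why even 57 percent is enough to render a CAPTCHA useless. Treating the reported figure as a per-attempt solve rate (an assumption on our part), a bot that is allowed to retry only needs a couple of tries per CAPTCHA:

```python
# Why a 57% solve rate defeats a CAPTCHA in practice: a bot can retry.
solve_rate = 0.57

# Expected attempts until the first success (geometric distribution).
print(f"~{1 / solve_rate:.1f} attempts per solved CAPTCHA")  # ~1.8

# Probability of at least one success within n retries: 1 - (1 - p)^n.
for n in (1, 3, 5):
    print(n, "tries:", round(1 - (1 - solve_rate) ** n, 3))
# 1 tries: 0.57   3 tries: 0.92   5 tries: 0.985
```

In other words, a solver doesn't need human-level accuracy; it only needs to succeed often enough that retries are cheap.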

CAPTCHAs, while quite common, aren't the only layer of security computer systems employ. AI is making it difficult — if not nearly impossible — for today's cybersecurity measures to differentiate between a human and a machine. It won't be surprising, then, if AIs become useful to hackers. Indeed, DARPA, the U.S. Department of Defense's research arm, wants to develop an AI hacker. On the flip side, AI can also be used to fight back against AI hacking. For example, scientists at the European Laboratory for Particle Physics (CERN) have been training a machine learning system to protect their data from cyber threats. A key challenge is teaching these intelligent algorithms to identify malicious network activity. Perhaps the next step could be identifying machines pretending to be humans?
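As an illustration of that kind of task, here is a minimal Python sketch of flagging unusual network activity with an isolation forest, a standard anomaly detection technique. The features, numbers, and model choice are all assumptions made for this example; it is a generic sketch, not CERN's actual system.

```python
# Generic anomaly detection sketch (not CERN's system): train on normal
# traffic, then flag connections that deviate sharply from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented per-connection features: [bytes transferred, requests per minute].
normal_traffic = rng.normal(loc=[500.0, 30.0], scale=[100.0, 5.0], size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new connections: +1 means "looks normal", -1 means "flag for review".
new_connections = np.array([
    [520.0, 29.0],      # typical traffic
    [50000.0, 400.0],   # exfiltration-like spike
])
print(model.predict(new_connections))  # expected: [ 1 -1]
```

Spotting a machine that deliberately imitates a human, rather than one that simply misbehaves, is the harder next step.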

Computers are becoming more effective at replicating how the human brain works, although AI is arguably still far from being as smart as we are. Still, with an AI like the RCN, the lines are becoming more blurred. Machines can dupe other machines into thinking they're human, albeit with limitations. As Vicarious CEO Dileep George previously told NPR, the development of such human-like capabilities is the direction technology is moving.

