It's getting hard to tell humans and bots apart.
If you feel like CAPTCHAs are getting harder to solve — CAPTCHA stands for the mouthful "Completely Automated Public Turing test to tell Computers and Humans Apart," but you probably know them as the squiggly-text or image-identification puzzles websites use to weed out bots — you're right.
That's the conclusion of a fascinating new story in The Verge, which traces the history of CAPTCHAs — and finds that it's getting harder and harder to tell humans and artificial intelligence apart.
Bunch of Squares [LOL]
Back in the day, CAPTCHAs could stump algorithms by asking users to interpret distorted text. But by 2014, bots were better at solving those puzzles than humans. Many websites have since switched to image identification — Google often asks you to identify crosswalks and traffic lights — but now AI is getting better at cracking those, too.
"We’re at a point where making it harder for software ends up making it too hard for many people," University of Illinois at Chicago computer science professor Jason Polakis told The Verge. "We need some alternative, but there’s not a concrete plan yet."
The bottom line: it's getting genuinely difficult to tell humans and computers apart. And that's extremely worrisome as more of the web becomes astroturfed by bots that act like people, spreading content on social media and generating fake views.
"The tests are limited by human capabilities," Polakis told The Verge. "That’s very limiting in what you can actually do. And it has to be something that a human can do fast, and isn’t too annoying."
READ MORE: Why CAPTCHAs Have Gotten so Difficult [The Verge]
More on CAPTCHAs: CAPTCHAs Are Dead, and Neural Networks Killed Them