To borrow a cliché opening from the last high school commencement or Maid of Honor speech you heard, the dictionary defines artificial intelligence (AI) as 1: a branch of computer science dealing with the simulation of intelligent behavior in computers; and 2: the capability of a machine to imitate intelligent human behavior.
But do these definitions really explain the difference between an artificially intelligent system and one that’s just programmed to be useful? What is “intelligent” behavior or, more specifically, “intelligent human behavior”?
The definition is clearly open to some level of interpretation. Combine that ambiguity with the term’s cool sci-fi connotations, and you get a world in which, as The Atlantic’s Ian Bogost puts it, “deflationary examples of AI are everywhere.”
In an article titled “‘Artificial Intelligence’ Has Become Meaningless,” Bogost takes issue with the widespread overuse of the term AI both within and outside the tech realm. “[I]n most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software,” he argues, noting the use of the term to describe everything from fairly simple pattern-matching filters to easily fooled algorithms.
So, if everything is revolutionary AI tech, then nothing is, right? Maybe not. Perhaps we just need to revisit that definition, and to that end, we must turn to the experts.
For many, the term “artificial intelligence” calls to mind humanoid robots like C-3PO from “Star Wars” or Dolores from “Westworld.” SoftBank Robotics has its own version of that type of AI: Pepper. Omar Abdelwahed, SoftBank Robotics America’s Head of Studio, is therefore particularly well-suited to share his own definition of AI.
“At base, for a system to exhibit artificial intelligence, it should be able to learn in some manner and then take actions based on that learning,” says Abdelwahed. “These actions are new behaviors or features of the system evolved from the learnings.”
A spokesperson for IBM, home to Watson, perhaps the most famous AI this side of science fiction, went one step further, positing that an AI should be able not only to learn and reason but also to interact and react:
AI platforms should do more than answer simple questions. They should be able to learn at scale, reason with purpose, and naturally interact with humans. They should gain knowledge over time as they continue to learn from their interactions, creating new opportunities for business and positively impacting society.
By those definitions, Bogost is clearly right that a great number of AI systems don’t deserve the name. Facebook’s suicide-detection system, for example, was heralded as being “assisted by artificial intelligence.” True, it can learn something (that a user’s behavior suggests they may be suicidal) and then take action (by offering the number of a helpline), but it doesn’t evolve. If it works as designed, it could positively impact society, but is it able to naturally interact with humans or learn from its interactions? Not so much.
In a recent survey by Pegasystems, 72 percent of the 6,000 adult consumers polled claimed to understand what is meant by “artificial intelligence,” yet only 34 percent said they had ever come in contact with AI. The actual percentage who had? Eighty-four. The gap points to a clear misunderstanding of the term among average consumers.
Perhaps this hearkens back to that idea of C-3PO and Dolores. Both are machines that largely look and act like humans, and they, along with other sci-fi figures, may have clouded public consciousness when it comes to not only identifying an artificial intelligence, but trusting it.
Truly, a system needn’t be human-like at all to be AI. Though named after a person, Watson behaves very little like a human, save a knack for language processing, and Tesla’s Autopilot system is artificially intelligent, but it couldn’t hold a conversation with a Jedi, let alone help one save the galaxy. Rosie, the aproned maid from “The Jetsons,” and Skynet from “The Terminator” franchise are both examples of AI, yet they couldn’t be more different, and depending on which represents your first encounter with the idea, you might carry a very different internalized bias toward future iterations of the tech.
With artificial intelligence poised to disrupt everything from our roads to our schools to our workplaces, it’s time we got on the same page about what AI is and, perhaps more importantly, what we want it to be, so that we can regulate and control the technology.
If we don’t, anyone looking for a way to draw buzz to their latest product will continue to co-opt the phrase, and the definition will grow even cloudier. Our society is on the verge of becoming the proverbial boy who cried wolf when it comes to AI. If the trend continues, the public might not even notice when a truly revolutionary AI system does arrive, and anyone who’s seen a movie in the last 50 years should be able to tell you how problematic that could be.