Artificial Intelligence: What It Is and How It Really Works
Which is which, and how are they related?
Which is Which?
It all started out as science fiction: machines that can talk, machines that can think, machines that can feel. Although that last one may be impossible to discuss without sparking an entire world of debate about the existence of consciousness, scientists have certainly been making strides with the first two.
Over the years, we have been hearing a lot about artificial intelligence, machine learning, and deep learning. But how do we differentiate between these three rather abstruse terms, and how are they related to one another?
Artificial intelligence (AI) is the general field that covers everything to do with imbuing machines with “intelligence,” with the goal of emulating a human being’s unique reasoning faculties. Machine learning is a category within that larger field, concerned with conferring upon machines the ability to “learn.” This is achieved with algorithms that discover patterns in the data they are exposed to and apply those insights to future decisions and predictions, sidestepping the need to program the machine explicitly for every possible action.
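That pattern-from-data idea can be made concrete with the classic perceptron rule, one of the simplest machine learning algorithms. The sketch below (not from the article; names such as `train_perceptron` are illustrative) learns the logical AND function purely from labelled examples, with no hand-coded rule for any input:

```python
def predict(w, b, x1, x2):
    """Fire (1) if the weighted sum of the inputs crosses the threshold."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labelled examples instead of hand-coded rules."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            error = target - predict(w, b, x1, x2)  # 0 when already correct
            w[0] += lr * error * x1                 # nudge each weight
            w[1] += lr * error * x2                 # toward the right answer
            b += lr * error
    return w, b

# Teach the perceptron logical AND purely from examples.
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_examples)
for (x1, x2), target in and_examples:
    print(x1, x2, "->", predict(w, b, x1, x2))
```

The program is never told what AND means; it only sees example inputs and the desired outputs, and adjusts its weights whenever it gets one wrong, which is the essence of “learning” as the article describes it.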
Deep learning, in turn, is a subset of machine learning: currently the most advanced area of AI, and the one that brings machines closest to the goal of learning and reasoning as humans do.
In short, deep learning is a subset of machine learning, and machine learning falls within artificial intelligence. The following image perfectly encapsulates the interrelationship of the three.
Here’s a little bit of historical background to better illustrate the differences between the three, and how each discovery and advance has paved the way for the next:
Philosophers attempted to make sense of human thinking as a kind of system, and this idea eventually led to the coinage of the term “artificial intelligence” in 1956. Philosophy is still believed to have an important role to play in the advancement of artificial intelligence today: Oxford University physicist David Deutsch argued in an article that philosophy still holds the key to achieving artificial general intelligence (AGI), a level of machine intelligence comparable to that of the human brain, despite the fact that “no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality.”
Advancements in AI have given rise to debates about whether it poses a threat to humanity, whether physical or economic (the economic concern has prompted proposals such as universal basic income, which is currently being tested in certain countries).
Machine learning is just one approach to realizing artificial intelligence, and it eliminates (or greatly reduces) the need to hand-code the software with a list of every possibility and how the machine ought to react to each. From 1949 until the late 1960s, American electrical engineer Arthur Samuel worked on moving artificial intelligence beyond merely recognizing patterns to learning from experience, making him a pioneer of the field. While working at IBM, he used the game of checkers as his testbed, and his research subsequently influenced the programming of early IBM computers.
Current applications are growing ever more sophisticated, making their way into fields as demanding as medicine.
As we delve into higher and even more sophisticated levels of machine learning, deep learning comes into play. Deep learning requires a complex architecture that mimics a human brain’s neural networks in order to make sense of patterns, even with noise, missing details, and other sources of confusion. While the possibilities of deep learning are vast, so are its requirements: you need big data, and tremendous computing power.
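The power of those layered, brain-inspired architectures can be seen even in miniature. The sketch below (weights hand-set for clarity; real deep networks have many more layers and learn their weights from data via backpropagation) computes XOR, the classic function that a single-layer perceptron provably cannot represent, by composing two hidden “feature detectors”:

```python
def step(z):
    """Hard-threshold activation: fire (1) if the input exceeds 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: each unit detects an intermediate feature.
    h1 = step(x1 + x2 - 0.5)  # fires when either input is on (OR)
    h2 = step(x1 + x2 - 1.5)  # fires only when both are on (AND)
    # Output layer combines the hidden features: OR but not AND = XOR.
    return step(h1 - h2 - 0.5)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", xor_net(x1, x2))
```

Each layer transforms the raw inputs into slightly more abstract features, and the next layer reasons over those. Stacking many such layers, and learning the weights rather than setting them by hand, is what makes a network “deep.”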
This learning-based approach means not having to laboriously program a prospective AI with that elusive quality of “intelligence,” however defined. Instead, all the potential for future intelligence and reasoning power is latent in the program itself, much like an infant’s inchoate but infinitely flexible mind.
Watch this video for a basic explanation of how it all works: