If/When Machines Take Over
The term "artificial intelligence" was only just coined about 60 years ago, but today, we have no shortage of experts pondering the future of AI. Chief amongst the topics considered is the technological singularity, a moment when machines reach a level of intelligence that exceeds that of humans.
While still confined to science fiction, the singularity no longer seems beyond the realm of possibility. From large tech companies like Google and IBM to dozens of smaller startups, some of the smartest people in the world are dedicated to advancing the fields of AI and robotics. Already, we have human-looking robots that can hold a conversation, read emotions (or at least try to), and take on one kind of work or another.
Foremost among the experts confident that the singularity is a near-future inevitability is Ray Kurzweil, Google's director of engineering. The highly regarded futurist and "future teller" predicts we'll reach it sometime before 2045.
Meanwhile, SoftBank CEO Masayoshi Son, a famous futurist in his own right, is convinced the singularity will happen this century, possibly as soon as 2047. Between his company's strategic acquisitions, which include robotics firm Boston Dynamics, and its billions of dollars in tech funding, it might be safe to say that no one is keener to speed up the process.
Not everyone is looking forward to the singularity, though. Some experts are concerned that super-intelligent machines could end humanity as we know it. These warnings come from the likes of physicist Stephen Hawking and Tesla CEO and founder Elon Musk, who has famously taken flak for his "doomsday" attitude towards AI and the singularity.
Clearly, the subject is quite divisive, so Futurism decided to gather the thoughts of other experts in the hopes of separating sci-fi from actual developments in AI. Here's how close they think we are to reaching the singularity.
Louis Rosenberg, CEO, Unanimous AI:
My view, as I describe in my TED talk from this summer, is that artificial intelligence will become self-aware and will exceed human abilities, a milestone that many people refer to as the singularity. Why am I so sure this will happen? Simple. Mother nature has already proven that sentient intelligence can be created by enabling massive numbers of simple processing units (i.e., neurons) to form adaptive networks (i.e., brains).
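To make that analogy concrete, here is a minimal sketch of ours (not Rosenberg's) of simple processing units forming an adaptive network: a tiny two-layer network of sigmoid "neurons" in Python that learns XOR, a function none of its individual units can compute on its own, just by repeatedly nudging its connection weights.

```python
# Minimal sketch (illustration only): simple units plus adaptive connections.
# Each "neuron" just takes a weighted sum of its inputs and squashes it;
# learning is nothing more than gradually adjusting the connection weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 8 hidden units -> 1 output unit
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.5
for _ in range(10000):
    # Forward pass through the network of simple units
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adapt every connection a little
    # (gradient descent on the squared error)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```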
Back in the early 1990s, when I started thinking about this issue, I believed that AI would exceed human abilities around the year 2050. Currently, I believe it will happen sooner than that, possibly as early as 2030. That's very surprising to me, as these types of forecasts usually slip further into the future as the limits of technology come into focus, but this one is screaming towards us faster than ever.
To me, the prospect of a sentient artificial intelligence being created on Earth is no less dangerous than an alien intelligence showing up from another planet. After all, it will have its own values, its own morals, its own sensibilities, and, most of all, its own interests.
To assume that its interests will be aligned with ours is absurdly naive, and to assume that it won't put its interests first — putting our very existence at risk — is to ignore what we humans have done to every other creature on Earth.
Thus, we should be preparing for the imminent arrival of a sentient AI with the same level of caution as the imminent arrival of a spaceship from another solar system. We need to assume this is an existential threat for our species.
What can we do? Personally, I am skeptical we can stop a sentient AI from emerging. We humans are just not able to contain dangerous technologies. It's not that we don't have good intentions; it's that we rarely appreciate the dangers of our creations until they overtly present themselves, at which point it's too late.
Does that mean we're doomed? For a long time I thought we were — in fact, I wrote two sci-fi graphic novels about our imminent demise — but now, I am a believer that humanity can survive if we make ourselves smarter, much smarter, and fast...staying ahead of the machines.
Pierre Barreau, CEO, Aiva Technologies:
I think that the biggest misunderstanding when it comes to how soon AI will reach a “super intelligence” level is the assumption that exponential growth in performance should be taken for granted.
First, on a hardware level, we are hitting the ceiling of Moore’s law, as transistors can’t get any smaller. At the same time, we have yet to prove in practice that new computing architectures, such as quantum computing, can keep computing power growing at the rate we have seen previously.
Second, on a software level, we still have a long way to go. Most of the best-performing AI algorithms require thousands, if not millions, of examples to train themselves successfully. We humans are able to learn new tasks far more efficiently, from only a few examples.
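To put that contrast in rough numbers, here is a hypothetical sketch of ours (not an example Barreau gives), assuming scikit-learn is installed: the same off-the-shelf classifier is trained on 10, then 100, then 1,000 labeled images of handwritten digits, and its test accuracy climbs steeply with the amount of data, whereas a person could learn the ten digits from a handful of examples.

```python
# Illustration only: typical supervised learning is data-hungry.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()   # 1,797 small images of handwritten digits (0-9)
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=500, random_state=0)

for n in (10, 100, 1000):   # number of labeled examples the model learns from
    clf = LogisticRegression(max_iter=5000)
    clf.fit(X_train[:n], y_train[:n])
    print(f"{n:5d} training examples -> test accuracy {clf.score(X_test, y_test):.2f}")
```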
The applications of AI [and] deep learning nowadays are very narrow. AI systems focus on solving very specific problems, such as recognizing pictures of cats and dogs, driving cars, or composing music, but we haven’t yet managed to train a system to do all these tasks at once like a human is capable of doing.
That’s not to say that we shouldn’t be optimistic about the progress of AI. However, I believe that if too much hype surrounds a topic, it’s likely that there will come a point when we will become disillusioned with promises of what AI can do.
If that happens, another AI winter could set in, which would lead to reduced funding for artificial intelligence. This is probably the worst thing that could happen to AI research, as it would delay further advances in the field rather than bring them sooner.
Now, when will the singularity happen? I think it depends on what we mean by it. If we’re talking about AIs passing the Turing test and seeming as intelligent as humans, I believe that is something we will see by 2050. That doesn’t mean that the AI will necessarily be more intelligent than us.
If we’re talking about AIs truly surpassing humans in any task, then I think that we still need to understand how our own intelligence works before being able to claim that we have created an artificial one that surpasses ours. A human brain is still infinitely more complicated to comprehend than the most complex deep neural network out there.
Raja Chatila, chair of the IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems and director of the Institute of Intelligent Systems and Robotics (ISIR) at Pierre and Marie Curie University:
The technological singularity concept is not grounded in any scientific or technological fact.
The main argument is the so-called “law of accelerating returns” put forward by several prophets of the singularity, most notably Ray Kurzweil. This law is inspired by Moore’s law, which, as you know, is not a scientific law; it is an observation about how the industry that manufactures processors and chips delivers ever more miniaturized and integrated ones by scaling down the transistor, roughly doubling computing power every two years and increasing memory capacity along the way.
Everyone knows there are limits to Moore’s law (when we reach the quantum scale, for example) and that there are architectures that could change this perspective (quantum computing, integration of different functions, “more than Moore,” etc.). It’s important to remember that Moore’s law is not a strict law.
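For context, the extrapolation Chatila describes is just compound doubling; a back-of-the-envelope sketch (our arithmetic, not his, with 2017 as an assumed starting point) shows how quickly it adds up:

```python
# Illustration only: the Moore's-law-style extrapolation behind singularity forecasts.
def growth_factor(years, doubling_period=2.0):
    """Multiplicative increase in computing power after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Roughly 28 years of doubling, e.g., from 2017 to the 2045 date Kurzweil cites
print(f"{growth_factor(28):,.0f}x")   # about 16,384x more raw computing power
```

Even if that arithmetic held, Chatila's objection below is that raw computing power is not the same thing as intelligence.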
However, the proponents of the singularity generalize it to the evolution of species and of technology in general, with no rigorous grounding. From that, they project that there will come a moment when the increasing power of computers gives them an artificial intelligence surpassing all human intelligence. Singularity proponents currently predict this will happen around 2040 to 2045.
But mere computing power is not intelligence. We have about 100 billion neurons in our brain. It’s their organization and interaction that make us think and act.
For the time being, all we can do is program explicit algorithms for achieving some computations efficiently (calling this intelligence), be it by specifically defining these computations or through well-designed learning processes, which remain limited to what they’ve been designed to learn.
In conclusion, the singularity is a matter of belief, not science.
Gideon Shmuel, CEO of eyeSight Technologies:
Figuring out how to make machines learn for themselves, in a broad way, may be an hour away in some small lab and may be five years out as a concentrated effort by one of the giants, such as Amazon or Google. The challenge is that once we make this leap and the machines truly learn by themselves, they will be able to do so at an exponential rate, surpassing us within hours or even mere minutes.
I wish I could tell you that, like all other technological advancements, tech is neither good nor bad, just a tool. I wish I could tell you that a tool is as good or as bad as its user. However, none of this will apply any longer. This singularity is not about the human users; it’s about the machines. This will be completely out of our hands, and the only thing that is certain is that we cannot predict the implications.
Plenty of science-fiction books and movies bring up the notion of a super intelligence that figures out the best way to save humankind is to destroy it, or lock everyone up, or some other outcome you and I are not going to appreciate.
There is an underlying second-order distinction worth making between AI technologies. If you take eyeSight’s domain expertise, embedded computer vision, the risk is rather low. Having a machine or computer learn on its own the meaning of the items and contexts it can see (recognizing a person, a chair, a brand, a specific action performed by humans, an interaction, etc.) has nothing to do with the action such a machine can take with respect to that input.
It is in our best interest to have machines that can teach themselves to understand what’s going on and ascribe the right meaning to it. The risk lies with the AI brain that is responsible for taking those sensory inputs and translating them into action.
Actions can be very risky both in the physical realm, through motors (vehicles, gates, cranes, pipe valves, robots, etc.), and in the cyber realm (futzing with information flow, access to information, control of resources, identities, various permissions, etc.).
Should we be afraid of the latter? Personally, I’m shaking.
Patrick Winston, artificial intelligence and computer science professor, MIT Computer Science and Artificial Intelligence Lab (CSAIL):
I was recently asked a variant of this question. For the past 50 years, people have been saying we will have human-level intelligence in 20 years. My answer: I'm OK with it. It will be true eventually.
My less flip answer is that, interestingly, [Alan] Turing broached the subject in his original Turing test paper using a nuclear reaction analogy. Since then, others have thought they invented the singularity idea, but it is really an obvious question that anyone who has thought seriously about AI would ask.
My personal answer is that it is not like getting a person to the Moon, which we knew we could do when the space program started; no breakthrough ideas were needed. A technological singularity, by contrast, requires one or more breakthroughs, and those are hard, if not impossible, to put on a timeline.
Of course, it depends, in part, on how many have been drawn to think about those hard problems. Now, we have huge numbers studying and working on machine learning and deep learning. Some tiny fraction of those may be drawn to thinking about understanding the nature of human intelligence, and that tiny fraction constitutes a much bigger number than were thinking about human intelligence a decade ago.
So, when will we have our Watson/Crick moment? Forced into a corner, with a knife at my throat, I would say 20 years, and I say that fully confident that it will be true eventually.