Introducing the Singularity (Pt 1/3)

Q: So what is this "Technological Singularity" I keep hearing about?

In the broadest sense it refers to "an event or phase brought about by technology that will radically change human civilization, and perhaps even human nature itself before the middle of the 21st century."1

Think of it as the "tipping point" where the accelerating pace of machine capability outruns all human abilities and results in a smarter-than-human intelligence.

Q: Seriously? People actually believe this?

Yes. The belief stems from accelerating progress across several disruptive technologies, including genetic engineering, artificial intelligence, robotics, and nanotechnology.

Q: Sounds like something out of a sci-fi novel…

Well the term "technological singularity" was coined by Vernor Vinge, a professor of Mathematics who originally used the term in one of his sci-fi novels. In 1993 he then wrote his well-known essay "The Coming Technological Singularity" that served as a fundamental building block for the Singularity community.

In it, he writes:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. [...] I think it's fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown."2

[expand trigclass="heading" title="Learn More"]

His use of the term “singularity” stemmed from a mathematical concept where there exists a point at which output is not defined and expected rules break down.

Vinge himself thought the Singularity could occur in four ways:

1. Humans develop computers that are “awake” and superhumanly intelligent.
2. Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
4. Biological science may find ways to improve upon the natural human intellect.a


Q: How did the concept make the jump from sci-fi into practical discussions? What real evidence supports this?

Although the year of his prediction seems quite off, there is certainly some compelling evidence that supports the notion of rapid technological progress. Ray Kurzweil, a pioneer in the Singularity movement, puts forth what he calls “The Law of Accelerating Returns.” Applied to technology, it says that technological progress is occurring at an exponential pace, in part because each new iteration of any given technology is used to help build the next, better, faster, cheaper one.

According to Kurzweil, this means that we won't experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today's rate).
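As a rough illustration (this is a toy model, not Kurzweil's actual calculation; the ten-year doubling period is an assumption for the sketch), you can tally how many "today-equivalent" years of progress accumulate over a century when the rate of progress doubles every decade:

```python
# Toy model of accelerating returns: each calendar year contributes
# progress measured in units of the year-0 rate, and that rate
# doubles every `doubling_period` years.

def equivalent_years(total_years: int, doubling_period: float) -> float:
    """Sum progress year by year, in 'year-0 equivalent' years."""
    return sum(2 ** (t / doubling_period) for t in range(total_years))

# 100 calendar years with a 10-year doubling period yields roughly
# 14,000 equivalent years; Kurzweil's own model, in which the doubling
# period itself keeps shrinking, is how he arrives at ~20,000.
print(round(equivalent_years(100, 10)))
```

The point of the sketch is just the shape of the curve: with any fixed doubling period, the final decade contributes more progress than all the preceding decades combined.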

Take a look at the graph labeled 3:

It took the human race 400 years for the printing press to reach a mass audience. But as technology has improved, its rate of adoption has become exponentially faster. It took only 7 years for the cell phone to reach a quarter of the population, and it’s taken social networks only 3 years.4

The following graph (see 5) applies the same logic to computing power:

Not only does Kurzweil chart historical data, he goes as far as predicting future advancements. Applying this trend forward, he predicts we’ll be able to purchase the equivalent of a human brain’s worth of computing power for about $1,000 by 2023.

Kurzweil believes the rate at which technology grows in power will never slow down or plateau (it never has), not as long as humans continue to exist. Whenever a technology approaches some kind of barrier, a new technology will be invented that allows us to transcend it.

Q: Where does the Singularity come into play here?

That's the best part. Ray Kurzweil's ultimate prediction is that the Singularity will arrive by 2045.

By that time:

“Artificial intelligences will surpass human beings as the smartest and most capable life forms on the Earth. $1000 will buy a computer a billion times more intelligent than every human combined. Machines can think, act and communicate so quickly that normal humans cannot even comprehend what is going on. The machines enter into a "runaway reaction" of self-improvement cycles, with each new generation of A.I.s appearing faster and faster. From this point onwards, technological advancement is explosive, under the control of the machines, and thus cannot be accurately predicted (hence the term "Singularity").”6

[expand trigclass="heading" title="Learn More"]

The Law of Accelerating Returns is essentially Moore’s Law applied to everything.

In Kurzweil's 1990 book The Age of Intelligent Machines, he predicted the widespread explosion of both users and content on the Internet at a time when there were only 2.6 million people connected. He also predicted the preferred method of Internet access would be through the widespread use of wireless systems.d

This is one of the reasons that the human genome was mapped so much faster than anyone (even those involved with the research) predicted it could be. It took researchers seven years to map even 1% of the genome, and the skeptics pretty much said, "See? We told you so. This is going to take about a hundred years or so to complete."

Meanwhile, Kurzweil was saying, "Nope. If you're at one percent, then you're almost there." This confused many, but his notion of Accelerating Returns quickly moved past theory and into the territory of Law when his point was made for him. The amount sequenced wasn't growing linearly; it was doubling at regular intervals. So if it took seven years to reach 1% of the genome, that 1% was only about seven doublings away from completion: 1% becomes 2%, then 4%, then 8%, and so on. (This is what he terms a "doubling.")

And voilà, we now have it mapped.b
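The "almost there at one percent" intuition is simple logarithm arithmetic. Here's a minimal sketch (my illustration, not Kurzweil's exact numbers; the one-year doubling period is an assumed default, since estimates of the actual interval vary):

```python
import math

# Under exponential doubling, the number of doubling periods remaining
# is log2(1 / current_fraction): from 1% mapped, about seven doublings
# reach 100%.

def years_to_complete(current_fraction: float,
                      doubling_period_years: float = 1.0) -> float:
    """Years remaining until 100%, if the completed fraction doubles
    every `doubling_period_years`."""
    return math.log2(1.0 / current_fraction) * doubling_period_years

print(years_to_complete(0.01))  # ~6.6: at 1%, you're about seven doublings from done
```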

Ray Kurzweil’s full piece on accelerating returns can be found here:


Q: Whaaaaat. Hold on a sec - you're telling me I'm supposed to believe that?

No, I'm not telling you what to believe. Some of his predictions are contested, but all things considered he's still been astonishingly accurate.

His claim to fame was his prediction of the rise and explosive growth of the Internet while it was still a niche and unreliable network in the ’80s. Since the 1990s he claims to have made 147 more predictions and been “fully correct” on 115 of them (though his criteria for fully correct are relatively lenient). Another 12 of those are considered “essentially correct,” off by only a year or two; together that comes to 86%.

Plus he was named the Director of Engineering at Google in early 2013, so there's surely some method to his madness.

[expand trigclass="heading" title="Learn More"]

Ultimately his criteria for an "accurate prediction" are relatively lenient (he considers his prediction that computers would be embedded into clothing "accurate" because we have iPod Nanos and we "wear" smartphones in our pockets).c


List of Kurzweil’s predictions:

Detailed analysis of his predictions by Kurzweil himself:

Q: So… you’re telling me that the world is going to be controlled by super-intelligent robots?

Not necessarily. Note that I said the Singularity revolves around creating "superhuman intelligence." There are two different methods by which this could occur:

Transhumanism – Technology will act as more of a "human enhancement," amplifying existing human physical and cognitive capabilities. This will ultimately lead to a "biointelligence explosion": a new species that merges biology and technology and conquers aging, death, and disease.7

Artificial Superintelligence – Accelerating progress in computing technology leads to the creation of a synthetic mind that surpasses our own in intelligence. So yes, robots.

Q: Well what exactly do you mean by “intelligence”?

Defining “intelligence” is no easy task; it’s something that scientists, philosophers, and psychologists have struggled to agree upon for centuries. For the purpose of this dialogue we'll use Kurzweil’s definition: “the ability to solve problems with limited resources, including time."8

[expand trigclass="heading" title="Learn More"]

Another widely accepted and simple definition among many in the community is that of Ben Goertzel, who defines intelligence as “achieving complex goals in complex environments.”

For the purposes of these discussions, Yudkowsky stated that he typically uses the notion of "intelligence = efficient cross-domain optimization", constructed as follows:e

"1. Consider optimization power as the ability to steer the future into regions of possibility ranked high in preference ordering. For instance, Deep Blue has the power to steer a chessboard's future into a subspace of possibility which it labels as "winning", despite attempts by Garry Kasparov to steer the future elsewhere. Natural selection can produce organisms much more able to replicate themselves than the "typical" organism that would be constructed by a randomized DNA string - evolution produces DNA strings that rank unusually high in fitness within the space of all DNA strings.

2. Human cognition is distinct from bee cognition or beaver cognition in that human cognition is significantly more generally applicable across domains: bees build hives and beavers build dams, but a human engineer looks over both and then designs a dam with a honeycomb structure. This is also what separates Deep Blue, which only played chess, from humans, who can operate across many different domains and learn new fields."f

Direct source: For over 70 other definitions, check out the book "Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms", pages 17-24


Q: Fine – “super-intelligent beings”. And this is all going to happen by 2045?

There is no commonly agreed upon date. Some believe it could happen as early as 2030, but many others believe we’re at least a century away.

Kurzweil predicts that it will happen by 2045, as I mentioned above, but we’ll get to more on him in Part 2.

[expand trigclass="heading" title="Learn More"]

Vernor Vinge, the individual who coined the term "Singularity," predicted it will happen by 2030.

At the 2012 Singularity Summit, Stuart Armstrong presented a study of expert predictions of artificial general intelligence (AGI) and found a wide range of predicted dates, with a median value of 2040. His own prediction, after reviewing the data, is that there's an 80% probability the Singularity will occur between 2017 and 2112.g

Q: Well can you give me any more details about what this future is going to look like?

Imagine that you're trying to explain the capabilities of your iPhone to someone living in the Middle Ages. How would you explain the capabilities of your camera, or that your phone seems to have a woman inside (Siri) who can look things up on the Internet for you?9

It would be impossible. There would be no viable frame of reference you could use to describe the capabilities of a technology like the Internet.

Q: But you’re saying that a transformational change of similar magnitude has happened before?

The Singularity is usually anticipated as a future transformation, but for comparison’s sake we can relate the enormous change that’s coming to drastic changes in the past, like the example above from the Middle Ages.10

Q: Now I’m a bit frightened. What are the odds that humanity survives?

Well it depends who you ask.

Kurzweil is known as the heavy optimist. He says “the extermination of humanity by violent machines is unlikely (though not impossible) because sharp distinctions between man and machine will no longer exist thanks to the existence of cybernetically enhanced humans and uploaded humans.”11

But if you ask many of those at other research labs such as the Singularity Institute, you’ll hear that based on our current direction something far more cataclysmic seems probable. To them, it’s more likely that this superhuman artificial intelligence gives rise to a race of sentient machines that have no use for humans, subsequently wiping us from the face of the Earth.12

It's important to note that this is all prediction and right now we can't even get researchers and scientists to agree on what exactly the term "Singularity" means. In the next section we'll take a closer look at more optimistic scenarios and dive deeper into three distinct schools of thought.

[expand trigclass="heading" title="Learn More"]

Extinction is by far the largest concern in the community. Some possible scenarios include:

• Super-smart AI that gets out of control
• Nanotechnology gone rogue, in which the mass of planet Earth – or the universe – is strangled by self-replicating nanobots ("grey goo")
• Uncontrolled use of, or experimentation with, super-powerful weapons or genetic manipulationh

Another reason many fear the Singularity is that it could theoretically remove us from our biological roots. The fear is that we could lose the very essence of what it means to “be human” through continuous genetic manipulation or transhuman growth.i

Jaan Tallinn, another Singularity advocate who helped create both Kazaa and Skype, says that “It will be the biggest change the universe has seen. Fasten your seatbelts, because this could be very bad."


Learn more about Ray Kurzweil's predictions and the other Singularity schools of thought in Part 2


Image Credit to LehDari:
