Q: So you mentioned that there is no widely accepted view of what the Singularity is and what exactly is going to happen, is that correct?

That’s correct; there is very little consensus about what exactly the term “Singularity” refers to. A brilliant AI researcher by the name of Eliezer Yudkowsky has dissected and categorized these beliefs into three schools of thought: the Event Horizon thesis, the Intelligence Explosion thesis, and the Accelerating Change thesis.

Q: Well which school did Vernor Vinge fall into when he originally coined the term “Singularity”?

He would fall under the Event Horizon school of thought. This is the belief that humanity will hit a critical point in time when we build machines that are more intelligent than we are (remember, our definition of intelligence is the ability to solve problems with limited resources, including time).1

Now this could occur through either artificial intelligence or pure bio-hacking, but whichever path we take will allow us to create or obtain an intelligence that exceeds ours on a trans-species scale (think of the difference between us and chimpanzees).

The event horizon school then posits that since our brains can only imagine and predict actions within the realm of our own intelligence (a chimpanzee cannot accurately predict human societal actions), it would be impossible for us to predict the actions of something smarter than human. Therefore we have absolutely no idea what will happen past this point in time, which is referred to as the Singularity.

[expand trigclass="heading" title="Learn More"]

For the entirety of Earth’s history, all technological and social progress has been the product of the human mind. Everything we can imagine is created within that mind... and so once a superior intelligence takes control, it is impossible to even fathom what the world will look like.


Q: Okay, so the Event Horizon school refers to the Singularity as a point in time after which we are unable to make accurate predictions?

Correct. Then you have the Intelligence Explosion thesis, which views the Singularity not as a single point in time but as a process. Think of it as a feedback cycle between intelligence and technology: we use technology to make ourselves more intelligent, and then use that added intelligence to build even better technology.

Q: Could you provide me with an example?

If I provided you with a brain implant that let you instantly search and access anything on the internet, what would you do with this ability? One good bet is that you'd use it to design an even better brain implant, one that makes you more intelligent still.2

Now that is just one example; this cycle could begin with a smarter-than-human artificial intelligence or with biological augmentation. Once it starts, we may see intelligence spike rapidly until we reach some form of "super-intelligence".

Now do you see what I mean by feedback cycle?
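To make the cycle concrete, here's a minimal sketch in Python (the starting values and gain are purely illustrative assumptions, not measurements): each round, the technology a mind can build is proportional to its current intelligence, and that technology feeds straight back into intelligence.

```python
def feedback_cycle(intelligence=1.0, gain=0.5, cycles=10):
    """Toy model of the intelligence-technology feedback loop.

    Each cycle, smarter minds build better technology (gain * intelligence),
    and that technology in turn raises intelligence. All parameters are
    illustrative assumptions, not empirical values.
    """
    history = [intelligence]
    for _ in range(cycles):
        technology = gain * intelligence   # better minds -> better tech
        intelligence += technology         # better tech -> better minds
        history.append(intelligence)
    return history

trajectory = feedback_cycle()
# Each pass multiplies intelligence by (1 + gain), so growth compounds
# geometrically rather than adding a fixed amount per cycle.
```

With gain = 0.5, ten cycles multiply the starting intelligence by 1.5^10 (about 58): the "explosion" in miniature, since the output of each round becomes the input of the next.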

[expand trigclass="heading" title="Learn More"]

It's important to note that intelligence has always been what creates technology. But soon we'll hit a point where this is reversed and technology will begin to improve upon human intelligence.

To quote Yudkowsky on this point:

"In the intelligence explosion the key threshold is criticality of recursive self-improvement. It’s not enough to have an AI that improves itself a little. It has to be able to improve itself enough to significantly increase its ability to make further self-improvements, which sounds to me like a software issue, not a hardware issue...

"The most extreme version of this thesis is an artificial intelligence improving its own source code. If you try to do intelligence enhancement by genetic engineering, then it takes 18 years for the kids to grow up and help engineer the next generation. It’s when we start talking about artificial intelligence that we start to see how large the intelligence explosion might be. Even if you consider only the hardware of the human brain, as opposed to the software, you can see plenty of room for improvement.

"Human neurons spike an average of 20 times per second. And the fastest recorded neurons in biology spike 1000 times per second, which is still less than a millionth of what a modern computer chip does. Similarly, neural axons transmit signals at less than 150 meters per second. One meter per second is more usual. And that’s less than a millionth the speed of light. So it should be physically possible to have a brain that thinks at one million times the speed a human does without even shrinking it or cooling it. At that rate, you could do one year’s worth of thinking every 31 physical seconds."a

This is a strong argument for why we shouldn’t wait to address the possible implications of transhuman AI.
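The arithmetic at the end of that quote is easy to check. A quick sketch in Python, using only the figures Yudkowsky cites (the ~1 GHz chip clock is an assumption for the comparison):

```python
# Sanity-check the figures from Yudkowsky's quote above.

SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~31.6 million seconds
SPEEDUP = 1_000_000                      # a brain running a million times faster

subjective_year = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year every {subjective_year:.1f} physical seconds")

# Neuron spike rate vs. an assumed ~1 GHz chip clock: the 1000 Hz of the
# fastest biological neurons is one millionth of 10^9 cycles per second.
chip_hz = 1e9
fastest_neuron_hz = 1000
print(f"Neuron/chip speed ratio: 1 to {chip_hz / fastest_neuron_hz:,.0f}")
```

31.6 million seconds divided by a million-fold speedup gives about 31.6 seconds, matching the "every 31 physical seconds" claim to within rounding.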


Q: Yup, got it. So if the Event Horizon school is supported by Vinge, who supports the Intelligence Explosion school of thought?

It’s supported by many AI researchers, including Eliezer Yudkowsky, who founded the Machine Intelligence Research Institute. It’s also supported by the likes of I.J. Good and Nick Bostrom, and it is the general focus of LessWrong, a community devoted to human rationality.3

Q: Where does Ray Kurzweil come in?

Ray Kurzweil pushes forward the Accelerating Change school of thought, which essentially says that technological change feeds on itself, and therefore accelerates. The rate of change today is faster than it was 100 years ago, which in turn was faster than it was 500 years ago.

Since we tend to think linearly, we expect roughly as much change over our own lifetimes as has occurred in the past. This leads us to drastically underestimate how much change the future will bring; the past isn't a reliable guide to how much change we should expect. To echo Kurzweil’s quote from section 1, "we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate)."

Now this next point is where Kurzweil veers drastically from the other two schools of thought. He states that technological change is not only accelerating but fairly predictable, following smooth, typically exponential curves. Based on this, Kurzweil believes we can predict when new technologies will arrive and when future milestones will be met.
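A hedged sketch of that logic in Python: assume, as Kurzweil roughly does, that the rate of progress doubles every decade (the ten-year doubling time is an assumption for illustration). Summing a century of progress in units of "one year at today's rate" shows how far exponential growth outruns the linear intuition:

```python
DOUBLING_TIME = 10   # years for the rate of progress to double (assumed)

# Linear intuition: the rate never changes, so 100 years = 100 rate-years.
linear_total = 100

# Exponential model: the rate at year t is 2 ** (t / DOUBLING_TIME)
# times today's rate; sum each year's contribution over a century.
exponential_total = sum(2 ** (t / DOUBLING_TIME) for t in range(100))

print(f"Linear projection:      {linear_total} rate-years")
print(f"Exponential projection: {exponential_total:,.0f} rate-years")
```

This toy version yields roughly 14,000 rate-years, the same order of magnitude as Kurzweil's "20,000 years" figure (his exact number comes from a somewhat different growth model), versus the 100 rate-years that linear thinking predicts.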

So aside from publishing books, part of the reason Kurzweil’s theories are so much more mainstream is that he uses concrete predictions to help us imagine what this future will look like. Remember how I said that 115 of his 147 predictions were rated entirely correct (roughly 86% once those graded "essentially correct" are counted)? Let’s check out some of his interesting ones for the future:4

The late 2020s - Virtual reality will be so high quality that it is indistinguishable from real reality, and it will be in widespread use. At this point, the majority of communication will occur between humans and machines rather than human-to-human.


The 2030s - Mind uploading becomes successful and is perfected by the end of the decade, as humans become software-based: living on the Web, projecting bodies whenever they want or need them (whether in virtual or real reality), and living indefinitely so long as they maintain their "mind file".5

By 2045, he predicts, the Singularity will bring about a radically different world. He says:

"In this new world, there will be no clear distinction between human and machine, real reality and virtual reality. We will be able to assume different bodies and take on a range of personae at will. In practical terms, human aging and illness will be reversed; pollution will be stopped; world hunger and poverty will be solved. Nanotechnology will make it possible to create virtually any physical product using inexpensive information processes and will ultimately turn even death into a soluble problem."6

[expand trigclass="heading" title="Learn More"]

According to Yudkowsky, "The real argument for "accelerating change" goes something like this: if you look back at the rise of the internet from the perspective of the man on the street, the internet blew up from out of nowhere. There is a sudden, huge spike in the number of internet users on a linear graph. On a logarithmic graph, the increase looks much more steady. So, accelerationists would say there is no use in acting all surprised by your business model blowing up – you had plenty of warning. The core thesis of accelerationism is that huge changes are coming larger than you would expect from linear thinking. And the bold thesis is that you can actually time the breakthroughs."b


Q: Wow. How old is Kurzweil? Does he think that he’ll be alive to see the Singularity?

Oh, he’s 65 years old and thinks he’s never going to die. Yup, he’s currently on a supplement regimen of over 150 pills a day. Not only does he believe that we’ll solve aging before it’s his time to go, he thinks we’re just 15 years away from a tipping point in longevity.7

Q: You can't be serious.

Dead serious. Haven't you heard of Calico? It’s Google's health startup, into which Google plans to funnel hundreds of millions of dollars over the next few years. Led by former Genentech CEO and current Apple Chairman Arthur D. Levinson, the startup has "plans to increase the lifespan of people born 20 years ago by as much as 100 years."8

Q: All of these schools sound very similar to me. Are they mutually compatible?

The core theses are generally supportive of one another: all three schools of thought hold that future change will be greater than past change because technology feeds on itself. Beyond this, though, there are some key differences worth noting.

Most of the incompatibilities lie between the accelerating change thesis on one side and the event horizon and intelligence explosion theses on the other. The main inconsistency is that Kurzweil makes predictions past the point of super-intelligent or self-improving AI, which goes against both of the other theses. Kurzweil also believes that we can predict the future via smooth exponential curves, whereas a fundamental premise of the other two is that this future change cannot be predicted.

[expand trigclass="heading" title="Learn More"]

The event horizon thesis doesn't require accelerating change. Theoretically we could reach the Singularity point by continuing up the historical line of progress that we're on now.

For Yudkowsky’s full overview of the three schools, see: http://yudkowsky.net/singularity/schools


Q: I see. So can you sum up the three schools of thought for me?

Sure, their core theses are as follows:

Event Horizon: A single point in time at which we create greater-than-human intelligence, after which we cannot make accurate predictions.

Intelligence Explosion: Minds making technology to improve minds through a positive feedback cycle.

Accelerating Change: Intuitive futurism is linear, but technological change accelerates along smooth, exponential curves.

The core theses all support one another. They don’t logically imply or require each other, but they are mutually reinforcing.9

[expand trigclass="heading" title="Learn More"]

Let’s check out some other definitions of the Singularity by prominent individuals as aggregated by the Singularity Weblog:c

James Martin – Famed author and computer scientist

  • Singularity "is a break in human evolution that will be caused by the staggering speed of technological evolution."

Singularity Institute for Artificial Intelligence

  • “The Singularity is the technological creation of smarter-than-human intelligence.”

I.J. Good – British mathematician who greatly influenced Vernor Vinge

  • I.J. Good was an advocate of the positive feedback cycle in which minds make technology to improve minds until we create a super-intelligence. He says:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Alan Turing – Famed mathematician, logician, and computer scientist

  • “Once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”


Check out Singularity counter-arguments and a call to action in Part 3


1 http://hplusmagazine.com/2014/02/21/video-friday-ray-kurzweil-how-do-you-define-intelligence-how-do-you-build-a-mind/
2 http://www.acceleratingfuture.com/people-blog/2007/introducing-the-singularity-three-major-schools-of-thought/
3 http://www.acceleratingfuture.com/people-blog/2007/introducing-the-singularity-three-major-schools-of-thought/
4 http://bigthink.com/endless-innovation/why-ray-kurzweils-predictions-are-right-86-of-the-time
5 http://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil
6 http://www.singularity.com/aboutthebook.html
7 http://www.inquisitr.com/1000017/google-exec-ray-kurzweil-takes-150-vitamin-supplements-every-day/
8 http://techcrunch.com/2013/09/19/wtf-is-calico-and-why-does-google-think-its-mysterious-new-company-can-defy-aging
9 http://www.acceleratingfuture.com/people-blog/2007/introducing-the-singularity-three-major-schools-of-thought/
a http://www.acceleratingfuture.com/people-blog/2007/introducing-the-singularity-three-major-schools-of-thought/
b http://www.acceleratingfuture.com/people-blog/2007/introducing-the-singularity-three-major-schools-of-thought/
c http://www.singularityweblog.com/17-definitions-of-the-technological-singularity/
