Roman Yampolskiy is a Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of Artificial Superintelligence: A Futuristic Approach. Recently, he spoke with Futurism contributor Daniel Araya about the dangers posed by artificially superintelligent robots, weaponized AI, and what the future of robotics might look like.

DA: Roman, could you say a little about your professional background and your interest in artificial intelligence (AI)? I know that in your new book Artificial Superintelligence, you agree with Nick Bostrom that AI poses a serious existential risk. Could you give us some sense of the risks that we face with superintelligent AI?


My research lies at the intersection of cybersecurity and artificial intelligence. I am particularly interested in AI safety work because, in my view, if we are successful in creating advanced AIs that are intellectually superior to us, then our very existence may be jeopardized.

The main problem with technology exceeding human capabilities is, of course, the control problem. How can we ultimately control an entity that is smarter than we are?

An entity that is in fact capable of deriving its own plans may not necessarily be aligned with our expectations or values. I see a non-zero probability of such machines being extremely dangerous to all life because of either poor goal alignment or poor implementation.

DA: What theoretical schools/groups do you see as most promising for superintelligence? Or, more specifically, which researchers/organizations do you think are closest to making superintelligence a reality?

I expect a lot of novel results to come from Google’s DeepMind as well as from Facebook’s AI lab. Basically, the best-funded and most prominent researchers are very likely to produce the results with the most impact.

DA: In your book, you propose Efficiency Theory (EF) as a unification of computability, communication, complexity, and information theories. Is EF computable, or only a conceptual model? If it is computable, how might algorithms be developed using EF to develop superintelligent AI?

It is a conceptual model for evaluation, a way of putting other aspects of computation, such as communication, into a common domain. It is aimed at recognizing some fundamental properties of brute-force (try every option) computing and using those easy-to-discover optimal algorithms as a measuring stick for our progress in computing.
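
To make the "measuring stick" idea concrete, here is a minimal sketch of my own (an illustration under my assumptions, not Yampolskiy's formal Efficiency Theory): count the operations taken by the obvious try-every-option algorithm for a problem, then express a cleverer algorithm's cost relative to that brute-force baseline.

```python
# Toy illustration only: compare a brute-force baseline against a smarter
# algorithm by counting operations, and report the gain relative to the
# brute-force "measuring stick". Problem and metric are chosen for clarity.

def linear_search(sorted_values, target):
    """Brute-force baseline: examine every element, counting each check."""
    ops = 0
    found = False
    for v in sorted_values:
        ops += 1
        if v == target:
            found = True
    return found, ops


def binary_search(sorted_values, target):
    """Cleverer algorithm: halve the search space each step."""
    ops = 0
    lo, hi = 0, len(sorted_values)
    while lo < hi:
        ops += 1
        mid = (lo + hi) // 2
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    found = lo < len(sorted_values) and sorted_values[lo] == target
    return found, ops


if __name__ == "__main__":
    data = list(range(0, 1_000_000, 3))  # sorted list of ~333,334 integers
    target = 999_999
    _, brute_ops = linear_search(data, target)
    _, smart_ops = binary_search(data, target)
    # The ratio of operation counts is one crude "efficiency" score
    # measured against the brute-force yardstick.
    print(f"brute force: {brute_ops} ops, binary search: {smart_ops} ops, "
          f"gain: {brute_ops / smart_ops:.0f}x")
```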

DA: One of the very real concerns that I have is the danger that governments will begin to weaponize AI. While I assume that this is inevitable, I wonder what we can do to contain AI, or at least make it safer.

Some political activism has already begun, with groups like the Campaign to Stop Killer Robots attempting to prevent the weaponization of AI. However, this is just one path to dangerous AI.

I explore a number of other possibilities in my recent publication. For example, there are significant dangers from intelligent systems containing semantic or logical errors in their code, or problems in their goal alignment. I also think that we should be particularly concerned about systems that are malevolent by design. By contrast, I am not too worried about the automation of labor; it is the next step in this process that is really dangerous. Very shortly after reaching human-level performance, machines will self-improve to levels far beyond human beings, becoming superintelligent.

For this reason, I am more concerned about machines taking our place in the universe, and less concerned about them taking our place in the factory.

Now, how to deal with that problem is a much more complicated issue. I don’t think anyone has a solution to it right now. The work on it is just beginning, so I would recommend that interested readers check out my new book on the topic, Artificial Superintelligence: A Futuristic Approach, which attempts to formalize the problems we are facing.

DA: Theories on the Singularity (the point at which machine intelligence evolves beyond human intelligence) seem to imply a kind of metaphysical view of technology. Jaron Lanier, for example, argues that the debate on AI is really about religion; that is, people turn to metaphysics to cope with the human condition. What’s your perspective on his criticism? Do you believe that AI will eventually become conscious in some way?

In my view, the Singularity has nothing to do with religion. It is a purely scientific topic. Consciousness, on the other hand, is not purely scientific.

I am not predicting that machines will become conscious; rather, I am saying that they will become superintelligent. Consciousness, for example, cannot be detected or measured in any way. It also does not necessarily do anything (as philosophical zombie thought experiments so clearly demonstrate).

DA: A common view now is that AI could (will?) displace human labor and even enable a kind of future utopia. Assuming machine intelligence becomes the infrastructure for a postindustrial civilization, doesn’t that suggest that we could begin to design and build very different kinds of societies? Something similar, perhaps, to Iain M. Banks’ Culture series or the Venus Project?


Yes, machines long ago began to replace humans in physical labor-related jobs. Next, we are going to see a much greater increase in the number of “intellectual jobs” that can be done by machines. Eventually, as AIs reach human-level performance, all jobs will be automated. This will undoubtedly produce a very significant change in how mass industrial societies are organized. Our economy, class structure, and concepts such as money will change profoundly.

Regarding the Culture series and the Venus Project, I had to look up what they are. It seems to me that neither model takes into account overall technological progress, which will take place alongside progress in AI.

For example, it is likely that around the same time we get to human-level AI, it will become possible to upload a mind to a computer, shifting our environment from the physical to the completely virtual. That would, of course, fundamentally alter everything about our civilization. But we can’t really know for sure in what ways, as the whole notion behind the technological singularity is that you can’t predict what is going to happen until after it takes place.

It will certainly be interesting.


Daniel Araya is a researcher and advisor to government with a special interest in education, technological innovation, and public policy. His newest books include: Augmented Intelligence (2016), Smart Cities as Democratic Ecologies (2015), and Rethinking US Education Policy (2014). He has a doctorate from the University of Illinois at Urbana-Champaign and is an alumnus of Singularity University’s graduate program in Silicon Valley. He can be found here: www.danielaraya.com and on Twitter at @danielarayaXY.

 

