AI as We Understand It

Most of the AI we know today is built on a principle of deep learning: a machine is given a set of example data and the desired outputs, and it works out for itself, by repeatedly adjusting millions of internal parameters, how to map one to the other. The resulting system is called a neural network. This approach is necessary because a computer can refine its own rules far faster than a human could write them; coding such behavior manually would take lifetimes.
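To make that concrete, here is a minimal sketch in Python (using the PyTorch library; the toy task, the network size, and the training settings are invented purely for illustration) of a network learning to map example inputs to desired outputs:

```python
# A minimal sketch, not any particular production system: a tiny neural
# network that, given example inputs and desired outputs, adjusts its own
# parameters until it maps one to the other -- no human writes the rule.
import torch
import torch.nn as nn

# Toy data: learn y = 1 if the sum of the inputs is positive, else 0.
x = torch.randn(256, 4)
y = (x.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(          # layers of simple units ("neurons")
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for step in range(200):             # repeated passes over the data
    prediction = model(x)
    loss = loss_fn(prediction, y)   # how far off the desired output?
    optimizer.zero_grad()
    loss.backward()                 # work out how to nudge each weight
    optimizer.step()                # adjust the weights slightly

print(f"final training loss: {loss.item():.4f}")
```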

Tommi Jaakkola, a professor of electrical engineering and computer science at MIT, says, “If you had a very small neural network, you might be able to understand it. But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” We are at the stage of these large systems now. So what methods are we using to make these machines explain themselves? It is an issue that will have to be solved before we can place any trust in them.


1. Reversing the algorithms. In image recognition, this involves programming the machine to produce or modify pictures when it recognizes a pattern it has learned. Take the example of a Deep Dream modification of The Creation of Adam, where the AI has been told to insert dogs wherever it recognizes them. From this, we can learn what constitutes a dog for the AI: firstly, it only produces heads (meaning that, to the network, heads are what largely characterize a dog), and secondly, the patterns that the computer recognizes as dogs are clustered around Adam (on the left) and God (on the right). A simplified sketch of the technique follows the image below.

A Deep Dream rendering of The Creation of Adam, where the AI has been told to look for dogs and modify the picture where it finds them. (Image Credit: PROMario Klingemann/Flickr)
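At its core, this kind of visualization runs the network "in reverse": instead of adjusting the network to fit an image, it adjusts the image until the network responds more strongly. The sketch below illustrates that idea in Python with PyTorch and a pretrained VGG16 model; it is not the actual Deep Dream code, and the layer choice, step size, and file name are placeholders.

```python
# A simplified sketch of the "reversing the algorithm" idea behind Deep Dream
# (not Google's implementation): nudge an image, via gradient ascent, so that
# a chosen layer of a pretrained network fires harder. Whatever patterns
# emerge show what that layer has learned to look for.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[:20]            # stop at an intermediate layer

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
])

# "creation_of_adam.jpg" is a placeholder for any color image on disk.
img = preprocess(Image.open("creation_of_adam.jpg")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(30):
    activations = layer(img)
    loss = activations.norm()          # "make this layer respond more"
    loss.backward()
    with torch.no_grad():
        # Take a small, normalized step in the direction that excites the layer.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```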

2. Identifying the data it has used. Here, the AI is instructed to record extracts from its input and highlight the sections of text it relied on for the pattern it was told to recognize. Developed first by Regina Barzilay, the Delta Electronics Professor at MIT, this type of understanding applies to AIs that search for patterns in data and make predictions accordingly. Carlos Guestrin, a professor of machine learning at the University of Washington, has developed a similar system that presents the data with a short explanation as to why it was chosen. A simplified sketch of the idea appears below.
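One simple way to produce such highlights is to remove each word of the input in turn and measure how much the model's prediction changes; the words that matter most become the explanation. The sketch below is a hand-rolled illustration of that general idea in Python with scikit-learn, not Barzilay's or Guestrin's actual systems, and the tiny training set is invented.

```python
# A rough sketch of "show which parts of the input mattered": drop each word
# in turn and see how much the model's predicted probability shifts. Words
# whose removal moves the prediction most are highlighted as the explanation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set, for illustration only.
texts = ["great drug, clear improvement", "no effect, patient worse",
         "remarkable recovery observed", "side effects, condition declined"]
labels = [1, 0, 1, 0]   # 1 = positive outcome

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def explain(text, model):
    """Score each word by how much removing it changes the prediction."""
    base = model.predict_proba([text])[0, 1]
    words = text.split()
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - model.predict_proba([reduced])[0, 1]))
    # Most influential words first.
    return sorted(scores, key=lambda s: abs(s[1]), reverse=True)

print(explain("patient shows clear improvement and recovery", model))
```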

3. Monitoring individual neurons. Developed by Jason Yosinski, a machine learning researcher at Uber AI Labs, this involves probing an individual unit in the network and measuring which images stimulate it the most. From those images we can deduce what that part of the network has learned to look for; a sketch of such a probe follows below.
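The sketch below shows one way such a probe might be written in Python with PyTorch; it is an illustration, not Yosinski's actual toolkit. It hooks into one channel of a pretrained network, runs a batch of images through, and reports which images excite that unit the most. The chosen network, layer, and channel are arbitrary, and the random batch stands in for real images.

```python
# A rough sketch of "monitor an individual neuron": attach a hook to one unit
# in a pretrained network, run images through, and keep the images that make
# that unit fire hardest -- they suggest what the unit has learned to detect.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

recorded = {}
def probe(module, inputs, output):
    # Average activation of one chosen channel (channel 12 is arbitrary).
    recorded["activation"] = output[:, 12].mean(dim=(1, 2))

model.layer3.register_forward_hook(probe)

images = torch.randn(32, 3, 224, 224)   # stand-in for a real image batch
with torch.no_grad():
    model(images)

# Indices of the images that excite this neuron the most.
top = recorded["activation"].topk(5).indices
print("images that most activate channel 12 of layer3:", top.tolist())
```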

These methods, though, are proving largely ineffective; as Guestrin says, “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain. We’re a long way from having truly interpretable AI.”

And Why It Is Important to Know More

It is important to understand how these systems work because they are already being applied in medicine, automobiles, finance, and recruitment: areas that have fundamental impacts on our lives. To give this massive power to something we don't understand could be a foolhardy exercise in trust. This is, of course, provided that the AI is honest and does not suffer from the lapses in truth and perception that humans do.

At the heart of the problem with trying to understand the machines is a tension. If we could predict them perfectly, it would rob AI of the autonomous intelligence that characterizes it. We must remember that we don’t know how humans make these decisions either; consciousness remains a mystery, and the world remains an interesting place because of it.

Daniel Dennett warns, though, that one question needs to be answered before AI is introduced: “What standards do we demand of them, and of ourselves?” How will we design the machines that will soon control our world without us understanding them? How do we code our gods?

