Better Than Doodles
Back in June, an image generator that could turn even the crudest doodle of a face into a more realistic-looking image made the rounds online. That system used a fairly new type of algorithm called a generative adversarial network (GAN) to produce its AI-created faces, and now, chipmaker NVIDIA has developed a system that employs a GAN to create far more realistic-looking images of people.
Artificial neural networks are systems developed to mimic the activity of neurons in the human brain. In a GAN, two neural networks are essentially pitted against one another. One of the networks functions as a generative algorithm, while the other challenges the results of the first, playing an adversarial role.
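To make the adversarial setup concrete, here is a minimal toy sketch, not NVIDIA's architecture: the "generator" is just a learnable shift applied to random noise, the "discriminator" a one-variable logistic regression, and the two are trained against each other with plain NumPy gradient steps. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = z + theta, a single learnable shift applied to noise.
theta = 0.0
# Discriminator: logistic regression D(x) = sigmoid(w * x + b).
w, b = 0.0, 0.0

lr_d, lr_g, batch = 0.1, 0.05, 128
for _ in range(1000):
    # Train the discriminator a few steps so it stays near-optimal.
    for _ in range(5):
        xr = rng.normal(4.0, 1.0, batch)          # real samples ~ N(4, 1)
        xf = rng.normal(0.0, 1.0, batch) + theta  # fake samples g(z)
        dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
        # Gradient ascent on log D(real) + log(1 - D(fake)).
        w += lr_d * np.mean((1 - dr) * xr - df * xf)
        b += lr_d * np.mean((1 - dr) - df)

    # Train the generator: ascend log D(fake) so fakes fool the critic.
    xf = rng.normal(0.0, 1.0, batch) + theta
    df = sigmoid(w * xf + b)
    theta += lr_g * np.mean((1 - df) * w)

print(round(theta, 2))  # theta drifts toward the real data's mean, 4
```

Neither network ever sees the real data's mean directly; the generator improves only because the discriminator keeps pointing out how its fakes differ from real samples, which is the core adversarial idea scaled up enormously in systems like NVIDIA's.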
As part of its expanding work on artificial intelligence, NVIDIA created a GAN that drew on the CelebA-HQ dataset of photos of famous people to generate images of people who don't actually exist. The idea was that the AI-created faces would look more realistic if two networks worked against each other to produce them.
First, the generative network would create an image at a lower resolution. Then, the discriminator network would assess the work. As the system progressed, the programmers added new layers dealing with higher-resolution details until the GAN finally generated images of "unprecedented quality," according to the NVIDIA team's paper.
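The key trick in that layer-adding scheme is that a new, higher-resolution layer isn't switched on abruptly; its output is faded in gradually while the old low-resolution pathway is faded out. The sketch below illustrates that blending step with NumPy; the function names and toy arrays are my own, not NVIDIA's code.

```python
import numpy as np

def upsample2x(img):
    # Nearest-neighbour upsampling: each pixel becomes a 2x2 block.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(low_res_out, high_res_out, alpha):
    """Blend the established low-resolution pathway with the output of a
    newly added high-resolution layer. alpha ramps from 0 to 1 over the
    course of training, so the new layer is introduced gradually."""
    return (1.0 - alpha) * upsample2x(low_res_out) + alpha * high_res_out

# Toy example: a 4x4 "image" and the output of a hypothetical new 8x8 layer.
low = np.full((4, 4), 0.5)
high = np.zeros((8, 8))
blended = fade_in(low, high, alpha=0.25)
print(blended.shape)   # (8, 8)
print(blended[0, 0])   # 0.75 * 0.5 = 0.375
```

Early in the fade, the network's output is still dominated by the layers that have already been trained, which keeps training stable while the new high-resolution details are learned.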
Human or Machine?
NVIDIA released a video of their GAN in action, and the AI-created faces are both absolutely remarkable and incredibly eerie. If the average person didn't know the faces were machine-generated, they could easily believe they belonged to living people.
Indeed, this blurring of the line between the human and the machine-generated is a topic of much discussion within the realm of AI, and NVIDIA's GAN isn't the first artificial system to convincingly mimic something human.
A number of AIs use deep learning techniques to produce human-sounding speech. Google's DeepMind has WaveNet, which can now copy human speech almost perfectly. Meanwhile, startup Lyrebird's algorithm is able to synthesize a human's voice using just a minute of audio.
Even more disturbing or fascinating — depending on your perspective on the AI debate — are AI robots that can supposedly understand and express human emotion. Examples of those include Hanson Robotics' Sophia and SoftBank's Pepper.
Clearly, an age of smarter machines is upon us, and as AI's ability to perform tasks previously reserved for human beings improves, the line between human and machine will continue to blur. Now, the only question is whether it will eventually disappear altogether.