Synesthesia is a rare condition in which the senses blend together: some people who have it say they can hear colors, others that they can taste words.
But what if we let the senses of an artificial intelligence overlap instead? Belgium-based machine learning researcher and educator Xander Steenbrugge has developed a neural network that can turn music into trippy visualizations.
It’s an impressive example of the synthesis between human-created artforms and AI algorithms.
Steenbrugge’s project, called “Neural Synesthesia,” makes use of a generative adversarial network.
That’s a type of machine learning system that specializes in generating new data resembling a given training set. GANs have been put to a wide variety of uses, from conjuring photorealistic human faces and grotesque cats to generating anime characters.
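To make the adversarial idea concrete, here is a deliberately tiny sketch, not Steenbrugge's actual model: a one-dimensional "generator" learns to imitate a target Gaussian distribution while a logistic "discriminator" learns to tell real samples from fakes. All names, numbers, and the toy data are illustrative assumptions.

```python
# Minimal 1-D GAN sketch (illustrative only, not Steenbrugge's model).
# The "real" data is a Gaussian; the generator learns to imitate it
# while the discriminator learns to tell real samples from fakes.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator parameters: produces fakes as mu + sigma * z, z ~ N(0, 1)
mu, sigma = 0.0, 1.0
# Discriminator parameters: D(x) = sigmoid(w * x + b)
w, b = 0.0, 0.0

lr, batch, steps = 0.05, 64, 3000
for _ in range(steps):
    real = rng.normal(4.0, 0.5, batch)   # samples from the target distribution
    z = rng.normal(0.0, 1.0, batch)
    fake = mu + sigma * z                # generator output

    # Discriminator step: descend the loss -log D(real) - log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: descend the loss -log D(fake), i.e. try to fool D
    d_fake = sigmoid(w * fake + b)
    dloss_dfake = -(1 - d_fake) * w
    mu -= lr * np.mean(dloss_dfake)
    sigma = max(0.1, sigma - lr * np.mean(dloss_dfake * z))

print(f"generator mean after training: {mu:.2f} (target 4.0)")
```

The same push-and-pull, scaled up to deep convolutional networks and image data, is what lets a GAN dream up faces or abstract visuals instead of points on a number line.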
“This project is an attempt to explore new approaches to audiovisual experience based on Artificial Intelligence,” wrote Steenbrugge in a description of his project. “I do not create these works, I co-create them with the AI models that I bring to life.”
First, Steenbrugge feeds an AI algorithm a basic dataset of images and trains the model to replicate their visual style. Finally, he lets the AI twist and blend the visuals based on parameters he extracts from different audio sources using a “custom feature extraction pipeline.”
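The details of Steenbrugge's pipeline aren't public, but the general idea can be sketched: compute simple per-frame audio features (here, RMS loudness and spectral centroid) and use them to steer a walk through the generator's latent space, so the visuals morph faster when the music gets more intense. The frame size, latent dimension, and synthetic "music" below are all assumptions for illustration.

```python
# Hedged sketch of an audio-to-latent mapping (illustrative assumptions
# throughout; not Steenbrugge's actual feature extraction pipeline).
import numpy as np

rng = np.random.default_rng(1)
SR, FRAME = 22050, 1024
LATENT_DIM = 8  # tiny stand-in for a real GAN's latent dimension

# Synthetic "music": a 440 Hz tone whose amplitude swells over two seconds
t = np.arange(2 * SR) / SR
audio = np.sin(2 * np.pi * 440 * t) * (0.1 + 0.9 * t / t.max())

def frame_features(signal, frame=FRAME):
    """Per-frame RMS energy and spectral centroid."""
    n = len(signal) // frame
    feats = []
    for i in range(n):
        chunk = signal[i * frame:(i + 1) * frame]
        rms = np.sqrt(np.mean(chunk ** 2))
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(frame, 1 / SR)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)
        feats.append((rms, centroid))
    return np.array(feats)

feats = frame_features(audio)

# Louder frames take bigger steps through latent space, so the imagery
# changes more dramatically during intense passages of the music.
latents = [rng.normal(size=LATENT_DIM)]
for rms, _ in feats:
    step = rng.normal(size=LATENT_DIM)
    latents.append(latents[-1] + rms * step)
latents = np.stack(latents)
# Each row of `latents` would be fed to a trained generator, generator(z),
# to render one frame of video.
```

In a real system each latent vector would be decoded by the trained GAN into an image, and the sequence of decoded frames becomes the music-driven video.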
“The AI does not fully create the work, and neither do I. It is very much a collaboration,” adds Steenbrugge.