Generative AI relies on a massive body of training material, primarily made up of human-authored content haphazardly scraped from the internet.
Scientists are still trying to understand what will happen when these AI models run out of that content and have to rely on synthetic, AI-generated data instead, closing a potentially dangerous loop. Studies have found that AI models trained on this recycled data start to cannibalize it, eventually turning their neural networks into mush, a failure mode researchers call “model collapse.” As the AI iterates on recycled content, its outputs grow increasingly bland and often mangled.
There’s also the question of what will happen to human culture as AI systems digest and produce AI content ad infinitum. As AI executives promise that their models are capable enough to replace creative jobs, what will future models be trained on?
In an insightful new study published in the journal Patterns this month, an international team of researchers found that a text-to-image generator, when chained to an image-to-text system in a closed loop and run over and over again, eventually converges on “very generic-looking images” they dubbed “visual elevator music.”
“This finding reveals that, even without additional training, autonomous AI feedback loops naturally drift toward common attractors,” they wrote. “Human-AI collaboration, rather than fully autonomous creation, may be essential to preserve variety and surprise in the increasingly machine-generated creative landscape.”
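The dynamic the researchers describe can be illustrated with a toy simulation (not their actual models, and every name and parameter here is hypothetical): treat an “image” as a vector of numbers, let a captioning step discard fine detail, and let a regeneration step fill the gap from a statistical prior. The only point of the sketch is that repeated lossy round-trips pull diverse starting points toward a few common attractors, with no training involved.

```python
import random

def caption(image, precision=1):
    # image -> text: lossy description that keeps only coarse features
    return [round(x, precision) for x in image]

def regenerate(desc, prior_mean=0.5, pull=0.3):
    # text -> image: fills in missing detail from the model's "prior",
    # nudging every output toward the statistically average image
    return [(1 - pull) * x + pull * prior_mean for x in desc]

def closed_loop(image, steps=50):
    # the autonomous feedback loop: caption, regenerate, repeat
    for _ in range(steps):
        image = regenerate(caption(image))
    return image

def spread(images):
    # crude diversity measure: largest pairwise coordinate difference
    return max(abs(a[i] - b[i])
               for a in images for b in images for i in range(len(a)))

random.seed(0)
starts = [[random.random() for _ in range(4)] for _ in range(10)]
ends = [closed_loop(s) for s in starts]

print("diversity before:", spread(starts))
print("diversity after: ", spread(ends))
```

Running this shows the diversity of the population collapsing: ten widely scattered starting “images” end up clustered around a handful of attractor values, purely from repeated use of the lossy loop.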
As Rutgers University professor of computer science Ahmed Elgammal writes in an essay about the work for The Conversation, it’s yet another piece of evidence that generative AI may already be inducing a state of “cultural stagnation.”
The recent study shows that “generative AI systems themselves tend toward homogenization when used autonomously and repeatedly,” he argued, and its findings “even suggest that AI systems are currently operating in this way by default.”
“The convergence to a set of bland, stock images happened without retraining,” Elgammal added. “No new data was added. Nothing was learned. The collapse emerged purely from repeated use.”
It’s a particularly alarming predicament considering the tidal wave of AI slop drowning out human-made content on the internet. While proponents of AI argue that humans will always be the “final arbiter of creative decisions,” per Elgammal, recommendation algorithms are already floating AI-generated content to the top, a homogenizing pressure that could greatly hamper creativity.
“The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional,” the researcher wrote.
It remains to be seen to what degree existing creative fields, from photography to theater, will be affected by the advent of generative AI, or whether human and machine creativity can coexist peacefully.
Nonetheless, it’s an alarming trend that needs to be addressed. Elgammal argued that to stop this process of cultural stagnation, AI models need to be encouraged or incentivized to “deviate from the norms.”
“If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs,” he concluded. “The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.”
More on generative AI: San Diego Comic Con Quietly Bans AI Art