If you've been on the internet pretty much at all over the last few days, it's very likely that you've seen a rush of people posting fantastical, anime-inspired digital portraits of themselves.

These "Magic Avatars" — as their creator, a photo-editing app called Lensa AI, has dubbed them — have taken the internet by storm, their virality hand-in-hand with that of ChatGPT, OpenAI's next-gen AI chatbot.

Indeed, it seems a fitting way to end what's been a banner year for artificial intelligence. Text-to-image generators, most notably OpenAI's DALL-E, Midjourney, and Stability AI's Stable Diffusion, have disrupted creative industries; a record label unveiled — and quickly did away with — an AI rapper; machine learning has been used to generate full-length fake "conversations" between living celebrities and dead ones; and who could forget LaMDA, the Google chatbot that a rogue engineer said had gained sentience?

While experts have been tinkering with the foundational tech for years, a few substantial breakthroughs — combined with a lot of investment dollars — are now resulting in an industry rush to market. As a result, a lot of new tech is getting packaged into consumer-facing products.

There's just one problem: neither the products nor the public is ready.

Take those "Magic Avatars," which on face value seem relatively harmless. After all, there's nothing wrong with imagining yourself as a painted nymph or elf or prince or whatever else the app will turn you into. And unlike text-to-image generators, you can only work within the boundaries of pictures that you already have on hand.

But as soon as the "avatars" began to go viral, artists started sounding the alarm, noting that Lensa offered little protection for the creators whose art may have been used to train the machine. Elsewhere, in a darker turn, despite Lensa's "no nudes" use policy, users found it alarmingly simple to generate nude images — not only of themselves, but of anyone they had photos of.

"The ease with which you can create images of anyone you can imagine (or, at least, anyone you have a handful of photos of), is terrifying," wrote Haje Jan Kamps for Techcrunch. Kamps tested the app's ability to generate pornography by feeding it poorly photoshopped images of celebrities' faces onto nude figures. Much to his horror, the photoshopped images handily disabled any of the app's alleged guardrails.

"Adding NSFW content into the mix, and we are careening into some pretty murky territory very quickly: your friends or some random person you met in a bar and exchanged Facebook friend status with may not have given consent to someone generating soft-core porn of them," he added.

Terrible stuff, but that's not even as bad as it gets. As writer Olivia Snow discovered when she uploaded childhood photos of herself to the "Magic Avatars" program, Lensa's alleged guardrails failed even to prevent the production of child pornography — a horrifying prospect on such a widely available and easy-to-use app.

"I managed to piece together the minimum 10 photos required to run the app and waited to see how it transformed me from awkward six-year-old to fairy princess," she wrote for Wired. "The results were horrifying."

"What resulted were fully-nude photos of an adolescent and sometimes childlike face but a distinctly adult body," she continued. "This set produced a kind of coyness: a bare back, tousled hair, an avatar with my childlike face holding a leaf between her naked adult's breasts."

Kamps' and Snow's accounts both underscore an inconvenient reality of all this AI tech: it's chronically doing things its makers never intended, sometimes even evading the safety constraints they attempted to impose. It gives the sense that the AI industry is pushing faster and farther than society — or even its own tech — is ready for. And with results like these, that's deeply alarming.

In a statement to TechCrunch, Lensa placed the blame on the user, arguing that any pornographic images are "the result of intentional misconduct on the app." That line echoes a wider industry sentiment: there will always be bad actors out there, and bad actors will do what bad actors do. Besides, as another common excuse goes, anything these programs might produce could just as well be created by a skilled Photoshop user.

Both of these arguments have some weight, at least to an extent. But neither changes the fact that, like other AI programs, Lensa's makes it a lot easier for bad actors to do what bad actors do. Generating believable fake nudes or high-quality child sexual abuse imagery just went from something few could pull off convincingly to something anyone armed with the right algorithm can do with ease.

There's also an unmistakable sense of Pandora's box opening. Even if the Lensas of the world lock down their tech, it's inevitable that others will create knockoff algorithms that bypass those safety features.

As Lensa's failures have so clearly demonstrated, the potential for real people to experience real and profound harm from the premature introduction of AI tools — image generators and beyond — is growing rapidly. The industry, meanwhile, appears to be taking a "sell now, ask questions later" approach, seemingly keener on beating competitors to VC funding than on ensuring that these tools are reasonably safe.

It's worth noting that nonconsensual porn is just one of many risks here. The potential for the quick and easy production of political misinformation is another major concern. And as far as text generators go? Educators are shaking in their boots.

As it stands, a tool as seemingly innocuous as "Magic Avatars" is yet another reminder that, while it's already changing the world, AI is still an experiment — and collateral damage isn't a prospective threat. It's a given.

READ MORE: 'Magic Avatar' App Lensa Generated Nudes from My Childhood Photos [Wired]

More on AI: Professors Alarmed by New AI That Writes Essays about as Well as Dumb Undergrads

