Future Tense

Who Painted That New Cosmic You?

Lensa’s trippy, A.I.-generated avatars are a viral hit. But their magic comes from an unsettling place.

Three sci-fi-looking avatars of the author and some colleagues. Photo illustration by Slate. Photos by Lensa AI.

Over the past week, many of our Instagram and Twitter feeds have been filled with avatars of our friends looking like heroes in a cyberpunk thriller. With their Day-Glo pink hair, shimmery skin, and Mad Max–meets–Ren Faire corsets, many of these profile pics looked so good that it was tempting to download the Lensa app and splurge on 100 A.I.-generated portraits of one’s own.

But by the time TMZ built its slideshow of the best celebrity Lensa looks, the tide had turned.

Yes, the app, which continues to top Apple’s free apps chart (users are prompted within Lensa to pay for the avatars), was succeeding in making hot people hotter and helping the selfie-averse find the superhero inside them. But there was also the matter of theft. By Wednesday, hundreds of artists had accused the app—and A.I. art more broadly—of pilfering their style and threatening their livelihood. More specifically, many took issue with the fact that Lensa’s for-profit app was built with the help of a dataset, assembled by a nonprofit, containing human-made artworks scraped from across the internet. Though Lensa does not pull from that dataset directly, it reimagines photos in styles like “fantasy” using an A.I. tool that was built by analyzing it.
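
Lensa has not published its pipeline, so the sketch below is only an approximation of the underlying step: generating an image in a given style with the open-source Stable Diffusion model, here via the Hugging Face diffusers library. The model ID and prompt are illustrative, not anything Lensa has disclosed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint. This is the kind of
# text-to-image model Lensa builds on, not Lensa's own fine-tuned weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A prompt in the spirit of the app's "fantasy" style. The model can only
# render "cyberpunk" or "shimmery skin" because images labeled that way
# appeared in its training data.
prompt = "portrait of a cyberpunk hero, Day-Glo pink hair, shimmery skin, digital art"
image = pipe(prompt).images[0]
image.save("avatar.png")
```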

For years, ethicists have been debating how to determine when an A.I. creation has gone too far in borrowing from humans. One widely shared Twitter thread seemed to offer an answer: when you could still spot the original artist’s signature.

There they were, it seemed, in Lensa portrait after portrait.

It was as if Lensa had been caught with the A.I. version of a security tag in its algorithmic sleeve.

And yet Prisma Labs, a California-based photo and video editing company that created Lensa, did not seem embarrassed. In its own Twitter thread, the company (which generated another viral image craze in 2016, back when it was based in Moscow) defended its techniques. A machine has to make “120 million billion (yes, it’s not a typo) mathematical operations every time” to produce its “unique” images, someone, likely a human, wrote, a process that would seem to greatly distance the end result from the source images. Helping its case, as the days passed, was the fact that no artist emerged to claim their signature in one of the Lensa creations, raising questions about whether these marks had really been copied from human artists at all.

All of this could leave you confused about what exactly Lensa did wrong. Though Lensa is indeed creating photorealistic portraits and illustrations that have not previously existed, there’s still a problem with the framing that the company has used to usher its app toward virality: “These AI avatars are generated from scratch but with your face in mind,” Lensa wrote in an Instagram post.

It’s certainly a valid view that it’s OK to use A.I. for creative purposes. But terms like “from scratch” obscure responsibility for the ingredients it takes to bake a cosmic avatar. And Lensa’s recipe doesn’t go down easily.

I recently spoke with an artist who goes by the name Lapine, who had concerns about the Lensa portraits as soon as she saw one. (She did not want me to use her legal name for safety reasons.)

Lapine was an early tester of Stable Diffusion, the artificial intelligence tool that Lensa uses—but does not own—to create the avatars. During the pandemic, Lapine, who has a rare genetic condition, found that creating A.I. art was a superb way to be creative, even if she didn’t have a lot of energy. She enjoyed thinking of whimsical phrases, putting them into A.I. tools, and marveling at the results. A few weeks before Stable Diffusion went public in August—becoming a viral sensation in its own right—a Stable Diffusion team member invited Lapine to be a beta tester. She was thrilled.

Soon after joining the Stable Diffusion Discord channel, however, Lapine realized there were issues: Namely, other testers were creating child porn and other disturbing images. All of this prompted her to poke around in LAION-5B, the open-source dataset that Stable Diffusion had been “trained” on. At the simplest level, neural networks like Stable Diffusion learn how to produce new images by studying lots of existing images and their text labels. LAION-5B, which is managed by a nonprofit research group and was created by scraping the internet for images, is a massive vault that has informed how many A.I. tools visually interpret words.
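
To make that concrete, each LAION-5B entry is essentially an image URL scraped from the web plus the caption or alt text that sat next to it, repeated billions of times. A toy sketch of that structure, using invented placeholder records rather than real rows from the dataset:

```python
# Invented placeholder records in the shape of LAION-5B rows: an image URL
# scraped from the web plus the caption or alt text found alongside it.
scraped_pairs = [
    {"url": "https://example.com/art/dragon.jpg",
     "caption": "fantasy dragon guarding a castle, digital painting"},
    {"url": "https://example.com/blog/post-op-photo.jpg",
     "caption": "recovering after surgery, day three"},
]

# Training a text-to-image model amounts to looping over billions of such
# pairs and nudging the model to reconstruct each image from its caption.
for record in scraped_pairs:
    print(f"learn to draw '{record['caption']}' from {record['url']}")
```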

Lapine took a selfie and uploaded it to Have I Been Trained, a site that makes it possible to get a sense of what’s in LAION-5B. She happened to be wearing a feeding tube, and the tool showed her images of other thin women with feeding tubes, some of which linked to cancer journey blogs. This made her uncomfortable. But it got worse. She uploaded an image of herself taken after a surgery in 2013. Much to her surprise, she found the exact same image in the dataset. (After she tweeted about it, Ars Technica wrote a story about what happened.)
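
Spawning hasn’t detailed Have I Been Trained’s internals here, but the general technique behind that kind of lookup is well established: embed a query photo with a model like CLIP and compare it against embeddings of the dataset’s images. A rough sketch of that idea, using the Hugging Face transformers library and placeholder file names (this is not Spawning’s actual code):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# CLIP maps images into a vector space where visually similar pictures land
# close together. The model ID below is a public OpenAI checkpoint.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    image = Image.open(path)
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)

# Placeholder file names: a query selfie and a tiny stand-in "index" of
# dataset images. A real index would cover billions of scraped pictures.
query = embed("my_selfie.jpg")
index = {name: embed(name) for name in ["scraped_1.jpg", "scraped_2.jpg"]}

# Cosine similarity: the closer to 1.0, the more likely the scraped image
# resembles (or simply is) the query photo.
for name, vec in index.items():
    print(name, float(query @ vec.T))
```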

“This is more of a mess than I thought it was,” she said she realized.

Because the doctor who took the photo has since died, Lapine still does not know how it ended up online. What was clear was that removing it would not be easy. (LAION has been working with Have I Been Trained to make that process easier, said Mathew Dryhurst, one of the founders of Spawning, the organization that created the search tool to help artists.)

The point here is not that your mutuals’ Lensa avatars include replicas of Lapine’s reshaped jaw. Rather, such an image has informed Stable Diffusion’s understanding of what a woman looks like after a surgical procedure. Similarly, the many copyrighted artworks by illustrator Greg Rutkowski in the dataset have informed its understanding of what a dragon looks like. Without having studied these images, the A.I. tool would not be capable of producing anything new. Nor would the company Stability A.I., which produced Stable Diffusion, be able to secure $101 million in funding—as it did recently.

“Everyone is profiting except the artist,” said Lapine.

Let’s now return to the matter of signatures. Sarah Fabi, a machine-learning researcher, said that it’s highly unlikely that the signatures belong to specific artists, because Stable Diffusion doesn’t spit out exact replicas. Rather, it learns that images in a given style typically have signatures in certain places, so it invents them. Still, the presence of signatures highlights that there are artworks with real human signatures in the dataset.

This is part of why Jon Lam, a storyboard artist for Riot Games, a video game developer, finds Lensa’s use of “from scratch” so offensive.

“It’d be like buying five pizzas, smashing them together and telling your friends you were the chef,” Lam said.

Dryhurst, one of the creators of Have I Been Trained, said he didn’t think that Lensa was trying to be deceptive. (Lensa did not respond to requests for an interview.)

“In some sense I get what they are trying to say, this is a very different process to, lets say, reproducing an image with a ‘painting’ effect,” he wrote over email. Still, he also objected to Lensa’s elevator pitch, given that it implies that something is “magicked into existence, which is not strictly true either,” he said. Lensa may produce some impressive wizards, but underneath their spell is something very human. For now.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
