Lensa App’s Magic Avatars: AI, stolen data and compromised ethics

If your Instagram account is overwhelmed with otherworldly, cosmic or Kawaii-inspired portraits of your friends, you are not alone. Over the weekend, the photo editing app Lensa introduced “Magic Avatar,” an add-on that generates 50 fantasy portraits of you if you can provide a minimum of $3.99, 10 selfies and 20 minutes of your time.

The avatars deeply resonated with users and continue to trend.

“I saw a lot of people finding their best selves through the avatars,” said Jon Lam, a digital artist.

However, some artists, including Lam, have described Lensa’s creation process as “stealing.”

In the past few months, artificial intelligence image generators have thrust themselves into people’s lives in unexpected and at times harrowing ways, outpacing laws and potentially hurting marginalized communities. Technology like Magic Avatars has repeatedly been accused of stealing artists’ techniques without consent. Days after South Korean artist Kim Jung Gi died, his work was fed into an AI model and regurgitated. Polish artist Greg Rutkowski has seen thousands of AI-generated images using his style; so far it does not look like he will be compensated for that.

Lensa’s avatars remove the tech hurdles for users and grant many the instant gratification of seeing themselves exactly as they desire, making the app all the more popular. Artists accept that AI has arrived but describe it as a bandit whose images mimic their contemporaries’ styles, leading them to ask for accountability.

Artist Lauryn Ipsum says that artificial intelligence may have created these original avatars, but the smaller elements that feed the creation — color palettes, brushstrokes, textures, individual styles — were taken from artists like herself without consent, credit or compensation.

“It felt like a punch in my gut to see these avatars,” Ipsum said. “It’s like fast fashion for art.”

Lensa’s parent company, Prisma Labs, says the avatars are created through an open-source neural network called Stable Diffusion. The model is trained to learn general principles of image-making, which it then applies to generate new content, the company told The Washington Post via email.

The Stable Diffusion model is fueled by a database called LAION-5B, built by AI researchers by casting a net across the internet.

In essence, the database takes data, images and artwork from websites, including millions of images owned by artists, Lam said. These images — 5.85 billion of them, to be exact — are paired with text. These image-text pairs then “train” the Stable Diffusion model on how to create content such as the Magic Avatars.
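To make the mechanics concrete, here is a minimal sketch of generating an image with the open-source Stable Diffusion model through Hugging Face’s diffusers library. This illustrates the general technique described above, not Prisma Labs’ actual pipeline; the model checkpoint and prompt are assumptions chosen for demonstration.

```python
# Illustrative sketch only: text-to-image generation with the open-source
# Stable Diffusion model via Hugging Face's diffusers library. This is NOT
# Lensa's actual code; the checkpoint and prompt are assumed for demonstration.
import torch
from diffusers import StableDiffusionPipeline

# Load pretrained weights (trained on LAION image-text pairs) from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a GPU; use float32 on CPU instead

# The text prompt steers generation; styles learned from the training data
# (including artists' work scraped into LAION-5B) shape the output.
image = pipe("cosmic fantasy portrait, digital painting").images[0]
image.save("generated_portrait.png")
```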

Stability AI, the company that owns the Stable Diffusion model, did not immediately respond to questions from The Post.

Ipsum compared the artificial intelligence behind avatars to a bandit-baker. A regular baker purchases the flour, salt, sugar, yeast and water, she said. A bandit-baker steals the ingredients, then bakes and sells the bread for a profit.

“The Magic Avatar is like that second baker, a bandit,” she said. “The machine generated the portraits, but each element in those avatars is stolen from an actual artist who may have taken years to perfect that technique.”

It is legal for LAION-5B and the Stable Diffusion model to absorb the images, despite copyrights, because the former was assembled by a nonprofit research organization and the latter is free for all and open source. For Lensa, which is monetizing the avatars, it is more complicated.

Prisma Labs says it is charging for the user-friendly “working toolkit” rather than a random database of artwork.

Lam thinks these companies have stepped into a “legal gray area” because technology moves faster than the law, and there is currently no legal precedent against AI using copyrighted data to create and sell a brand-new image.

Still, Ipsum said, “It’s a very crummy feeling to see these images.” She hasn’t come across an avatar that reminds her of her own work, but she can recognize other digital artists’ styles. “This is such a personal loss for the art community,” she said.

Earlier this week, Ipsum searched for her artwork on a website that tells you if an image you created has helped to train artificial intelligence in creating new content; she found book covers that she had designed and a hand-drawn graphite illustration of a nude woman that still hangs in her living room.

“I was so upset,” she said. “Why does this machine have access to my work without my permission? And how can companies be making money off my art without my consent?”
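The article does not name the site Ipsum used. As a rough sketch of how such a search can work, the open-source clip-retrieval project provides a client for querying the public LAION-5B index by text or by an image you supply; the service URL, index name and result fields below are assumptions drawn from that project’s public examples and may have changed.

```python
# Illustrative sketch only: querying the public LAION-5B index to look for
# images resembling your own work. Not the site named in the article; the
# service URL, index name and result fields are assumptions from the
# clip-retrieval project's documentation.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # public LAION knn backend (assumed)
    indice_name="laion5B-L-14",              # LAION-5B index name (assumed)
    num_images=20,
)

# Search by text, or pass image="my_artwork.png" to search with an image you created.
results = client.query(text="hand-drawn graphite illustration of a woman")
for r in results:
    print(r.get("url"), r.get("caption"))
```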

There is a reason that Lensa is the top downloaded app in the Apple App Store’s Photo & Video category: It is helping people visualize themselves the way they do in their fantasies. It is also helping artists conceptualize things they want to draw or write about.

“The Magic Avatars are so accessible and evocative, it’s clear that this was a tester to see how the general public will react to these computer-generated images,” Lam said. “What’s not to like when you see yourself as everything you ever wanted to be?”

It’s like an episode of “Black Mirror,” he added.

Artists have also been careful to remind people, on social media, that while all artists are affected by such content generation, marginalized artists are even more vulnerable.

“Marginalized artists are so important for our community,” artist Megan Schroeder said. “Their life experiences, stories and images need to be seen, and such technology makes it harder for their voices to be heard.”

For years it was men who ruled the art world, Ipsum said. “Now that women and people of color and other marginalized people are finally here, AI is stealing from them,” she said.

Prisma Labs says the model it uses functions similarly to the way “a human being is capable of learning and self-training some elementary art principles by observing art, exploring imagery online and learning about artists to ultimately attempt creating something based on these aggregated skills.”

But artists such as Lam think comparing the model to human artists is a false equivalency. “They can try and create loopholes to steal our art, but such technology is still stealing artists’ identities that are contained within their work,” he said.

Ipsum said the avatars dehumanized her and other artists.

“I think you can only openly steal from someone to make a profit if you think of them as dispensable; you have to believe that the general public doesn’t care about artists to do this,” she said.

At the same time, she said she remains hopeful.

“We have seen what AI can do, and frankly it’s clear that they have no outputs without stealing inputs from us,” she said. Artists have been coming together, in their communities and in Twitter Spaces, to discuss what the future can look like, and they think it’s important to start making demands.

“I am not scared of this technology; none of us think AI can displace artists. All we want is the choice to opt in, credit that we have earned and the payment that we deserve.”
