A new ‘generative fill’ AI capability can create joyful Photoshop edits — and frightening deepfakes
AI is ushering Photoshopping into a new era, where you can alter an image in sophisticated and sometimes scary ways without mastering complex software. It’s far from perfect: You can remove an ex from a photo in just two clicks, though the hit-or-miss AI might replace that person with a different one whose face looks like it’s melting.
But this AI leap means anyone can pull off at least a goofy Photoshop job now. The same AI tool can both transform pictures into joyful fun and be used to manipulate or even exploit. And that adds new urgency to a question first raised by Photoshop more than 30 years ago: How much longer will we be able to trust what we see?
For a glimpse of what’s coming to your photos — both the ones you see and the ones you take — we’ve been testing a beta version of Photoshop with its new AI function called “generative fill.” It’s a response by Photoshop’s maker, Adobe, to a flurry of new AI image-creation tools that threaten to make the app redundant, including DALL-E 2, Midjourney and Stable Diffusion.
While other AI services have generated buzz for inventing entirely new images, generative fill offers a remarkably user-friendly way to modify portions of existing images. The technique is called “inpainting”: Select what you want to replace and type what should go in its place. That leap brings AI into closer proximity with real photos — and a $10 monthly Photoshop subscription puts it into a lot more hands. This goes far beyond correcting blemishes and sprucing up vacation photos with existing AI tools from Google and others.
But is AI turning Photoshop into an ultimate “deepfake” machine? The same AI tech that helped us remove pigeons from snapshots also allowed us to generate a very convincing image of an entirely fictional fire at the Pentagon. (When such an image recently went viral on Twitter, it briefly moved the stock market.) Still, we found significant limitations — both inherent to the technology and intentionally built into Photoshop — that prevent some potential worst-case uses of the technology. At least for now.
What kind of Photoshopping can AI do, and not do? Let us show you.
What Photoshop’s AI does well
When you use generative fill, you have to be online. That’s because Photoshop sends three pieces of information to Adobe’s AI for processing: the text of the prompt you enter, the area you’ve selected to replace and a portion of the surrounding image that the result is meant to blend in with.
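To make that concrete, here is a minimal sketch of what bundling those three pieces of information might look like. It is purely illustrative and is not Adobe’s actual API: the function name, field names and placeholder bytes are hypothetical stand-ins.

```python
# Hypothetical sketch -- not Adobe's real API. It only illustrates packaging the
# three pieces of information the article describes: the prompt text, the
# selected area to replace and surrounding image pixels to blend with.
import base64
import json


def build_fill_request(prompt: str, selection_mask_png: bytes, context_png: bytes) -> str:
    """Bundle a hypothetical generative-fill request as JSON."""
    return json.dumps({
        "prompt": prompt,  # what should appear inside the selection
        "selection_mask": base64.b64encode(selection_mask_png).decode("ascii"),
        "image_context": base64.b64encode(context_png).decode("ascii"),
    })


# Example: ask for empty sky where the radio towers were selected.
print(build_fill_request("clear sunset sky", b"<mask bytes>", b"<image bytes>"))
```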
All that information makes generative fill extremely good at removing objects from backgrounds, such as the radio towers in this sunset scene:
The AI is able to invent clouds, trees or even whole cityscapes to blend into your original image. You can use generative fill to delete your memory of battling crowds at a vacation destination — or even the person you traveled with.
Removing people from images is a snap even on top of complex backgrounds.
The same AI can also add objects that were never there into images, such as the giant robot you see in our sunset shot below.
When it works well, this saves untold hours of effort: Photoshopping things into images before AI required being good at cutting out objects, knowing how to use layers, finding a source image to add in, and adjusting lighting and color.
Maybe you can’t imagine needing to add a giant robot to any of your photos. But artists or even just your family’s official photographer might see it as a creative jump-start that lets them try out a wild idea. We also found the wonder of this instant imagination genie particularly thrilling to kids: three of them gathered around our computer to issue commands of their own.
What’s most impressive is that the AI can make sure its additions blend in with the angle of the sun, shadows, or even a reflection visible in water. It even gives you three variations to choose from — or you can keep tweaking your prompt and selection area until you find one you like.
Another fascinating use: Having the AI expand the original frame or crop of an image by inventing what the rest of it might look like. On social media, people have been having fun applying this technique to works of art, album covers and photos. Here’s how the AI expanded Vincent van Gogh’s famous bedroom painting:
What Photoshop’s AI does poorly
For every “Whoa, that’s pretty good!” we got about five “Oh, that’s so bad” responses from Photoshop’s AI. But with patience, we could eventually produce what we were looking for.
The core problem: Photoshop’s AI just isn’t great at generating certain kinds of objects, which can come out looking goofy, half-drawn or just totally fake.
Take, for example, the image of a cow in a field shown above. When we asked the AI to “add a cowboy,” we got … whatever this is:
This bad response is partly a function of the randomness of current generative AI technology — and a reflection of the fact that Adobe has focused more on training its systems to make natural-looking images.
“There are certain objects I think it does better at versus other ones — but when you start to go a little bit more surreal, a little bit crazy, it’s not quite there yet,” Adobe’s vice president of digital imaging, Maria Yap, told us.
Adobe also said its AI model is “not designed to skew dark.” Yet across many tests, we noticed a tendency of the AI to go for the macabre. In one example, we asked the AI to add an alien spaceship to an original photo taken from a plane. Instead, we got a flying monster.
The area you select in the original image makes a big difference in your output. Adobe’s AI is designed to respond to the shape of the selection — and anything that’s in it will be completely replaced. That means if you want to add a hat to someone, you need to select just the area where the hat would go — ideally in the general shape of a hat.
Here’s what happened when we asked the AI to give funny hats to members of The Post’s Help Desk team by simply drawing a box around all five of our heads. Instead of hats, we got new ping-pong ball heads.
And speaking of people, Photoshop’s AI is just awful at them. Here’s what happened when we asked the AI to add one more member to another photo of the Help Desk team.
That poor lady. We couldn’t ever get the AI to create a person who looked like they had a normal face.
Will it be used for evil?
Tech can bring out the best and the worst in people. And already, some are using more complicated AI image tools to make pictures to deceive and exploit: Political campaigns have run ads featuring AI-generated scenes that never occurred. The FBI recently issued a warning about deepfake “sextortion” schemes.
Is Photoshop’s AI putting the tools of fakery into many more hands? Yes and no.
As a test, we took a photo of a hike in an arid field and typed “add wildfire.” The AI transformed it into a very believable photo of a completely fictional fire. (In fact, it was easier to get the AI to add a realistic-looking wildfire to the image than it was to get it to add realistic wildflowers.)
We were even able to use the AI to remove our own watermark flagging that an image had been altered by AI. That same capability could be used to strip the watermarks from paid professional images, such as those sold by Getty Images or marathon finish-line photographers.
But in practical terms, there are also some limits on the potential misuse of AI Photoshop today. First, the resolution of any area it fills is capped at 1,024 by 1,024 pixels, which Photoshop then scales up to cover your selection. That means images will look noticeably blurry if you try to manipulate a large portion all at once.
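Here is some rough back-of-the-envelope arithmetic on that 1,024-pixel cap. The photo dimensions are hypothetical, not from our tests, and the stretch factor is only an approximation of how Photoshop scales the fill.

```python
# Back-of-the-envelope math for the 1,024-by-1,024-pixel fill limit described above.
# The example selection size is hypothetical.
FILL_RESOLUTION = 1024  # pixels per side that the AI generates for a filled area


def upscale_factor(selection_width: int, selection_height: int) -> float:
    """Roughly how much the generated fill must be stretched to cover the selection."""
    return max(selection_width, selection_height) / FILL_RESOLUTION


# A selection spanning most of a 6,000-by-4,000-pixel photo gets stretched
# almost 6x, which is why large filled areas come back looking soft.
print(round(upscale_factor(6000, 4000), 1))  # -> 5.9
```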
Adobe says it is also building some limits into its AI products. First, once this version of Photoshop comes out of beta, it will automatically include so-called content credentials in the files it produces, metadata that flags when an image has been altered with AI. However, that wasn’t available in the version we tested.
Adobe says the images its AI produces are “pre-processed for content that violates our terms of service,” such as pornographic content. When it spots violating content, it blocks the prompt from being used again.
We found this to be hit or miss: Sometimes it was overly sensitive — like stopping us when we asked to add a UFO to an image. Other times it seemed not sensitive enough — like when we asked to add the face of a baby to an infamous photo of Kim Kardashian that “broke the internet.”
We’re glad Adobe says it’s taking the threat seriously through an industry effort called the Content Authenticity Initiative, but the jury is out on whether that will be enough. One view of new AI technology is that, oh well, you get the bad with the good. We don’t have to accept that: As humans, we get to decide how to maximize the good and minimize the harm in the tools we create.