
How Google’s Best Take uses AI to edit your photos and fix smiles


At my son’s second birthday party last weekend, I took photos of a dozen squirming kids. Amazingly, they’re all looking straight into the camera and smiling.

I’m no miracle photographer. I had help from artificial intelligence.

I “fixed” the kids’ faces with new software from Google called Best Take. Built into the camera on its $699 Pixel 8 smartphone, arriving in stores Friday, the AI helps you replace frowns, closed eyes, even people looking the wrong way to produce the photo you wished you’d taken. It does that by grabbing faces from other shots you’ve taken and swapping them in.

Best Take is a nifty superpower for the family photographer. But it also uses AI to create photographs of scenes that never actually happened, at least not all at the exact same moment. First we had alternative facts — now, alternative faces.

Testing this technology, my mind kept swinging between two questions: How far can AI go to rescue bad photos? And also: Is this a line we want to cross?

We’ve had Photoshop, beauty and even face-swap filters for years, but Best Take gives us something new to wrap our brains around. As much as I enjoyed using it, there’s an uneasy casualness about letting AI edit the faces in the smartphone photos we rely on to archive our memories. It’s allowing AI to help standardize ideas about what happiness looks like — an escalation of the cultural pressure we’ve been grappling with on social media to curate smiling faces and perfect places that don’t always reflect reality.

To use Best Take today you’ll need a new Pixel 8 phone, though it can also edit older photos taken with other cameras that meet certain criteria. Google wouldn’t comment on future plans, but I wouldn’t be surprised if it eventually expands Best Take to other Google Photos users, or if other companies debut face-fixing AI of their own.

Here’s how it works: The AI in Best Take is not actually inventing smiles or other expressions. Instead, the software combs through all the shots you took over a several-second interval to propose a few alternatives for each face it can identify. Based on what you select, it pulls the face out of the alternative and uses AI to blend it into your original. It’s an instant AI version of using Photoshop to cut someone’s head out of one photo and stick it on another.
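To make that cut-and-blend flow a little more concrete, here is a minimal Python sketch of the general idea. It is emphatically not Google’s actual pipeline: the names (FaceBox, swap_face, feather_mask) are my own, face detection is stubbed out as a precomputed bounding box, and the “blending” is a simple feathered paste rather than anything learned.

```python
import numpy as np
from dataclasses import dataclass

# Illustrative sketch only: frames are plain numpy arrays, the face location
# comes from a stand-in bounding box instead of a detector, and compositing is
# a basic feathered paste. This is not Best Take's real implementation.

@dataclass
class FaceBox:
    top: int
    left: int
    bottom: int
    right: int

def feather_mask(h: int, w: int, margin: int = 8) -> np.ndarray:
    """Alpha mask that fades toward the box edges so the paste is less abrupt."""
    mask = np.ones((h, w), dtype=np.float32)
    for i in range(margin):
        alpha = (i + 1) / (margin + 1)
        mask[i, :] = np.minimum(mask[i, :], alpha)
        mask[-i - 1, :] = np.minimum(mask[-i - 1, :], alpha)
        mask[:, i] = np.minimum(mask[:, i], alpha)
        mask[:, -i - 1] = np.minimum(mask[:, -i - 1], alpha)
    return mask[..., None]  # add a channel axis so it broadcasts over RGB

def swap_face(base: np.ndarray, donor: np.ndarray, box: FaceBox) -> np.ndarray:
    """Copy the face region from the donor frame into the base frame.

    Assumes both frames are the same size and already aligned; a real system
    would register the frames and warp the face before compositing.
    """
    out = base.astype(np.float32)
    region = donor[box.top:box.bottom, box.left:box.right].astype(np.float32)
    mask = feather_mask(box.bottom - box.top, box.right - box.left)
    target = out[box.top:box.bottom, box.left:box.right]
    out[box.top:box.bottom, box.left:box.right] = mask * region + (1 - mask) * target
    return out.astype(base.dtype)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for a burst of shots taken within a few seconds of each other.
    burst = [rng.integers(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(4)]
    face = FaceBox(top=100, left=200, bottom=220, right=320)  # would come from a detector
    fixed = swap_face(base=burst[0], donor=burst[2], box=face)
    print(fixed.shape, fixed.dtype)
```

The real system presumably also aligns the frames and matches lighting before compositing; the sketch only illustrates the basic pick-a-donor-frame, paste-the-face step.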

As an Instagram dad, I found an emotional payoff in seeing my son and his friends looking their absolute most adorable, and a kind of power trip in choosing, from a menu of faces, exactly the right one for that moment.

(I did wonder: Is this another way for Google to get our data? Google says Best Take doesn’t store faces for any purpose, including for AI training.)

But the system has a few quirks. Since Best Take relies on photos taken around the same time, you have to pretend you’re at a fashion shoot and keep on snapping to increase your options. Unfortunately, it doesn’t make use of the camera’s video function to do the continual snapping for you. If you want a smile in your final shot, you still need to get your subject to smile in at least one shot.

And, bummer, Best Take doesn’t work on pets.

Occasionally in my tests, Best Take’s results were spectacularly bad, replacing heads in a way that made faces look too big, or cut off hands and glasses. Once or twice, it twisted heads to the wrong angle, “Exorcist”-style.

“Best Take may not work or may partially work if there’s too much variation in pose, including varied distance between the subject and the camera,” Google product manager Lillian Chen said in an email.

These problems aside, Best Take mostly does what it claims. So now the question is: How should we feel about that?

Let’s be clear: We already take fake photos. The algorithms in our smartphones brighten eyes and teeth, smooth skin, punch up a sunset and artfully blur backgrounds. It’s not reality, it’s beauty.

As recently as 2018, smartphones couldn’t really take decent photos in the dark. Then Google debuted another AI tech called Night Sight that let the phone combine a whole bunch of individual shots into one that looks fully lit, with candy-colored details that no human eye could have seen in that moment. Other phone makers quickly followed with their own night modes.
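The general principle, at least, is easy to illustrate: averaging many dark, noisy frames cancels out random sensor noise, and the cleaner result can then be brightened. The toy sketch below shows only that stacking-and-gain step; it is not Night Sight itself, which also aligns handheld frames and applies far more sophisticated processing for color and detail.

```python
import numpy as np

# Toy illustration of burst stacking: average aligned dark frames to reduce
# random noise, then brighten. Not Night Sight's actual algorithm.

def stack_and_brighten(frames: list[np.ndarray], gain: float = 4.0) -> np.ndarray:
    """Average aligned frames to cut noise, then apply a simple brightness gain."""
    mean = np.mean([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(mean * gain, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Simulate a dim scene photographed 8 times with per-shot sensor noise.
    scene = rng.integers(0, 40, (240, 320, 3)).astype(np.float32)
    burst = [np.clip(scene + rng.normal(0, 10, scene.shape), 0, 255).astype(np.uint8)
             for _ in range(8)]
    result = stack_and_brighten(burst, gain=5.0)
    print(result.shape, result.dtype)
```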

Your phone has really high-tech beer goggles, I wrote at the time.

This isn’t necessarily a bad thing. Photography used to require a lot of specialized skill. I did a face-swap of my own on the holiday card I sent out last year, cutting a better-lit version of my head out of one shot and pasting it into another. But that required access to, and knowledge of, Photoshop.

So then what makes me uneasy about face swapping arriving on phone cameras? It’s the power we’re handing to AI over something as fundamental as our memories.

Many people — particularly women — are already rightly tired of society telling them to “smile more.” Now a computer gets to help decide what faces are worth changing and what faces are worth keeping.

Google’s Chen said the automated face suggestions in Best Take are “based on desires we heard from users, including eyes open, looking towards the camera, and expression.” She noted that users are still presented with choices for which expression they want to apply.

Google also argues that the photos created by Best Take aren’t entirely fake. The faces included in the final product were all made by those people within a few seconds of each other. That’s a kind of guardrail to make sure the final image reflects something close to the original context of the moment. “At a high level the main goal is to capture the moment the user thought they captured,” Chen said.

Yet Google’s whole approach feels like a slippery slope. Despite its promise to watermark AI-generated images, Google says it isn’t doing anything to flag Best Take images. They just live in your photo collection and get shared like any others, alongside the original photos they replace.

Google has also been quick to release other AI photo-editing tools such as Magic Eraser that can remove whole people and objects from photos.

What’s to stop Best Take 2 from opening up to faces captured any time, instead of just in those few seconds? People have filled up their Google Photos collections with years of source material. Then how much harder would it be for Google to offer entirely synthetic versions of the people in your photos, like you can already get in AI selfie apps like Lensa? Next stop: “Hey, Google, make all the people in this photo look more in love/surprised/happy.”

Lost along the way: What’s a photograph, after all? If it’s no longer a record of a moment, then perhaps we have to stop treating it like a memory.
