
Google’s new Pixel 8 phones come with AI tools for photos, websites


Google has been trying to sell its vision of smarter devices since its first Pixel phone in 2016. Seven years later, the company is still fighting to make its mark.

Last quarter, Apple and Samsung accounted for 78 percent of all smartphones shipped in North America, according to research firm Canalys. And Google? Just 4 percent. So, how is one of the biggest names in Big Tech supposed to win over the jaded American phone shopper?

By leaning more into artificial intelligence. Google on Wednesday unveiled a pair of new smartphones — the $699 Pixel 8 and $999 Pixel 8 Pro — which come with some features that rely on the kinds of generative AI and large language models that have become all the rage among start-ups and tech titans this year.

No, that doesn’t mean Bard lives on a phone now — that’s coming later. But it does mean your next phone may be able to, among other things, instantly summarize webpages, craft perfect photos from a few less-than-stellar ones and automatically translate text in foreign languages before reading it aloud.

Granted, there’s more to these devices than just Google’s focus on AI. From now on, the company is committing to seven years of operating system updates to its Pixel phones — that’s a nearly unprecedented amount of time to ensure gadgets have up-to-date software. (iPhones, by comparison, typically get about five years of major iOS updates after they’re released.)

The larger Pro model also comes with an unusual temperature scanner, which the company hopes will become an indispensable health tool once it receives clearance from the Food and Drug Administration.

The drive to push AI even deeper into its phones may offer Google a way to catch up to rivals that consistently outsell it, not to mention a potentially concrete way to hook consumers on the value of AI tools. But that only works if the tools themselves are worth using.

Here’s what to know about the AI features you may get to use soon.

Webpage summaries on demand

Sifting through a long webpage when you really just want the gist? With help from some of Google’s large language models, Google Assistant can quickly scan and summarize articles, recipes and other long swaths of online text.

That said, the company isn’t guaranteeing that every auto-generated summary is bulletproof. In the event that the system hallucinates and spits out erroneous info, you’ll be able to rate summaries with a thumbs down and offer some additional feedback. (Likewise, you can give a thumbs up to particularly good ones.)

The catch: This feature only works when you’re viewing websites in the Chrome browser or in the Google app. That means you can’t ask the Google Assistant to, say, summarize a lengthy PDF file before your next meeting. You also can’t summarize news articles that live behind a paywall — if you try, the summarize feature just balks at your request.

Best Take for group photos

Who among us hasn’t taken a group photo, only to discover some people blinked or looked away at the crucial moment?

Google’s Best Take feature offers a solution for these situations: When you view a less-than-ideal picture in the Photos app, you’ll be given the option to select more flattering faces for people from photos taken around the same time. Google Photos will then stitch those selected faces into that picture, giving you a picture-perfect image to slap onto your socials.

The catch: Where do we start?

Rather than just improving the clarity of a captured moment, Best Take weaves together a distinctly new one, one face at a time. That means a photo you eventually come to cherish may depict a moment in time that never happened in the first place.

To be clear, Google says, the feature will never create a smiling face from scratch — it can only detect and swap in faces from up to six photos taken within seconds of each other. But to me, at least, there’s still something viscerally unnerving about that.

Unsurprisingly, Google doesn’t see things this way. Shenaz Zack, director of product management for Google’s Pixel phones, told The Washington Post that she thinks of the feature as a way to “re-create” moments she wasn’t fast enough to shoot in real time. Fair enough, but since we haven’t been able to thoroughly test this feature yet, consider the jury undecided on how weird this is for now.

A more flexible Magic Eraser

Google first introduced its Magic Eraser tool — which lets you erase stray people or objects from photos — on Pixel phones two years ago, before letting anyone with a Google One subscription use it.

Now, though, you can do more than just erase photobombers from your pictures; with a tap-and-hold, you can select subjects in those images and reposition them wherever you like. (As you do that, Google Photos will do its best to fill in the gap left behind with, well, whatever it thinks should go there.)

The catch: Sure, there’s always the potential for creating images that are, on some level, disconnected from reality. But on a more practical level, some early examples we saw of the new Magic Eraser in action just didn’t look very good — we spotted some unnatural textures where the tool filled in spots previously occupied by a person. We’ve seen these kinds of results before, and honestly, we were hoping for a little better.

A more natural-sounding call screener

For years, Pixels have had a Call Screen feature that answers the phone, asks whoever’s calling what they want and transcribes the response for you. This time, though, Google’s updated tool sounds nearly indistinguishable from an actual human. (If it didn’t identify itself in a demo as a “Google virtual calling assistant,” we wouldn’t have been able to tell.)

That assistant is also better at analyzing what the person on the other end is saying. If it can tell they need some kind of specific feedback — say, confirmation for a doctor’s appointment — you’ll get a small on-screen bubble that, when tapped, prompts the assistant to pass along that confirmation.

The catch: For now, it seems this feature is the only place you’ll get to hear that almost startlingly natural AI voice. Google hasn’t yet said whether the virtual, voice-activated Assistant baked into the phones will get a similar upgrade.

An AI proofreader in Gboard

Some folks who use Google Docs for work can already lean on an AI to help smooth out trouble spots in their writing, so it’s no surprise to see something similar land on smartphones, too. What’s different here is where that AI lives — on Pixel phones, it’s in Google’s default Gboard keyboard.

As you type, you’ll see the keyboard auto-correcting obvious errors same as always. But when you finish pecking out that sentence, you’ll see an option to “fix” spelling and grammar errors (like those pesky there/their/they’re mix-ups) all at once.

The catch: We know that in some cases, Google uses what we type to train its AI. We’re told that’s not happening here — there’s “no training of models based on your input,” said Brian Rakowski, Google’s vice president of product management for Pixel phones. Still, if you want to make sure you’re not inadvertently doing work on Google’s behalf, the safest move might be to avoid features like these where you find them.
