
Apple Intelligence Is the Future: So Why Isn’t It on Apple’s Most Futuristic Product Yet?



Apple’s latest developer conference, WWDC 2024, devoted much of its keynote to the advanced AI services coming this fall to Macs, iPads and iPhones. Notably absent from the mix: the Apple product unveiled a year ago at its last developer conference.

Although technically the Apple Vision Pro is only four months old, it’s been a full year since we’ve been aware of it. Apple considers the Vision Pro the future of computing, and as a fully self-contained device with a robust Apple M2 chip, it certainly seems like a possible heir apparent to our current world of phones, laptops and tablets.

And yet, the Vision Pro isn’t getting generative AI capabilities this year. Apple Intelligence — Apple’s hardware- and cloud-powered set of AI services — will work on the A17 Pro chip in iPhone 15 Pro models plus M-series iPads and Macs. The Apple Vision Pro, which again has an M2 chip, seems like it should be one of the devices that works with Apple Intelligence. Not yet, though: instead, VisionOS 2 has a number of smaller upgrades, while generative AI — a feature I thought would be a key addition for Vision Pro — is still not part of them.


According to Apple, more platforms will get Apple Intelligence in the future. The Vision Pro and Apple Watch, the two most conspicuous absences from the list right now, could be next. I’m particularly let down that Vision Pro isn’t one of the first devices because the advanced headset is an early-adopter product for testing new ideas in mixed reality. It should be an experimental AI device, too.

But maybe the Vision Pro’s complexity and limited footprint are what make it a next-wave device for getting AI services onboard.

Apple is only now making the Vision Pro available outside the US, with eight more countries being added to the list in June and July. Maybe it’s that limited availability, and the headset’s much smaller sales numbers, that make it a product that can wait for Apple Intelligence.

Or maybe it’s that the Vision Pro’s different types of inputs — hand tracking, eye tracking and an array of inner and outer cameras — make for a different level of challenges for figuring out useful AI. A smarter Siri sounds like it would be a huge help for Vision Pro, because I’m already using Siri more there than I am with my iPad or Mac. I open apps with my voice, enter text with my voice and search with my voice all the time; it’s faster than trying to use my eyes and hands.

The complexity of the Vision Pro could also be putting a different load on the processors that run AI. The neural engine on the Vision Pro’s chips also helps process constant room scanning, eye and hand tracking inputs, and the overlaying of virtual graphics onto live camera feeds. There’s a lot going on at the same time with Vision Pro, and third-party apps don’t even have general camera access in-headset yet.

To me, the more interesting future of generative AI is multimodal: using cameras and microphones to be aware of real-time feeds of what I see and say. Early wearable AI devices like Meta’s Ray-Ban glasses and the Humane AI Pin can “see” the world with their cameras, but only by taking still snapshots and then quickly analyzing them. Getting descriptions of my world, or even advice, is fascinating… but right now, it’s extremely rough around the edges.

Apple also needs to unleash camera access on the camera-studded Vision Pro headset. Third-party apps on Vision Pro still can’t use these cameras to truly see the world around you unless they’re built using Apple’s new enterprise-focused API. That level of limited access might suggest that Apple’s aiming to manage the load on the Vision Pro’s processing, and on-tap generative AI would be another layer of complexity.

Could these complexities mean that a next-gen Vision headset with a more advanced processor — which could arrive, based on reports, in late 2025 — would be the real recipient of fully fledged Apple Intelligence? It’s all guesswork now, but the current Vision Pro seems like it should still be more than powerful enough to run generative AI.

Inevitably, Apple will open up those Vision Pro camera permissions more. So will other VR/AR headset manufacturers, like Meta. That might be when generative AI truly becomes transformative for mixed reality… but I still see great uses in the meantime. 

A better Siri would be huge, and so would generating creative content or coding. Meta’s Andrew Bosworth sees generative AI becoming a factor on Quest headsets soon. Apple should move on Vision Pro soon, too — if the future really flows through Vision Pro, it’s going to need the future of how all of Apple’s software services work as well.
