How AI Could Supercharge AR and VR

Meta CEO Mark Zuckerberg has spent billions of dollars year after year to develop augmented reality (AR) and virtual reality (VR) technologies, without much financial return to show for it. The company has spent north of $80 billion on the technologies since acquiring Oculus, a VR hardware startup, in 2014, including $20 billion in 2024 alone. In that time, it has lost billions on AR/VR, with products like its Quest headset consistently operating at a loss.

Meta’s uphill battle to make people care about and buy AR/VR en masse reflects the broader market. Despite increasing interest, AR devices, which overlay digital content onto the real world via glasses or screens, and VR technology, which fully immerses a user in a completely virtual world via a headset, are still niche products. Today’s devices, while impressive, are expensive, unwieldy, and of limited utility, largely restricted to a handful of games and experiences.

Those days could soon be over. Experts anticipate that AR/VR technologies are about to explode in popularity outside the confines of gamers, enthusiasts, and early tech adopters. And it’s all thanks to another technology that Meta and every other leading tech company on the planet—including Google, Apple, Microsoft, Samsung, and OpenAI—is also spending billions on: artificial intelligence (AI).

That’s because today’s AI technology, along with tomorrow’s innovations, is expected to unlock unprecedented experiences and use cases across AR and VR devices. These include everything from super-smart, hands-free AI assistants that help users navigate the world to AI-powered VR-scapes where users procedurally generate content simply by speaking. AI may even give AR/VR the ability to stimulate other senses (including smell), and to translate muscle signals into digital commands.

“For the first time, AR/VR could seamlessly integrate into everyday life, whether through entertainment, education, or professional work,” said Fabio Arena, a professor who studies VR at Italy’s Kore University of Enna.

The Tipping Point for AR/VR Adoption

Right now, there are a number of commercial AR/VR systems jockeying for market dominance. Meta is the clear leader at the moment, due to the popularity of its Quest VR headsets. Apple’s Vision Pro “spatial computing” headset launched in early 2024 and drew initial buzz. ByteDance’s Pico 4 headset is gaining share in Asian markets. And Samsung and Google are said to be working on a mixed reality headset due out in 2025.

Despite this traction, AR/VR isn’t what would be categorized today as “mainstream.” Adoption remains largely confined to gamers and enthusiasts. Yet thanks to AI, AR/VR is on the cusp of an “epochal” transformation in the coming years, said Arena. That’s because powerful AI is starting to be baked into existing AR/VR devices, greatly expanding what’s possible well beyond gaming.

For instance, thanks to generative AI capabilities, users will soon be able to generate their own content to navigate in VR and AR environments, said Tammy Lin, a professor who studies VR and AR at Taiwan’s National Chengchi University. Instead of relying on an ecosystem of developers to create content that consumers actually want, Lin anticipates users being able to summon their own preferred 360-degree immersive content into being simply by speaking to AI via a headset or wearable device.

Second, AI makes levels of interaction and personalization possible within VR/AR that were “previously unattainable,” said Arena. He anticipates AI within VR/AR devices that can dynamically adapt content to a user’s facial expressions, or that can learn individual preferences and reshape virtual worlds and content accordingly. For example, AI-powered AR apps could automatically adapt content to the user’s surroundings, making each experience far more intuitive and useful, which, in turn, could spur both consumer and professional adoption.

“By tracking behavioral patterns and contextual data, AI could personalize immersive experiences for each individual,” Lin said.

Third, AI within VR/AR opens up tons of new mainstream use cases for the technology beyond just gaming or novelty experiences. AI-driven advances such as real-time speech recognition, human gesture tracking, and natural language processing will significantly improve interaction with virtual environments and unlock new ways to use the technology.

Education is one area where this is already playing out, said Lin. For example, in one educational experiment run in Taiwan, Japanese and Taiwanese students joined a shared metaverse environment, learning together in real time despite being in different physical places. And thanks to AI-powered translation, they could communicate with each other despite speaking different languages.

“The tipping point for mass adoption will likely occur within the next five years,” said Arena, as these use cases continue to develop thanks to increasingly abundant machine intelligence.

The ‘Killer App’ for AI-powered AR/VR

What does it actually look like when AI-enabled VR/AR devices go mainstream? What’s the “killer app” that makes everyone sit up and pay attention in the next few years?

It’s likely to be AI-powered AR glasses, said Lin.

Right now, early AI-powered glasses are available from several companies. Meta offers smart Ray-Ban glasses, which use AI to act as a hands-free voice assistant. Social media company Snap also offers AR glasses with its own AI assistant.

But the real magic will come from embedding the next generation of AI being developed by companies like Google and OpenAI directly into lightweight wearables, layering AI and AR over everything you see.

Soon, AR glasses running the latest AI models will act as ever-present, hands-free virtual assistants that can help users and take actions using data collected in real time in the real world.

“VR/AR and spatial computing devices are interfaces for users to interact with the next generation of Web, while AI is positioned as the way to communicate with the Web,” Lin said.

This is the real promise of AI plus AR: It becomes the new way to interact with the digital world. Instead of using a mouse and keyboard to interact with a two-dimensional interface, the user speaks to an AR overlay via glasses to accomplish tasks in life and work.

Nascent versions of this are already in play in the latest AI apps.

Google is getting ready to launch its Project Astra, which streams real-world visuals seen through a phone into its Gemini AI models, putting a multimodal AI assistant on-hand at all times that understands images, videos, and sounds. (Project Astra is also being tested on prototype AR glasses.) OpenAI’s ChatGPT offers Video Mode, which similarly allows AI to understand video in real time and to screen share with users to accomplish tasks.

“Applying multimodal AI to glasses serves the convenient combination for users to free their hands and command the AI agent for assistance of recognition, summarizing, referencing, recommendation, and memorization,” Lin said.

The next generation of these technologies could see hyper-intelligent AI available 24/7 to help users seamlessly navigate, understand, and take action within both physical and digital environments. That, in turn, opens up a wealth of valuable use cases for businesses, said Arena.

Corporate training and education, business productivity, and healthcare could all be transformed by true AI-enabled VR/AR wearables. That includes using AI-powered VR/AR to facilitate personalized learning at scale. It also could dramatically enhance productivity by helping employees view and manipulate 3D data in real time from anywhere in the world, while AI generates intelligent suggestions and predictive insights. In healthcare, it could reshape medical training and patient care by giving clinicians digital content and intelligence layered over their real-world work environment, minimizing the chance of errors and improving outcomes during procedures.

“Each of these applications depends on AI’s ability to personalize and adapt experiences, which will make VR/AR not just immersive but truly revolutionary,” Arena said.

Beyond Sights and Sounds

AI-powered AR glasses are just the beginning.

Thanks to AI, AR/VR devices could eventually serve as always-on memory augmentation, said Arena. VR/AR glasses or wearables could store everything their wearer has ever seen, then help the user recall experiences and information on demand with perfect clarity. This could help people with cognitive disabilities or age-related memory loss. That data could even be combined with AI’s predictive abilities to help improve cognitive performance in real time.

Multisensory features are another area that’s being explored by major labs. Right now, VR/AR is limited to increasingly immersive sights and sounds. But soon, other simulated senses could be incorporated into AR and VR experiences, including smell.

“Imagine experiencing a virtual rain forest not only visually, but also through the fresh smell of wet earth and greenery,” said Arena. That could make VR/AR even more irresistible to people for both entertainment and therapeutic uses (like creating relaxation therapies).

Agentic AI paired with AI-powered wearables is also likely to give users ‘superpowers’: the ability to take actions within virtual worlds or on the Internet, both of which may then impact a user’s physical reality (such as making purchases or booking appointments).

These features also come with security and privacy tradeoffs. Will users be comfortable sharing everything they see with big tech companies? And how will ever-present AI wearables that record everything we do change how we interact with others?

As of today, there are plenty of open questions. But one thing is clear: like it or not, AI is giving AR and VR a new lease on life.
