Meta is set to introduce its AI assistant to Quest virtual reality headsets next month, bringing advanced voice control and visual recognition capabilities to its VR ecosystem.
Key highlights of the Meta AI integration:
- Launching in August as an experimental feature for Quest headsets
- Will replace existing Voice Commands as the primary voice interface
- Utilizes Bing search to provide real-time information and answers
- Offers visual recognition on Quest 3 and Quest Pro via color passthrough cameras
- Can identify objects, translate text, and provide contextual information about the user's surroundings
- Supports general knowledge queries on topics like history, literature, and local recommendations
The AI assistant builds on Meta's existing work with Ray-Ban smart glasses, expanding functionality for VR use cases. Users can enable the experimental feature through the headset's settings menu.
Initially, Meta AI with Vision will be limited to Quest 3 and Quest Pro devices in the United States and Canada, with English-only support. The company hints at possible future support for Quest 2, though without the full visual capabilities, since that headset lacks the color passthrough cameras the feature relies on.
This move puts Meta ahead of competitors such as Apple in integrating AI assistants into mixed reality devices. However, the experimental nature of the rollout suggests Meta is taking a cautious approach as it refines the technology.
As AI becomes increasingly central to Meta's strategy, questions remain about data privacy, the deprecation of existing voice commands, and how the assistant will evolve to support virtual objects and environments beyond passthrough mode.
For VR enthusiasts, the addition of AI assistance could significantly enhance the Quest platform's utility and ease of use, potentially bridging the gap between virtual and augmented reality experiences.