THE QUEST FOR MIXED REALITY


Indications of a mixed-reality future, with the Meta Quest 3 and RayBan Stories as evidence of progress.


 

The first augmented reality (AR) headset was developed in 1968 by Ivan Sutherland. The device was nicknamed the ‘Sword of Damocles’ because it was so heavy it had to be suspended from the ceiling, hanging over the wearer’s head like the mythical sword. The headset was bulky, clunky, impractical and impossible to wear in public.

In 2012, Oculus developed an early prototype of its Rift virtual reality (VR) headset. While it was a lot smaller than the Sword of Damocles, it was still too heavy and clumsy to wear in public. A few developers and testers tried it out anyway, but the concept was ahead of its time.

Today, several tech companies are making AR and VR headsets for the general population, but adoption has been slow. XREAL, Intel, Apple and Meta are some of the major players in the market, yet their products have mostly drawn enthusiasm from early adopters such as tech reviewers.

But there is a trend here similar to the one observed with personal computers. The first computers were the size of small rooms and capable of performing only a single task. The technology rapidly shrank to fit on desktops, graduated to laptops, and finally became small enough to fit in our pockets, giving us the smartphone.

Dinesh Punni, a software engineer who researched, developed and delivered multiple VR and AR projects between 2015 and 2021, has explained the current state of mixed reality (MR), an umbrella term covering both VR and AR technology. During a TEDx talk in 2021, he predicted that MR technology was on its way to being widely adopted, just as the personal computer found its way into every household. Punni said the general public could expect to see MR headsets hit shelves in the not-too-distant future.

Today, just two years after Punni’s TEDx talk, that future still shimmers like a Fata Morgana, visible but not quite within reach. Apple announced the Apple Vision Pro, a spatial computing device that takes mixed reality to new immersive levels, but at a price point of USD 3500 it might be too steep for the masses. Enter Meta with the latest rendition of its Quest headset, the Quest 3.

The Quest 3 is priced at USD 500, and although its passthrough (the picture quality of the real world shown on the headset’s screens) is nowhere near the fidelity Apple has achieved, it is the first affordable MR headset that operates at a latency low enough to be adequately functional.

Meta CEO Mark Zuckerberg firmly believes that someday we will all live in a world where we wear computers on our faces all the time. The Quest 3 is Zuckerberg’s first sincere attempt at giving a concrete example of that future.

The question of whether it is wearable in public remains open: some tech reviewers have excitedly endorsed the Quest 3, while others have applauded the progress but are hesitant to call it outdoor-ready.

At present, the most discreet headsets are perhaps smart glasses: normal-looking spectacles or sunshades with onboard computers. Meta announced its own lineup of next-generation smart glasses called the RayBan Meta Smart Glasses. Since that is a mouthful, they will likely just be called the RayBan Stories (a reference to Facebook Stories).

As the state of mixed reality stands today, we are in a transition period. The technology is becoming more miniaturised, meaning MR headsets will keep getting smaller. At the same time, it is getting more powerful, meaning smart glasses will start being packed with more features.

The biggest hurdle for MR headsets is therefore size. Too big, and a device is not public-ready; too small, and you have to compromise on features. Right now, headset technology sits on a spectrum. At one end is the Quest 3 – still fairly big, but packed with features. At the other end is the RayBan Stories – the ideal size, but able to do only a few tasks.

Meta’s efforts are a good example of the race to meet in the middle. The company is investing heavily in both form factors, developing the devices in parallel to figure out which strategy will land first.

On the Quest side, Meta used to make pure VR headsets; the Quest 2 offered only a grainy black-and-white passthrough, and users had to interact with the interface using wireless controllers. The Quest 3, an MR headset, is the natural progression from VR: being able to see your surroundings makes it closer to something people could comfortably wear every day, like glasses. It has multiple sensors arranged on the outside of the headset, taking in information about the real world and feeding it to the built-in screens.

The Quest 3’s passthrough is in colour and stereo, with very low latency and decently high resolution. Having passthrough this good brings immediate benefits: it makes setup intuitive and immersive, and it makes booting up the system faster. The added sensors allow the device to map the floor space as soon as you look down while wearing it. It also knows where physical objects are around the room, letting you walk around and place menu screens wherever you like.

In previous Quests, you had to designate a ‘play area’ where you could see menus and windows. But the Quest 3’s RGB cameras and depth sensor let you plant a menu anywhere in your field of vision, on a table or floating mid-air, while still seeing everything around you.

Regarding resolution, YouTube creator Marques Brownlee (MKBHD) describes the passthrough as like wearing glasses that are not quite your prescription. There is also a small amount of distortion for objects very close to the headset or at the periphery of your vision. Some tasks are still difficult, for example reading your phone screen through the Quest 3, because the fidelity is not crisp enough for that.

Most people are likely to compare the Quest 3 to the Apple Vision Pro, but the USD 3000 price gap makes that an unfair comparison. Given the high resolution of the menus and the accuracy of the controllers, the Quest 3 is the gateway device for consumers who want to have a go at mixed reality.

The controllers this year are smaller and more accurate than before. Their haptics are the best in any Quest so far, giving physical feedback that lets you know what you are interacting with. At the same time, hand tracking has improved significantly over previous versions and is intuitive and responsive enough to forgo the controllers entirely.

You can use a pinch gesture to pull up the menu and move around to select things. You can scroll by touching the digital menu and fling it through space with your fingers. You can grab a window, move it around and stick it somewhere in space; even if you walk around the room, the menu stays positioned where you left it.

The only thing that does not feel natural so far is typing on the Quest 3. In typing mode, a virtual keyboard appears in your vision, but to type you still have to poke at thin air with no haptic feedback. It takes some practice before you get completely used to it.

The processor in the Quest 3 is the Snapdragon XR2 Gen 2 chip. It drives a higher resolution per eye, processes a wider, more immersive field of view, and responds to gestures with very low latency. The Quest 3 also has better weight distribution than previous versions, thanks to a new strap that makes it comfortable to wear.

All the tech in the Quest 3 will get better next year, letting Meta shrink the hardware down even more. Each generation gets us one step closer to smart glasses without compromising on features.

Speaking of smart glasses, Meta’s RayBan Stories do enough to make life easier when paired with a smartphone. The RayBan Stories come equipped with a 12-megapixel camera and an entire computer system inside the frames: built-in storage, a touchpad on the side, a microphone, and directional speakers. The speakers play audio directly into your ears, and very little sound escapes into your surroundings.

The RayBan Stories pair directly with your phone and are operated through the Meta View app, which is when the glasses become surprisingly capable. You can use the camera to take pictures or shoot videos in first-person view. There are limitations, though: videos can only be shot in portrait orientation and are capped at 60 seconds. Still, the footage comes out at good resolution. For a first rendition, the execution is well done, and there is a lot of promise for the future.

The way the camera is accessed could be considered odd, as it is voice-activated: you have to say, “Hey, Meta. Take a picture” to use it. Most consumers are hesitant to use voice commands in public, since they can be disruptive or make bystanders uncomfortable knowing a picture is being taken.

Messaging is possible, but only through Meta-owned apps such as Facebook, Messenger, WhatsApp and Instagram. There is also a voice assistant that runs on Meta’s Llama 2 large language model; it is fluid enough to feel natural but has nowhere near the capabilities of ChatGPT.

This is about as much tech as you can fit into a normal-looking pair of smart glasses. Each year from now, more features will be added to smart glasses and MR headsets will start getting smaller with improved performance.

The question is which side will reach mass adoption first. Maybe companies like Meta, Intel and XREAL will have a breakthrough early enough that smart glasses start gaining headset features. Or perhaps Apple will steal the market with its investment in the Vision Pro, forcing other companies to pour money into mixed reality.

Either way, consumers will win, and Punni’s prediction will come true: we are barrelling towards a future where we all have computers on our faces.
