
The Real-World Metaverse belongs in the automotive cockpit

Chen-Ping Yu explores potential use cases of the Real-World Metaverse in the vehicle cockpit, as well as the tech that's still missing to make it a reality

“Metaverse” is one of the hottest topics these days, spearheaded by Meta Platforms Inc. (formerly Facebook) riding on the success of its Oculus virtual reality (VR) headsets. While the Metaverse is intimately related to VR, there is also a real-world version of the Metaverse that has been generating quite a bit of excitement recently. Here we’ll explore the importance of the Real-World Metaverse, a critical technology that it needs but that has been largely overlooked, and why automotive is an optimal platform for deploying the Real-World Metaverse.

What is the Real-World Metaverse (aka augmented reality)?

The Metaverse is thought of as a comprehensive VR world in which users can do just about anything virtually: live, interact, create, entertain, work, and more, much like the virtual entertainment universe OASIS from the movie “Ready Player One.” A virtual universe like the Metaverse means nearly unlimited possibilities and creations, bound only by our imaginations, and the concept has generally been very well received. However, the Metaverse, being a virtual universe, is also completely separated from the real world in which we physically live. That means the beautiful convertible sports car you bought for US$100,000 and drive in the Metaverse only exists while you are in the Metaverse; you can’t take it out for a spin along a real coastline by the beach.

But what if we could combine the Metaverse with the physical, real world in which we live, so that we can mix and augment our physical surroundings with virtual entities and information? Augmented reality (AR) is exactly that: adding virtual information onto physical surroundings. Its larger-scale, more general form has recently come to be called the Real-World Metaverse.

Maybe this is what the Metaverse would look like. Image source: VRFocus

The issue with AR today

AR started to gain major attention a few years ago, helped by the launch of Apple’s ARKit in June 2017, which allowed developers to build AR applications for iPhone users. ARKit was a pioneering SDK for AR, and it really helped the world get a glimpse of what would be possible with AR. However, ARKit also revealed many limitations and challenges of current AR technologies and use cases, as most AR applications turned out to be gimmicks: placing a dancing Jedi onto your desk is fun for a few hours, and then you never return to that app again. With most current AR apps being nice-to-haves rather than must-haves, how are we going to expand such small-scale AR into world-scale AR, the Real-World Metaverse?

An AR boxer on your desk. Pretty neat, then what? Image source: Starmark.com

When it comes to expanding AR gimmicks into more useful applications that address real-world pain points, the main culprit is the AR SDKs and development tools themselves: they lack important technologies that would allow developers to build more powerful and meaningful applications. Most AR software today focuses on surface detection (for placing that dancing Jedi), visual-inertial odometry (to estimate how the user is moving around the environment), and depth estimation (to determine when to occlude parts of the AR elements). While these are essential foundations for making AR work at all, what is critically lacking is a sense of the surrounding environment: an AI that can provide perception capabilities.

To augment the world, first we must understand it

The main difference between the virtual world of the Metaverse and a Real-World Metaverse is that in VR, everything is virtual and simulated. The VR system does not need to know what is what and where around the user, other than the safe movement boundary. A Real-World Metaverse, however, wouldn’t be very useful if all it could do is detect the surfaces of your table, floor, and walls without any other context or information. It needs a capable perception AI that can detect that there is a door three feet away from you, that the door is red, and that it has a digital keypad, so that information can be overlaid on the door in a color that contrasts well against red, and the keypad can be highlighted with AR elements that help the user understand how to operate it. If someone is outdoors, the perception AI would allow the AR system to know the extent of the sidewalk they are on, that there is a Starbucks with special deals coming up in 50 feet, and that there are buses driving by that could take the person where they need to go.
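As a toy illustration of one such decision, the sketch below picks an overlay color that contrasts with a detected surface color. The function name and the simple luminance rule are illustrative assumptions, not a description of how any particular AR system chooses colors.

```python
# Toy sketch: choose an AR overlay color that contrasts with a detected
# surface color (e.g. the red door above). Illustrative only.
def contrasting_color(surface_rgb):
    r, g, b = surface_rgb
    # Perceived luminance using ITU-R BT.601 weights
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    # Light overlay on dark surfaces, dark overlay on light ones
    return (255, 255, 255) if luminance < 128 else (0, 0, 0)

print(contrasting_color((180, 30, 30)))  # a red door -> white overlay
```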

AR overlays on various items in a grocery store. Image source: Hyper-Reality

All of the above AR use cases require a perception AI to make sense of what is what and where, so that the AR system knows what to augment, where, and with which meaningful information. Hence, to augment the world we first need to understand it, and the Real-World Metaverse/AR requires a powerful perception AI as an integral part in order to be meaningful and useful. Examples of the AI perception technologies needed include object detection and tracking, semantic segmentation, 3D reconstruction, gesture recognition, and depth estimation.
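As a rough, minimal sketch of what per-frame perception for AR might look like, the snippet below runs a pretrained Mask R-CNN from torchvision on a single camera frame. The model choice, the perceive function, and the 0.5 confidence threshold are assumptions for illustration, not the stack any particular AR product uses.

```python
# Minimal sketch: per-frame perception for AR using an off-the-shelf
# pretrained Mask R-CNN (object detection + instance masks) from torchvision.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def perceive(frame_rgb):
    """Return labels, boxes, and masks detected in one RGB camera frame."""
    with torch.no_grad():
        out = model([to_tensor(frame_rgb)])[0]
    keep = out["scores"] > 0.5          # hypothetical confidence threshold
    return {
        "labels": out["labels"][keep],  # what is in the scene
        "boxes":  out["boxes"][keep],   # where it is in the image
        "masks":  out["masks"][keep],   # pixel-level extent, useful for occlusion
    }
```

In a real AR pipeline these 2D detections would be fused with depth and tracking to anchor overlays in 3D, but even this much is what separates “detect a flat surface” from “know there is a red door with a keypad.”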

AI perception technologies are equally important in other, non-AR verticals. For example, self-driving cars need the same AI perception capabilities to operate the vehicle. However, a self-driving car typically has many sensors and compute servers to power its AI needs, while AR devices such as smartphones and glasses usually carry just a few camera sensors and a much weaker mobile-grade processor. Therefore, making AI perception run extremely efficiently, yet accurately, on mobile-grade devices is another problem that must be solved before the Real-World Metaverse can become a reality.
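There are many ways to squeeze a perception model onto mobile-grade hardware (pruning, distillation, dedicated NPU delegates, and so on). The sketch below shows just one common combination in PyTorch, 8-bit dynamic quantization plus TorchScript mobile optimization, and assumes a simple traceable backbone rather than any production perception network.

```python
# Minimal sketch: shrinking a PyTorch perception model for mobile-grade
# hardware via dynamic quantization and TorchScript mobile optimization.
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

def prepare_for_mobile(model, example_input):
    model.eval()
    # Quantize the heavy linear layers to 8-bit integers
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
    # Freeze the graph and apply mobile-specific graph optimizations
    scripted = torch.jit.trace(quantized, example_input)
    return optimize_for_mobile(scripted)

# Hypothetical stand-in backbone: a small MobileNetV3 classifier
backbone = torchvision.models.mobilenet_v3_small(weights="DEFAULT")
mobile_model = prepare_for_mobile(backbone, torch.rand(1, 3, 224, 224))
mobile_model._save_for_lite_interpreter("perception_mobile.ptl")
```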

Scene understanding AI, showing panoptic segmentation (top-right), depth estimation (bottom-left), and reconstructed 3D points (bottom-right). Image source: Google AI

Where is the Real-World Metaverse now?

In addition to the software technologies that still need to be developed, there are also many hardware limitations and challenges hampering the adoption of AR. Smartphones are simply not ideal for AR: the screens are too small, and users typically need both hands to hold the phone, which severely limits how much they can interact with the AR content. AR glasses are the better delivery devices, but they are currently still too chunky and need to be tethered to an external battery pack and/or compute device, not to mention that the field of view (FOV) of every pair of AR glasses out there is still very small, which greatly degrades the user experience. While this list goes on and on, fortunately there are megacorps (e.g. Meta, Apple, Google, Amazon, and Microsoft) already investing heavily in resolving these hardware obstacles. Many of them have also publicly announced their plans and timelines for releasing next-gen AR glasses. While highly promising, these developments are still many years away from producing a set of AR glasses that can deliver the needed user experience.

Automobiles, on the other hand, are an excellent platform for delivering AR content to the driver and the passengers with a very good user experience.

On the hardware side, most cars today come with large infotainment displays in the cockpit, which are ideal screens for AR and are hands-free; many automakers also already offer heads-up displays (HUDs) as an option, which can further improve the AR user experience for drivers. Moreover, whether AR is displayed on a HUD or on an infotainment screen, the driver’s eyes are never off the road, because the AR is overlaid on a live camera feed in real time, unlike current 2D maps, whose cartoon-style graphics take the driver’s eyes completely off the road.
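As a rough sketch of what rendering such an overlay could look like, the snippet below blends a semi-transparent route ribbon onto a camera frame with OpenCV. The function name and the assumption that the route has already been projected into image coordinates (by the perception and localization stack) are illustrative, not a description of any specific in-car system.

```python
# Minimal sketch: blend a semi-transparent navigation ribbon onto a live
# camera frame with OpenCV. Assumes the route is already projected into
# image (pixel) coordinates by the perception/localization stack.
import cv2
import numpy as np

def draw_route_overlay(frame_bgr, route_pixels, color=(0, 200, 255)):
    """Return the frame with a translucent route polyline drawn on it."""
    overlay = frame_bgr.copy()
    pts = np.asarray(route_pixels, dtype=np.int32)  # hypothetical Nx2 polyline
    cv2.polylines(overlay, [pts], isClosed=False, color=color, thickness=20)
    # 40% overlay, 60% original frame keeps the road clearly visible
    return cv2.addWeighted(overlay, 0.4, frame_bgr, 0.6, 0)
```

The same drawing pass could highlight lanes or nearby points of interest once the perception AI has located them in the frame.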

Furthermore, many additional AR use cases can be built on top of navigation: features that help drivers operate their vehicles more safely, such as lane highlighting, as well as passenger experiences such as highlighting nearby points of interest, businesses, and special deals. Such enhanced in-cockpit AR experiences will become even more important once a car drives itself, connecting the driver and the passengers to their surrounding environment.

Therefore, the automotive in-cockpit experience is where the Real-World Metaverse is especially well suited for its initial deployment. Automakers should start embracing the Real-World Metaverse to transform how people navigate, interact with, and experience the world.


About the author: Chen-Ping Yu is Co-Founder and Chief Technology Officer at Phiar Technologies, Inc.
