
Google Live View: when Artificial Intelligence meets Augmented Reality

A few months ago, Google announced the discontinuation of the Street View app, which provided street-level visualization to help users see the real look and feel of streets, places, and neighborhoods. Back in 2019, the first beta version of Live View had been released to provide more immersive, AI-based navigation.
Finally, on February 8, Google announced the release of a more definitive version of Google Live View.

You may be wondering why a company dedicated to content for physical products, like Content2Sell, spends time talking about a digital product like Google Maps. Well, the truth is that this is an evolutionary leap that shows how the advances in Augmented Reality and image processing tools are turning out to be genuinely disruptive.
And that goes way beyond digital products.

Google Live View’s new features

This is not just the combination of billions of Google Street View and aerial images to create a new digital vision of the world – which is already great in itself.

All this is integrated with the useful information Google already provides to its daily users: traffic, weather, how busy a place is, or a store’s details – built with Google My Business and user feedback. What used to be a mere display of information becomes a useful, interactive experience once you add the phone’s ability to sense and understand its position and environment.

With sustainability becoming central to our daily decisions (partly due to increasing regulation), Live View will also include new navigation functions to find the least polluting routes from one place to another, based on real-time data.

Also, with electric vehicles on the rise, critical information such as regular and fast-charging stations along the way, plus battery status monitoring, will definitely contribute to their development and expansion.

IMMERSIVE is the word

The Street View app may have been discontinued, but it’s coming back in style. With Google Live View, we will be able to go back on the streets and enter places to see them from the inside, with details that were unimaginable before.

Again, all this has been in Beta-testing mode since late 2019, and every development since opens new and brighter integrations and capabilities.

Something as basic as a time slider connected to Google’s weather data adds a weather animation to the city’s general view, changing how we see what the weather is or will be like anywhere in the world. For now, though, this is only available in five cities: London, Los Angeles, New York City, San Francisco, and Tokyo.

What makes this amazing is the integration of ARCore with Google Lens’ visual recognition, and Google Maps’ database.

Intelligent Graphics Processing Units

ARCore is Google’s platform for building Augmented Reality experiences on phones. It allows the phone to sense, and actually understand, its position and environment in the real world: from surfaces and locations to motion and lighting.
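
To make that a bit more concrete, here is a minimal sketch (not Google’s own code, and the helper name placeAnchorOnTap is ours) of the kind of call a developer makes against ARCore’s public API to pin virtual content to a surface the phone has detected:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Hypothetical helper: anchors virtual content to the real-world surface under a screen tap.
fun placeAnchorOnTap(session: Session, tapX: Float, tapY: Float): Anchor? {
    // update() pulls the latest camera frame and refreshes ARCore's understanding
    // of device motion, detected surfaces, and ambient lighting.
    val frame: Frame = session.update()
    if (frame.camera.trackingState != TrackingState.TRACKING) return null

    // hitTest() casts a ray from the tap coordinates into the scene and returns
    // intersections with the geometry ARCore has detected so far.
    for (hit in frame.hitTest(tapX, tapY)) {
        val trackable = hit.trackable
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            // An Anchor keeps content locked to this physical spot even as the
            // phone moves and ARCore refines its map of the space.
            return hit.createAnchor()
        }
    }
    return null
}
```

Everything Live View layers onto the camera feed (arrows, labels, place markers) ultimately rests on this kind of tracking and anchoring.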

All this becomes possible thanks to photorealistic 3D representations created by Artificial Intelligence from millions of 2D images. As we explained in our post on the evolution of product rendering, Neural Radiance Fields (NeRF) have evolved to the point where they can calculate how the light in a space, or on a product, changes, taking textures, location, materials, and light sources into account.
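
For the mathematically curious, the core of the standard NeRF formulation (simplified here from the original research, and not anything specific to Google’s pipeline) is a volume-rendering integral: the color seen along a camera ray r(t) = o + t·d is computed as

```latex
% Expected color along a camera ray between the near and far bounds t_n and t_f:
% sigma = density predicted by the network, c = view-dependent color,
% T(t)  = transmittance, i.e. how much light survives up to depth t.
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt,
\qquad
T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds \right)
```

Because the network learns a density and a view-dependent color for every point in space, it can render the scene from new viewpoints under consistent lighting, even though it was only ever shown ordinary 2D photos.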

So, if stores and places can now be recognized and located just as on a regular map, we can imagine the potential all this brings to physical products.

In a nutshell, the merger of Google’s Artificial Intelligence and Augmented Reality opens up endless opportunities. In many ways, this is the biggest step toward the Metaverse since we first became familiar with the concept, because the centralization of services and functions is an essential element for it to exist.

But AI is not infallible

Google recently announced Bard, its AI-based competitor to ChatGPT – which, in turn, is about to be integrated into Bing and Microsoft Office.

According to NVIDIA’s Jim Fan, Google Bard could rapidly reach a billion users, considering all the online information it feeds on. Artificial Intelligence is not infallible, though. It may be able to create 360° models from a single 2D image, but its undisputable potential is not risk-free, given the large amounts of false, incomplete, and biased data out there.

A huge user base and the honest intention of providing true and useful information may not be enough. Remember Google ranks content based on clicks, links, and keywords people bid on.

What comes next?

For some time now, we have been able to locate stores on a map, browse their catalogs online, purchase products directly, leave reviews, and feed Google’s database to help other people. That’s what Google is all about, and it’s not hard to imagine what comes next.

We’ve also seen how important it is for brick-and-mortar retailers to go digital to stay alive in the phygital world. And that’s precisely what’s ahead.
Brands are now using omnichannel strategies and virtual spaces to shorten the distance between seller and buyer, build a strong brand reputation, and make history together.

Remember DIOR’s v

Avatars and fantastic scenarios make for a highly marketable image of the future. Alphabet is getting closer to being THE Western super app, because it is centralizing functions like no one else: payments, all sorts of free services, and customer data. What we don’t know is how long it will take for the links to form a chain.
It’s just a matter of time: first, for Google Live View to be rolled out in more cities, and then for more actors to make their move in the midst of the AI race.

Get your content audited FOR FREE!

kenneth