Google Can Now Recreate Landmarks In 3D Using Only Crowdsourced Photos

Turning real-world locations into 3D models has historically been a challenge for humans, especially when photorealistic accuracy is the goal. But what if we told you that Google is now on its way to automating that same 3D modeling process, and with much improved results?

Well, the researchers feed a neural network crowdsourced photos of the desired location, and the system then replicates the landmark and its lighting in 3D. The process builds on neural radiance fields (NeRF), a technique that extracts 3D depth data from 2D images by figuring out where light rays terminate. NeRF is considered a mature technique that can, on its own, create textured 3D models of landmarks.
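To give a rough sense of how a NeRF-style renderer turns those ray queries into an image, here is a minimal Python sketch of the standard volume-rendering (compositing) step. The function and variable names are our own illustration, not Google's code: the network's predicted densities and colors at samples along a camera ray are alpha-composited into a single pixel.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Standard NeRF-style volume rendering along a single camera ray."""
    # densities: (N,) volume density (sigma) at each sample along the ray
    # colors:    (N, 3) RGB predicted by the network at each sample
    # deltas:    (N,) distance between consecutive samples
    alpha = 1.0 - np.exp(-densities * deltas)                      # opacity of each sample
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = transmittance * alpha                                # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)                  # final pixel colour
    return rgb, weights

# Toy usage: 64 random samples along one ray
densities = np.random.rand(64) * 5.0
colors = np.random.rand(64, 3)
deltas = np.full(64, 0.05)
pixel, _ = composite_ray(densities, colors, deltas)
```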

Google's NeRF in the Wild (NeRF-W) system goes a few steps further. It first takes "in-the-wild photo collections" as input, expanding the computer's ability to see a landmark from different angles. The system then identifies the underlying structures, separating out photographic and environmental variations such as image exposure, scene lighting, post-processing, and weather conditions. It also accounts for shot-to-shot differences, for instance when people appear in one picture but not in another. The end result is a blend of the scene's static elements and its transient ones, rendered as volumetric radiance.
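To illustrate what that static/transient split might look like, here is a simplified sketch in the spirit of the paper. The function and its arguments are illustrative assumptions rather than Google's actual implementation: a persistent "static" field models the landmark itself, a per-photo "transient" field absorbs passers-by and other clutter, and both are composited under a shared transmittance.

```python
import numpy as np

def composite_static_transient(sigma_static, rgb_static,
                               sigma_transient, rgb_transient, deltas):
    """Sketch of a NeRF-W style split between a static and a transient field."""
    alpha_s = 1.0 - np.exp(-sigma_static * deltas)
    alpha_t = 1.0 - np.exp(-sigma_transient * deltas)
    # Light is attenuated by both components as it travels along the ray
    trans = np.cumprod(np.concatenate(([1.0],
                       (1.0 - alpha_s[:-1]) * (1.0 - alpha_t[:-1]))))
    rgb = (trans[:, None] * (alpha_s[:, None] * rgb_static
                             + alpha_t[:, None] * rgb_transient)).sum(axis=0)
    return rgb

# At render time, the transient field can simply be zeroed out to get a
# clean, clutter-free view of the landmark.
```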

As a result, NeRF-W can render 3D models of landmarks from multiple angles without them looking artificial, while its handling of radiance guides the scene's lighting and shadows so they stay consistent.

NeRF-W also treats image-to-image object differences as an uncertainty field, either eliminating or de-emphasizing them. The standard NeRF system, by contrast, renders those differences as cloudlike artifacts because it does not separate them from the underlying structures when ingesting the images.
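One way to picture that uncertainty field is as a per-ray weight on the training loss. The sketch below is a hypothetical illustration of the idea (names and constants are placeholders, not the paper's exact formulation): rays the model flags as uncertain, because they are likely covered by transient objects, contribute less to the color error, while a log term keeps the model from declaring every ray uncertain.

```python
import numpy as np

def uncertainty_weighted_loss(pred_rgb, true_rgb, beta, beta_min=0.03):
    """Photometric loss down-weighted by a learned per-ray uncertainty."""
    beta = beta + beta_min                                   # floor keeps the loss finite
    squared_error = np.sum((pred_rgb - true_rgb) ** 2, axis=-1)
    return np.mean(squared_error / (2.0 * beta ** 2) + np.log(beta))
```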

A comparison video of standard NeRF results against NeRF-W shows how the new system's 3D landmarks could enhance the experience for virtual reality and augmented reality fans: they would finally be able to see complex architecture just as it looks in real life, including variations in weather and time of day.

Google is not the only company using actual photos for 3D modeling. Intel researchers have explored a similar idea, generating synthesized versions of real-world locations from multiple photos of a landmark and using a recurrent encoder-decoder network to interpolate the angles that were not captured.

Nevertheless, while Intel's system outperforms standard NeRF in pixel-level sharpness and temporal smoothness, it falls short of NeRF-W's variable lighting capabilities and its ability to recreate real locations from randomly sourced photos.

Google's NeRF-W is discussed in detail in a paper, which you can access here.


