Google’s AR Search to Get Depth Perception, Object Blending

If you have one of Google’s new ARCore-supported phones, you have probably tried the AR search feature, which uses your camera to place virtual objects into the frame. For example, the Santa Search feature lets you see a cartoon Santa Claus standing among whatever objects you point your camera at.

Anyone who looks at this would agree that it is pretty cool, but it still left a lot to be desired. When you added something to your AR scene, it would float on top of real objects and never quite seem like part of the real world.

Now, however, things are going to look very different when you use the AR search feature. Using an algorithm that estimates depth from the motion of objects within the frame as well as the motion of the camera itself, you now get significantly improved depth perception while using AR. The really amazing thing is that this works with a single camera, and it has finally made object blending possible.
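The core idea behind single-camera depth estimation is motion parallax: as the camera moves, nearby objects shift across the frame more than distant ones. Google has not published the exact algorithm behind this feature, so as an illustrative sketch only, here is the basic triangulation that relates a tracked feature's pixel shift to its depth (the function name and parameters are hypothetical):

```python
# Illustrative sketch, NOT Google's actual algorithm: depth from motion
# parallax. If the camera translates sideways by `baseline_m` meters
# between two frames, a tracked feature's apparent pixel shift
# (disparity) is inversely proportional to its depth:
#     depth = focal_length_px * baseline_m / disparity_px

def depth_from_parallax(focal_length_px: float,
                        baseline_m: float,
                        disparity_px: float) -> float:
    """Triangulate the depth of a feature from its pixel shift."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two frames")
    return focal_length_px * baseline_m / disparity_px

# Nearby objects shift more between frames than distant ones, so a
# large disparity means a small depth, and vice versa:
near = depth_from_parallax(1000.0, 0.05, 50.0)  # big shift -> 1.0 m
far = depth_from_parallax(1000.0, 0.05, 5.0)    # small shift -> 10.0 m
```

Repeating this for many tracked features as the phone moves yields a rough depth map of the scene, which is what makes the blending described below possible.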

Screenshot: 9to5Google.

Also referred to as occlusion, this means that any object you render with this technology blends into its surroundings far more convincingly. Instead of floating in front of real objects, it can appear behind them, and when it moves out from behind an object, that looks a lot more natural as well. It is the sort of advance that makes you wonder just how quickly this technology is maturing, and how soon simulated scenes might become hard to tell apart from reality.
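At its simplest, occlusion comes down to a per-pixel depth test: the virtual object is drawn only where it is closer to the camera than the real scene. As a minimal sketch (assuming a depth map is already available, e.g. from the motion-based estimation above; the function and array names are hypothetical):

```python
import numpy as np

def composite_with_occlusion(frame: np.ndarray,
                             scene_depth: np.ndarray,
                             obj_color: np.ndarray,
                             obj_depth: np.ndarray) -> np.ndarray:
    """Blend a virtual object into a camera frame with occlusion.

    The object's color is written only at pixels where the object is
    nearer than the real scene; everywhere else the real pixel wins,
    so the object appears to pass behind foreground objects.
    """
    visible = obj_depth < scene_depth  # per-pixel depth test
    out = frame.copy()
    out[visible] = obj_color[visible]
    return out
```

For example, if a virtual object sits at 3 m and a real wall occupies part of the frame at 1 m, the pixels covered by the wall keep the camera image while the rest show the object, which is exactly the "appearing behind things" behavior described above.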

Read next: Google Translate will soon be offering real-time translation like Google Assistant