How Adobe Wants to Turn Flat 360-Degree Videos Into True Virtual Reality (EXCLUSIVE)


Hardly a day has gone by this month without the announcement of a new virtual reality (VR) camera system. Facebook, Google and GoPro all aim to make VR more immersive with new cameras, some of which won’t be commercially released for the foreseeable future. However, researchers at Adobe believe that you may not need new camera hardware at all for a big leap in immersion.

Adobe’s head of research Gavin Miller will present new cutting-edge technology at NAB in Las Vegas this Tuesday that could one day be used to turn flat, monoscopic 360-degree videos shot with consumer-grade spherical cameras into fully immersive VR video, complete with the ability to lean into the video, a capability industry insiders call six degrees of freedom (6DoF).

The difference between monoscopic 360-degree video and VR experiences offering six degrees of freedom is especially important for users of high-end VR headsets like the Oculus Rift and HTC Vive. These headsets offer room-scale tracking, which means that the headset knows where in the room the viewer is, accurately translating a motion like “leaning forward” into corresponding visuals.

Doing this is relatively easy with computer-generated imagery, but giving the viewer the freedom to actually move around in a recorded video requires cutting-edge image capture technology. Light field camera systems, for example, like the one developed by Lytro, can cost hundreds of thousands of dollars.


However, Adobe’s scientists have figured out a way to deduce crucial information about a room by analyzing the movement of a camera through something they call a “structure-from-motion” algorithm (for the technically inclined: here’s a research paper on the approach).
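
As a rough illustration of the general idea (and not Adobe’s actual pipeline), here is a minimal structure-from-motion sketch in Python using OpenCV: it matches features between two frames from a moving camera, recovers the relative camera motion, and triangulates a sparse set of 3D points. The function name and the assumption of known camera intrinsics `K` are hypothetical and for illustration only.

```python
# Minimal structure-from-motion sketch (illustrative, not Adobe's method):
# recover relative camera motion between two video frames and
# triangulate a sparse 3D point cloud from matched features.
import cv2
import numpy as np

def sparse_structure_from_motion(frame1, frame2, K):
    """frame1/frame2: grayscale frames; K: 3x3 camera intrinsics (assumed known)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Match features between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix, then recover the camera's rotation
    # and (up-to-scale) translation between the two viewpoints.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier matches into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean coordinates
    return R, t, pts3d
```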

That data can then be used to generate new perspectives to account for different viewpoints, giving viewers the ability to truly lean into a video that previously wasn’t even 3D. The technology could also be used to stabilize 360-degree video, or to generate different versions of a video tailored to viewers with varying levels of motion comfort.
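
To see why recovered depth enables that “lean in” effect, here is a toy sketch, again not Adobe’s renderer: given an image, a per-pixel depth map and camera intrinsics, pixels are back-projected to 3D, the virtual camera is shifted by a small offset, and the points are projected back into a new view. The function and parameter names are hypothetical.

```python
# Toy forward-warp to a slightly shifted virtual viewpoint, assuming
# per-pixel depth is already known. Illustrative only.
import numpy as np

def reproject_to_new_view(image, depth, K, offset):
    """image: HxWx3, depth: HxW (meters), K: 3x3 intrinsics,
    offset: 3-vector camera translation (e.g. a small lean forward)."""
    H, W = depth.shape
    out = np.zeros_like(image)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # Back-project every pixel to a 3D point using its depth.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    Z = depth

    # Shift the camera by `offset` and project the points back to 2D.
    Xs, Ys, Zs = X - offset[0], Y - offset[1], Z - offset[2]
    Zs_safe = np.where(Zs > 1e-6, Zs, 1e-6)  # avoid division by zero
    u2 = np.round(fx * Xs / Zs_safe + cx).astype(int)
    v2 = np.round(fy * Ys / Zs_safe + cy).astype(int)

    valid = (Zs > 1e-6) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    out[v2[valid], u2[valid]] = image[v[valid], u[valid]]
    return out  # holes remain where no source pixel maps; real systems fill them in
```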

There’s one major caveat for this approach: Adding six degrees of freedom to a flat 360-degree video only works if the camera does actually move. “We assume the camera moves to infer the depth,” explained Miller. “If the camera rotates but does not move side to side we cannot compute depth but can stabilize the rotation.”
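
Miller’s stabilization point can also be sketched, assuming the per-frame camera rotation has already been estimated somehow: an equirectangular 360-degree frame can be re-sampled with that rotation to cancel camera shake. This is a generic technique rather than Adobe’s implementation, and `R` here is an assumed, externally estimated rotation.

```python
# Sketch of rotational stabilization for a monoscopic 360-degree frame.
# Each output pixel's viewing direction is rotated by the camera's
# estimated rotation R and re-sampled from the recorded frame.
import cv2
import numpy as np

def stabilize_equirect(frame, R):
    """frame: HxWx3 equirectangular image; R: 3x3 estimated camera rotation."""
    H, W = frame.shape[:2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Pixel -> spherical angles (longitude/latitude) -> unit direction vector.
    lon = (u / W) * 2 * np.pi - np.pi
    lat = np.pi / 2 - (v / H) * np.pi
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)

    # Rotate every viewing direction into the recorded camera's frame.
    rotated = dirs @ R.T

    # Unit vector -> spherical angles -> source pixel coordinates.
    lon2 = np.arctan2(rotated[..., 0], rotated[..., 2])
    lat2 = np.arcsin(np.clip(rotated[..., 1], -1, 1))
    map_x = ((lon2 + np.pi) / (2 * np.pi) * W).astype(np.float32)
    map_y = ((np.pi / 2 - lat2) / np.pi * H).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)
```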

Still, the approach goes to show that building complex cameras isn’t the only solution for advances in VR video. Computer vision algorithms are just as important, with Miller even arguing that expensive camera hardware may not be necessary at all to generate fully immersive VR video. “If it’s there it’s nice — if it’s not there it’s not the end of the world,” he said.

Miller is going to present this research Tuesday at NAB on a panel about next-generation image making, which also features Light Field Lab CEO Jon Karafin talking about next-generation holographic display technology.

Miller is also going to use the panel to talk about using deep learning to replace the sky in an image with the sky from a different image, and about visual search that can find pictures not only based on their content, but also on the spatial arrangement of individual elements within them, making it possible to search for images with dogs on the left side of their owner. Adobe previously showed both projects at its Max conference.

All of these projects are part of Adobe’s advanced research, and may not necessarily make it into one of the company’s products in this form. However, it’s easy to see how at least some of this technology could find its way into Adobe’s editing tools, especially as the company is looking to find its role in virtual reality production and monetization.