
Adobe's Path To Entering The Virtual Reality Story


This is a three-part monthly series about the role software will play in Virtual Reality storytelling, seen through the lens of Adobe Research and creators.

Part 1: Inside Adobe | Part 2: The Mettle for VR | Part 3: Inside Your Head(set)

You stand near a white picket fence, feet away from a cliff at Pigeon Point Light Station on the Pacific coast south of San Francisco. Sightseers mill and read signs in bright blue daylight, gazing at the 145-year-old matte-white lighthouse rising over clapboard buildings. You turn around to see waves lapping on the rocky shore, gulls swooping overhead, streaky horizon clouds hovering beneath the noon sun on this pleasant California day. Wait. Nope. Now you stand on a small airport tarmac, watching four people examine a bright yellow vintage propeller biplane.

That jarring transition needs to be fixed. You pull off your Oculus headset and now you are, actually always were, in an Adobe SF conference room.

“What have you learned about transition?” you ask Brian Williams, senior computer scientist for Premiere Pro at Adobe.

“Uhhh,” he hesitates, as his colleagues in the conference room chuckle. Transitioning a half-globe of visual information is, it turns out, pretty tricky to do well. “Okay, so, 80 percent of most effects are dissolves, color and titles. A horizontal wipe will work; a vertical wipe is going to look funky.”

“Star wipes,” jokes Laura Williams Argilla, Adobe director of Services and Workflows for Creative Cloud Video.

“Oh God,” Williams says.

Bronwyn Lewis, Adobe product manager for Video Editing, brings up Corridor Digital’s “Where’s Waldo” VR video. “They had this wipe, similar to a diagonal wipe, from the sky.”
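
Why direction matters comes down to how a 360 frame is stored. Most workflows keep it as an equirectangular image, with longitude on the horizontal axis and latitude on the vertical one, so a horizontal wipe sweeps through longitude and reads as a clean seam rotating around the viewer, while a vertical wipe sweeps through latitude and smears near the poles. Here is a minimal sketch of the well-behaved case, assuming frames arrive as NumPy arrays (horizontal_wipe is an illustrative name, not a Premiere effect):

```python
import numpy as np

def horizontal_wipe(frame_a, frame_b, progress):
    """Wipe between two equirectangular 360 frames.

    Sweeping along the image's x axis moves the seam through
    longitude, so it appears as a clean vertical edge rotating
    around the viewer. Sweeping along y (a vertical wipe) would
    move through latitude and smear badly near the poles.
    """
    h, w, _ = frame_a.shape
    seam = int(progress * w)           # wipe progress 0.0..1.0 -> column
    out = frame_a.copy()
    out[:, :seam] = frame_b[:, :seam]  # reveal frame_b up to the seam
    return out
```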

You could not have had this conversation with the Creative Cloud Video Team, or very many people at all, even two years ago.

“Making VR content actually requires a huge amount of technical skill,” Williams Argilla says. “That often conflicts with creative intent, because you have to have both skill sets. So Brian is making sure that the ability to create content doesn’t exclude people who are more creative than technical.”

For filmmakers, the blending of creative and technical aptitude has been beneficial. But there’s a limit, one that involves more than the kind of rigs you use or the headsets they will eventually populate. At the center of all of it is software. Until recently, the pioneers of Virtual Reality storytelling, especially live action, were using the digital equivalent of baling wire and duct tape to tell their stories. For the Video Team, hearing multiple times that video creators were using Premiere to edit VR was what spurred them into action. Turns out it was not the easiest sell.

“I think there’s a lot of hesitance to invest in new platforms,” Williams Argilla says. “People remember everyone running toward 3D TV and that never took off.”

That meant the team, especially Brian Williams, solved problems in the crevices of the workday and well beyond.

Last April, Adobe announced it would release VR editing capabilities in its Premiere Pro software. The new capabilities include auto-detection of VR footage and affordances for the editor to assign properties to sequences, track the head-mounted display and seamlessly publish to specific platforms, such as YouTube and Facebook. You can’t blame any company for wondering about the future of VR.
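
The auto-detection piece is less mysterious than it sounds: spherical video is typically flagged by metadata in the file, and monoscopic equirectangular footage has a telltale 2:1 aspect ratio. A hedged sketch of that kind of check, an assumption about how such detection could work rather than Premiere’s actual internals:

```python
def looks_spherical(width: int, height: int, has_spherical_tag: bool) -> bool:
    """Heuristic spherical-video detection: trust an explicit
    metadata flag first, then fall back on the 2:1 aspect ratio
    typical of monoscopic equirectangular footage."""
    if has_spherical_tag:
        return True
    return abs(width / height - 2.0) < 0.01

# e.g., a 3840x1920 clip with no metadata still reads as spherical
assert looks_spherical(3840, 1920, has_spherical_tag=False)
```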

So what changed? “I think it’s when Brian started making stuff,” she says. “What did it for me was the enthusiasm around VR and spherical content from people who aren’t beholden to other companies, like big media companies. It’s that kind of groundswell.”

From YouTubers to Hollywood start-ups, there is a broad and independent coalition who believe you will put on a headset, or perhaps Augmented Reality lenses, to consume a new kind of story. This can scale quickly, especially because a smartphone and an affordable, decent headset are all you need to jump into that world.

But to even get this far, the team had to be comfortable with a whole list of ifs.

If audiences are going to be interested in Virtual and Augmented Reality stories, beyond the initial novelty, really good narratives must draw them in like any other media.

If filmmakers are going to create those great immersive stories, they need to put their energies into inventing new possibilities for the headset.

If that is going to succeed, an even wider range of creators, both professionals and enthusiasts, must experiment with, and ultimately deliver, content that audiences need to consume and want to discuss.

If creators are going to do that, they need intuitive software that will enable experimentation and iteration close to real-time and at high capacities.

If all of that happens, software will become, as it so often does, a quiet center of the Virtual and Augmented Reality revolution. Adobe and its partners would want to be there for that, of course. As one of the leaders of a powerful crossover market (from enthusiast to professional editors), easing users from the flat screen to a spherical one would ensure the company keeps pace with a rapidly changing creative need.

But you know, that’s only a small slice of the “ifs.”

You have been reading a lot about VR. If you’re a longtime tech nerd, you’ve been reading about it, on and off, for nearly three decades now. If you think in terms of panoramic imagery, you’re looking back centuries.

“The problem was that few people could experience it,” Jaron Lanier, often dubbed the “founding father” of VR, told New Scientist about the early days of the ’80s and ’90s. “The decent set-ups were insanely expensive.”

But Lanier, speaking four years ago, could also foresee this current set of hurdles: “There are two problems: hardware and software. The hardware problem is solving itself as the costs of components come down naturally. The software problem is of a different order. Virtual worlds have to respond quickly enough for human users, yet need to be shared by multiple people connecting over imperfect networks. It will take a while to sort that out.”

Gavin Miller, head of Adobe Research, was trying to sort some of that out in the early 90s. He was, in fact, working on some of the very issues Lanier is referring to in the quote above. But there’s more than one hill to climb when stepping into a new visual reality.

“For these media to really catch on, there needs to be an authoring process that can scale up to the long tail of content,” Miller says. “There has to be a distribution medium and a business model. And there have to be devices for consuming it that are an order of magnitude better than traditional media, because traditional media is already great. And maybe even more optimal for a small screen. This is another shot at it. And doing this for video is a natural follow-on. So, 20 years later, it’s coming back for another shot.”

Gavin Miller, head of Research. Courtesy of Adobe.

Miller understands that dreaming of future worlds is fine, and has been a cottage industry for writers for generations. It’s something that the robotics hobbyist (as in snake robots, so only click if you want to merge two fears), creative writer and longtime researcher has spent plenty of time imagining. But when you work for a corporation, at some point the living, breathing customers of today must matter.

“One thing that’s hopeful is there are lots of players, rather than one or two people making an investment,” he says. “I think the twin evolution of 360 photography and 360 viewers together may mean that one survives in its current form. Ultimately, in the long run, in AR in particular, there will be these alternate realities that are geo-tied to the real world. How you experience it will be multi-modal based on what devices you have on you. ... If we cross the chasm to the point where this is really going to be huge, then there will be this ‘meta-verse,’ or however you want to name it, which is published and populated by a large number of companies and experienced by regular devices. The question is, ‘Is that time now?’”

The future of technology, so say the sages, is invisibility. You might think of the “space” as an “infosphere,” as philosopher Luciano Floridi termed it, in which we live as a “transmediated self,” in the words of J. Sage Elwell. The point being that the digital reality and the physical reality are merging. Technology has cozied up to us, surrounds us and, at higher rates, might even enter us. Virtual reality feels like the motion goes the other way: We enter technology.

As with all storytelling, the effect rests on a mental trick that tells your brain, “You are here.” For game developers and other Computer-Generated Imagery storytellers, the trick rests on verisimilitude: quasi-worlds must feel real even as their possibilities expand beyond our normal powers. For live action VR to take off, the trick is a little different. It’s about replicating real spaces, from the spectacular to the everyday, and rendering them real again. And then potentially doing something more than storytelling: story-worlding.

You might be surprised how much software engineers contemplate these things.

Take “presence” as an example. It is among the most common terms you hear when people talk about VR. Presence, the feeling of “being” in a mediated space, is the wow factor of virtual reality. It leads to another key term, empathy, which many storytellers believe will make audiences stay. But there are challenges with presence that cameras and headsets cannot solve alone.

Brian Williams shows you a feature that has been in Premiere for a long time, called the Offset effect, which pans the image within a clip, changing its center. It’s a somewhat obscure effect, usually used for a cool transition when mixed with the same clip and some level of transparency. But it turns out it works very well for 360-degree images, allowing editors to change the “True North” of the image, the place a viewer looks first.
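
In an equirectangular frame, that recentering amounts to a horizontal pixel shift with wraparound, since the frame’s left and right edges meet behind the viewer. A minimal sketch, assuming NumPy frames as before (set_true_north is an illustrative name, not the Offset effect’s API):

```python
import numpy as np

def set_true_north(frame, yaw_degrees):
    """Recenter an equirectangular 360 frame, in the spirit of
    Premiere's Offset effect: shifting columns with wraparound
    rotates the whole sphere, changing where the viewer looks first."""
    h, w, _ = frame.shape
    shift = int(w * yaw_degrees / 360.0)   # yaw angle -> pixel columns
    return np.roll(frame, -shift, axis=1)  # wraps cleanly at the seam
```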

You are playing with the offset control in Premiere, in the Adobe SF conference room, while another person in the room wears the headset to see what you’re doing. When you rapidly change the offset, she yells, “Oh geez!” It’s like you’ve pulled the rug out from under her feet. Everybody laughs.

“I’ve never felt such power in my life,” you say.

“This is why, most of the time, the camera has to stay stationary,” Williams says.

We all have to try it now, putting on the headset and whipping the offset around. It’s like being spun. This is an intense sense of presence, and it reminds you, as the editor, how much everything has changed. In traditional video, a shaky picture and a rapid series of cuts create a sense of orderly disorientation, but in VR it’s like falling, or worse, freaking out. So VR storytelling might need to slow things down.

An even bigger issue is “framing,” something that was once entirely in the storyteller’s hands. True North is a way to begin the journey for the viewer, but the rest is up to you. Presence, especially as a novelty, comes with the agency to look in any direction. Where that agency ends, and storytelling begins, is an interesting tension for the editor.

“If there are critical portions of the plot that are visual,” Miller says, “and you haven’t looked that way yet to experience them, we’re going to need linger or idle modes. The scene still seems live, the water still flows, the trees still sway, but it will have a cognitive model of you to say, ‘You’ve experienced that plot point, so now we can move on.’”
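
Miller is describing a design idea, not a shipped feature, but the core check is easy to sketch: treat each plot point as a direction on the sphere, and hold the scene in its idle loop until the viewer’s gaze has lingered inside a cone around that direction. A hypothetical illustration, with all names and thresholds invented here:

```python
import numpy as np

def gaze_lingered(gaze_dirs, target_dir, cone_degrees=30.0, min_frames=60):
    """Return True once enough gaze samples (unit 3-vectors) fall
    within a cone around the plot point's direction -- the signal
    an idle mode could use to decide it is safe to advance."""
    cos_limit = np.cos(np.radians(cone_degrees))
    hits = sum(np.dot(g, target_dir) >= cos_limit for g in gaze_dirs)
    return hits >= min_frames
```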

This leads to another tension. Presence, in the VR sense, happens with an augmentation, the headset. But story, in itself, has transportive powers too. You don’t even need visuals. A good book, blended with the powerful storyworld builder that is your brain, can take you to places you’ve never been and may not even exist. Do these two kinds of transportation compete?

“It’s going to require a retooling of some sort,” Miller answers. “Maybe the right way to think of it is that when you’re thinking of a story, it doesn’t come out linearly, right? You think of forces, and things you want to happen, then internally we translate that into this linear string. We’re going to have to come up with new representations for those original thoughts and the arc that you want the recipient to go through, in terms of seeing the characters evolve.”

But there’s that troublesome question of your own movement as the audience member. If a character who happens to interest you walks out of view, or if there’s action you’d like to be closer to, you realize that you can’t move. So, at least for now, VR storytellers must either create a sense of forward motion or divert our attention away from the need.

Editing matters in all of these challenges. Leaving that sphere for a flat-earth editor each time you need to make a cut limits the flow of thought. For now, some companies have at least allowed editors to keep the headset on, so the toggle between flat and global is less onerous. But one researcher at Adobe has been thinking about another idea.

Adobe MAX 2016 was a spectacle in general: the giant exhibition floor, the concert by Alabama Shakes, keynote sessions with creative giants such as Quentin Tarantino, photojournalist Lynsey Addario, sculptor Janet Echelman and designer Zac Posen. The Sneaks were another order of spectacle. Yes, there was celebrity on hand, with Jordan Peele as the MC, but the glitz and glamor generally gave way to the geeks, who awed and inspired with experimental new tools and digital tricks. When one wowed -- such as putting words into people’s digitized mouths simply by typing those words -- the crowd went wild.

Stephen DiVerdi, a senior research scientist, walked before that crowd of several thousand and, in some ways, promised them an experience many never knew they might need someday. He presented #CloverVR, which gives you the power to edit inside the 360-degree environment.

A few months later, back in San Francisco, you are in a headset working with the same video DiVerdi edited at Sneaks. You try the hand paddles first, but find them difficult. With an old-fashioned mouse, it becomes quite easy. And eventually, it seems inevitable that you would simply use your hands. In a way, that’s exactly the point. In fact, it’s the next major step in computing.

“Because the learning curve is going to be zero,” DiVerdi says. “You’re going to put the headset on, then you’re in a new world that’s just like the existing world. So the nice thing about using the hand controller is that it’s trivial to learn.”

But that doesn’t translate to turning everyone into geniuses.

“Things that are still difficult to do,” DiVerdi says, “are still going to be difficult to do in VR.”

Like telling a human story, one that taps a unique vision. There is no headset, in the near future, that can change the way the human mind computes creativity.

“But it means we don’t have to have all the interfaces we have right now,” DiVerdi says. “You start to take for granted all of these levels of indirection that stand between us and what we do with the computer. But using the mouse is weird to learn. You don’t have to do that in VR.”

Intellectually, you understand it, but it’s still hard to envision this kind of freedom in digital spaces until you’ve had the real, well, virtual, experience.

So you travel from the Adobe SF offices to the Dogpatch District to see the Minnesota Street Project, three renovated warehouses that make studio space affordable for deserving artists and gallery space accessible and relaxed for art experts and novices alike. As you get a tour of the studio warehouse (not open to the public), it seems like a strange place for a story about virtual life. Painters, sculptors and woodworkers create in studio spaces defined by the spare and visually fascinating look of unfinished plywood walls. But there are exceptions. Digital artists work in an upstairs loft replete with high-tech monitors, large-scale printers and all the software they need.

And downstairs, among those artists who create new visions with mainly age-old materials, Adobe has studio space where age-old visions can be created in new worlds. Project Dali is not, at least yet, a relevant tool for live action VR video. It is a 3D painting environment in which you can use any materials that have been digitally rendered -- from a painter’s signature brush stroke to types of lumber or ductwork -- and a brush that can paint, build and even move whole images closer or farther away from you.

You are here to understand what it’s like to “touch” the digital world.

As you put on the headset, you feel that agency you’ve heard about: the ability to grasp, to walk around, to engage. The feeling of walking around a stick-figure sculpture you made, to see that it has “sides,” is where real immersion hits you. The stick figure has no artistic value, but its simulation of realness makes you rethink, for a moment, what real means.

“For me, 3D rendering was something that was always intimidating,” says Erik Natzke, principal artist-in-residence for Adobe Research, “because it always meant I had to get my head into that three-dimensional space on a two-dimensional screen. Here, you can just do that. Then you get to that thing I love about Dali, that is, that state of play.”

That state of play, and understanding how to create a whole world, that’s the magic Adobe is trying to capture. Natzke has ideas for how Dali could eventually work in video-making. One simple example is the ability to “paint” masks over recording and lighting equipment, which can’t be naturally hidden, so that the world looks natural, even in places where lighting must be artificial. The ultimate goal of this artist space is to find and create challenges that users might face, to be a part of the problem-solving that is creation.

“It’s wonderful to be around these people, where that’s the primary focus,” Natzke says. “You get back to the genuineness of art.” He adds later: “If I’m testing [Project Dali] with artists, it means I’m going to value that process.”

If any software company is going to be part of entering virtual storyworlds, then it has to be in the heads of the creators. Williams Argilla says that when her team started to investigate VR, “we were a little surprised to hear they were actually using Premiere, even though we had no special support for it. But they were working with it in a way where you had to be psychic to know how to use it.”

Adding some basic tools to lower that barrier was a lot of hard work. To understand how software and storytelling merge in virtual space going forward, the team says they won’t need to be psychic. They just need to ask the right questions of the creators.

Next Month: Talking To VR Storytelling Pioneers & A Surprising Mentor
