Previsualization and the Cultural Divide: How Filmmakers Are Creating Immersive VR and Leaving It On the Set

On the second night of the incredible 5D Flux Conference on transmedia at USC two weeks ago, Henry Jenkins moderated a discussion with a number of luminaries from the previsualization and motion capture industries, including Chris deFaria (Warner Bros), Ron Frankel (Proof Inc), and Habib Zargarpour (Microsoft Studios).  The topic was digital design and world building in the narrative media landscape – a topic that is vitally relevant to virtual reality, and it couldn’t have been more timely.

I had no idea how intertwined the mocap and previz industries were within the film industry, and it’s incredible how much they have changed in even the past five years.  Previsualization has been around for a good while, and involves using graphics software during the pre-production phase to simulate what shots will look like in a film.  In a sense, it’s like a superpowered version of storyboarding: entire movies can be fully mapped out shot-by-shot using a graphics engine to simulate a real-world setting.

Yet as films grow increasingly digital, the previz process is moving closer to the actual film itself.  It is no longer simply simulating the real world for live-action movies.  Computer graphics are steadily replacing actual locations, even in television.  Films don’t have to be epic in scope to warrant extensive green screens – it’s just that green screens, and by extension motion capture, are becoming more efficient.  One of the biggest questions at the conference was “how is previsualization different from the filmmaking process?” and, surprisingly, the overall answer was that they are becoming one and the same.

Andrew R. Jones, visual effects supervisor on Titanic, I, Robot, and Avatar, gave us an enlightening rundown of the virtual production on the set of Avatar.  What he described was, to me, a sensorially immersive virtual reality experience with tactile feedback – exactly the kind of experience we are working to create for the Shayd VR installation at USC.  Jones is a world builder: his team created the world of Pandora in a graphics engine for Avatar and used that virtual world in real time during shooting.  Not only could James Cameron and his crew view the virtual world through tablets and play around with camera shots, but the actors themselves could wear one-eyed HMDs in order to see themselves and their fellow actors within the virtual space.

VP of Visual Effects at Warner Bros, Chris DeFaria, pointed out that filmmakers will create motion capture stages that fit the narrative of the film for the sake of getting into the right mood.  This isn’t a new idea – it’s true for film sets in general.  For instance, the set of a space movie might have a cold temperature, dead silence, and low-key lighting everywhere.  The set of The Godfather is famous for having authentic Italian food and catering for scenes in the movie – all to create a film so authentic that the audience, as Vanity Fair puts it, would “smell the spaghetti.”  Film sets have always worked to emulate the film’s world, but motion capture stages are taking this to the next level.

Actors in motion capture suits no longer have to rely on pure imagination to construct their scene – it is built for them virtually, and they view this world through a head-mounted display.  Not only does this help them better immerse themselves in a character, but they get to experience the world (for example, Pandora) in a way no one else does.  Everyone else is busy writing it, shooting it, editing it, and watching it – but the actors are living it!  My biggest question is: why aren’t consumers? Why isn’t the virtual world experience just as compelling as the film experience, if not more so?

Jones described the Avatar set as a surprising sandbox in which digital professionals interwove the low tech with the extremely high tech.  The “high tech” was the motion capture stage and the virtual world; the “low tech” was a collection of set pieces and risers used to give the actors more realistic terrain.  Although he didn’t define it as such, he was developing a procedural mixed reality system – and a revolutionary one at that.

Mixed reality involves the interplay of the natural and virtual worlds – the “low tech” and “high tech” in Jones’s terms – and it allows the user to experience tactile immersion.  As opposed to visual or auditory immersion, tactile immersion is the sensory element that focuses entirely on touch: not just texture, but pressure, balance, temperature, weight, and everything else that relates to physically sensing surfaces and objects.

Because the world of Pandora was not a boring flat plane, the motion data couldn’t be captured with the actors moving on a flat plane either.  To fix this, the set designers had to build artificial terrain – bumps, hills, crevices, pillars, and the like – to simulate the dynamic terrain and dense rainforest environment found in Avatar.  Every new location within Pandora meant new variations of bumps and hills – so they had to find a way to easily modify a 40’ by 40’ motion capture stage depending on the needs of the virtual environment.

So they constructed a grid.  Each square in the grid would support a plane of a certain height and angle.  Put all of them together and the individual planes on the grid would become a coherent landscape that physically matched the virtual environment almost exactly.  Every day, with every scene change, the crew could switch out each square on the grid for a new piece, allowing them to dynamically modify the set on a regular basis.  Some days it was a big hill, some days it was a pathway, some days it had a bunch of pillars to represent trees.  No matter what, the actors could be completely immersed in their virtual world in a tactile sense.
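To make the idea concrete, here’s a minimal sketch of how such a grid might be generated – purely illustrative Python, not anything from the actual production.  The 40’ stage size comes from what Jones described; the 4’ tile size and the sample_height() terrain function are assumptions standing in for the film’s real virtual environment.

```python
# Illustrative sketch: quantize a virtual terrain into per-square build specs
# for a modular motion capture stage. The 40' x 40' stage size is from the
# post; the 4' tile size and sample_height() are assumptions for illustration.

import math

STAGE_FEET = 40               # the stage is 40' x 40'
TILE_FEET = 4                 # assume each grid square is 4' x 4'
TILES_PER_SIDE = STAGE_FEET // TILE_FEET

def sample_height(x_ft, y_ft):
    """Stand-in for querying the virtual environment's terrain height (in feet)."""
    return 2.0 * math.sin(x_ft / 10.0) + 1.5 * math.cos(y_ft / 8.0)

def tile_specs():
    """Return one (row, col, height_ft, slope_deg) build spec per grid square."""
    specs = []
    for row in range(TILES_PER_SIDE):
        for col in range(TILES_PER_SIDE):
            # Sample the virtual terrain at the center of this square.
            cx = (col + 0.5) * TILE_FEET
            cy = (row + 0.5) * TILE_FEET
            height = sample_height(cx, cy)
            # Approximate the square's tilt from the local terrain gradient.
            dhdx = (sample_height(cx + 1, cy) - sample_height(cx - 1, cy)) / 2.0
            dhdy = (sample_height(cx, cy + 1) - sample_height(cx, cy - 1)) / 2.0
            slope = math.degrees(math.atan(math.hypot(dhdx, dhdy)))
            specs.append((row, col, round(height, 2), round(slope, 1)))
    return specs

if __name__ == "__main__":
    for row, col, height, slope in tile_specs()[:5]:
        print(f"square ({row}, {col}): build to {height} ft, tilt {slope} deg")
```

Swap in a different terrain function for a new Pandora location, rerun it, and the crew has that day’s build list for every square on the stage.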

Of course, for filmmakers the purpose of this is entirely functional.  They want more realistic motion capture data.  They remain unconcerned with the possibilities of VR beyond the set.

And this brings us back to the big question: if robust mixed reality is already happening on film sets, why aren’t we offering it as a standalone product in itself? It’s already there, so why are we – as entertainers and artists and media makers – forced to cater to the camera? Players can wear motion capture suits just like actors do, put on HMDs, see each other in virtual space, and interact with non-player characters (NPCs).  In Shayd we want to do just that – and even go so far as to have actors, who embody virtual characters, interact with players in the virtual world, much like interactive theatre.
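For a sense of what that could look like, here’s a rough, hypothetical sketch of the per-frame loop for such an experience – not the actual Shayd code.  Every function and class in it (read_mocap_pose, update_npcs, render_hmd_view) is a placeholder for the real mocap, character, and rendering systems.

```python
# Hypothetical sketch of a shared mixed reality frame loop: mocap-suited
# players and live actors inhabit one virtual scene, each viewing it through
# their own HMD. All functions and classes here are illustrative placeholders.

import time

class Participant:
    def __init__(self, name, is_actor=False):
        self.name = name
        self.is_actor = is_actor   # actors embody virtual characters for players
        self.pose = None           # latest skeleton frame from the mocap system

def read_mocap_pose(participant):
    """Placeholder for pulling a skeleton frame from the mocap system."""
    return {"root": (0.0, 0.0, 0.0)}

def update_npcs(scene, participants, dt):
    """Placeholder for NPC behavior reacting to players and actors."""
    pass

def render_hmd_view(scene, participant):
    """Placeholder for rendering the scene from this participant's headset."""
    pass

def frame_loop(scene, participants, fps=60):
    """Run the shared experience until the scene is flagged as finished."""
    dt = 1.0 / fps
    while scene.get("running", True):
        # 1. Drive every avatar (players and actors alike) from live mocap.
        for p in participants:
            p.pose = read_mocap_pose(p)
        # 2. Let virtual characters respond to the humans in the space.
        update_npcs(scene, participants, dt)
        # 3. Each participant sees the shared world from their own headset.
        for p in participants:
            render_hmd_view(scene, p)
        time.sleep(dt)
```

The point is that this pipeline – live mocap in, per-viewer rendering out – is the same one already running on a virtual production stage; only here the viewers are the audience rather than the camera.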

Throughout the seminar, a backchannel was constantly updating with questions from the audience. I posted the following question that I hoped would hit the right point while not being too brazen about VR as the end-all-be-all:

Do you think there will be a time when the world-building process is not only the backbone on which a film is constructed, but the end-product itself?  When virtual environments are no longer seen through the eye of a camera, but experienced first-hand?  Do we need a camera – and a rectangular screen – to experience a robust narrative? Or can we bypass the film, and enter into the world itself to interact with a story?

It was exciting to see that a bunch of the audience members wanted to delve into that topic, as the question got voted up near the top of the list.  Unfortunately it was never selected by the moderator, but I tracked down the panelists after the event to get their thoughts.  That’s how I met Jon9 and Jason Jenn, both accomplished video artists, who are also working with virtual reality experiences in their projects.  After showing Shayd Mobile to them and a number of others, including Chris DeFaria, Mike Fink, Scott Fisher, and Ron Frankel, we discussed VR and realized something crucial:

A massive psychological gap exists between the film, virtual reality, and games industries, even though we are all playing with the same sand.  Filmmakers are already creating sensorially immersive VR experiences for their actors on set, and yet fail to see the power of such experiences for the end user.  Game designers are using photorealistic real-time engines, and yet neglect the power of immersive hardware (that’s not entirely true, though, as the Kinect and similar consumer devices are a step in the right direction).  DeFaria eventually said to me, “This is all cool.  Now go and make a million dollars!”  As fun as that was to hear, the commercialization of consumer-facing VR is a complex discussion in itself – but the good news is that entertainment professionals have their ears perked.

David Morin of Autodesk introduced the Virtual Production Committee, a league of extraordinary gentlemen specifically designed to address this intersection of mediums.  Spawned by the ASC, ADG, VES, and the Previsualization Society, the committee endeavors to define how “live-on-stage computer graphics” (i.e., virtual reality) will change the future of movies.  Given the hyperfocus of this film industry super-squad, I’m surprised how little attention it gives to virtual trends in general.  While the power of virtual production is undoubted, virtual production professionals are wired to think purely like filmmakers – to their detriment.  However, it looks like film industry folks will soon shed this innate closed-mindedness towards virtual reality and start embracing it as an entertainment medium in itself.

To learn more about the Avatar set, definitely check out their Performance Capture Featurette:



3 Comments to “Previsualization and the Cultural Divide: How Filmmakers Are Creating Immersive VR and Leaving It On the Set”

  1. Nathan Burba says:

    Great post!

    It makes me wonder about how to consumerize VR.

    Consumerization requires space. Cars need roads and driveways, books need shelves, food needs refrigeration. This is what makes smartphones so brilliant. They can be used anywhere, anytime.

    If the Western notion of the parlour/rumpus/game room can evolve into the VR room, then I think it’s a possibility. Otherwise, VR experiences will remain confined to isolated installations.

  2. jimbo2go says:

    Thanks Nate! I completely agree – like the old concept of the kinetoscope parlor, and the more recent videogame arcade and home theatre, VR experiences need a defined space. Preferably in a person’s home (living room), or at least in a public space (arcade, laser tag). As long as it’s out of the lab and being engaged with on a large scale by everyday users!

  3. James says:

    Wow, this is the type of news I like to hear. As production heads into 3D hell, this is a really nice change, as it seems to be an immersion into the cinematic instead of an emulation. Build a world and let the customer travel in it. Sounds cool.
