Halon’s Pre-Viz

I’ve added the second set of Adapt 2007 lecture notes below…

Halon: The Value of Pre-Visualisation

Dan Gregoire – Pre-viz supervisor on Star Wars Episodes II and III, War of the Worlds, X-Men 3 and Ghost Rider

This talk was of particular interest to me, as we were just finishing the pre-viz work for a handful of outlandish actions and animation systems in my current project – a valuable process that not only helped prove their viability, but also showed how the systems would look in motion.

These notes will be of interest to anyone currently planning out cutscene requirements, or to teams looking to pre-visualise how certain gameplay elements or level action sequences might play out, using something more advanced than plain documentation or storyboards.

With a background in videogames and animated television, Dan was drafted in to create pre-visualisations for effects-heavy scenes midway through Star Wars Episode II, then continued this work on Episode III, choreographing difficult sequences such as the fight between Yoda and Palpatine.

In addition to working on movie pre-viz, Halon is also involved in animated movies such as the upcoming Avatar and Speed Racer, as well as the recent Halo 3 “Believe” adverts. Near the beginning of the talk Dan asked the audience “Who uses pre-viz?”, and was met with a distinct lack of hands, to which he replied, “You should be”.

Halon’s work process is essentially as follows:

  • The pre-viz team can be brought onto a project long before any other department to work with the director. This is usually done at Halon’s own studio, with the director “moving in” over a period of a couple of weeks. Interestingly, during the War of the Worlds portion of the talk, when pressed about the director’s freedom being constrained by shots pre-visualised by Halon, Dan offered that Spielberg himself is familiar enough with Maya to take Dan’s shots and move the camera around to explore alternatives himself.
  • For less tech-savvy directors, the workflow essentially revolves around the director standing over Dan’s shoulder and directing him to create the type of shots required. Importantly, this is a process of exploration rather than the director already having the shot in his head and working to recreate it; Dan also referred to it as “problem solving”.
  • Sometimes Halon is even given free rein to create the shots alone and then submit them for approval, though naturally this involves more back and forth, as anyone who’s worked with outsourcing will attest.
  • Additionally, the team can be brought on-site for shots during production, since their setup requires nothing more than a few laptops and Maya. Conversely, when under pressure, some in-production shots are farmed out to Halon members off-site who work remotely.

Software/source material used:

  • Maya: Easy portability, as well as easy hiring from within the film VFX industry.
  • Google Earth: Used for location scouting, gathering texture, terrain, GPS and building data for accurate real-world environment layout. One location we saw featured an accurate recreation of a pier at exact scale.
  • SketchUp: Simple building creation.
  • FaceGen: Fast facial model creation.
  • Blueprints: Accurate interior layout.
  • Location Photographs: Presumably texture info.
  • Stock models: Used in addition to in-house creations to reduce the workload for elements such as vehicles.

Ghost Rider

From the selection of demo movies shown, I was able to garner the following:

  • Sometimes the pre-viz mock-ups used fully modeled and textured characters, accurate enough to easily distinguish the actor they represented. Others involved only roughly assembled, texture-less models with default smooth skinning applied (some carried the characters’ names as polygon text – a nice touch). War of the Worlds, on the other hand, involved fully finished recreations of the tripod walkers, giving away that certain pre-viz efforts on that project were done at a later stage.
  • This level of detail is apparently at the discretion of the director, and in Dan’s words, “It doesn’t matter if it looks like crap”. VFX such as particles and explosions are also added at this stage, giving a good reference when handing over to ILM for full treatment.
  • Similarly, the animation varied from held poses to minimally keyed movement as characters travelled through the scene, though never a T-pose – likely to allow for more accurate framing and composition. At the other extreme, the shots from the upcoming Spiderwick Chronicles involved fully blocked-out body and facial animation.
  • The shots displayed for Ghost Rider comprised not only dynamic sequences but also some slower character-acting shots. It was later revealed that some directors want pre-viz to essentially mock up most of the movie for budgeting estimates – VFX shots required, number of actors and locations, scheduling and so on. One example given was 80%-90% of the total shots done, delivered in QuickTime format.
  • Importantly, pre-viz focuses on the development of contiguous sequences, rather than individual shots.
  • Of note, Dan stressed that his Maya camera rigs stuck to real-life scenarios as much as possible, since most of the shots would need to be recreated in the real world. As such, during a demo he went to pains to avoid simply parenting a camera to a speeding vehicle, instead attaching it to an imaginary second vehicle from which the shot would be filmed in real life (a rough sketch of such a rig follows this list). The best example was the Halo 3 advert: the 36x26ft battlefield museum exhibit was built at 1:1 scale, so he pre-visualised all his shots via a camera attached to a “crane & snorkel cam” in Maya – a virtual recreation of the device to be used on set.
  • Apparently, “Motion Control” (the process of virtual camera movements being recorded and played back on a real-life mechanical device and vice-versa) is used a lot in their workflow, but a later ILM speaker informed us that even mentioning Motion Control when planning a VFX shot will anger all but the most patient director due to the set-up time for each take and re-take.
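
To make the chase-vehicle idea concrete, here’s a minimal Maya Python sketch of that kind of rig – my own reconstruction under stated assumptions, not Halon’s actual setup, and every node name is a placeholder:

```python
import maya.cmds as cmds

# The subject being filmed; in a real scene this would already exist
# and be animated ('heroVehicle' is a placeholder stand-in).
subject = cmds.polyCube(name='heroVehicle')[0]

# A locator standing in for the real-world camera vehicle.
cam_vehicle = cmds.spaceLocator(name='cameraVehicle_loc')[0]

# Mount the camera on the camera vehicle, as it would be in real life,
# rather than parenting it to the subject itself.
shot_cam = cmds.camera(name='chaseCam')[0]
shot_cam = cmds.parent(shot_cam, cam_vehicle)[0]

# Animate the camera vehicle alongside the subject (placeholder keys),
# then aim the camera at the subject so the framing follows it.
cmds.setKeyframe(cam_vehicle, attribute='translateX', time=1, value=-10)
cmds.setKeyframe(cam_vehicle, attribute='translateX', time=120, value=50)
cmds.aimConstraint(subject, shot_cam, aimVector=(0, 0, -1), upVector=(0, 1, 0))
```

Because the camera rides the second vehicle, its motion stays achievable by a real camera car, which seems to be exactly the discipline Dan was describing.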

Finally, as for tools:

  • They playblast all their work, since long scenes demand a much quicker turnaround than rendering allows. I did see depth of field on a couple of shots, but Dan informed us that relying on playblasts was a decision made after lengthy render times on Star Wars Episode II. The playblast was customised to automatically turn off elements like skeletons and other helper objects (see the first sketch after this list).
  • They created a time-of-day tool that adjusts the sun’s position via time sliders and longitude/latitude inputs (sketched below).
  • Their camera tool displays lens info and camera data such as speed, tilt and subject distance – all relevant for accurate real-world recreation. It also allows easy switching between cameras and the ability to quickly add a camera with all their custom presets (a HUD readout along these lines is sketched below).
  • They also created a handheld cam-shake tool that simulates camera shake based on the movement of the cameraman (sketched below).
  • Extra frames are added on either side of the shot, as in live-action shooting, to give the editor handles to work with.
  • Of most interest to me was the custom camera object itself, which included a plane recreating the black bars above and below a widescreen shot. I’ve found that simply using the film-gate setting in Maya also shows what’s outside the frame, which can be quite distracting, while my previous experiments with a black-bar model failed whenever the field of view changed (the final sketch below scales the bars from the camera’s focal length to avoid exactly that). The black bars also afforded a fantastic space in which to display info such as the filename and frame count.
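
Since several of these tools are easy to picture in code, here are a few rough Maya Python sketches of how they might look – my own guesses at plausible implementations, not Halon’s actual code, with every node, panel and file name made up. First, the playblast wrapper: hide rigging helpers in the focused viewport, then blast straight to disk.

```python
import maya.cmds as cmds

# Hide rigging and helper objects in the focused viewport, then blast.
# Panel handling is simplified; a production tool would restore state.
panel = cmds.getPanel(withFocus=True)  # assumes a model panel has focus
cmds.modelEditor(panel, edit=True,
                 joints=False, ikHandles=False, locators=False,
                 deformers=False, cameras=False, grid=False)

cmds.playblast(startTime=1, endTime=240,
               format='movie', filename='previz_shot010',
               percent=100, viewer=False)
```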
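
The time-of-day tool presumably drives a directional light from textbook solar-position maths. This sketch uses Cooper’s declination approximation and a simple hour-angle model; how it maps onto Maya’s axes depends on your scene orientation, so that part is an assumption.

```python
import math
import maya.cmds as cmds

def place_sun(light, hour, latitude_deg, day_of_year=172):
    """Aim a directional light using textbook solar-position maths."""
    lat = math.radians(latitude_deg)
    # Solar declination (Cooper's approximation).
    dec = math.radians(23.44 * math.sin(math.radians(
        360.0 * (284 + day_of_year) / 365.0)))
    # Hour angle: 15 degrees per hour from solar noon.
    ha = math.radians(15.0 * (hour - 12.0))

    # Elevation above the horizon.
    elev = math.asin(math.sin(lat) * math.sin(dec) +
                     math.cos(lat) * math.cos(dec) * math.cos(ha))

    # Azimuth from north, flipped to the west after solar noon.
    cos_az = ((math.sin(dec) - math.sin(elev) * math.sin(lat)) /
              (math.cos(elev) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour > 12.0:
        az = 2.0 * math.pi - az

    # Maya directional lights shine down their -Z axis; this mapping
    # of elevation/azimuth to rotations is one convention among several.
    cmds.setAttr(light + '.rotateX', -math.degrees(elev))
    cmds.setAttr(light + '.rotateY', math.degrees(az))

shape = cmds.directionalLight(name='sun')
sun = cmds.listRelatives(shape, parent=True)[0]
place_sun(sun, hour=16.5, latitude_deg=34.05)  # late afternoon, Los Angeles
```

Hooking `hour`, `latitude_deg` and `day_of_year` up to slider controls would then give the time-of-day behaviour described in the talk.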
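
For the camera readout, Maya’s built-in heads-up display can be pointed at the active camera. This sketch shows just the focal length; speed, tilt and subject distance would follow the same pattern with their own query functions.

```python
import maya.cmds as cmds

def active_lens_mm():
    """Focal length of the camera in the viewport that has focus."""
    panel = cmds.getPanel(withFocus=True)  # assumes a model panel has focus
    cam = cmds.modelPanel(panel, query=True, camera=True)
    # The query may return a transform or a shape; resolve to the shape.
    shape = (cmds.listRelatives(cam, shapes=True) or [cam])[0]
    return cmds.getAttr(shape + '.focalLength')

# Rebuild the readout if it already exists, then attach it so it
# refreshes with the viewport. 'HUDPrevizLens' is a made-up HUD name.
if cmds.headsUpDisplay('HUDPrevizLens', exists=True):
    cmds.headsUpDisplay('HUDPrevizLens', remove=True)
cmds.headsUpDisplay('HUDPrevizLens',
                    section=2,
                    block=cmds.headsUpDisplay(nextFreeBlock=2),
                    label='Lens (mm)',
                    command=active_lens_mm,
                    attachToRefresh=True)
```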
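
Camera shake can be faked by keying small random rotations on a group above the camera. How Halon’s tool actually models a cameraman’s movement wasn’t shown, so this is just the simplest plausible version.

```python
import random
import maya.cmds as cmds

# Key small random rotations on a group above the camera every few
# frames; Maya's spline interpolation smooths them into handheld drift.
cam = cmds.camera(name='handheldCam')[0]
shake_grp = cmds.group(cam, name='camShake_grp')

random.seed(7)  # repeatable shake between playblasts
for frame in range(1, 241, 4):
    for axis, amplitude in (('rotateX', 0.3), ('rotateY', 0.4), ('rotateZ', 0.15)):
        cmds.setKeyframe(shake_grp, attribute=axis, time=frame,
                         value=random.uniform(-amplitude, amplitude))
```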
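
Finally, the part that solved my own black-bar problem: if the bar planes are scaled from the camera’s focal length and film aperture, they track the field of view instead of breaking when it changes. The letterbox geometry below is straightforward similar-triangles; the 2.39 target aspect ratio and all node names are assumptions.

```python
import maya.cmds as cmds

def fit_letterbox(cam_shape, bar_top, bar_bottom,
                  distance=5.0, target_aspect=2.39):
    """Scale two unit planes (parented under the camera) into letterbox bars."""
    # Film aperture is stored in inches, focal length in millimetres.
    ap_h = cmds.getAttr(cam_shape + '.horizontalFilmAperture') * 25.4
    ap_v = cmds.getAttr(cam_shape + '.verticalFilmAperture') * 25.4
    fl = cmds.getAttr(cam_shape + '.focalLength')

    # Visible width/height of the frustum at the bar plane
    # (similar triangles: size = distance * aperture / focal length).
    view_w = distance * ap_h / fl
    view_h = distance * ap_v / fl

    # Height the target aspect actually uses; the remainder becomes bars.
    used_h = view_w / target_aspect
    bar_h = max((view_h - used_h) / 2.0, 0.0)

    for bar, sign in ((bar_top, 1.0), (bar_bottom, -1.0)):
        cmds.setAttr(bar + '.translateZ', -distance)
        cmds.setAttr(bar + '.translateY', sign * (view_h - bar_h) / 2.0)
        cmds.setAttr(bar + '.scaleX', view_w)
        cmds.setAttr(bar + '.scaleY', bar_h)

# Build a camera and two unit-square bar planes parented under it.
cam, cam_shape = cmds.camera(name='previzCam')
bars = []
for name in ('barTop', 'barBottom'):
    bar = cmds.polyPlane(name=name, width=1, height=1,
                         subdivisionsX=1, subdivisionsY=1, axis=(0, 0, 1))[0]
    bars.append(cmds.parent(bar, cam)[0])

fit_letterbox(cam_shape, bars[0], bars[1])

# Re-fit whenever the focal length changes (zooms, lens swaps).
cmds.scriptJob(attributeChange=[cam_shape + '.focalLength',
                                lambda: fit_letterbox(cam_shape, bars[0], bars[1])])
```

A text display for filename and frame count could then live on those same bars, as Dan’s did.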