The other month we were very fortunate to have Iron Man 3 animation supervisor Aaron Gilman in the Ubisoft Montreal studio to take us through some of the trials and triumphs in bringing the third Iron Man to the big screen. While his 13 years of experience include such VFX powerhouses as The Matrix and Avatar, it was an especially interesting talk for many present, as he is best known within the studio as the former animation director of one of my all-time favourite Ubisoft games, Rainbow Six Vegas.
New Zealand-based Weta Digital were brought onto Iron Man 3 so late in the process that some of the other 13 VFX vendors had already finished. This left them just 3.5 months to create some of the more complicated shots on the show, for which they came up with the smart and graceful solutions he presented during the talk. Aaron was one of a handful of supervisors on a team capping out at 30 animators in full production, within a team of around 100 artists in total. Their task: to handle 422 shots comprising two distinct types of challenge:
- The Aerial Battle Finale
- Suit Transformations
Gilman informed us that it was rare to have so little time on such a large project, mostly due to the financial collapse of the main vendor Digital Domain and late changes to the script. The current state of the VFX industry was touched on only briefly, with Gilman noting that smaller studios often underbid to get work, driving a race to the bottom in which studios can no longer support their staff.
Aerial Battle Finale
Comprising 36 unique Iron Man suits fighting mercenaries on a collapsing crane set, this sequence was easily the most complicated and VFX-heavy part of the entire show, yet despite the deadlines they faced, the team pulled it off admirably.
Due to the tight timeline, no prior look development had been done for how Iron Man moves. To become accustomed to animating how Iron Man (Men?) would fly, Aaron had his team begin with preparatory work, creating banks of keyframe tests for how the titular hero would bank, turn, brake and so on.
A “vignette team” was tasked with creating a similar mocap bank of small sequences for background characters, with 90% of the work done by just two actors and sections of the crane rebuilt on the mocap floor. These actors helped with the choreography, making their actions larger than life so they would read clearly in the background.
Mocap and keyframe banks such as this are created at the beginning of many projects, with Aaron saying Tintin’s dog Snowy began life as a similar collection of canine mocap before being modified for specific scenes.
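To make the idea concrete, here's a toy Python sketch of what a clip bank boils down to: a store of approved keyframe tests that can be dropped into a shot at a new start frame and retimed. The `ClipBank` class, its names and its data layout are entirely my own invention for illustration, not Weta's actual tools.

```python
from dataclasses import dataclass


@dataclass
class Clip:
    """A reusable block of keyframes, e.g. a 'bank left' or 'brake' test.

    The value here is a single float standing in for a full character pose.
    """
    name: str
    keys: list  # list of (frame, value) pairs


class ClipBank:
    """A minimal animation bank: store approved clips, retime them per shot."""

    def __init__(self):
        self._clips = {}

    def add(self, clip):
        self._clips[clip.name] = clip

    def place(self, name, start_frame, scale=1.0):
        """Return the clip's keys shifted to start_frame and time-scaled,
        roughly what happens when a bank clip is dropped into a shot."""
        clip = self._clips[name]
        first = clip.keys[0][0]
        return [(start_frame + (f - first) * scale, v) for f, v in clip.keys]


bank = ClipBank()
bank.add(Clip("bank_left", [(0, 0.0), (12, 0.8), (24, 0.0)]))
shot_keys = bank.place("bank_left", start_frame=101, scale=0.5)
# shot_keys == [(101.0, 0.0), (107.0, 0.8), (113.0, 0.0)]
```

The retiming in `place` is also, in spirit, how the single crane animation described later could be fitted to each shot: one source of truth, adjusted only in timing.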
Many of the earlier flying tests were repurposed to fill out the background Iron Man characters during the aerial battle, but these presented their own compositional problems, leading the team to rethink what a background performance should be.
Rethinking Background Performance
Initially, constant use of the negative space behind the lead subjects led viewers to focus on the wrong thing, while a consistent number of suits filling that space created bad rhythm and monotonous pacing between cuts. Additionally, having the suits fly laterally across the screen to fill that space lacked dynamic composition depth-wise.
As such, the team moved over to a direction Aaron dubbed “Choreographed Chaos”, referencing combat photography from Vietnam, WWII and other conflicts, where the cameraman was under pressure to get the shot. Here “the rule of thirds is not your friend”, which produced much more dynamic and realistically hectic shots for the final sequence.
Last of all, the collapsing set itself was handled by a single animator, who rigged and animated the crane. This one animation was reused across every shot for both efficiency and consistency, with only the timing adjusted to fit each shot.
Suit Transformations
The second challenge Weta faced was producing the many suit transformation shots, owing to the more kinetic armour-donning this time around. The traditional workflow of a referenced pipeline, whereby the animatable rig (“puppet”) is referenced into each animation scene both to provide easy updating and to stop animators from breaking the puppet, was too limiting. For expediency, the team needed a way to let animators not only modify the model per shot, but also dynamically rig each part they wished to move.
New “Guide Rig” System
Step in creature technical director Ebrahim (Abs) Jahromi, who, Gilman stressed, was the key factor in solving this issue. A rarity at Weta, he was embedded within the animation team and took the bold step of opting for non-referenced, unique puppets. He created a suite of tools giving animators controls to which the hi-res geometry was constrained; animators were free to cut up the geometry, auto-generate and place control points, and quickly group and parent hierarchies of objects – essentially rigging on the fly.
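At its core, "rigging on the fly" amounts to generating a control for an arbitrary selection of geometry pieces and re-parenting them under it. Here's a deliberately simplified Python sketch of that idea (a bare scene-graph node and an `auto_control` helper of my own devising, nothing like the real guide rig's sophistication):

```python
class Node:
    """Minimal scene-graph node: a local position plus parent/child links."""

    def __init__(self, name, pos=(0.0, 0.0, 0.0)):
        self.name = name
        self.pos = list(pos)
        self.parent = None
        self.children = []

    def parent_to(self, new_parent):
        """Re-parent this node, detaching it from any previous parent."""
        if self.parent is not None:
            self.parent.children.remove(self)
        self.parent = new_parent
        new_parent.children.append(self)


def auto_control(name, pieces):
    """Generate a control at the centroid of some cut geometry pieces and
    parent them under it, so moving one control moves the whole group."""
    n = len(pieces)
    centroid = [sum(p.pos[i] for p in pieces) / n for i in range(3)]
    ctrl = Node(name, centroid)
    for p in pieces:
        p.parent_to(ctrl)
    return ctrl
```

An animator's "cut" would select some chunks of armour, call something like `auto_control`, and immediately have a posable handle for them.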
With 4 million polygons in the “bone” (under) suit alone, the more cuts were created the heavier the puppet became, and the slower the scenes ran. As such, the team took the approach of modifying a lo-res mesh that in turn influenced the hi-res geometry, which could be toggled on and off to check for inter-penetrations.
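One common way a lo-res mesh can drive a hi-res one is to bind each hi-res vertex to its nearest lo-res vertex with a stored offset, so deforming the light cage drags the heavy mesh along. The sketch below is a generic illustration of that binding technique, not a description of Weta's actual deformer:

```python
def nearest(point, points):
    """Index of the point in `points` closest to `point` (3D tuples)."""
    return min(range(len(points)),
               key=lambda i: sum((point[k] - points[i][k]) ** 2
                                 for k in range(3)))


def bind(hi_verts, lo_verts):
    """For each hi-res vertex, record its nearest lo-res vertex and offset."""
    binding = []
    for v in hi_verts:
        i = nearest(v, lo_verts)
        offset = tuple(v[k] - lo_verts[i][k] for k in range(3))
        binding.append((i, offset))
    return binding


def deform(binding, lo_deformed):
    """Move the hi-res vertices along with the deformed lo-res cage."""
    return [tuple(lo_deformed[i][k] + off[k] for k in range(3))
            for i, off in binding]
```

The expensive `bind` step runs once; per-frame only the cheap `deform` runs, which is why the animators' scenes stay interactive.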
Guide Rig Further Adapted
For the finale battle, with multiple suits all transforming and breaking apart, the guide rig was further adapted for limb removal. This allowed the animator to perform the cuts and then have the broken limb sections move over to world-space animation, no longer children of the body, for which a new constraint system was put in place.
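The tricky part of moving a limb out of the body's hierarchy is doing it without the limb popping on screen: its world transform must be preserved when its parent changes. A standard way to do this (shown here in a pure-Python 4x4 matrix sketch of my own, not Weta's constraint system) is to bake the old parent's world matrix into the node's new local matrix:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices (lists of rows)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]


def translation(x, y, z):
    """4x4 translation matrix."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]


def invert_rigid(m):
    """Inverse of a rotation+translation matrix: transpose R, negate R^T t."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][k] * m[k][3] for k in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]


def reparent(local, old_parent_world, new_parent_world=None):
    """Move a node to a new parent while keeping its world transform,
    so a freshly cut limb doesn't pop: new_local = inv(Pnew) * Pold * local.
    With no new parent, the node becomes world-space and keeps its world matrix."""
    world = mat_mul(old_parent_world, local)
    if new_parent_world is None:
        return world
    return mat_mul(invert_rigid(new_parent_world), world)
```

With the limb world-space, the animator can then key it independently as it tumbles away from the body.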
The suit damage incurred from shot to shot posed extra problems in both data-management and shot consistency. In order to ensure that animators working on sequential shots down the line were using puppets and geometry with the correct breaks, auto-updating the geometry was a necessity.
Keyframe vs Mocap
One interesting anecdote towards the end of the talk arose from the production team’s tendency to swap out characters in post (covered in more depth in issue 134 of Cinefex – a fantastic quarterly resource of VFX post-mortems). Aldrich Killian, played by Guy Pearce, was chosen at a late stage to replace another character in the finale shot where he (rather incongruously) reveals himself to be The Mandarin.
By then shooting for another production, Pearce had grown a full beard and contractually couldn’t shave. Initial attempts to use a digital double were thwarted because the beard prevented good marker tracking of the facial performance, so the team settled on keyframing the entire facial shot, with a full FACS session done with Pearce.
With that, Gilman shared his appreciation for mocap as a starting point, with keyframing over facial performance capture allowing more room to polish and finesse. He particularly appreciated using mocap as a base for some of the trickier action sequences for Legolas in the upcoming The Hobbit: The Desolation of Smaug.
Game Animation vs Film
In closing, I asked Aaron whether his time in videogames might have influenced the rather game-like solution of creating banks of data from which to quickly assemble the multitude of shots in the finale sequence. He disagreed, but he gives an eloquent comparison of film and game animation on his personal blog here, summarised below:
“Ultimately, I think of the film pipeline as linear, each department more or less sequentially following the next department down the pipe. On the other hand, I think of games as an intricate web. Each department is inextricably linked to multiple other departments, going around and around until a cohesive playable system is created.”
For additional reading you can find notes from a similar talk on the original Iron Man film here.