One of the most entertaining presentations of the week, thanks both to Todd's upbeat yet humble attitude and to the sheer number of videos shown, ranging from multiple render passes highlighting the various explorations of lighting and materials on the robotic protagonists to behind-the-scenes shots of the film plates at various stages of post-production.
Incredibly heartening were the animation renders illustrating the sheer amount of cheating going on when characters went off-screen. With the original brief requiring 14 robots in total, they scoped for only 14 transformation animations, but ended up creating over 140 due to each transformation being created specifically to sell the particular shot. Some examples shown had Transformers’ legs going through the ground, various parts scaling into the body to be hidden away, even bits flying off only to return just at the moment they were required on camera – just like our cutscenes!
Some animation-related notes:
Despite the characters having a very humanoid rig underneath all the additional moving parts, mocap was used only for subtle idling movements on the Transformers – everything else was keyframed.
Similarly, procedurally generated secondary (i.e. physics-based) animation was used rather sparingly on the characters. The demo illustrating this (showing Optimus Prime) had only a few flaps around the shoulder and arm areas, leaving much of this type of work to the animators themselves.
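Nothing like this was shown in the talk, but procedural secondary motion of that kind is commonly driven by a damped spring chasing its parent joint. A minimal sketch, with all names and constants hypothetical:

```python
def spring_damper_step(value, velocity, target, stiffness, damping, dt):
    """One semi-implicit Euler step of a damped spring chasing a target.

    A common basis for procedural secondary motion: 'value' might be a
    flap's swing angle, 'target' the angle dictated by the parent joint.
    (Illustrative sketch only – not ILM's actual setup.)
    """
    accel = stiffness * (target - value) - damping * velocity
    velocity += accel * dt
    value += velocity * dt
    return value, velocity
```

Stepped every frame, the flap lags behind and overshoots the parent joint slightly, which is exactly the follow-through an animator would otherwise have to key by hand.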
Character animation was done in two passes: one team provided the finished body animation of the Transformers, and a second group later went in to animate the vast number (tens of thousands) of smaller moving parts required to raise the characters to big-screen fidelity.
Todd's main inspiration came not from the action and visual effects of other movies, but from the actual type of film used when shooting the live-action plates, which produces certain lighting artifacts due to the stretching of the film image to fit the screen. In his words, "You don't want to use VFX films as your VFX reference".
He had a little advice on camera shake: it should always be triggered slightly after the event causing it, with the delay presumably varying by distance from the event. This is something we could easily experiment with in future games.
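As a starting point for that experiment, a minimal sketch of distance-delayed shake – the speed-of-sound delay model and inverse-distance falloff here are my assumptions, not anything Todd specified:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s – assuming the shake arrives like a pressure wave

def shake_delay(event_pos, camera_pos):
    """Seconds to wait after the event before starting the camera shake."""
    return math.dist(event_pos, camera_pos) / SPEED_OF_SOUND

def shake_amplitude(base_amplitude, event_pos, camera_pos, falloff=1.0):
    """Attenuate shake strength with distance (clamped to avoid blow-up up close)."""
    dist = max(math.dist(event_pos, camera_pos), 1.0)
    return base_amplitude / (dist ** falloff)
```

An explosion 343 m away would then shake the camera one second after the visual, scaled down accordingly; both the constant and the falloff curve would want tuning by feel.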
Surprisingly, several of the VFX shots were created almost entirely in 2D using After Effects; one example showed explosions built entirely from video footage translated in 3D space. Good old sprites are still useful for smoke and explosions, even for the big guys – it is fair to say that Mr Vaziri is an advocate of lo-fi techniques.
I’ve been to a few ILM lectures now, and at each one I’m staggered by how long it can sometimes take to render just one frame of a CGI sequence. For those who still wonder how long it will take for videogames to achieve photorealism (a fool’s quest at best), consider this gulf:
Time currently allocated to render one frame in most Xbox 360/PS3 games – 1/30th or 1/60th of a second.
Time currently required to render one VFX-heavy ILM frame for film – anywhere from 10 to 30 hours.
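Taking those figures at face value, the gap works out to a factor in the millions:

```python
# Figures quoted in the talk: a 30 fps game budget vs 10-30 hours per film frame.
game_frame_s = 1.0 / 30.0        # seconds available per frame at 30 fps
film_frame_low_s = 10 * 3600     # 10 hours, in seconds
film_frame_high_s = 30 * 3600    # 30 hours, in seconds

ratio_low = film_frame_low_s / game_frame_s    # roughly 1.08 million times longer
ratio_high = film_frame_high_s / game_frame_s  # roughly 3.24 million times longer
```

In other words, a single film frame gets between one and three million times the compute time of a game frame – and that is before counting the render farm's many machines against a single console.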