Once again, the Japanese Softimage site has posted information on another showpiece title – and it devotes a considerable amount of space to how Softimage interfaces with MotionBuilder. This is encouraging for me, as I’ve decided to dive fully into MotionBuilder for my current project after finding it to be the most well-rounded solution for mocap, keyframe and facial animation out there.
When we first showed Mass Effect at E3 2006, I recall a handful of Square developers attending to evaluate the facial animation. While their production methods now look somewhat dated owing to the long development cycle, playing FFXIII shows the eventual result to be outstanding – presumably thanks to their dedicated engine for facial close-ups and meticulous planning.
The Google translation of the piece is especially bad, but so far I can gather the following:
As at Square Enix studio-offshoot Feelplus, cutscenes are divided into four categories by importance. A- and B-level cutscenes employ fully keyframed facial animation (with lip-sync done separately for the Japanese and English versions), while C- and D-level cutscenes use procedurally generated facial and lip-sync animation.
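That triage rule is simple enough to sketch in a few lines of Python. The tier names come from the article, but the function and return values are my own invention, not Square’s actual pipeline code:

```python
from enum import Enum

class CutsceneTier(Enum):
    """Hypothetical tiers mirroring the article's A-D importance levels."""
    A = 1
    B = 2
    C = 3
    D = 4

def facial_pipeline(tier):
    # A/B: hand-keyed faces, with lip-sync redone per language.
    # C/D: procedurally generated facial and lip-sync animation.
    return "keyframed" if tier in (CutsceneTier.A, CutsceneTier.B) else "procedural"
```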
Similarly, the cutscene department is divided into four distinct groups, each handling a progressive stage in the creation of a full cutscene. They are:
Motion Capture Group: Shooting and cleaning up motion-capture data.
Body Motion Team: Creature keyframe animation, and human motions that cannot be captured.
Facial Group: Facial acting and lip-sync.
Simulation Team: Hair and cloth simulations.
Cutscene shoots are meticulously planned ahead of time. Beyond storyboards, clean layout boards that contain descriptions of the actors, props and set layouts required for each scene are created.
Temporary voice-over was used on the set with full ADR done later. Interestingly, it appears that animatics were projected on the walls during the shoots to give the actors a better sense of their virtual counterparts and surroundings.
Within MotionBuilder, additional tools were created to simplify editing and exporting complex scenes via a check-box matrix of assets versus shots, seen at the lower right of the image below. It’s something I’ve had in mind for some time as the best solution for working on scenes that only occasionally require a lengthy full export.
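The data model behind such a matrix is tiny. Here’s a minimal sketch of how I imagine it working – class and method names are mine, and the real tool would of course drive MotionBuilder’s own export machinery rather than just return a list:

```python
class ExportMatrix:
    """Hypothetical assets-vs-shots grid: tick only the cells needing re-export."""

    def __init__(self, assets, shots):
        self.assets = list(assets)
        self.shots = list(shots)
        self._ticked = set()  # set of (asset, shot) pairs currently checked

    def toggle(self, asset, shot):
        # Flip a single check-box on or off.
        self._ticked ^= {(asset, shot)}

    def export_jobs(self):
        # Only ticked combinations are exported, so the lengthy
        # full export of every asset in every shot stays rare.
        return sorted(self._ticked)

matrix = ExportMatrix(["Lightning", "Snow"], ["sc01_sh010", "sc01_sh020"])
matrix.toggle("Lightning", "sc01_sh010")
matrix.toggle("Snow", "sc01_sh020")
```

The appeal of the checkbox grid is that re-exporting after a fix touches exactly the dirty cells, instead of forcing an all-or-nothing scene export.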
Character-wise, the highest resolution characters (Lightning and Snow) consist of up to 223 bones, covering the basic skeleton, auxiliary (corrective and simulation), facial and hair bones.
The keyframed facial animation was done via a traditional slider setup. The image below shows the numerous attributes, which must have become quite unwieldy – no doubt part of why a dedicated team specialises in this area.
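As a guess at what sits behind those sliders, here’s a toy evaluation step that maps 0–1 slider values straight onto blendshape weights, plus one corrective combo shape. Every attribute name here is invented for illustration:

```python
def evaluate_facial_sliders(sliders, combos=()):
    """Map 0-1 slider values to blendshape weights, then add
    corrective shapes where two driving sliders overlap."""
    weights = dict(sliders)
    for a, b, corrective in combos:
        # Fire the corrective at the overlap of its two drivers.
        weights[corrective] = min(sliders.get(a, 0.0), sliders.get(b, 0.0))
    return weights

# e.g. a brow/smile combination fix driven by two ordinary sliders:
weights = evaluate_facial_sliders(
    {"brow_up": 0.4, "smile": 1.0},
    [("brow_up", "smile", "smile_brow_fix")],
)
```

With dozens of sliders per character you can see how the attribute list in the screenshot balloons, and why a specialist facial group makes sense.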
For simulations, the wind level is initially set for each scene to provide the requisite amount of movement in the simulated assets.
Around 20 bones are used for the hair, driven by a spring system whose damping increases with the character’s inertia to stop the hair clipping through the head.
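That description suggests a damped spring per hair bone, with the damping term scaled by how fast the head is moving. A one-dimensional sketch under those assumptions (all constants and parameter names are mine, not from the article):

```python
def step_hair_bone(pos, vel, target, head_speed, dt=1.0 / 30.0,
                   stiffness=40.0, base_damping=2.0, inertia_gain=0.5):
    """One semi-implicit Euler step of a hypothetical hair-bone spring.
    Damping grows with the head's speed, so fast head moves are
    damped harder and the hair can't swing through the skull."""
    damping = base_damping + inertia_gain * head_speed
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt  # update velocity first (semi-implicit Euler)
    pos += vel * dt    # then position, using the new velocity
    return pos, vel
```

In practice each of the ~20 bones would run something like this in 3D against its rest position in head space, but the damping-vs-inertia trade-off is the same.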