When I first joined the project it was for cinematics only. As such, I took some time to deconstruct the cinematics of previous games and identify their limitations. To aid this, we created the initial test below as a way of trying out the new full performance capture technology being prototyped in the studio. Work-in-progress facial quality aside, comparing the 3DS Max versions with the in-game playback highlighted inconsistencies caused by the compression of animation: keys were played back with linear interpolation, which lost all the subtle motion and added to the robotic look of the movement.
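To make that effect concrete, here is a minimal sketch of the problem, not our actual runtime code: a dense motion curve (with an invented low-amplitude tremor standing in for subtle performance detail) is reduced to sparse keys and played back with linear interpolation, and the fine motion vanishes while the broad pose survives.

```python
import math

def source_curve(t):
    # Broad limb swing plus a small high-frequency tremor (the "subtle motion").
    return math.sin(t) + 0.05 * math.sin(t * 12.0)

def lerp(a, b, u):
    return a + (b - a) * u

def compress(times, step):
    # Naive key reduction: keep only every `step`-th key.
    return times[::step]

dense_times = [i / 30.0 for i in range(61)]   # 2 seconds sampled at 30 fps
sparse_times = compress(dense_times, 10)      # surviving keys every 1/3 second

def playback(t, key_times):
    # Runtime playback: linear interpolation between the surviving keys.
    for i in range(len(key_times) - 1):
        t0, t1 = key_times[i], key_times[i + 1]
        if t0 <= t <= t1:
            return lerp(source_curve(t0), source_curve(t1), (t - t0) / (t1 - t0))
    return source_curve(key_times[-1])

# Worst-case error introduced by compression + linear playback.
err = max(abs(source_curve(t) - playback(t, sparse_times)) for t in dense_times)
print(f"max playback error: {err:.4f}")
```

The curve matches exactly at the keys themselves, so the pose reads correctly in stills; the loss only shows in motion, between keys, which is exactly where the robotic look comes from.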
To overcome this, we moved all cinematic animation to streaming, freeing up memory and allowing playback with far less compression. This memory also made way for much higher-resolution characters in terms of texture detail and bone count. While we still primarily used 3DS Max Biped for gameplay animations, cinematics had to be passed from MotionBuilder to Max for export into the game. Moving to a MotionBuilder-only pipeline greatly reduced our iteration time, which I strongly believe makes all the difference to polish towards the end of a project.
We adopted the full performance capture approach – recording the head, body & voice simultaneously, which gives rise to subtle details when all three play back in sync. While not a one-stop solution for quality, this approach gives the animator the best possible base to start from, leaving them in a great position to add their own magic to the scene. Shipping with over 4.5 hours of cinematics, this was an essential solution: it let us select the important scenes for the full treatment while still allowing lesser scenes to make the cut. Below is an example of the finished result – watch for the subtlety as all three elements work together when Hickey mimes the action of being hanged. This kind of non-verbal dialogue should become more common in game storytelling as the method is more widely adopted.