I’ve been feeling for some time now that Japanese developers have fallen behind their western counterparts on the technology side of game development, so it’s always good to hear that the Metal Gear Solid team still stands up as a cutting-edge developer – even more so when you learn this via a huge drop of behind-the-scenes images (via the now-defunct Softimage site) from one of the biggest games released this year.
A few weeks back, details of the facial animation rig and other workflow information were posted on the Japanese XSI website. I was planning to extract what I could via Google Translate and observation alone, but someone beat me to it (and did a much better job than I ever would have). Head on over to the blog of Chris Evans (Tech-Art Lead at Crytek) for full translations of the following sections:
Regarding the facial setup, it looks very similar to a method I saw Aaron Holly of Disney present at ADAPT 2007: a bone rig driven by a mesh, which gives the following two important advantages.
- It is highly flexible and can be transferred between multiple similar faces, because the animation is stored on a NURBS mesh that drives the bones rather than on the bones directly, allowing for varying bone positions.
- When using a pose-based facial animation solution such as FaceFX, the bones travel along the curve of a NURBS surface rather than in a simple linear translation, better mimicking the movement of skin across the skull.
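To illustrate that second point, here's a minimal sketch in plain Python (not the MGS4 team's actual tooling, and using a quadratic Bézier arc as a stand-in for evaluating a real NURBS patch). A joint blended between two hypothetical facial poses along the arc keeps the outward bulge of the skull, while a straight linear translation between the same endpoints cuts through it.

```python
def lerp(a, b, t):
    """Linear blend between two 3D points -- what a plain bone translation does."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def bezier(p0, ctrl, p1, t):
    """Quadratic Bezier evaluation -- a stand-in for a point sliding on a
    curved (NURBS-like) surface via de Casteljau's algorithm."""
    q0 = lerp(p0, ctrl, t)
    q1 = lerp(ctrl, p1, t)
    return lerp(q0, q1, t)

# Hypothetical rest and target positions for a cheek joint, with a control
# point pushed outward (+z) to approximate the curvature of the skull.
rest   = (0.0, 0.0, 1.0)
target = (2.0, 0.0, 1.0)
ctrl   = (1.0, 0.0, 1.6)

for t in (0.0, 0.5, 1.0):
    straight = lerp(rest, target, t)        # stays flat at z = 1.0
    curved   = bezier(rest, ctrl, target, t)  # bulges outward mid-pose
    print(t, straight, curved)
```

Halfway through the blend the linear joint sits at z = 1.0 while the curved one sits at z = 1.3, which is the whole point: intermediate poses stay on the surface of the head rather than sinking through it.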
This is certainly something I’d be keen to try in the near future, given that it has now successfully been put through a full videogame production.