Archives For Facial Animation

It’s currently tipped for several game of the year awards, so now is a good time to write up my notes on David Lam’s talk from the 2013 Montreal International Game Summit on the cinematic process of The Last of Us, which he also kindly gave at our studio later the same week.

David was tasked with supervising the animation for cutscenes on The Last of Us, where he said there was even more of an emphasis on story and characters than in Uncharted. In particular, there was a strong focus on the relationship between the lead characters, primarily the 180-degree transformation of Joel’s attitude towards Ellie over the course of the game. While remaining mindful of sensitive story elements, David jumped straight in by showing everyone present the final scene of the game (note: there are story spoilers below as well). This scene, he said, best illustrated the team’s mantra of “Grounded Realism”: a down-to-earth, life-like approach to performances in order to create empathy.


Here is a video on Far Cry 3’s full performance-capture technology, which is virtually identical to Assassin’s Creed III’s, given that we used their workflow entirely, albeit from a third-person perspective. I can’t imagine recording face, body and voice separately again after seeing the subtle nuances picked up by all three working together in sync.


Marc is the Technical Director in charge of R&D at our Montreal mocap studio (we have one in Toronto also), so he oversees the motion-capture technology sharing on all Montreal projects. For more info on FC3’s character pipeline, you can see an additional talk by Character Technical Director Kieran O’Sullivan here.

While I may disagree with their approach to interactive storytelling, one can’t deny Quantic Dream’s ambition and accomplishment in terms of performance capture. More than just a purported tech demo, the level of emotion captured in this piece sets a high standard, with simultaneous body, facial and VO capture fast becoming the norm in our post-L.A. Noire era.


UPDATE: Behind-the-scenes video here.

This Is Next-Gen

February 9th, 2012

This generation’s long life-cycle has been a mixed blessing. On the one hand, so many years without new hardware have allowed us to push the limits of our current engines and learn a great deal about finesse and polish without the need to relearn or start from scratch. The challenge this time around has been more about real artistry in both visuals and game design, without the ability to hide behind some major leap in visual fidelity or previously-impossible technology opening up new game concepts. On the other, we’re reaching a state of diminishing returns in terms of effort versus reward, and as a gamer I really want to see something that gets me excited again in a way that only realtime interactive CG can provide. And this is the first glimpse of where we’re going.

While mostly a lighting demo of Infinite Realities’ scanned head we’ve encountered before, seeing it running in realtime on a PC at work both terrifies and exhilarates me. Once he opens his eyes and begins to move around, it will be our task to breathe life into characters that look like this in immersive worlds that are consistent and do not break the suspension of disbelief.

Since arriving in Canada, I’ve worked exclusively on games whose character casts are in the hundreds, requiring procedural and systemic solutions to create their animation. Perhaps this leap in character realisation will instead force us to concentrate on stories with fewer actors that have an unprecedented depth of character.

Everyone’s talking about it, so it would be rude not to post that the new trailer for Rockstar’s (Sydney-based Team Bondi’s) L.A. Noire dropped yesterday, and the facial performances look fantastic. Using Depth Analysis’ MotionScan technology, they appear to be going all out to capture the performances of apparently over 200 characters.


While the visual benefits are obvious, I can’t help but wonder just how much it cost the production to secure so many actor contracts when not only voices but likenesses are required. Additionally, production scheduling must be one of the hardest parts of the game, considering that at the time of writing they’re still hiring senior roles for the cinematics team around six months before the planned release date.

That said, I’ve really been keeping an eye on this since E3, so I can’t wait for spring to roll around; plus, I’ve been a noir nut since a Humphrey Bogart stint last summer.

Read more about the MotionScan tech and process here.

UPDATE: They’re not still hiring, as the front-page job posting on their site is dated 2008. Makes me feel better about the frequency of my own posts.

Activision CEO Bobby Kotick says a lot of things. While he has talked at previous investor conferences about facial animation (curious in itself as an investor buzz-topic), he joins a growing line of non-animators claiming to have overcome the biggest challenge in CG acting. Speaking at the recent Bank of America Merrill Lynch conference:

“This has been the Holy Grail in a lot of respects for video games – the ability to have characters on the screen that you can have an emotional connection with. The medium for the last 25 years has been very visceral, interactive, immersive medium – but it was very hard to have characters to actually have empathy towards or an emotional connection with… or that might make you laugh or make you cry; be some catalyst for an emotional reaction… Call Of Duty: Black Ops is the first game where we’ve been able to perfect the facial animation, mouth movement technology so that the lines that are being delivered are believable. The facial animation looks like a real person.”

As to my position on the still-theoretical Uncanny Valley, I’m convinced it certainly exists in Masahiro Mori’s pure sense, as I’ve seen (or, to put it correctly, felt) it in films like Beowulf and The Polar Express, though in games we often mistakenly attribute it to a combination of bad rigging and animation – failures that don’t even approach the Uncanny Valley.


Once again, the Japanese Softimage site has posted information on another showpiece title – and they spend a good amount of time talking about how Softimage interfaces with MotionBuilder. This is encouraging for me, as I’ve decided to dive fully into MotionBuilder for my current project after finding it to be the most rounded solution for mocap, keyframe and facial animation out there.

Final Fantasy XIII Hair Rig

When we initially showed Mass Effect at E3 2006, I recall a handful of Square developers attending to evaluate the facial animation. While their production methods look somewhat dated due to the long development cycle, playing FFXIII shows the eventual result to be outstanding – presumably due to their dedicated engine for facial close-ups and meticulous planning.


Still with Capcom’s fighter, the more I play it the more I realise the actual animation is merely “functional”, but I imagine that’s what is required to ship a reboot of a franchise where every animation is subject to timing changes for game balancing throughout the project. What appeals most about the visuals are the incredibly solid models and their accompanying rigging and facial poses, so it’s nice to see that the Japanese Softimage site has a page up regarding both these aspects (with a link to another page demonstrating Resident Evil 5’s volume-retaining arm rig too). Check it out here.

Via the Google translation I see that the game has 25 characters of around 16,000 polygons each, and some 5,000 animations. The rigging videos are of most interest however, highlighting both their facial and finger sliders and the unique controls for Dhalsim’s squash-and-stretch limbs. In a break from what I’m used to, the team take a less modular approach to facial animation, with broad sliders for whole facial expressions as opposed to sliders for each area of the face; the latter affords the animator greater control, but proves more time-consuming and is prone to going off-model. This might be a viable approach with such stylised characters however, and the page goes on to list the variables they control.
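
To make that trade-off concrete, here is a minimal Python sketch contrasting the two slider schemes; the channel names and ratios are illustrative assumptions, not data from Capcom’s actual rig.

```python
# Hypothetical contrast of the two slider schemes described above.

# Broad-expression scheme: one "smile" slider fans out to many
# channels via fixed ratios baked in by the rigger.
SMILE_RATIOS = {
    "mouth_corner_up_L": 1.0, "mouth_corner_up_R": 1.0,
    "cheek_raise_L": 0.6, "cheek_raise_R": 0.6,
    "eye_squint_L": 0.3, "eye_squint_R": 0.3,
}

def apply_broad_slider(value: float) -> dict:
    """One 0-1 slider drives every channel at a fixed ratio,
    keeping the face on-model but limiting per-region control."""
    return {channel: ratio * value for channel, ratio in SMILE_RATIOS.items()}

def apply_region_sliders(values: dict) -> dict:
    """Per-region scheme: the animator keys each channel directly -
    finer control, but more keys and easier to drift off-model."""
    return dict(values)

# One broad key replaces six per-region keys:
print(apply_broad_slider(0.8))
```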


More Metal Gear Details

January 25th, 2009

At the risk of coming across as a fanboy, here is a second dose of Metal Gear Solid 4 details divulged on the net. It appears that the Kojima Productions team did the rounds quite a bit post-release, as this source includes yet more images and information on the making of the game.

The image to the side shows the skeleton used for main protagonist Snake, revealing the inclusion of deformation bones to maintain volume at the elbows, knees and wrists, on top of the now-standard twist bones for the shoulders, hips and wrists. Unidentifiable, however, are the curious bones at the neck – perhaps to aid shoulder deformation, or simply to attach weapons to? (A small code sketch of the twist-bone idea follows the stats below.)

Some stats from the article:

  • 115 bones in total, comprising:
  • 36 in the face.
  • 47 in the body.
  • 32 in the hands (3 for each finger, with an additional bone on each hand between the thumb and index finger – presumably to maintain volume).
  • 1,700 animations, over MGS3’s 1,200.
  • 14,000 polygons, up from MGS3’s 4,400.
  • 5MB of textures, with a 512×512 map for the face and a 1024×1024 for the body.
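
As a rough illustration of those twist bones, here is a minimal Python sketch that spreads wrist roll along a forearm chain; the bone names, two-bone count and linear falloff are my assumptions, not details from the article.

```python
# Rather than rotating the forearm as one rigid segment (which
# "candy-wrappers" the skin at the wrist), the roll is spread
# across intermediate twist bones.

def distribute_twist(wrist_twist_deg: float, twist_bones: list) -> dict:
    """Each successive bone carries a greater share of the wrist's
    roll, so the last bone matches the wrist exactly."""
    n = len(twist_bones)
    return {
        bone: wrist_twist_deg * (i + 1) / n
        for i, bone in enumerate(twist_bones)
    }

# 90 degrees of wrist roll split across two forearm twist bones:
print(distribute_twist(90.0, ["forearm_twist_01", "forearm_twist_02"]))
# -> {'forearm_twist_01': 45.0, 'forearm_twist_02': 90.0}
```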

Additionally, a higher-res screenshot of the FaceManager facial animation sliders allows us to peer deeper into the variables used to bring their fantastic characters to life. Here’s the modest list of facial expression sliders to accompany their similarly conservative facial bone count:

  • Nose_Up
  • Open_Jaw L/R
  • Smile L/R
  • Anger L/R
  • Kiss L/R
  • Frown L/R
  • Extra_A L/R
  • Extra_B L/R

One imagines the last two to be unique to each character, and there are clearly additional tabs for Phonemes, Eyes and Wrinkles.
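
To picture how paired L/R sliders like these might combine into a pose, here is a speculative Python sketch; the 0–1 range, clamping and resolution rules are my own assumptions rather than anything shown of FaceManager itself.

```python
# Resolving symmetric and asymmetric slider input into per-side
# pose weights, using the expression names from the list above.

EXPRESSIONS = ["Open_Jaw", "Smile", "Anger", "Kiss", "Frown", "Extra_A", "Extra_B"]

def resolve_pose(sliders: dict) -> dict:
    """A bare name like 'Smile' drives both sides; 'Smile_L'/'Smile_R'
    override it individually for asymmetric poses. Values clamp to 0-1."""
    pose = {}
    for expr in EXPRESSIONS:
        for side in ("L", "R"):
            value = sliders.get(f"{expr}_{side}", sliders.get(expr, 0.0))
            pose[f"{expr}_{side}"] = max(0.0, min(1.0, value))
    pose["Nose_Up"] = max(0.0, min(1.0, sliders.get("Nose_Up", 0.0)))
    return pose

# A crooked half-smile: full on the left, slight on the right.
print(resolve_pose({"Smile_L": 1.0, "Smile_R": 0.25}))
```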

I’ve been feeling for some time now that Japanese developers have been falling behind their western counterparts on the technology side of game development, so it’s always good to hear that the Metal Gear Solid team still stand up as a cutting-edge developer – even more so when you learn this via a huge drop of “behind-the-scenes” images from one of the largest games to be released this year.

A few weeks back, details of the facial animation rig and other workflow info were posted on the Japanese XSI website, and I was planning to extract information via the Google translation and observation alone, but someone beat me to it (and managed to do a much better job than I ever would have). Head on over to Chris Evans’ (Tech-Art Lead at Crytek) blog for the full translations.

Regarding the facial setup, it looks very reminiscent of the method I saw presented at ADAPT 2007 by Aaron Holly of Disney. This involved a similar setup of a bone rig driven by a mesh, giving the following two important advantages (sketched in code after the list):

  1. It is highly flexible and can be moved between multiple similar faces, as the animation is stored on a NURBS mesh that drives the bones rather than on the bones directly, therefore allowing for varying bone positions.
  2. If using a pose-based facial animation solution such as FaceFX, the bones travel along the curve of a NURBS surface rather than in a simple linear translation, therefore better mimicking the movement of skin across the skull.
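
Here is a minimal Python sketch of that second advantage, using a quadratic Bézier as a stand-in for the NURBS surface; the control points and blend parameter are illustrative assumptions.

```python
# A pose-blend parameter t moves a bone along a curve (hugging the
# skull) instead of lerping between its rest and target positions.

def lerp(a, b, t):
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def bezier_quadratic(p0, p1, p2, t):
    """De Casteljau evaluation: the path bows toward the middle
    control point, approximating skin sliding over bone."""
    return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)

rest = (0.0, 0.0, 0.0)    # bone position in the rest pose
bulge = (0.5, 0.4, 0.0)   # control point pushed out along the skull
target = (1.0, 0.0, 0.0)  # bone position at the full pose

for t in (0.0, 0.5, 1.0):
    print(t, "linear:", lerp(rest, target, t),
          "curved:", bezier_quadratic(rest, bulge, target, t))
# At t=0.5 the curved path sits 0.2 units off the straight line,
# keeping the joint from cutting through the face's volume.
```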

This is certainly something I’d be keen to try in the near future, given that it now appears to have been successfully put through a full videogame production.