Another first for this site: a guest post from EA software engineer JC Delannoy (@jcdelannoy), reprinted with permission. It’s refreshing to have some general #animtips from an animation programmer.

You can read the original post here.


I was recently approaching a milestone on my current video game project and found myself spending an afternoon fixing a number of small issues in our animation data and code. These were the types of problems I had fixed many times before, and, feeling a little frustrated, I started jotting down some basic tips and tricks that I have used in the past and that could be useful to other developers starting to work with in-game animation.

Respond to joystick changes once they have stabilized

Modern gamepads typically contain digital buttons as well as analog triggers and joysticks, both of which can output a near-continuous range of values. Joysticks can output any angle between 0 and 360 degrees and a deflection radius between 0 and 1. Triggers, while they may look like binary buttons, are pressure sensitive and can output any normalized value between 0 and 1.

It’s important to be mindful of this difference when binding analog controls to decision points for a character’s actions and animations. Users take a certain amount of time to change the value of any analog control from 0 to 1, and if multiple segments of that range are used to trigger different types of actions, some of the intermediary values the control passes through could trigger actions the user never intended.

For example, there are generally two different ways players will execute a 180-degree turn with a joystick (illustrated below).

[Figure: joystick_deflection — the two paths a stick can take during a 180-degree turn: A through the origin, B around the rim]

In both cases, potential problems can occur. In case A, because the joystick passes through the origin, some implementations will incorrectly trigger a stopping animation on the character because the joystick output has zero radius. On my current project we don’t allow stopping animations to be interrupted until the squash/slowdown part of the animation has completed, so the user could not start moving again immediately after deflecting the joystick in the other direction. In case B, the game could attempt to trigger a 90-degree turn if the only criterion is for the joystick to be in a specific segment of the circle.

Of course, if the action/animation that was just triggered is immediately interruptible, then it will be replaced once the user reaches their final position on the analog input, and this problem can be ignored to some extent. If the triggered action is not interruptible, users will have to wait for the previous action to complete before getting their intended feedback; their input may even be ignored entirely.

The trick to prevent these types of occurrences is to introduce a stabilization criterion for the joystick or any other type of analog input. By taking a windowed average of the last few frames of the analog input and comparing it to the current value, we can determine whether the analog input has stabilized and should be taken into account, or whether it is still changing rapidly and should not be acted on immediately. This can greatly reduce the number of false positives generated when triggering actions via analog inputs. The typical argument against it is that it introduces a few frames of latency in the controls. That is absolutely correct; however, when we used it in many areas of character locomotion on my current project, none of our users could tell the difference once it was tuned appropriately.
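As a minimal sketch of the windowed-average idea, here is one way the stabilization check could look. The class name, window size and tolerance are all illustrative tuning values of my own, not from the original post:

```python
from collections import deque

class StickStabilizer:
    """Windowed-average stabilizer for a 2D analog stick.

    The stick value is only reported as stable when the current
    sample stays close to the average of the last few frames.
    """

    def __init__(self, window=5, tolerance=0.08):
        self.window = window
        self.tolerance = tolerance
        self.samples = deque(maxlen=window)

    def update(self, x, y):
        """Feed one frame of raw stick input; returns (stable, x, y)."""
        self.samples.append((x, y))
        if len(self.samples) < self.window:
            return False, x, y  # not enough history yet
        avg_x = sum(s[0] for s in self.samples) / self.window
        avg_y = sum(s[1] for s in self.samples) / self.window
        # Stable when the current sample sits within tolerance of the average.
        dist = ((x - avg_x) ** 2 + (y - avg_y) ** 2) ** 0.5
        return dist <= self.tolerance, x, y
```

Gameplay code would then only commit to an action (stop, 90-degree turn, etc.) when `update` reports the stick as stable, rather than on the raw per-frame value.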

Have enough frames of animation to cover blending out

Many characters in video games nowadays are true athletes. They can run at high speeds, jump, run some more, juke and keep running, all while maintaining a large amount of forward momentum. Similarly, when transitioning from one action to the next in a game, it’s very important for the transitions and motion of the character to be smooth and believable. One way to achieve this is to use slightly longer blend times between moves, which helps maintain a smooth transition from one animation to the next. However, for this to work properly it is essential for the outgoing/slave animation to keep updating and moving according to its momentum for the duration of the blend.

If the motion remaining on the outgoing animation is shorter than the length of the blend to the next animation, the blending algorithm will end up using a static pose with no defined velocity as the slave. This will often be interpreted as zero speed. Depending on the blend weight remaining on the slave animation, this abrupt drop to zero speed can create discontinuities in the blended speed of the character during the transition.

Different types of blending algorithms can reduce this problem, but the blend will generally look smoother if the slave animation still contains key data for the duration of the blend. Consider the speed chart of a character running forward and transitioning to a quicker run forward with a long crossfade blend. In one case, the blend begins many frames away from the end of the asset; in the other, the blend begins when the slave animation has fewer frames left than the length of the blend.

[Figure: blend_out — speed charts of the same transition, with and without enough slave animation frames to cover the blend]

Notice that the blended speed contains a discontinuity in the bottom case, caused by the slave animation’s speed suddenly dropping to zero. In the top case, however, the character’s speed remains exactly where we want it throughout the blend, thanks to the large number of animation frames left after the transition point in the slave animation.
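The discontinuity can be reproduced with a toy model of a linear crossfade over per-frame speeds, where an exhausted slave animation is held as a static pose (zero speed). This is a simplified sketch, not a real blending implementation:

```python
def crossfade_speed(slave_speeds, master_speeds, blend_frames):
    """Blend per-frame speeds over a linear crossfade.

    slave_speeds: speeds remaining on the outgoing animation from the
    transition point; once exhausted, the pose is held, which we model
    as zero speed (the static-pose case described in the text).
    """
    blended = []
    for i in range(blend_frames):
        w = (i + 1) / blend_frames  # weight of the incoming animation
        slave = slave_speeds[i] if i < len(slave_speeds) else 0.0
        blended.append((1.0 - w) * slave + w * master_speeds[i])
    return blended
```

With four slave frames left, `crossfade_speed([4, 4, 4, 4], [6, 6, 6, 6], 4)` ramps smoothly from 4.5 to 6.0; with only two frames left, the blended speed dips mid-blend when the slave runs out, exactly the discontinuity in the bottom chart.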

Avoid losing interest motion while blending into assets

Most motions that a character performs contain an anticipation, an action and a follow-through. Take away any one of these segments and the character looks unnatural. For example, just try to imagine a baseball player at bat swinging without a windup or a follow-through; it’s impossible. Unfortunately, creating this type of unnatural transition is all too easy in video games.

This phenomenon is pretty apparent in basketball games, where the ballcarrier is constantly dribbling the ball down to the ground and catching it on its way up. In other words, he is almost always either winding up to throw the ball to the floor or following through from catching the ball on its way up. Each of these segments of throwing the ball and subsequently catching it must be played through to completion or the dribble will look unnatural. Great care must be taken when transitioning during any one of these segments so that it can be replaced with something that maintains the flow of anticipation, action and follow-through. A problem that comes up often is transitioning into a new asset that contains an anticipation segment right at the beginning of the animation. If our crossfade blend is too long, players do not see the anticipation in our animation because the previous animation still dominates the blend. The resulting motion in game does not look realistic.

There are ways to fix this by dynamically varying the type or length of the crossfade blend. However, it is best to avoid the situation altogether by authoring the assets so that there aren’t any vital poses in the animation until the incoming animation has reached a large enough weight in the blend. This ensures that the entire anticipation, action and follow-through all read properly on screen. As an example, if you are blending from a baseball player standing ready to swing a bat, with a long blend into a swing animation, do not have the player begin the actual swinging motion of the bat too early in the animation asset.
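One simple form of the dynamic-blend-length fix is to clamp the crossfade so the incoming animation becomes dominant before its first vital pose. The function name, the 0.6 dominance threshold and the assumption of a linear crossfade are all my own illustrative choices:

```python
def choose_blend_length(default_blend, first_vital_pose_frame,
                        dominant_weight=0.6):
    """Pick a crossfade length (in frames) so the incoming animation
    reaches `dominant_weight` before its first vital pose.

    With a linear crossfade, weight w is reached at frame
    w * blend_length, so we need blend_length <= vital_frame / w.
    """
    max_blend = first_vital_pose_frame / dominant_weight
    return min(default_blend, max_blend)
```

So a 20-frame default blend into an animation whose windup pose lands at frame 6 would be shortened to 10 frames, while an animation with no early vital pose keeps the full default blend.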

Scaling source motion rather than adding or subtracting

We often need to add motion to an animation so that a character reaches a very specific target to interact with; an animation that makes the character step forward to reach a door knob is one example. Games will often have to transform the motion of the animation at runtime to bring the character’s hand close enough to the knob for IK to take over. In this case, don’t just apply the extra motion required at a constant rate over the whole animation; instead, look at when the animation is actually moving and scale the movement speed based on how much extra motion must be applied. In other words, if the character is stationary during any portion of the animation, do not apply any extra motion on those segments; otherwise you can run into issues where the character wants to be idling but is being moved regardless, creating a foot-sliding effect.

[Figure: scale_motion — speed graphs comparing constant extra motion per frame with proportional scaling of the source motion]

The graph on the left shows the speed graph of an animation coming in at a certain speed and slowing down to rest. For our game’s purposes, we want to add an extra 20% of motion to the character during the animation. In one case, we divide the extra motion by the number of frames in the animation and apply it equally on each frame. The resulting effect is that the character continues to have a non-zero velocity even near the end, when we are in an idle/standing pose. In the other case, we add the extra 20% of motion by simply scaling the speed graph up by 20% throughout. The character now moves faster in the moving portions of the animation, and he remains at rest with zero velocity where that is what the animation specifies.
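The two approaches can be contrasted on a list of per-frame speeds. This is a toy sketch with made-up numbers, not a real root-motion system:

```python
def add_motion_uniform(frame_speeds, extra_fraction):
    """Naive approach: spread the extra displacement evenly per frame."""
    total = sum(frame_speeds)
    per_frame = total * extra_fraction / len(frame_speeds)
    return [s + per_frame for s in frame_speeds]

def add_motion_scaled(frame_speeds, extra_fraction):
    """Preferred approach: scale each frame's speed proportionally,
    so frames where the character is at rest stay at rest."""
    return [s * (1.0 + extra_fraction) for s in frame_speeds]
```

For speeds `[3, 2, 1, 0]` and a 20% boost, both variants add the same total displacement, but the uniform version leaves the character moving at 0.3 during the final idle frame (foot sliding), while the scaled version keeps that frame at exactly zero.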

One of the older systems I worked with for getting a character to jump to a very specific height worked in the following way: animations were marked up with their takeoff and landing times, the algorithm calculated the extra displacement required to get the character to the desired height, and it then applied this extra movement equally between the moments when the character took off and landed. We were never quite able to make this motion look natural in all cases because we were ignoring too much of the root motion. The following year we rewrote the system to instead scale the vertical velocity of the character to align the jump properly, and the results looked much more natural. This is because the portions of the jump where the character was not yet moving upward remained roughly the same, and we only sped up the character when he was already moving quickly. By basing everything on the actual motion in the data and only modifying it to hit very specific marks, we achieved better results visually.
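The rewritten approach reduces to a single scale factor on the per-frame vertical velocities. A minimal sketch, assuming the animated rise is the sum of per-frame velocities (so scaling every velocity by k scales the total rise by k):

```python
def scale_jump_velocities(vertical_speeds, authored_height, target_height):
    """Scale per-frame vertical velocities so the integrated rise
    between takeoff and apex matches the target height.

    Because displacement here is the sum of per-frame velocities,
    scaling every velocity by k scales the total rise by k as well.
    Frames with zero velocity (not yet moving upward) stay untouched.
    """
    k = target_height / authored_height
    return [v * k for v in vertical_speeds]
```

A jump authored to rise 1.0 m with per-frame rises `[0.5, 0.4, 0.1]` retargeted to 1.5 m simply becomes `[0.75, 0.6, 0.15]`, with any zero-velocity frames before takeoff unchanged.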

Managing quality on large amounts of animations

During the last few years I’ve had the opportunity to work on many sports video games. These games have the interesting challenge of trying to reproduce the exact motion of specific real-world athletes. So while most games could have one animation for a given in-game action, we would sometimes have dozens of different variations so that the in-game players could more precisely resemble their real-life counterparts. This creates a very high volume of in-game animations, and any operation that needs to be done on every animation manually will eat up a lot of development time. Just imagine a new feature suddenly requiring a new tag to be added manually to a hundred different assets; that takes a lot of time!

For situations like these it is useful to have a framework that allows data verification and fixup scripts to be run over a subset or all of the animation assets. These scripts can perform verification against a specific set of rules, generate a list of assets that should be manually verified, fix a set of issues automatically, or even add metadata at specific moments in specific assets.

It is usually best to fix these types of data problems proactively and thoroughly; if a “fix them as they come up” approach is taken instead, you could be fixing issues here and there for a very long time, constantly interrupting whatever you are doing to deal with one of them. Any time I have spent writing verification scripts to go over data, I have gotten back many times over by not having to play “whack-a-mole” with data issues as they come up. Also, as you gain experience implementing these scripts, you will get better and faster at creating new ones. Having them ready to run makes it easy to re-verify all assets whenever new animations may have been added since the last run; this is quite helpful during production, when new animations are constantly being added to the game.
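The core of such a framework can be very small. In this hedged sketch, assets are stand-in dicts and the rules are simple predicates; a real pipeline would load actual animation metadata instead:

```python
def run_asset_checks(assets, rules):
    """Run a set of rule predicates over every animation asset and
    collect a report of which assets failed which rules.

    `assets` is any iterable of asset records (plain dicts here);
    `rules` maps a rule name to a predicate that returns True when
    the asset passes.
    """
    report = {}
    for asset in assets:
        failures = [name for name, rule in rules.items() if not rule(asset)]
        if failures:
            report[asset["name"]] = failures
    return report
```

For example, running a "has frames" rule and a "has tags" rule over two hypothetical assets yields a report listing only the failing asset and the rules it broke, ready to hand to an animator or a fixup script.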

Let me know what you think

Hopefully some of the tips mentioned above can save you a few hours of effort here and there during your production. At the very least, some of these tips might help you create an environment where it becomes easier to scale up the number of animations and animation transitions in your project. If you have any feedback about this list of simple tips, or have your own set of useful tips for dealing with game animation, please let me know at jcdelannoy AT gmail DOT com. Thanks!