Virtual Reality And Game Animation

I’ve long held that the technical (i.e. non-artistic) pinnacle that videogames can attain, when we finally reach the point at which technology no longer holds us back, is the complete virtual reproduction of an immersive world in the manner of Star Trek TNG’s Holodeck. If we’re looking to offer wholly-immersive experiences in a virtual environment, then this is the absolute zenith.

As such, my interest has been piqued for some time now by the affordability and accessibility of virtual reality’s second coming in the form of the increasingly popular Oculus Rift headset. I lapped up the VR-related talks at this year’s GDC, convinced the immediate benefits would outweigh the criticisms from developers at Valve and from the Oculus team themselves. I appreciated the latter’s “mea culpa” approach, and the former’s assertion that, while far from finished, this is the first step on the long road to virtual reality becoming viable for games, comparing the current situation to the early days of PC 3D accelerator cards.

With that in mind, I’m approaching VR not just as a gamer and potential customer, but also as a game animator. Will all games in the future go first-person, leaving animators tasked only with NPC and multiplayer-character movement? Or will there still be a place for 3rd-person games, albeit viewed from the perspective of a fully-immersed, benevolent overseer?

Now that I’ve had more than one session with the headset on a variety of game demos, the long and short of it is that we’ll still be animating player characters for the foreseeable future. I believe the current incarnation will be championed only by hardcore early adopters, but it’s worth noting what immediately does and does not work, irrespective of this initial hardware iteration. Oculus themselves drew our attention to many game design staples that, if we go down this route, will require serious re-evaluation to work from this new immersive perspective, and I would tend to agree:

1. No more HUD

2D HUD overlays no longer work as viable methods of conveying information to the player. Not only do they not fit in the world (I experienced leaderboards at incorrect depths/distances), but the very act of looking towards the edges of the screen is instantly undesirable. While Oculus pointed to Dead Space as a shining example of rendering health bars etc. onto the player character, I would expect fully-immersive (especially non-sci-fi) games to place the onus on animators to provide different sets of animation to show damage, fatigue and so on. This is something we’ve explored in the past, often with no more than a tired idle or a limping walk-cycle, but it may well become a common requirement with much more effort spent on it.
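To make that concrete, here’s a minimal sketch of animation-driven state feedback: swapping whole locomotion sets based on the player’s condition so that posture and gait carry the information a health bar used to. The Character struct, set names and thresholds below are entirely hypothetical, purely for illustration; a real engine would hook this into its own animation state machine.

```cpp
// Sketch only: hypothetical types and thresholds, not a real engine API.
#include <string>

struct Character {
    float healthNormalized;  // 0.0 = dead, 1.0 = full health
    float fatigueNormalized; // 0.0 = fresh, 1.0 = exhausted
};

// Pick which full-body animation set should drive the character this frame,
// so the player reads "hurt" or "tired" from posture and gait rather than a 2D overlay.
std::string SelectLocomotionSet(const Character& c)
{
    if (c.healthNormalized < 0.25f)  return "locomotion_injured";   // heavy limp, guarding a wound
    if (c.fatigueNormalized > 0.75f) return "locomotion_exhausted"; // slumped idle, laboured run
    if (c.healthNormalized < 0.6f)   return "locomotion_hurt";      // subtle favouring of one side
    return "locomotion_default";
}
```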

2. Cameras open further

Camera FOVs will immediately become wider to better replicate the human eye’s field of view when constrained by the VR screens. During my sessions, the wide-angled Team Fortress 2 camera won out over the narrower Mirror’s Edge and the highly constrictive Dear Esther. This will likely become less of an issue as hardware improves and the visor screens better reproduce a wider field of vision, but for now the displays require an unnatural distortion of perspective to make up for this shortcoming, further warping our characters on close inspection. I should also note that the resolution is surprisingly low in the dev-kit version (though it’s expected to almost double for the full release), giving the impression of having my eyes squashed up close to an old scan-line arcade cabinet, with distant objects in Dear Esther becoming all but indecipherable. It remains to be seen just how off-putting this is for most players.
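As a rough illustration of where that distortion comes from, the sketch below compares the projection focal length at a conventional monitor FOV with a much wider one; the numbers are my own examples, not SDK values.

```cpp
// Rough illustration: the projection "focal length" shrinks quickly as the
// field of view opens up, which exaggerates perspective on nearby geometry.
#include <cmath>
#include <cstdio>

// Vertical focal length of a standard perspective projection for a given FOV
// (the [1][1] term of the projection matrix).
float FocalLength(float verticalFovDegrees)
{
    const float halfFovRadians = verticalFovDegrees * 0.5f * 3.14159265f / 180.0f;
    return 1.0f / std::tan(halfFovRadians);
}

int main()
{
    // A typical monitor FOV versus something closer to what a VR headset asks for.
    std::printf("60 deg:  %.3f\n", FocalLength(60.0f));  // ~1.732
    std::printf("110 deg: %.3f\n", FocalLength(110.0f)); // ~0.700
    // The much smaller focal length stretches off-centre and close-up objects,
    // which is the warping that shows on characters under close inspection.
    return 0;
}
```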

3. No more cutscenes

Perhaps the one that will have the biggest impact on our discipline: cutscenes are no longer a viable storytelling method. Oculus mentioned this during their talks, and it will likely have wide-ranging effects on the videogame medium if we see eventual wide-scale adoption of virtual reality. Again, with immersion being not just desirable but a prerequisite, Oculus warned us that it is now jarring to have camera control and perspective ripped away from us, but nothing could have prepared me for just how bad this was. When playing Mirror’s Edge, the early tutorial levels required that I briefly watch another runner perform an action before following suit. While this was no more a cutscene than simply locking the camera’s rotation onto the target character, it caused me to lose balance several times, and with each scene I became increasingly nauseous, to the point of cancelling the first demo session to seek fresh air outside.

A second session was less dramatic, and admittedly, reactions differ from person to person. Perhaps it’s something players will simply get used to, but the optimistic part of me is inclined to hope that these ill-effects will be avoided entirely as we learn what works best in terms of cameras and behaviour, just as we did in the move from 2D to 3D. If the wrenching of control from the player that we previously experienced in cutscenes is now magnified to the degree I experienced, then I can see a greater push for more immersive and dynamic storytelling that keeps players in the game – something I am all for.
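If grabbing the camera is off the table, one alternative worth sketching is to steer attention without ever seizing the view: bias the player’s body orientation towards whatever you want them to watch, at a capped rate, while the headset’s head tracking remains untouched. The CameraRig struct and rates below are a hypothetical illustration of that idea, not anything the demos or Oculus actually do.

```cpp
// Hypothetical sketch: nudge the body yaw towards a point of interest instead
// of cutting to a scripted camera; head tracking stays under player control.
#include <algorithm>
#include <cmath>

struct CameraRig {
    float bodyYawDegrees;        // scripted systems may nudge this...
    float trackedHeadYawDegrees; // ...but this always comes straight from the headset
};

// Shortest signed difference between two angles, in degrees.
static float AngleDelta(float from, float to)
{
    return std::fmod(to - from + 540.0f, 360.0f) - 180.0f;
}

// Per-frame nudge: ease the rig towards the target at a capped rate instead of snapping.
// The final view is bodyYaw + trackedHeadYaw, so the player never loses the headset.
void NudgeToward(CameraRig& rig, float targetYawDegrees, float dt, float degreesPerSecond)
{
    const float delta = AngleDelta(rig.bodyYawDegrees, targetYawDegrees);
    const float step  = std::clamp(delta, -degreesPerSecond * dt, degreesPerSecond * dt);
    rig.bodyYawDegrees += step;
}
```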

4. Games will become slower-paced and more detailed

While it may be purely subjective, others agreed that shooting a tiny enemy at a fast pace held far less interest than closely examining the interesting objects and locations of Dear Esther, or just enjoying the vast blocky vistas of Minecraft. I nearly fell backwards at one point marvelling at the ceiling rafters in TF2, preferring to explore rather than compete. Importantly, there was an amazing thrill in standing right next to a life-size Medic – I can’t wait to interact with (and animate) highly-realised characters like this on a deeper level!

Like all new hardware, the Oculus Rift will come of age when a game is designed specifically for VR: one that not only takes full advantage of the system (and, in this case, avoids all the pitfalls), but that simply could not have existed before, with full immersion making sense from a narrative and design perspective.

Take a look at the video above incorporating the Razer Hydra as an input method – I for one could spend hours in a Skyrim-esque RPG with that level of interaction. Now fill it with rich character interaction and a more cerebral adventure, and I’m not only sold on the hardware, but also have a new creative challenge, animation-wise, laid out for years to come.