
December 9, 2009

Filed under: gaming»perspective

Uncanny

I'm thrilled, personally, to see actual actors doing voice and motion work for video games, after years of Resident Evil-style butchery. Not to mention that it's nice to see Sam Witwer (Crashdown from BSG) getting work as the Apprentice in Star Wars: The Force Unleashed, or Kristen Bell (Veronica Mars) taking a bit part for the Assassin's Creed games. But friends, I have to say: the weird, digital versions of these actors used onscreen are freaking me out.

We have truly reached the point of the uncanny valley in terms of real-time 3D, which is kind of impressive if you think about it. Or horrifying, if you try to play these games and are interrupted at regular intervals by dialog from cartilage-lipped, empty-eyed mannequins. It's actually made worse by the fact that you know how these actors are supposed to look, giving rise to macabre, Lecter-esque theories to explain the discrepancies between their real-life and virtual appearances. Don't get me wrong--I'm glad that this level of technical power is available. I'm just thinking it would be nice to be more selective about how it's used.

The problem reminds me of movie special effects after computer graphics really hit their stride--say, around the time I was in high school, and George Lucas decided to muck around with the look of the original Star Wars trilogy, perhaps concerned that they lacked the shiny, disjointed feel of the prequels. In one scene, for example, he added a computer-generated Jabba the Hutt getting sassed by Han Solo, even though it really added nothing to the film apart from a sense of floaty unreality.

The thing is, there wasn't anything wrong with the original effects in Star Wars. They've held up surprisingly well--better than Lucas's CG replacements. The same goes for films like Star Trek II or Alien or The Thing. Even though the effects aren't exactly what we'd call "realistic," they don't kill the suspension of disbelief--and they're surprisingly charming, in a way that today's effortless CG creations are not. Scale models and people in rubber suits have a weight to them that I, personally, miss greatly (the most recent Indiana Jones movie comes to mind). When the old techniques are used--Tarantino's Death Proof, for example, or Guillermo del Toro's creaturescapes--the results have an urgency and honesty that's refreshing.

Back in videogameland, it amazes me that no one looks at their cutscenes during development and asks themselves "is there a better way? Is the newest really the best?" At one point, right when CD-ROM became mainstream, it looked like full-motion video with real human actors might be the future, a la Wing Commander. Somehow, it didn't happen (fear of Mark Hamill, maybe? Psychological scarring from Sewer Shark?). But when you're watching robot Kristen Bell shudder through a cutscene in Assassin's Creed, it's hard not to wish that you could just watch the real Bell, even through a cheesy green screen.

Or, at the very least, it'd be nice if more developers would try alternatives instead of pushing ahead with a character look that they're just not pulling off. I have had harsh words for Mirror's Edge--I believe I compared it to a flammable kitchen appliance--but the developers' decision to create animated interstitial movies instead of real-time rendering was bold and interesting, particularly since the game actually boasted very well-crafted, well-animated character models. In-engine cutscenes may have been a great bullet point when we made the transition to hardware-based 3D, but that novelty has passed. We've worked for years to get to the uncanny valley: it's time to find a way out.
