Artificial Intelligence Could Prevent the Next Video Game Animation …

Posted: May 4, 2017 at 3:19 pm

Human character animation has gotten much better over the years, but it's still one of the most recognizable problems in video games. Animations are normally a predetermined set of canned motions, and while they look real enough in the right setting, they can totally break the immersive experience when they stray out of bounds. The uncanny valley is a particularly hard one to escape.

Now, a research team from the University of Edinburgh has developed a new way to animate game characters using a neural network, which could help developers make more fluid and realistic animations while decreasing the system resources and time involved. The video demonstrating the new technology is legit:

The minds behind this video, and an accompanying paper published in ACM Transactions on Graphics, are Daniel Holden, Taku Komura and Jun Saito. When I caught the video, I immediately reached out to Holden for more info. What exactly makes this different from other animation methods, and what's happening in the neural net to make the magic happen?

Neural networks (or NNs) are a way to train a computer on huge amounts of example data, encoded as millions of differently weighted connections. Using those learned weights, a NN can create completely new outputs on its own, based on the data it has to reference. The emerging technology is commonly used in facial recognition, image processing, and stock market prediction applications.
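To make the idea concrete, here's a minimal sketch of a network's forward pass in Python. The layer sizes, names, and random weights are hypothetical stand-ins (a trained network would have learned its weights from data), not anything from the Edinburgh team's system:

```python
import numpy as np

# Hypothetical two-layer network: sizes and names are made up for
# illustration, not taken from the paper.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # hidden-layer weights
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # output-layer weights

def forward(x):
    # Weighted sums plus a ReLU nonlinearity: these weights are
    # exactly what training adjusts so inputs map to sensible outputs.
    hidden = np.maximum(0.0, W1 @ x + b1)
    return W2 @ hidden + b2

# e.g. an 8-dim input (think: controller state) -> 4-dim output (think: pose parameters)
print(forward(rng.normal(size=8)))
```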

"The workings of the neural network are themselves quite abstract and hard to understand," Holden admitted in an email. But basically, what he and his colleagues have done is employ a system called a phase-functioned neural network, in which the weights controlling a character's movement and interaction within the environment can change on the fly. His aim was to develop intricate, human-like cyclical movement that reacts appropriately to user inputs. As players mash on their gamepads or move a mouse, the network's weights shift smoothly through the motion cycle, creating a seamless animation.

"We change the weights of the neural network depending on what point in time in the locomotion cycle the character is," explained Holden, those weights being the data that influences what the animation will be. "For example, when the character puts their left foot down the weights of the neural network are different to when the character puts their right foot down."
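Here's a minimal sketch of that mechanism in Python. The paper describes the phase function as a cyclic cubic Catmull-Rom spline that blends four learned sets of network weights as the footstep cycle advances; the sizes and variable names below are hypothetical, not the authors' code:

```python
import numpy as np

def catmull_rom(w, y0, y1, y2, y3):
    # Cubic Catmull-Rom interpolation: returns y1 at w=0 and y2 at w=1,
    # moving smoothly between the four control points in between.
    return (y1
            + w * 0.5 * (y2 - y0)
            + w**2 * (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3)
            + w**3 * (-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3))

def weights_at_phase(phase, control):
    # 'control' holds four learned weight sets for one layer; the phase
    # (0 to 2*pi over one footstep cycle) picks where on the cyclic
    # spline between them the character currently is.
    t = 4.0 * phase / (2.0 * np.pi)
    k, w = int(t) % 4, t % 1.0
    return catmull_rom(w,
                       control[(k - 1) % 4], control[k],
                       control[(k + 1) % 4], control[(k + 2) % 4])

# Hypothetical: four control weight matrices for a single 16x8 layer.
rng = np.random.default_rng(1)
control = [rng.normal(size=(16, 8)) for _ in range(4)]
W_now = weights_at_phase(np.pi / 3, control)  # this frame's layer weights
```

Each frame, the engine evaluates this blend and then runs an ordinary forward pass with the resulting weights, so the network's behavior varies smoothly over the gait cycle without any training happening at runtime.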

The result? A drastic reduction in the amount of time and effort animators need to achieve that perfect gait. The video above used only about 1.5GB, or two hours, of motion capture data, and the animation doesn't need as much on-board processing power to render. Holden thinks it will take some of the burden off the animation programmers who maintain today's hugely complex animation systems.

It took 30 hours of NN training and 4 million data points to create the animation in the video, and, to me, it looks pretty good. The character deftly navigates raised obstacles, speeds up and slows down appropriately based on the input, and reacts accordingly to walls and bridges. Obviously, third-person games could benefit from the new approach, but VR is another medium where, for maximum immersion, we're going to need hyper-realistic motion.

Neural networks have been used in gaming before, notably for developing opponent AIs, but this is a great example of how the technology can make developers' lives easier. Holden just started a new R&D job at Ubisoft, and while he was unable to say whether the tech will be used there, he looks forward to seeing more implementations in the future.

"I hope that this technique does change gameplay. Hopefully these sort of technologies will allow game designers to be more adventurous with the kind of environments they create," said Holden.

Bryson is a freelance storyteller who wants to explore the universe with you.
