Researchers from the University of California, Berkeley developed a reinforcement learning-based system that can automatically capture and mimic the motions it sees in YouTube videos.
“Data-driven methods have been a cornerstone of character animation for decades, with motion-capture being one of the most popular sources of motion data. Mocap data is a staple for kinematic methods, and is also widely used in physics-based character animation,” the Berkeley researchers stated in their paper.
Using NVIDIA GeForce GTX 1080 Ti and TITAN Xp GPUs with the cuDNN-accelerated TensorFlow deep learning framework, the team trained their reinforcement learning system on several datasets to estimate actors' poses and extract motion data from a variety of video clips.
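As a rough illustration of the kind of network such a system might train, the sketch below builds a small continuous-control policy with tf.keras. The layer sizes and input/output dimensions are placeholders chosen for the example, not the architecture described in the paper.

```python
# Illustrative sketch only: a small policy network of the kind commonly used
# in continuous-control reinforcement learning. Sizes are assumptions, not
# the Berkeley team's actual architecture.
import tensorflow as tf

def build_policy(obs_dim: int, act_dim: int) -> tf.keras.Model:
    """Map a character-state observation to the mean of a Gaussian action."""
    inputs = tf.keras.Input(shape=(obs_dim,))
    hidden = tf.keras.layers.Dense(1024, activation="relu")(inputs)
    hidden = tf.keras.layers.Dense(512, activation="relu")(hidden)
    action_mean = tf.keras.layers.Dense(act_dim)(hidden)  # unbounded joint targets
    return tf.keras.Model(inputs, action_mean)

policy = build_policy(obs_dim=197, act_dim=36)  # dimensions are placeholders
```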
Given a video clip, the algorithm estimates the pose and movement of the actor in each frame. In this case, the team trained their algorithm to perform more than 20 acrobatic skills, including backflips, cartwheels, and even martial arts moves.
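Conceptually, the per-frame pose extraction step looks something like the sketch below, where estimate_pose is a hypothetical stand-in for a pose estimator rather than the team's actual model.

```python
# Sketch of frame-by-frame pose extraction from a video clip.
# estimate_pose() is a hypothetical callable standing in for a real
# pose-estimation model; it is not the paper's system.
import cv2

def extract_pose_trajectory(video_path, estimate_pose):
    """Run a pose estimator on every frame and return the pose sequence."""
    capture = cv2.VideoCapture(video_path)
    poses = []
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        poses.append(estimate_pose(frame))  # per-frame joint estimate
    capture.release()
    return poses
```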
“The primary contribution of our paper is a system for learning character controllers from video clips that integrates pose estimation and reinforcement learning. To make this possible, we introduce a number of extensions to both the pose tracking system and the reinforcement learning algorithm,” the researchers stated in their paper.
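One common way to couple the two stages in this line of work is a pose-imitation reward: the closer the simulated character's pose is to the reference pose recovered from the video, the higher the reward the reinforcement learner receives. The sketch below shows that general form; the error scale is an arbitrary assumption, not a value from the paper.

```python
# Sketch of a generic pose-imitation reward, common in physics-based
# character animation. The scale factor is an assumption for illustration.
import numpy as np

def imitation_reward(sim_pose: np.ndarray, ref_pose: np.ndarray,
                     scale: float = 2.0) -> float:
    """Exponentiated negative pose error between simulation and reference."""
    pose_error = np.sum(np.square(sim_pose - ref_pose))
    return float(np.exp(-scale * pose_error))
```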

The system can also infer a pose from a single still image and predict how the actor's motion might continue.
A paper describing the method was published on arXiv this week.