It is only recently, through performance capture advances, that actors can lend not just their voices but synchronized body and facial motion to the animated characters we see on screen.
Creating realistic movement in 2D or 3D characters on screen has always posed a challenge. Since the 1950s, Disney has been well known for bringing live models into its studios, using real animals and real people as reference points for cartoon animators simulating movement. Actress Helene Stanley was best known for productions she never appeared in as herself: she was the live model twirling around the Disney studio in full costume so animators could sketch her and bring their characters to life. Her hand and body movements provided the basis for iconic characters like Cinderella (1950), Anita from 101 Dalmatians (1961), and Aurora from Sleeping Beauty (1959), dancing with the creatures of the forest while singing “Once Upon a Dream.”
Then came the rise of computer animation in the ’70s and ’80s, and the industry boomed. From classic feature films like The Little Mermaid (1989) to more stylized series made for TV like Rugrats (1991-2004), animation was thriving, thanks in large part to streamlined workflows that let creators simulate movement quickly and accurately.
As technology evolved and 3D animation became more popular, live models were gradually replaced by actors in motion-capture (mocap) suits, recorded in a studio setting. With high-precision cameras tracking and capturing every detail, motion-capture was a game changer for productions: the high-quality data it provided transformed animation and post-production pipelines, saving valuable time without compromising on quality.
A synchronized performance
Motion-capture proved a transformative tool, but it’s only in the last few years that the film industry has embraced an actor’s full performance as a digital character, going beyond voice acting alone. Robin Williams’ iconic turn as the Genie in Disney’s animated hit Aladdin (1992) started a trend of casting Hollywood actors as the voices of animated characters that’s still going strong today. And while we recognize the voice instantly, it is only recently that actors have been able to lend more than just their voice to the animated characters we see on screen.
This is all due to the evolution of performance capture — a technology that allows for the recording of not just the body, but the face and voice as well. This process provides a synchronized performance, as opposed to the traditional and often laborious workflow that brings together voice recording, motion-capture, and animation manually. With performance capture, the body, facial expressions, and voice are already in perfect sync from a single source.
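To make that idea concrete, here’s a minimal sketch in Python, assuming a deliberately simplified data model of our own invention (real pipelines are far richer): because body, face, and voice are captured as a single take, every stream shares one timecode, and looking up any moment returns an already-aligned performance.

```python
from dataclasses import dataclass

@dataclass
class PerformanceFrame:
    """One frame of a performance-capture take. Body, face, and voice
    are recorded together, so every stream shares the same timecode."""
    timecode: float          # seconds from the start of the take
    body_joints: dict        # joint name -> (x, y, z) position
    face_blendshapes: dict   # blendshape name -> weight in [0, 1]
    audio_offset: int        # sample index into the voice recording

def frame_at(take, t):
    """Return the frame closest to time t; the body, face, and voice
    data it carries are already in sync, with no manual alignment."""
    return min(take, key=lambda f: abs(f.timecode - t))
```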
Realistic movement as a storytelling device
The richness of the data we gather from performance capture is unparalleled. Optical markers enable high-precision cameras to record the movement of not only the torso and limbs, but hands, fingers, and face, while a head-mounted microphone captures the actor’s voice. That movement is then transposed onto a 3D model using dedicated software. This brings all the benefits of traditional motion-capture, with a crucial difference: more data at the beginning of the pipeline means more freedom further down the line.
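As a rough illustration of that last step, here is a hedged retargeting sketch; the joint names are hypothetical and the transfer is deliberately naive, copying each captured joint rotation onto the matching joint of the character’s rig, where a production solver would also account for differing proportions and constraints.

```python
# Hypothetical mapping from the mocap skeleton to the character rig.
JOINT_MAP = {
    "actor_spine": "char_spine",
    "actor_left_hand": "char_left_hand",
    "actor_jaw": "char_jaw",
}

def retarget(capture_pose, joint_map):
    """Transfer per-joint rotations (Euler angles, in degrees) from the
    captured pose onto the character rig, joint by joint."""
    return {
        rig_joint: capture_pose[mocap_joint]
        for mocap_joint, rig_joint in joint_map.items()
        if mocap_joint in capture_pose
    }

pose = {"actor_spine": (0.0, 12.5, 0.0), "actor_jaw": (8.0, 0.0, 0.0)}
print(retarget(pose, JOINT_MAP))  # these rotations now drive the character
```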
Animators can then transpose an actor’s facial expressions, gestures, and overall body language onto their digital character, which isn’t only a time-efficient solution from a workflow standpoint, but also an artistic choice that allows for a depth of expression that simply wasn’t possible before. With the rise of 3D animation and live-action remakes of classics such as The Jungle Book (2016) and The Lion King (2019), performance capture has become an indispensable part of animation, particularly on productions that aim to bring their characters to life with a realism reminiscent of live-action movies.
In the case of The Jungle Book, the challenge of seamlessly matching Neel Sethi’s performance as Mowgli with the computer-generated animals was the foundation on which the production stood. The filmmakers used real-time rendering techniques such as Simulcam to feed data from the cameras tracking the live-action shots straight into their system, allowing the performances to be visualized immediately. Capturing the magic of the 1967 cartoon while giving it the sense of realism the concept demanded wouldn’t have been possible without the technology underpinning it all.
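In broad strokes, a real-time setup like that runs a per-frame loop: read the physical camera’s tracked pose, render the CG elements from a matching virtual camera, and composite the result over the live plate for immediate review on set. The sketch below is illustrative only; every function is a hypothetical stand-in, not Simulcam’s actual interface.

```python
import time

def read_camera_tracking():
    """Hypothetical stand-in for the on-set tracking stream:
    the physical camera's current position and rotation."""
    return {"position": (0.0, 1.6, 3.0), "rotation": (0.0, 180.0, 0.0)}

def render_cg(camera_pose):
    """Hypothetical stand-in: render the CG characters from a virtual
    camera that mirrors the tracked physical camera."""
    return f"cg_frame rendered at {camera_pose['position']}"

def composite(live_plate, cg_frame):
    """Hypothetical stand-in: overlay the CG render on the live-action
    plate for the director's monitor."""
    return f"{live_plate} + {cg_frame}"

# Every frame, the virtual camera copies the real one, so the CG
# characters appear inside the live shot as it is being filmed.
for _ in range(3):  # three frames here, in place of a continuous loop
    pose = read_camera_tracking()
    print(composite("live_plate", render_cg(pose)))
    time.sleep(1 / 24)  # roughly 24 frames per second
```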
What we’re seeing in the industry is a continual pushing of the envelope when it comes to blending live-action and animation. As the technology evolves and becomes more widely adopted, creatives keep exploring the solutions that let them bridge this gap. Accurate, rich data is the foundation of that creative exploration: it enables performers to step out of their own skin and slip into something entirely different with the help of animators, opening up new horizons for actors, productions, and storytellers everywhere.