Apes to Dragons: Weta’s Production of 'Jane and the Dragon'

Mike Fallows looks into Weta Prods.' move from creating King Kong for the big screen to creating dragons in Jane and the Dragon for the small screen. He begins with his experiences on the smaller-scale MoCap TV series Donkey Kong Country.

Jane and the Dragon is the result of teamwork between Weta and Nelvana. All Jane and the Dragon images © 2005-2006 WETA Productions Ltd./NELVANA Ltd.

It was in early 1998, after many years of animating with pencil, paper and film, and having just finished directing my first series using digital color and compositing, that I was offered a series utilizing what was at the time a relatively new process called motion capture. The series, Donkey Kong Country, was to be based on the popular videogame Donkey Kong. (Motion capture is a technique whereby specialized software captures the performances of live actors as data, which is then applied to character models created in the computer, so that the characters perform in the same way the actor did in the original live performance.) Already bitten by the computer bug, I agreed to direct the series.

Although I loved the hand-drawn animation I grew up with as a kid, I never considered myself to be what some would call a traditionalist with regard to animation. I loved animation in all its forms. Drawings, stop motion, clay animation, cutouts, scratches on film stock and all the many other techniques used to make animation were interesting and valid to me. History was full of animators finding new ways to create animation, and now the computer was offering up several new techniques.

Motion capture (which I will refer to from here on as MoCap) had at the time already gained favor and was widely used in the gaming industry, but it had never been utilized to create a complete series. The technique was viewed as an economic solution to the ever-smaller license fees being paid by the broadcasters. It was to be a cheaper and more efficient way of creating animation. Nobody seemed to be viewing MoCap from a creative point of view. I personally was not convinced of the economic advantages, but I was certainly intrigued on a creative level.

MoCap was divided into two camps: optical and magnetic. The system we were to use for Donkey Kong Country would be magnetic. The actors would perform in special suits with markers attached to key points of their anatomy (elbows, knees, etc.) and a large bundle of cables running from their suits to a bank of computers. They would perform on a raised stage, under which some mysterious piece of equipment would be creating a magnetic field. As the performers went through their actions and moved through the magnetic field, the positions of the markers in that field would be recorded and saved as data. This data would be transferred to one of our characters (generally an ape), and we would watch a very crude version of the character go through the same motions as the actor, in real time, on a monitor at the side of the stage.
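
To make that data flow concrete, here is a minimal sketch of the core idea: per-frame marker positions recorded from the suit are copied onto named joints of the CG character. The marker names, data layout and rig representation are hypothetical illustrations, not the actual Donkey Kong Country tooling.

```python
# Hypothetical sketch: captured marker positions drive a character rig.

# One frame of capture: marker name -> (x, y, z) position in the field.
frame = {
    "left_elbow":  (0.42, 1.31, 0.05),
    "right_elbow": (0.61, 1.29, 0.07),
    "left_knee":   (0.45, 0.48, 0.02),
    "right_knee":  (0.58, 0.47, 0.04),
}

# Mapping from suit markers to joints on the CG character (here, an ape).
MARKER_TO_JOINT = {
    "left_elbow":  "ape:elbow_L",
    "right_elbow": "ape:elbow_R",
    "left_knee":   "ape:knee_L",
    "right_knee":  "ape:knee_R",
}

def apply_frame(rig, frame):
    """Copy each captured marker position onto its mapped rig joint, so the
    character repeats the actor's motion for this frame."""
    for marker, position in frame.items():
        rig[MARKER_TO_JOINT[marker]] = position
    return rig

rig = {}  # stand-in for a real character rig
print(apply_frame(rig, frame))
```

Real systems, of course, filter the raw marker data and solve it onto a full skeleton rather than copying it joint-for-joint; the point is only that the actor's performance arrives as per-frame numbers.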

Peter Jackson (left) made history by using MoCap to bring Gollum to life with the help of Andy Serkis. © 2005 Universal Studios (left) and © New Line Prods. 2002.

Rules of the Game

Because the process was still virtually in its infancy, there were many technical restrictions. Several pages of them, in fact. The stage was tiny, allowing for no more than about three steps, so the characters could not move very far within a scene. This made it difficult to record more than one scene at a time. We needed to choreograph the actions to avoid having an actor become entangled in the cables. We could only capture one character performance at a time, so multi-character scenes required multiple passes. A single actor would be scheduled for each day and would perform all of the characters required for the scenes and sequences being captured that day, one character at a time. We also needed to avoid physical interaction between characters, and between characters and props. There were these and many other restrictions (too many to list here) to consider.

We did discover many tricks along the way that enabled us to do things we originally thought impossible, and the whole process became one big learning curve as we pushed the technique to its limits. The process involved having the actors perform along with the recorded dialogue tracks, which were played through speakers placed around the MoCap stage, while another performer, sitting at the side of the stage and working with a specially wired-up glove, would perform the facial animation at the same time. On a monitor at the side of the stage, I would watch the character going through the performance that the two live performers were creating.

The restrictions were often frustrating, as they impacted the stories we wanted to tell and how we would stage the sequences in our storyboards, but there were little moments of magic that could only have been achieved with MoCap. There was a quality of life to the animation that was unique to the process. It was not exactly like animation I had known and not exactly live action, but somewhere in between. Even though the show went on to do very well, and even though I was pleased with the results we achieved, I came away from the experience feeling that motion capture was not really a viable method of animating a series. There were just too many restrictions that would not have existed had the show been animated in keyframe, and not enough on the plus side to justify using MoCap. I considered Donkey Kong Country to be my first and last motion-capture experience. I wanted to continue with computer animation, but not with motion capture. Yet there was something.

Mike Fallows had no desire to direct another MoCap series, but when Weta came calling, and with the studio's achievements with Gollum, he reconsidered.

My Precious

Jump ahead to 2004. I had been successful in computer animation with Rolie Polie Olie, The Santa Claus Brothers, Miss Spider's Sunny Patch Kids and others. All were created using keyframe animation. A director down in New Zealand by the name of Peter Jackson had just made cinematic history with his Lord of the Rings trilogy. Jackson had apparently seen the same something that I had glimpsed earlier and used motion capture to bring the character Gollum to life, with the help of a celebrated performance by actor Andy Serkis, pushing the technique to its highest level to date. Jackson and Serkis were now in the process of bringing another ape, King Kong, back to life using the same technique. Motion capture had come a very long way indeed.

The very talented Richard Taylor and his team at Weta Workshop in New Zealand, which had been so instrumental in the making of Lord of the Rings, had decided to make a television series and teamed up with Martin Baynton, who had written a popular book called Jane and the Dragon several years earlier. They wanted to incorporate what Weta had learned from the creation of Gollum and produce a series using a combination of motion capture, puppetry and keyframe animation. They connected with Scott Dyer, evp in charge of production at Nelvana in Toronto, and, between them, agreed to a co-production on Jane and the Dragon. I was offered the position of directing the series. I must say the idea of directing another motion-capture series was not immediately appealing; however, this was Weta, and there was Gollum to consider.

When Jane's production team moved into storyboarding, the staging of sequences had to be MoCap friendly. Because of the schedule, the digital artists kept the number of pure keyframe scenes to a minimum.

Jane and the Dragon

In early 2004, I flew to Wellington, New Zealand, along with a group of other Nelvana-ites (Dyer, Irene Weibel, Eric Flaherty and my frequent partner in crime, producer Pam Lehn) to meet up with the Weta team, including Richard Taylor, Martin Baynton and Trevor Brymer. They were a wonderful group of creative and enthusiastic people with a vision to create an animated series of a quality not seen on television before. Weta intended to use the most advanced tools available and apply everything learned from Gollum. They were convinced that motion capture was the correct approach for this series, from a creative as well as an economic viewpoint. The test footage completed by Weta was breathtaking. It had a storybook aesthetic that was true to Martin's original illustrations, with a level of detail and authenticity that I had not seen outside of feature films. I was sold. It was time for me to leap back into the world of MoCap.

Martin and Trevor (one of WETA's technical geniuses) imagined a production pipeline based very much on a live-action shoot, a pipeline far different from any that had been used before. I had been with Nelvana since our first baby steps into the CG realm, and we had built a pipeline at a time when there were no templates for computer-animated television series production. That pipeline served us well for several years, evolving and being refined over many productions to the point of being perhaps the best and most efficient in the industry for producing computer-animated series. We had a lot of experience with co-productions, and most often our partners would adopt all or part of our pipeline into their existing processes.

Here, however, was a studio that wanted to reinvent the wheel. The idea of forging ahead with an unproven production pipeline was a little unnerving, to say the least, but fortunately the group at Nelvana was open-minded enough, through several meetings and some give and take, to come up with something that incorporated elements of what we knew (our safety net) with what seemed to be the best aspects of this new pipeline. We had our own in-house genius in Eric Flaherty, who together with Trevor worked through all of the technical issues and found ways to modify our process so that we could reap the benefits of this new pipeline.

Things started going off in a very new direction from Fallows' early days on Donkey Kong Country when Jane moved into motion capture.

Cautious Steps

Our approach began, as with any series, with the scripts. As is typical in CG, and for that matter in any form of animation, you need to be aware of the restrictions and difficulties associated with the technique you will be using. At the same time, you want to be very careful not to let the technique drive the creative. You want to push the technique to its limits, but at the same time not overextend to the point where the results appear compromised. It is often a difficult balance. We needed to choose storylines that would benefit from and accommodate the use of motion capture. Many stories would be killed at the premise stage for being impossible to execute properly. Since we were treading in such untested waters, we needed to be very cautious in the beginning, taking baby steps and sticking to things we knew we could achieve. Many of our earliest concerns began to vanish as we moved further into production, and with each script we began to stretch further and further.

After script and the recording of dialogue, we moved into storyboarding. The boards were handled essentially in the traditional way, but again with careful consideration of the process. The staging of sequences had to be motion-capture friendly and, as we were unsure of how much keyframe animation we would be able to handle in our schedule, we tried to keep the number of pure keyframe scenes to a minimum. Fortunately, I was working with several artists who had worked in motion capture before, some of them from the Donkey Kong days, so they were pretty familiar with the limitations.

One of the hardest things to get used to is having a limited ability to cheat. The sequences would be captured in one take, so hookup of character locations needed to be accurate between shots. We could not have characters' positions cheated within a sequence. The board artists would often use the 3D set to plan out their shots and to confirm that the shots they intended were possible.

Once a board was finalized, again as in most productions, we would cut a leica reel for each episode. The storyboard panels were scanned and imported into our editing program. A rough version of the show was edited together, giving us our shot timing and dialogue placement and allowing us to finalize the narrative. The leicas were left a little long to give us some room to move later, and in fact I would sometimes leave shots in a sequence, not necessarily because I thought I would ultimately use the shot, but to give me a different camera point of view on the sequence that I would not have had otherwise. This is an element of the process that was quite different, and the reasons will become clearer as you read on. The locked leica would then be used to create the database for the episode. Dialogue was broken down, the show was broken into sequences and the sequences were planned as to what characters, locations and props would be required in each.
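
As an illustration, that per-episode breakdown amounts to a small database like the toy sketch below; the sequence names, fields and query are invented for this example, not Nelvana's or Weta's actual tools.

```python
from dataclasses import dataclass, field

# Hypothetical per-episode breakdown built from the locked leica: each
# sequence lists the location, characters and props it needs, so capture
# days can be planned per actor/character.

@dataclass
class Sequence:
    name: str
    location: str
    characters: list = field(default_factory=list)
    props: list = field(default_factory=list)

episode = [
    Sequence("seq010_courtyard", "castle_courtyard",
             characters=["Jane", "Gunther"], props=["practice_sword"]),
    Sequence("seq020_tower", "dragon_tower",
             characters=["Jane", "Dragon"]),
]

def sequences_needing(character):
    """Which sequences call for a given character on the capture floor?"""
    return [s.name for s in episode if character in s.characters]

print(sequences_needing("Jane"))  # -> ['seq010_courtyard', 'seq020_tower']
```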

Fallows' experience on Jane and the Dragon changed his views on using MoCap to create a series. He now thinks it is a viable and creative tool if the choice to use it is based on creative reasons.

No Cables

It was in the next phase, when we moved into motion capture, that things started off in a very new direction, very different from my early days of MoCap. Five actors had been cast; each had one primary character role, and some had secondary character roles as well. As in live action, each shooting day would be carefully planned out, and the actors would be on call for the days their characters were needed. Having one actor per character would give us a consistency of performance for each of our characters.

The motion capture was performed on a very large floor space, about eight meters square, using an optical system that involved 20 cameras placed at various points around the performance area. The cameras would pick up the movements of markers placed on the suits the actors wore, much like before, but now without any cables to hinder the actors' movements. The cameras would identify where these markers were in three-dimensional space and save that information as motion data.
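
The geometry behind this is worth a quick sketch: each camera sees a marker along a ray from its own position, and the marker's 3D location is the point closest to all of those rays in a least-squares sense. The camera positions and the marker below are made-up numbers for illustration, not real calibration data.

```python
import numpy as np

def triangulate(origins, directions):
    """Return the 3D point minimizing squared distance to each camera ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)        # unit ray direction
        P = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three cameras around the floor, all sighting a marker near (1, 1, 1).
marker = np.array([1.0, 1.0, 1.0])
origins = [np.array(o, dtype=float) for o in ([0, 0, 3], [4, 0, 2], [0, 4, 2])]
directions = [marker - o for o in origins]   # ideal, noise-free sightlines
print(triangulate(origins, directions))      # ~ [1. 1. 1.]
```

With 20 cameras instead of three, the same least-squares idea simply averages away occlusions and noise.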

Before motion-capture sessions, the actors would rehearse the complete sequence they would be performing. Once on the motion-capture floor, the set to be used for the sequence was loaded into the computers and was visible on the monitors. The floor was marked up with indicators for key set elements like walls and large props or furniture, so that the actors could orient themselves, as well as marks the actors needed to hit at various points in the performance. Sometimes props would be placed on the floor to represent objects the actors might need to interact with (stairs, for instance). There was even a rig built that the actor could sit on to simulate riding on the back of the dragon.

Once all the markers were in place and the performance area was set up, the performers would be brought in and moved around on the marks while the MoCap director, Peter Salmon, using a virtual camera, loosely set the camera angles for each shot that was in the sequence we were shooting as per the storyboard. The audio would now be played over speakers set around the performance space and the actors would act out the complete sequence, regardless of how many shots were in the sequence, and with up to five characters on the floor at a time. No facial animation was done at this point; it was all about the overall body performance.

It was all handled very much like a stage play or a live-action shoot. Several takes were done until the motion-capture director was satisfied that we had all the performances we needed. He had the ability to circle any section of any take to be used later. The truly incredible thing here is that, while the performances were taking place, we were able to watch a monitor and, in real time, see rough versions of our characters act out a sequence, in a simplified version of the set and with rough cutting as per the storyboard. I was essentially watching each sequence come to life before my very eyes, with the ability to identify staging, performance or continuity issues and correct them on the fly. This whole process would continue until a complete episode was shot. A week of shooting per episode was standard.

MoCap's ongoing evolution can be seen in Andy Serkis' performance in King Kong. © 2005 Universal Studios.

The next step was to take this data into a layout phase, where we would view the circled takes of the MoCap and tweak the roughed-in cameras, again as per the storyboard, finding the best performance moments, correcting compositions, and even shifting performances in virtual space or time. We could also, at this time, create alternate versions of shots, for example adding small camera moves that had not existed in the original storyboard. After going through the entire episode in this way, we would end up with a motion-capture version of the leica, with rough animation in place.

Endless Possibilities

This version of the leica now went back into editing, where it got even more interesting. Even though we now had each sequence with MoCap as per the original storyboard and leica, we also had that entire sequence available from the point of view of each of the shots within it. In other words, much like live-action rushes where multiple cameras are used to shoot a sequence, I now had the ability to re-cut the sequences.

Our editor, Annellie Samuels, could move cut points in time, add shots, eliminate shots and add cutaways. Unlike other forms of animation, where you only animate what is in the field of view of the camera, complete performances existed whether the camera was on them or not. Annellie could now go and mine this material and get another crack at the sequences, refining the pacing, adjusting the flow of the cutting and focusing the narrative. We had all sorts of material with which to build the sequences, and the ability to find the special little performance moments that may have happened outside of the camera's view on any given shot and incorporate them into the sequence. This comes back to my earlier point about leaving shots in the leica for the sole purpose of capturing the sequence from that point of view.

Certainly, with all CG animation you have the ability to adjust cameras ad infinitum, but you do not move a camera and find animation that exists outside of the field of view the way you do with this process. If I decided that I wanted to stay on one wide shot for an entire sequence, I could do it, since the complete performances were all there. I was able to sit in the edit suite and have Annellie slide a cut point forwards or backwards while maintaining hookup between shots, because the full performance was there and only the camera point of view changed.
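
A toy model shows why sliding a cut point is free in this setup: a shot is just a camera choice over a frame range within one continuous captured performance, so moving the boundary re-reads the same data rather than requiring new animation. The camera names and frame numbers here are invented.

```python
# An edit decision list over one continuous captured performance:
# each shot is just (camera, first_frame, last_frame) into the same data.
cut = [
    ("cam_wide", 0, 95),
    ("cam_close_jane", 96, 160),
    ("cam_wide", 161, 239),
]

def slide_cut(cut, index, delta):
    """Move the boundary between shot `index` and shot `index + 1` by
    `delta` frames. Hookup is automatic: both shots read the one
    underlying performance, so only the camera choice changes."""
    cam_a, a0, a1 = cut[index]
    cam_b, b0, b1 = cut[index + 1]
    return (cut[:index]
            + [(cam_a, a0, a1 + delta), (cam_b, b0 + delta, b1)]
            + cut[index + 2:])

print(slide_cut(cut, 0, 12))   # the close-up now starts 12 frames later
```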

The end result is a very organic, live-action feel to the edit, perhaps more like live action than animation. Typically, animation has a more contrived feel to the edit. I don't mean this in a negative way, but animation is typically approached on a shot-by-shot basis, with each shot being carefully planned, composed and animated. Each shot has a beginning and an end. You cannot make a shot longer without adding animation. You cannot move the camera to find more of the animation; you would need to move the camera and add animation. You cannot take a scene where a character walks through a shot and change it into a tracking shot that starts earlier than the point where the character originally entered the shot and continues to track beyond the point where the character originally exited it. You do not generally animate an entire sequence without regard to the camera position. You are always aware of the camera's position and animate only what it is going to see.

Once the entire episode was re-cut using the MoCap footage, it became the new template, making the original board and leica obsolete. This new cut would be the basis of everything that came after. Keep in mind that at this point all of the unused performance material still existed and could be called upon later if needed. The creative possibilities not only seemed endless, they were endless.

Keyframe

The next step was to take the shots into keyframe animation. Facial animation was created in two steps: first the lip sync, and then the rest of the face, including eyes, brows and expressions. Both steps utilized joysticks, manipulating the faces of the characters much as you would a very complex puppet, with a range of approximately 100 different expressions. The level of facial animation detail depended on the proximity of the camera. We were able to go to extreme closeups and get a very subtle and detailed level of performance. While the facial animation was being worked on, all of the other keyframe animation was taking place, including the hands, actions not possible in motion capture, tweaking of the MoCap performances and, of course, all of the dragon animation, which naturally could not be MoCapped.
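
The usual math behind this kind of puppet-style facial control is a weighted blend of expression shapes: each expression is stored as vertex offsets from a neutral face, and the joystick positions set the weights. The tiny four-vertex face and the two shapes below are invented stand-ins for rigs that, per the production, carried roughly 100 expressions.

```python
import numpy as np

neutral = np.zeros((4, 3))  # tiny four-vertex stand-in for a face mesh
shapes = {
    # Each expression: per-vertex (x, y, z) offsets from the neutral face.
    "smile":      np.array([[0, .1, 0], [0, .1, 0], [0, 0, 0], [0, 0, 0]]),
    "brow_raise": np.array([[0, 0, 0], [0, 0, 0], [0, .2, 0], [0, .2, 0]]),
}

def blend(neutral, shapes, weights):
    """Final face = neutral + sum of (weight_i * offsets_i) per expression."""
    face = neutral.copy()
    for name, w in weights.items():
        face += w * shapes[name]
    return face

# A joystick position might map to weights like these:
print(blend(neutral, shapes, {"smile": 0.8, "brow_raise": 0.3}))
```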

Dan Barrett, the animation director, and his team did a great job of marrying the keyframe animation seamlessly to the motion capture, so that all of the characters, including the dragon, seemed to exist within the same universe. As the production progressed, a library of dragon animation was built up, so that an animator could access a previously animated movement of the dragon as a starting point for the scene he or she was going to animate.

Finally the scenes had hair and cloth simulations added, the sketch effect was applied and the scenes went through final render. These final rendered scenes were then again put into edit, a final cut was completed and the show went to post.

[Editor's note: Jane and the Dragon premieres on Jan. 8, 2006, at 11:00 am on YTV. Jane and the Dragon is the coming-of-age story of 12-year-old Jane, a headstrong knight-in-training who, unlike most girls of her time, has a suit of armor in her closet, a sword in her belt and an unforgettable, unpredictably endearing 300-year-old dragon by her side. It airs Sundays at 11:00 am, repeating throughout the week on Thursdays at 8:30 am and 12:30 pm and Saturdays at 6:00 pm.]

To MoCap or not to MoCap

Motion capture has come a very long way from my early experience, and my views on its use for creating a series have changed considerably. I now see it as a very viable and creative tool to add to the animation arsenal. I still believe that the choice to use it should be based on creative reasons. It is a strong performance-based technique that brings a high degree of realism and truth to the characters. It allows for subtle acting and is able to convey a wider range of emotion than is normally achieved in television animation. You have the ability to have one performer per character, which comes close to the feature concept of one key animator per character, giving you a consistency of performance, character traits and body language that you cannot get out of a large team of animators animating any and all characters they have within their scenes.

The process allows for massaging of the edit long before committing it to the rest of the pipeline, eliminating or correcting many issues that would have come up at a later stage. You have the ability to discover and use performances that you had not planned on and the camera had not originally been intended to see. You have the ability to take advantage of the magical little moments that have occurred in the performance that could not have been scripted. You can create a very organic and flowing edit, free of hookup issues.

Is it animation? Absolutely it is! Would I work with motion capture again? Absolutely! This is a process that is now just reaching its potential and continues to reveal new possibilities. It will certainly be a part of the ongoing evolution of animation for years to come and I am thrilled to have played a part in it.

Mike Fallows has been working in animation for close to 30 years. He's worked with many of the major animation houses, has directed more than 100 hours of animation, has been nominated for and won several Emmys and has never stopped learning about animation.
