In part two of his Golden Compass coverage, Alain Bielik explores the vital contributions of Cinesite, Framestore CFC, Digital Domain and Rainmaker.
Read the first part of our in-depth coverage of The Golden Compass.
The most distinctive aspect of Philip Pullman's His Dark Materials books is probably the concept of daemons. When a vfx supervisor reads a script in which every character has a daemon alter ego, his first reaction is, "Hmmm, that's a nice concept!" But then he starts realizing what it means from a visual effects perspective: every character must always have a daemon beside him. It basically implies that photorealistic creatures must be animated for just about every shot in the movie, except for close-ups and cleverly framed shots! Suddenly, the nice concept turns into a massive vfx effort, so massive, in fact, that two vendors ended up sharing it. While Rhythm & Hues created all the characterization shots, Cinesite was responsible for all background non-talking daemons, which represented a total of 15 distinct animals.
VFX Supervisor Sue Rowe and VFX Producer Aimee Dadswell-Davies oversaw the 19-month assignment at London-based Cinesite. VFX Supervisor Matt Johnson joined the team during post-production to helm the huge Bolvanger battle sequence. Modeling Supervisor Stephane Paris first sculpted clay maquettes, then remodeled the creatures using Mudbox. CG Supervisor Ivor Middleton oversaw rigging, feathers and lighting set-up before the daemon models were handed over to Animation Supervisor Quentin Miles and his team.
Cinesite used its proprietary fur and feather pipeline on a wide variety of daemons. It was developed as an in-house plug-in that let the team groom the fur and then export the fur description to RenderMan. Feathers were created using a system that grows them from curves placed on the model's surface. The number, length and size of the feathers are controlled by a highly tractable system, with grooming parameters for direction, inclination, curvature, twist, bend and feather type. "In the initial brief, the daemons had a spirit look with a slightly transparent feel, but we decided to over-engineer the fur system to cope with possible future changes," Rowe notes. This proved a wise move when the required look became more realistic during post-production. "The final daemon look was photorealistic, although we added a slight color shimmer according to the host character's soul. For example, good characters have a warm shimmer, while bad characters have a greener shimmering effect."
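To make that description concrete, here is a minimal Python sketch of how such a curve-driven feather groom might be parameterized. The names (FeatherGroom, grow_feathers) and the guide-curve interface are hypothetical stand-ins; Cinesite's actual plug-in is proprietary and exported its grooms to RenderMan.

```python
# Conceptual sketch of a curve-driven feather groom, loosely modeled on the
# parameters described above. All names are illustrative assumptions.
from dataclasses import dataclass
import math

@dataclass
class FeatherGroom:
    count: int = 200          # number of feathers per guide curve
    length: float = 2.0       # base feather length
    size: float = 1.0         # width scale
    direction: float = 0.0    # rotation about the surface normal (degrees)
    inclination: float = 30.0 # lift away from the surface (degrees)
    curvature: float = 0.2    # bend along the shaft
    twist: float = 0.0        # roll about the shaft axis
    feather_type: str = "contour"

def grow_feathers(guide_curve, groom: FeatherGroom):
    """Place feather descriptions at evenly spaced parameters along a guide curve.

    `guide_curve` is assumed to expose an `evaluate(t)` method returning a
    (position, normal) pair for t in [0, 1] -- a stand-in for whatever curve
    representation the real groom tool used.
    """
    feathers = []
    for i in range(groom.count):
        t = i / max(groom.count - 1, 1)
        position, normal = guide_curve.evaluate(t)
        feathers.append({
            "type": groom.feather_type,
            "position": position,
            "normal": normal,
            "length": groom.length,
            "width": groom.size,
            "direction": math.radians(groom.direction),
            "inclination": math.radians(groom.inclination),
            "curvature": groom.curvature,
            "twist": math.radians(groom.twist),
        })
    return feathers  # e.g. handed to a renderer-side procedural at render time
```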
Vehicles in a Parallel World
Daemon shots wound up being a small part of Cinesite's assignment, as the majority of the work focused on the creation of eccentric vehicles and elaborate environments. Conceived by Production Designer Dennis Gassner, the vehicles included a spectacular 147-foot-long dirigible. After an initial build in Maya, the model was textured using multiple 4K maps made in Photoshop from photographic source material. These were combined with custom PRMan shaders that added point-based occlusion and color-bleeding. Similar techniques were used to create a majestic carriage in another sequence; this time, texture artists referenced Chinese lacquered cabinets to create a realistic surface patina.
Even more complex was a remarkable balloon in which Lyra and her friends escape. "The craft is a four-sphere balloon with a gondola-like platform in the centre," Rowe explains. "The actors were filmed against greenscreen on a gondola that was mounted on a gimbal rig. We created the rest of the craft in CG, including the wires that attach it together, then tracked it into the real section. Mudbox was used to create the sculpted sections of the balloons. A balloon movement simulator was written to ensure that the movement of the real rig was emulated realistically by the undulations of the CG sections. For the Anbaric energy source that drives the balloon, we used procedurally generated geometry and shaders within Maya."
During her journey to the North, Lyra later finds herself aboard an even more formidable vehicle, the Noorderlicht, a huge Gyptian ship. A third of the upper deck was built as a greenscreen set piece to shoot plates with the actors. Wider shots utilized a CG ship built in Maya using reference from blueprints and the basic previsualization model. Textures were created from hundreds of photographs of the set piece. The ship was then populated with motion-captured sailors, along with their CG daemons. "To show the ship at sea, I filmed water plates from a helicopter, with a real boat of similar proportions providing accurate wash," Rowe explains. "Back at Cinesite, we integrated the CG ship precisely into the wash from the real ship. Since the actual vessel did not have a paddle, we also wrote Houdini simulations to generate a procedural ocean surface. Finally, we created realistic undulation on the ship's sails and rigging using Syflex and custom tools."
Cityscapes and Arctic Panoramas
The creation of the vehicles logically led to generating the environments in which they had to appear. Indeed, Cinesite completed most of the digital matte paintings in the movie, a task supervised by David Early. Senior VFX Supervisor Michael Fink recalls, "For the Arctic sequences (Svalbard and Bolvanger), I went to the actual Svalbard, Norway, along with Eric Pascarelli, to shoot thousands of stills and hours of digital tape of ice, rocks and glaciers. These were used to build the look of Bolvanger, Svalbard and, to some extent, the Gyptians' encampment. The environments were 3D with projected textures from the location photography and from matte paintings. The same approach was used for Lyra's London."
To create the London shots, stills and footage were captured from a helicopter over the city. Baked lighting passes containing radiosity and occlusion were used as a basis for the paintings. Cinesite's pipeline then used depth mattes to combine multiple projections onto 3D geometry.
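As an illustration of that depth-matte step, the sketch below shows one way multiple painting projections can be arbitrated per surface point: project the point into each painting camera and accept the first camera whose rendered depth matte confirms the point is unoccluded. The function names and the camera dictionary layout are assumptions, not Cinesite's actual tools.

```python
# Simplified sketch of depth-matte-tested camera projection for combining
# several matte-painting projections on shared geometry.
import numpy as np

def project(point, world_to_cam, focal, width, height):
    """Project a world-space point into a painting camera.

    Returns (pixel_x, pixel_y, camera_depth), or None if behind the camera.
    """
    p_cam = world_to_cam @ np.append(point, 1.0)
    depth = -p_cam[2]                      # camera looks down -Z
    if depth <= 0.0:
        return None
    x = focal * p_cam[0] / depth + width * 0.5
    y = focal * p_cam[1] / depth + height * 0.5
    return x, y, depth

def pick_projection(point, cameras, depth_mattes, tolerance=0.05):
    """Return the index of the first camera from which the point is unoccluded,
    judged by comparing its depth against that camera's rendered depth matte."""
    for i, (cam, matte) in enumerate(zip(cameras, depth_mattes)):
        result = project(point, cam["world_to_cam"], cam["focal"],
                         cam["width"], cam["height"])
        if result is None:
            continue
        x, y, depth = result
        if not (0 <= x < cam["width"] and 0 <= y < cam["height"]):
            continue
        stored = matte[int(y), int(x)]      # depth rendered from this camera
        if depth <= stored * (1.0 + tolerance):
            return i                        # visible: use this painting's texels
    return None                             # no projection covers this point
```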
A similar approach was employed to create Bolvanger, the Arctic location where the final showdown takes place. The action was filmed on a large greenscreen set covered with artificial snow. "We had so many shots to create that we ended up building a fully 3D environment," Johnson says. "We created CG snow, with the mountains modeled as polygons and rendered as subdivision surfaces through a proprietary PRMan pipeline. Matte-painted textures were then projected onto the geometry." Some of the epic battle scenes are almost entirely computer generated, with digi-doubles, daemons, the dirigible, the balloon, witches and arrows.
Global illumination was used on the environment throughout. "We created some tools that used the point-based color-bleeding in PRMan 13 to simulate radiosity on some of the vehicles and environments," CG Supervisor Thrain Shadbolt adds. "This was normally baked to a point cloud. We employed a simple sub-scatter shader technique for digi-doubles, such as the witches, as they were designed to be viewed at a distance. Creatures used a fur-specific sub-scatter approach for backlighting. The majority of our shots were then composited using Shake."
Taming Polar Bears
While Cinesite was busy crafting digital environments, Framestore CFC was tackling its most challenging character animation project to date. "FCFC had created many talking characters, furry creatures and digital environments before, but the polar bear characters required all these complex elements to work together in hundreds of shots," says VFX Supervisor Ben Morris. "Plus, Chris Weitz and Mike Fink had made it clear that the two main bears, Iorek and Ragnar, were leading co-stars of the film. If the audience didn't buy these characters, the movie would fail."
The team started by collecting extensive reference on real polar bears -- photos, videos and skeletal reference from a museum. They also received maquette character studies that helped define the differences between Iorek and Ragnar. "The goal was first to establish a real polar bear model, rig and fur groom," Morris explains. "Then, we developed subtle changes to the model to create unique characters and to include such unnatural details as posable thumbs, as described in the book." Lead R&D Developer Alex Rothwell developed a new in-house tool for simulating the action of skin slide and fat/muscle jiggle over the characters' muscles and bones. "Called fcJiggle, the tool calculated simulation results as a stand-alone application, but was integrated within our Maya pipeline. This allowed our character TDs to paint parameter maps to define the distribution of soft tissue versus tight skin, tissue depth and, finally, the physical qualities of each tissue type: spring/damping coefficients and number of simulation iterations."
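As a rough illustration of the kind of pass fcJiggle performs, here is a toy per-vertex spring/damper step in Python: each skin point lags behind its animated target, with painted maps controlling spring strength, damping and how tight the skin is. This is a sketch under those assumptions, not Framestore's implementation.

```python
# Toy "jiggle" pass: painted per-vertex maps drive how soft or tight the
# simulated tissue is. Map meanings and parameter names are assumptions.
import numpy as np

def jiggle_step(positions, velocities, targets, softness, spring_k, damping,
                dt, iterations=1):
    """Advance the simulated skin by one frame.

    positions, velocities, targets : (N, 3) float arrays
    softness : (N,) painted map, 0 = tight skin (no lag), 1 = soft tissue
    spring_k : (N,) painted spring coefficient map
    damping  : (N,) painted damping coefficient map
    """
    sub_dt = dt / iterations
    for _ in range(iterations):
        offset = targets - positions
        accel = spring_k[:, None] * offset - damping[:, None] * velocities
        velocities = velocities + accel * sub_dt
        positions = positions + velocities * sub_dt
    # Tight skin snaps back to the animated target; soft tissue keeps its lag.
    deformed = targets + softness[:, None] * (positions - targets)
    return positions, velocities, deformed
```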
In parallel, the proprietary fur system fcFur was completely revamped and adapted to fit better into the existing Maya/Liquid/RenderMan animation, lighting and rendering pipeline. Using a user-defined filter stack, groomers and look development TDs controlled all the fur parameters -- such as length, thickness, curl, scraggle, orientation, clumping and density -- as painted maps. Depending on the requirements of the shot or sequence, fur dynamics could generate reactions to the movement of the character's skin, to external collision objects such as armor and other characters, and even to wind.
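The filter-stack idea can be sketched as a simple chain of functions that each read painted maps and modify per-hair attributes in turn, as below. The filter names, attribute keys and map lookups are illustrative only; fcFur itself is proprietary.

```python
# Sketch of a "filter stack" for fur parameters: each filter consults painted
# maps (here faked as UV lambdas) and adjusts one interpolated hair.
def length_from_map(hair, maps):
    hair["length"] *= maps["length"](hair["u"], hair["v"])
    return hair

def add_scraggle(hair, maps, amount=0.1):
    hair["scraggle"] = amount * maps["scraggle"](hair["u"], hair["v"])
    return hair

def add_clump(hair, maps, strength=0.5):
    hair["clump"] = strength * maps["clump"](hair["u"], hair["v"])
    return hair

# The user-defined order of the stack.
FILTER_STACK = [length_from_map, add_scraggle, add_clump]

def groom_hair(u, v, maps, base_length=4.0):
    """Run one interpolated hair through the filter stack."""
    hair = {"u": u, "v": v, "length": base_length, "scraggle": 0.0, "clump": 0.0}
    for apply_filter in FILTER_STACK:
        hair = apply_filter(hair, maps)
    return hair

# Constant lambdas stand in for painted textures looked up by UV.
maps = {name: (lambda u, v: 1.0) for name in ("length", "scraggle", "clump")}
print(groom_hair(0.25, 0.5, maps))
```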
Optimizations were required at every stage of the fur pipeline to keep disk space requirements and render times down. These included varying level-of-detail grooms, adaptive hair culling based on render camera visibility, and groom caching for non-dynamic scenes and cycle animations in the bear fight. The bears' armor was created through a combination of basic modeling in XSI and Maya, detailed displacements in Mudbox, and assorted color, bump, spec and wear maps painted in Photoshop. A RenderMan shader combined all these maps into a beauty render, which could be rebuilt in Shake if needed, using custom shader genies to combine the large number of AOVs also generated at render time.
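A rough sketch of camera-based hair culling of the kind mentioned above might look like the following: drop hairs rooted outside the view frustum or on strongly back-facing skin, and thin the density with distance. The thresholds and tests are assumptions rather than Framestore's actual heuristics.

```python
# Illustrative hair culling: frustum test, soft back-face test and
# distance-based thinning. All constants are arbitrary placeholders.
import numpy as np

def cull_hairs(roots, normals, cam_pos, view_proj, backface_cos=-0.2,
               full_density_dist=10.0):
    """Return a boolean keep-mask for hair roots (N, 3) with normals (N, 3)."""
    keep = np.ones(len(roots), dtype=bool)

    # Frustum test: project roots and reject those well outside clip space.
    homo = np.hstack([roots, np.ones((len(roots), 1))]) @ view_proj.T
    ndc = homo[:, :3] / np.maximum(homo[:, 3:4], 1e-6)
    keep &= np.all(np.abs(ndc[:, :2]) < 1.2, axis=1)  # margin for motion blur

    # Back-face test: allow slightly back-facing roots so silhouettes stay furry.
    to_cam = cam_pos - roots
    to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True)
    keep &= np.einsum("ij,ij->i", normals, to_cam) > backface_cos

    # Distance-based thinning: keep a deterministic fraction of far-away hairs.
    dist = np.linalg.norm(cam_pos - roots, axis=1)
    density = np.clip(full_density_dist / np.maximum(dist, 1e-6), 0.05, 1.0)
    jitter = (np.arange(len(roots)) * 0.61803398875) % 1.0  # stable per-hair hash
    keep &= jitter < density
    return keep
```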
Animating a Performance
The bears' facial rigs, meanwhile, were designed to use the best aspects of blend shapes and muscle deformers. "Our in-house muscle system controlled the broad volumes and smooth fall-off of the facial animation," Morris explains. "Then, blend shapes were used to define the more complex and often extreme deformations required in both dialogue and action shots." The rigs contained manipulator handles attached directly to the character's head and presented to animators in Maya's 3D viewport. High-level controls combined multiple lower-level deformations using custom Maya mapping nodes similar to set-driven keys. In addition, animators could define libraries of poses, which were stored as keyed/un-keyed offsets to groups of controls.
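The mapping-node idea can be illustrated with a small driven-key-style sketch: a single high-level slider fans out to several lower-level muscle and blend-shape channels through remap curves, and a library pose is simply a set of offsets added onto a group of controls. The control and channel names here are hypothetical.

```python
# Driven-key-style mapping: one high-level control drives several lower-level
# deformer channels through piecewise-linear remap curves.
def lerp_curve(keys, x):
    """Piecewise-linear evaluation of [(driver, driven), ...] key pairs."""
    keys = sorted(keys)
    if x <= keys[0][0]:
        return keys[0][1]
    for (x0, y0), (x1, y1) in zip(keys, keys[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return keys[-1][1]

# A hypothetical high-level "snarl" slider fanning out to facial channels.
SNARL_MAPPING = {
    "muzzle_muscle_lift":   [(0.0, 0.0), (1.0, 0.8)],
    "nose_wrinkle_shape":   [(0.0, 0.0), (0.5, 0.1), (1.0, 1.0)],
    "upper_lip_curl_shape": [(0.0, 0.0), (1.0, 0.6)],
}

def evaluate_high_level(control_value, mapping=SNARL_MAPPING):
    return {channel: lerp_curve(keys, control_value)
            for channel, keys in mapping.items()}

def apply_pose(current_channels, pose_offsets, weight=1.0):
    """A library pose is stored as offsets added onto the current control values."""
    result = dict(current_channels)
    for name, offset in pose_offsets.items():
        result[name] = result.get(name, 0.0) + weight * offset
    return result

print(evaluate_high_level(0.75))
```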
An important part of the eventual success of the bear effects was the way the plates were captured on set. "We had performer Nonso Anozie playing Iorek on set (Tommy Luther played Ragnar)," Fink continues. "He had leg, arm and head extensions to give him the proportions of a large bear. When he was on all fours, his bear head apparatus kept the bear head at the proper distance from Dakota, and his leg and arm extensions kept his motion similar to a bear's. This gave Dakota a fairly accurate eyeline and sense of how big the bear really was. We shot clean passes to remove Tommy from the plates."
The shots featuring Lyra riding on the back of Iorek were probably the most complex to shoot. FCFC had already developed a system, the mRig, for these kinds of shots on such projects as Dinotopia and Harry Potter and the Prisoner of Azkaban. "We have software and hardware interfaces that allow us to animate our characters in pre-production, and then program (within Maya) the motion control camera and mechanical rig the actor actually rides on with that same 3D camera and body animation," Morris offers. "The great thing about the system is that we can animate, build and test the entire shoot pipeline in 3D and 2D before we even set foot on set!"
Early on, the team had realized that shots featuring multiple bears would be unwieldy to render. So they built into the character management tool a method of displaying pre-cached animation as a GL view node: in effect, there were no bears in the Maya scene, just a preview. At render time, the scenes were built from ReadArchive or RIF RIB calls. CG Supervisors Andy Kind and Laurent Hugueniot oversaw the look and rendering. Compositing Supervisor Ivan Moran and his team assembled the final composites using Shake and Inferno.
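A minimal illustration of that deferred-loading approach: a per-frame RIB only references pre-baked character archives through ReadArchive calls, so the heavy geometry is pulled in by the renderer rather than living in the animation scene. The paths and helper function are placeholders, and the RIB emitted here is deliberately stripped down.

```python
# Write a skeletal per-frame RIB that defers the bear geometry to render time
# via ReadArchive. Paths and the function name are hypothetical.
def write_frame_rib(path, frame, bear_archives):
    with open(path, "w") as rib:
        rib.write('FrameBegin %d\n' % frame)
        rib.write('WorldBegin\n')
        for archive in bear_archives:
            rib.write('  AttributeBegin\n')
            rib.write('    ReadArchive "%s"\n' % archive)  # loaded lazily by the renderer
            rib.write('  AttributeEnd\n')
        rib.write('WorldEnd\n')
        rib.write('FrameEnd\n')

write_frame_rib("shot_0101.rib", 101,
                ["cache/iorek.0101.rib", "cache/ragnar.0101.rib"])
```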
A Bears World
Framestore CFC also generated many of the environments for the bear shots. Digital Environment Supervisor Martin Macrae created them using a combination of digital matte paintings for distant views, 2.5D projected paintings on 3D geometry for the midground, and full 3D environments for the foreground. An environment shader was developed specifically to render large areas of snow and ice, including hand-painted and procedural displacements, scatter, refraction and placement of snow and ice. In addition, vfx artists generated many passes of snow and ice interaction as the bears walk, run and fight through a sequence.
Ragnar's palace is described in the book as a cold, dark, decaying environment filled with dense smoke. "We took inspiration from painters such as Rembrandt and Caravaggio, and really played the flambeau lights and God rays on the guard bear armor and huge columns in the environment," Morris observes. "Although the 3D environment was relatively straightforward to build, lighting and compositing the sequence was a great challenge." The team, led by Kind and David Bowman, brought together Lyra's greenscreen elements, 2D plate elements of flaming flambeaus, many layers of 3D for Ragnar and the armored guard bears, the 3D digital environment, and vfx passes for smoke and God rays. The scene as a whole was very satisfying to complete, as it contained such rich visual design and a fantastically menacing performance by Ian McShane as the voice of Ragnar.
Getting Up to Speed
When the shot count kept rising, Digital Domain was called in to produce additional late shots. Although starting from scratch, VFX Supervisor Bryan Grill and his team had to seamlessly match the look and feel that other vendors had been refining for over a year. On top of creating all the alethiometer/golden compass shots, the team had to tackle the challenging ice bridge sequence. Most of the bridge modeling and animation was created in Maya. "We received an initial bridge model from Framestore CFC, which they were using for their shots," Grill says. "It gave us a great place to start." In the distant shots, the bridge was created using a combination of matte painting and supplemental shader layers, e.g., reflection and specular layers. All midground and close-up bridge shots were created using a combination of textures and shaders. Particle effects were created in Houdini and rendered using a combination of Mantra and Digital Domain's proprietary volumetric renderer, Storm.
"Initially, we put a lot of development into our rigid body simulation pipeline," says CG Supervisor Darren Hendler. "But the bridge collapse required too much art direction for that approach to be satisfactory. Instead, we used our new level set fracture tools to break up the ice bridge geometry. This process lets you input a key geometry shape; it then recursively divides the geometry based on that input to create chunks that all fit together perfectly and have no holes. Once we had all the fractured chunks, they were all hand animated to produce the final bridge destruction. The cracks were created in 3D using our proprietary compositing software, Nuke. They were set up as textures on cards, and then rendered and incorporated into the shader layers using refraction vectors."
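Digital Domain's actual tool operates on level sets of the mesh, but the recursive-partition structure Hendler describes can be illustrated with a much simpler stand-in: keep splitting the geometry's points with cutting planes until every chunk is small enough, so the pieces tile the original with no overlaps and no gaps. The sketch below works on point data only and is not the studio's algorithm.

```python
# Greatly simplified stand-in for recursive fracture: recursively bisect a
# point set with random cutting planes until chunks fall under a size limit.
import numpy as np

def fracture(points, max_chunk=200, rng=None):
    """Recursively partition an (N, 3) point array into chunks of indices."""
    rng = rng or np.random.default_rng(7)
    indices = np.arange(len(points))

    def split(idx):
        if len(idx) <= max_chunk:
            return [idx]
        pts = points[idx]
        normal = rng.normal(size=3)
        normal /= np.linalg.norm(normal)
        center = pts.mean(axis=0)              # cut through the chunk's centroid
        side = (pts - center) @ normal >= 0.0
        left, right = idx[side], idx[~side]
        if len(left) == 0 or len(right) == 0:  # degenerate cut, stop subdividing
            return [idx]
        return split(left) + split(right)

    return split(indices)

# Every input point ends up in exactly one chunk, so the pieces partition the
# original geometry with no overlaps and no holes.
chunks = fracture(np.random.default_rng(0).random((5000, 3)), max_chunk=400)
```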
For the Battle of Bolvanger, Compositing Supervisor Joe Farrell reconstructed the environment from four 4K renders supplied by Cinesite. The stills were projected back onto 3D geometry in Nuke. "Using this technique, we managed to track, render and composite over 50 shots within a two-week timeframe," Grill observes. These environments were handed off to help other vendors with the lighting of their creatures. "We then built a complex 3D pan-and-tile setup in Nuke for the fire and smoke required to make the Bolvanger building burn."
Of Wolves, Dust and Gyptians
Also involved in the Bolvanger battle sequence, alongside Cinesite, FCFC and R&H, was Tippett Studio. VFX Supervisors Matt Jacobs and Frank Petzold were responsible for the creation of the CG wolves. "We built them in Maya," Jacobs explains. "The fur was created using Tippett's fur MEL. We employed Maya and RenderMan for rendering, and relied heavily on our global illumination shader to light the wolves to match production finals from other facilities. When the wolves jumped around in attack mode, we used filmed elements to act as snow foot hits. Those elements were then composited into the shots using Shake."
In London, Rainmaker UK also became involved in the global Golden Compass effort. VFX Supervisor Paddy Eason and his team were responsible for creating the scene in which the concept of dust is first introduced to the characters via a projection in a room. The dust particle simulation was done in Houdini by CG Supervisor Sean Lewkiw. It consisted of 20 layers of particle simulations of varying levels of noise, size, speed and density. The particles were rendered in Mantra and composited into the scene using Shake.
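A back-of-the-envelope version of that layered setup is sketched below: each of 20 layers gets its own particle count, drift speed, turbulence amplitude and point size, so near layers read differently from far ones. The real simulation was built in Houdini and rendered in Mantra; everything in this Python/NumPy sketch is illustrative.

```python
# Layered dust-particle passes with per-layer noise, size, speed and density.
import numpy as np

def simulate_layer(rng, count, speed, noise_amp, size, frames=48, dt=1.0 / 24):
    positions = rng.random((count, 3))          # start inside a unit volume
    drift = np.array([0.0, -speed, 0.0])        # dust slowly settles downward
    frames_out = []
    for _ in range(frames):
        turbulence = noise_amp * (rng.random((count, 3)) - 0.5)
        positions = positions + (drift + turbulence) * dt
        frames_out.append(positions.copy())
    return {"size": size, "frames": frames_out}

rng = np.random.default_rng(42)
layers = []
for layer in range(20):                         # twenty stacked passes
    t = layer / 19.0
    layers.append(simulate_layer(
        rng,
        count=int(500 + 4500 * t),              # density varies per layer
        speed=0.02 + 0.1 * t,                   # faster layers read as nearer
        noise_amp=0.05 + 0.3 * (1.0 - t),
        size=0.01 + 0.04 * (1.0 - t),
    ))
```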
The studio's assignment also included major enhancements of the Gyptian camp environment. "Much of the scene was shot with painted backings that needed to be replaced digitally," Eason says. "We generated a new background using a combination of digital stills, Terragen 3D-rendered mountains and snowscapes, and pure Photoshop paintwork, along with moving elements such as mist, smoke and shadows."
CG animation was required for a chase in which two warriors fall to their deaths in a giant chasm. To extend the stunt captured on stage, the stuntmen were replaced with digital doubles built in Maya and rendered in PRMan using customised shaders. The background landscape was also extended down into the depths with a Photoshop matte painting.
In London, Peerless Camera provided a great variety of visual effects, with John Paul Docherty supervising the assignment. The shots mainly comprised 2D compositing, paintwork and CG additions -- such as a stained-glass window. The team also performed a tricky character removal in a scene filled with smoke and steam, which required a Houdini particle simulation.
Top-Notch Performances
After almost two years of effort, Fink looks back with great satisfaction on what ended up being his most ambitious assignment to date. "I think that we had great success in some things and less in others. Overall, I'm quite proud of the work. I think that we attempted some very difficult sequences and came up with beautiful work overall. The visual effects had to carry four main characters that were not live-action actors (Pan, Iorek, Ragnar, the Golden Monkey), and I am quite happy with their performances."
Alain Bielik is the founder and editor of renowned effects magazine S.F.X, published in France since 1991. He also contributes to various French publications, both print and online, and occasionally to Cinefex. In 2004, he organized a major special effects exhibition at the Musée International de la Miniature in Lyon, France.