2007 Year in Review: Digital Acting and 3D Environments

Beowulf marks the first vfx-intensive performance capture hybrid, and Jerome Chen offers his perspective on entering it in the Oscar race.


Transformers represents a new watermark in ultra-realistic hard body surfaces. In addition to Optimus Prime's 10,108 parts, there are also 1.8 million polygons and 2,000 texture maps. © DreamWorks LLC/Paramount.

What marked 2007? Among other things, the Autobots battling the Decepticons in Transformers; the Maelstrom overtaking Pirates of the Caribbean: At World's End; the web-slinging superhero taking on Sandman and Venom in Spider-Man 3; 300's triumph in next-gen moviemaking; the marvelous daemons of The Golden Compass; Beowulf's advancement in the performance capture hybrid; and Ratatouille's great leap forward in vfx for animation (resulting in Oscar VFX consideration for both of these animated titles). Here's a recap of our top five each from digital acting and 3D environments in this Year in Review:

Digital Acting

1. Optimus Prime from Transformers

Industrial Light & Magic stepped up to the challenge of Transformers. Fourteen fully CG characters were created with all of their individually controllable vehicular pieces, along with a new rendering system. The 30-foot-tall Autobots and Decepticons had to look real, which was difficult considering how complex and tightly interlinked their parts are. The original Optimus Prime action figure has 51 pieces; the movie version has 10,108.

CG Supervisor Hilmar Koch explains: "This was really skillful artwork [hard surface modeling to build robots] -- basically, hand-painted 2D texture maps and a little bit of shader work. But there's a whole lot of development, of course, in just getting the assets so that they're correct and look right so we could go to this level of lighting refinement. The big innovations were making extremely high-resolution texture maps and making sure we could do hybrid rendering between ray tracing and non-ray tracing. And the internal innovation was the way we would bring assets into our rendering pipeline. We simplified the lighting pipeline so that when an asset comes in like a robot, the artists deal with a single entity and can drop it into the lighting structures. All of that is in service of being able to use the accurate modeling of the sets with the hybrid rendering geometry."

As for meeting the demands of hard surface modeling to build the robots, Koch explains that they had two or three people on set taking photographs from all angles that would be uploaded and shipped to ILM. "Up here we'd take commercially available software to stitch together high-res images, which we'd then map back onto the proxies of set geometries. In order to get the robots right, we'd first light simple gray and silver spheres until they fit into the environments. If we achieved that, usually we could take the heavy rendering assets that are the robots and put them in the same positions. In theory and practice, they were exposed to the same lighting on set. So if the spheres were right, the robots would just drop back in."
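To make that sphere check concrete, here is a minimal sketch, in Python, of the idea behind it: estimate a matte gray sphere's response under the reconstructed environment lighting and compare it with the value sampled from the photographed reference sphere in the plate, before committing the heavy robot asset to the same setup. The function names, the lat-long map convention and the tolerance are assumptions for illustration, not ILM's pipeline.

```python
import numpy as np

def gray_sphere_response(env_map, albedo=0.18):
    """Approximate average outgoing radiance of a matte gray sphere lit by a
    lat-long environment map (H x W x 3, linear values). A crude stand-in for
    rendering the on-set reference sphere."""
    h, w, _ = env_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi           # polar angle per row
    weights = np.sin(theta)[:, None, None]              # solid-angle weighting
    avg_radiance = (env_map * weights).sum(axis=(0, 1)) / (weights.sum() * w)
    irradiance = np.pi * avg_radiance                    # uniform-hemisphere approximation
    return albedo * irradiance / np.pi                   # Lambertian response

def lighting_matches(env_map, plate_sphere_rgb, tolerance=0.05):
    """Compare the CG sphere's response with the mean RGB sampled from the
    photographed gray sphere in the plate."""
    cg = gray_sphere_response(env_map)
    ref = np.asarray(plate_sphere_rgb, dtype=float)
    return bool(np.all(np.abs(cg - ref) <= tolerance * np.maximum(ref, 1e-6)))

# Hypothetical check: a uniform gray environment should match its own sphere sample.
env = np.full((64, 128, 3), 0.5)
print(lighting_matches(env, gray_sphere_response(env)))   # True
```

If a check like this fails, the environment reconstruction gets revisited before any expensive robot renders are launched.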

For VFX Supervisor Scott Farrar, Transformers indeed represents a new watermark in ultra-realistic hard body surfaces. "This had to be rendered and it's terribly complicated. For instance, in addition to Optimus Prime's 10,108 parts, there are also 1.8 million polygons and 2,000 texture maps." That complexity is also why ILM developed dynamic rigging, so the animators could deal with the large number of parts interacting on the fly while paying attention to the sections they were animating. "Let's say you had a close-up," Farrar says, "and the animator only had to deal with one arm, shoulders and a head. They basically identify the area and grab that to simplify it."
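The idea behind that dynamic-rigging workflow can be sketched very simply: group the thousands of parts by region, and build the animator's working set from only the regions that matter for the shot. The dictionary and part names below are invented for illustration; ILM's actual rig is far more sophisticated.

```python
# Hypothetical parts-by-region table for a robot rig.
ROBOT_PARTS = {
    "head":       ["antenna_L", "antenna_R", "jaw", "optics"],
    "shoulder_L": ["plate_A", "plate_B", "piston_01"],
    "arm_L":      ["forearm_shell", "wrist_gimbal", "finger_01", "finger_02"],
    "torso":      ["chest_grill", "spine_strut"],
    "legs":       ["thigh_armor_L", "thigh_armor_R", "knee_joint_L", "knee_joint_R"],
}

def working_set(regions):
    """Return only the controllable parts an animator needs for this shot,
    e.g. one arm, shoulders and a head for a close-up."""
    return [part for region in regions for part in ROBOT_PARTS[region]]

close_up_rig = working_set(["head", "shoulder_L", "arm_L"])
print(len(close_up_rig), "parts instead of",
      sum(len(parts) for parts in ROBOT_PARTS.values()))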

Meanwhile, there were several advancements with regard to more realistic-looking lighting. "We were very specific as far as key lights and fill lights, not including the overall reflections: that was a given for each and every robot," Farrar continues. "So we had to plus that out. But then we added key lights, and shadows and cutters and flags, so our robots would actually go in and out of narrowed lights with barn doors just like you'd do on sets. I wanted you to see that these are really big guys, so that if a robot knelt down toward camera, he actually moved out of a key on his left side and it got darker there, and he moved into a right key, and you really felt the volume lighting in ways you haven't seen before."


Sony Imageworks created tools for Sandman that could be portable across different applications. One example was the fluid solver, which shared the same data formats as Spheresim. © 2007 Columbia Pictures Industries Inc. All rights reserved.

2. Sandman from Spider-Man 3

Sony Pictures Imageworks began development two years ago on Sandman, a complex, shape-shifting sand creature that evokes the pathos of the legendary golem. Sand Effects Supervisor Doug Bloom and a team of TDs came up with a pipeline and toolset.

"We figured that the more they could duplicate the physics of sand, the better off they'd be, since story and storyboards and animatics were still being worked on," Bloom explains. "We wanted to prepare to emulate any possible behavior. We wanted the sand to look as realistic as possible and then later art direct and break away from reality of physics."

After six months of sand tests, they did side-by-sides of live-action footage and CG tests. What came out of that was a successful development effort in which all of the software R&D, programming and custom tools were ready to roll right into production. "During that process we had a team of maybe four full-time people doing custom software development, some of which was done with C++, some of which was done with Python and a lot of tools that were developed were exposed as plug-ins to Houdini, the particle effects package from Side Effects," Bloom continues. "And all the tools were developed as libraries so we could link into them easily from other packages as necessary. One of the tools developed was a fluid and gas simulation engine and that was done early on. And during the sand test sequence, one of our effects TDs wrote a user interface that connected up to the fluid engine. And later on, as we ramped up for production with more and more TDs, we exposed the UI to the fluid solver and moved most of the work to Houdini at that point. Everything was done in an open system because when we were going through the sand tests, aside from trying to match these tests, we still weren't clear what was going to be required of the character. So we wanted to create as many tools as possible that could be portable across different applications in a fashion that would allow us to have the various tools communicate and share data.

"One of the big examples was the fluid solver, which shared the same data formats as another simulator, which was called Spheresim, which is a stripped down rigid body simulator that only deals with spheres. It removes all of the extra calculations you need for other shapes as well as any calculations you'd need for the rotation. So the nice thing about that system was that it allowed us to simulate sand grains piling up, and what we'd do is have each sand grain represented by a single sphere. In the case of a very close-up shot or even a shot that might be a little wider, each sphere would represent a cluster of 10-50 sand grains. The nice thing about this application was, because it was developed in the same library structure of C++ code, it actually shared forces and other data formats with the fluid solver, allowing us to take all of these little spheres that were stacking up like little rigid bodies as a result of the sphere same algorithm and at any point we could flip a switch and have them enter into a gas or fluid simulation, creating a nice swirly, turbulent motion that we could then render as a fine sand grain, fine dust or individual rocks.

"This allowed us to mix and match Sandman as a solid character in a human form. In the [Flint] Marko atomized sequence, for example, you see him dissolving and blowing away into individual sand grains. Again, that was done with this whole suite of tools that shared this common file format. At this moment, he's a polygonal mesh. And at a particular frame, we're going to swap this out for millions of little particles that will be constrained to the mesh. You won't actually see his transition, but this allowed us to pick individual particles off that mesh and have them blow away."

3. Grendel from Beowulf

Sony Pictures Imageworks has made a tremendous leap forward in performance captured animated humans in Beowulf, with an incredible amount of geometric and textural detail in the models and the clothes and the faces. "When we started talking about how to do Beowulf, we didn't know that we were going to go this realistic and detailed -- that evolved over time," says VFX Supervisor Jerome Chen. "The evolution of that was predicated by the motion capture performances of the characters and how the keyframe animation was applied on top of them. As we looked at these characters, we realized that we wanted more and more detail on the faces, on the clothes, on the world. The performances of the characters were so big in scope..."

For Beowulf, Imageworks developed new human facial, body and cloth tools. The character of Beowulf was a challenge, of course, because he looks nothing like Ray Winstone and is portrayed at different ages. But the monstrous Grendel has been somewhat taken for granted. "We were originally going to keyframe him, but Bob [Zemeckis] wanted to use Crispin Glover to capture his tortured soul," Chen continues. "The intent was to make him look like a giant mutant child suffering from every known skin disease. We had to dial him back, using new RenderMan shaders for the look of the skin, so he didn't look like a corpse. Because he's 12 feet tall, the animators had to create a different sense of timing, height and momentum but stay true to the performance."


For The Golden Compass' lead human character Lyra's daemon, Pantalaimon, nine models were built, of which seven appear in the finished film, including an ermine (above). All Golden Compass images © 2007 New Line Cinema.

4. Pantalaimon, the daemon from The Golden Compass

Featured in more than 500 shots, the daemons -- the physical representation in animal form of human souls -- were created via 3D animation at Rhythm & Hues (in collaboration with Rhythm & Hues India, which contributed animation, lighting and compositing on about a third of the shots). Representing the studio's most ambitious project in its 20-year history, the CG daemons effort was divided among six teams. Modelers started with maquettes created by Neal Scanlan Studios for all of the hero daemons. For the other characters, the team made good use of Rhythm & Hues' extensive model library and adjusted the models as needed. Using Maya, they built more than 40 base models for the daemon menagerie, with many additional versions to add variety. On top of model adjustments, there were also several variations of fur types and colors to make the total number seem even larger.

The team used the same overall rigging system they have been utilizing since Narnia. It combines a muscle/skin system with traditional deforms. "We have used full muscle systems, but they tend to get unwieldy very quickly," explains VFX Supervisor Bill Westenhofer. "We now use a hybrid that employs traditional deforms for most of the work and augments that with muscles and even blend shapes where they provide the most benefit. For areas where skin slide is required, we add a special deformer that allows a mesh to slide within the hull of the already deformed shape. When we combine this with relaxation controls, it acts just like skin sliding over a hard surface. You then blend that with tension-driven displacement maps to produce wrinkling skin, and the result is very convincing."
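The tension-driven displacement Westenhofer mentions can be illustrated with a small sketch: measure how much each skin edge compresses relative to the rest pose, and use that per-vertex compression to drive how much of the wrinkle displacement map is applied. Everything below (array layout, the clamping, the remap) is an assumption for illustration rather than Rhythm & Hues' implementation.

```python
import numpy as np

def edge_tension(rest, deformed, edges):
    """Per-edge length ratio: < 1 where the skin is compressed, > 1 where stretched."""
    r = np.linalg.norm(rest[edges[:, 0]] - rest[edges[:, 1]], axis=1)
    d = np.linalg.norm(deformed[edges[:, 0]] - deformed[edges[:, 1]], axis=1)
    return d / np.maximum(r, 1e-9)

def wrinkle_amount(rest, deformed, edges, n_verts):
    """Average compression per vertex, remapped so compressed areas receive
    more of the wrinkle displacement and stretched areas receive none."""
    ratio = edge_tension(rest, deformed, edges)
    accum = np.zeros(n_verts)
    count = np.zeros(n_verts)
    for (a, b), t in zip(edges, ratio):
        accum[[a, b]] += t
        count[[a, b]] += 1
    vert_ratio = accum / np.maximum(count, 1)
    return np.clip(1.0 - vert_ratio, 0.0, 1.0)   # 0 = relaxed, 1 = fully compressed

# Hypothetical three-vertex strip whose second edge compresses to half length:
rest = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
deformed = np.array([[0.0, 0, 0], [1, 0, 0], [1.5, 0, 0]])
edges = np.array([[0, 1], [1, 2]])
print(wrinkle_amount(rest, deformed, edges, 3))   # more wrinkle toward the compressed end
```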

A tricky part of the daemon assignment, meanwhile, was generating realistic transitions between daemon forms. As a reflection of the changing nature of a child, the characters are able to adopt the shape of another animal effortlessly. This ability disappears when the child reaches adulthood. For lead human character Lyra's daemon, Pantalaimon, no fewer than nine models had to be built, of which seven appear in the finished film. These included an ermine, wildcat, hawk, sparrow, seagull, moth and mouse.

"The transitions between daemons were really fun to do," Westenhofer suggests. "We were able to take advantage of the fact that the transitions were always intended by the director to be quick and to feel natural (i.e., without a lot of flashy effects). That meant we could essentially do the equivalent of a 2D morph in 3D. An animator would see a rig for the animals on both sides of the transition (e.g., the cat and the bird). He would then dial in the deform to mush the starting character into the shape of the latter, while animating out the same control on the second character to bring it from a mushed shape of the first animal into the proper form of the second. We would render both animals for the entire range of the transformation, and a 2D cross dissolve would control how much of each made it to the final image."

5. Iorek, the polar bear from The Golden Compass

With the creation of the polar bears, Framestore CFC tackled its most challenging character animation project to date. These characters required complex elements to work together in hundreds of shots. "If the audience didn't buy these characters, the movie would fail," insists VFX Supervisor Ben Morris.

The team started by collecting lots of reference of real polar bears -- photos, videos, skeletal reference from a museum. They also received maquette character studies that helped define differences between Iorek and Ragnar. "The goal was first to establish a 'real' polar bear model, rig and fur groom," Morris explains. "Then, we developed subtle changes to the model to create unique characters and to include such unnatural details as posable thumbs, as described in the book. Lead R&D Developer Alex Rothwell developed a new in-house tool for simulating the action of skin slide and fat/muscle jiggle over the characters' muscles and bones. Called fcJiggle, the tool calculated simulation results as a stand-alone application, but was integrated within our Maya pipeline. This allowed our character TDs to paint parameter maps to define the distribution of soft tissue versus tight skin, tissue depth, and finally the physical qualities of each tissue type: spring/damping coefficients and the number of simulation iterations."
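As a rough illustration of what a jiggle solver like that does (not Framestore's fcJiggle), each vertex can be treated as a damped spring chasing its animated target, with painted per-vertex stiffness and damping maps deciding which areas read as soft tissue and which as tight skin. All parameters below are made up.

```python
import numpy as np

def jiggle(targets, stiffness, damping, dt=1/24.0, iterations=2):
    """Tiny stand-in for a jiggle solver: each vertex is a damped spring chasing
    its animated target position. `stiffness` and `damping` play the role of
    painted per-vertex parameter maps."""
    n_frames, _, _ = targets.shape
    pos = targets[0].copy()
    vel = np.zeros_like(pos)
    out = [pos.copy()]
    for f in range(1, n_frames):
        for _ in range(iterations):
            accel = stiffness[:, None] * (targets[f] - pos) - damping[:, None] * vel
            vel += accel * dt
            pos += vel * dt
        out.append(pos.copy())
    return np.stack(out)

# Hypothetical: two vertices, one painted soft (fatty tissue), one tight skin.
targets = np.zeros((10, 2, 3))
targets[5:, :, 0] = 1.0                         # the body suddenly moves in x
result = jiggle(targets,
                stiffness=np.array([30.0, 300.0]),
                damping=np.array([2.0, 20.0]))
print(result[6])                                 # the soft vertex lags behind the tight one
```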

Parallel to this effort, the proprietary fur system fcFur was completely revamped and adapted to better fit into the existing animation, lighting and rendering Maya/Liquid/RenderMan pipeline. Using a user-defined filter stack, groomers and look development TDs controlled all the fur parameters -- such as length, thickness, curl, scraggle, orientation, clumping and density -- as painted maps. Depending on the requirements of the shot or sequence, fur dynamics could generate reaction to the movement of the character's skin, to external collision objects such as armor, to other characters and even to wind.

Optimizations were required at every stage of the fur pipeline in order to keep disk space requirements and render times down. These included varying level-of-detail grooms, adaptive hair culling based on render camera visibility, and groom caching for non-dynamic scenes and cycle animations for the bear fight. The armor was created as a combination of basic modeling in XSI and Maya, detailed displacements in Mudbox, and assorted color, bump, spec and wear maps painted in Photoshop. A RenderMan shader combined all these maps into a beauty render, which could be re-built in Shake if needed, using custom shader genies to combine the large numbers of AOVs also generated at render time.
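A toy version of that distance- and visibility-based hair culling might look like the following; the cone test, densities and thresholds are assumptions for illustration, not fcFur's actual logic.

```python
import numpy as np

def cull_hairs(roots, cam_pos, cam_dir, fov_cos=0.7, lod_distance=10.0):
    """Keep hairs whose roots fall inside a simple view cone, and thin the
    density with distance from the render camera."""
    to_root = roots - cam_pos
    dist = np.linalg.norm(to_root, axis=1)
    in_view = (to_root / np.maximum(dist[:, None], 1e-9)) @ cam_dir > fov_cos
    keep_prob = np.minimum(1.0, lod_distance / np.maximum(dist, 1e-9))
    rng = np.random.default_rng(1)
    return in_view & (rng.random(len(roots)) < keep_prob)

roots = np.random.default_rng(0).uniform(-20, 20, size=(100000, 3))
kept = cull_hairs(roots, cam_pos=np.zeros(3), cam_dir=np.array([0.0, 0.0, 1.0]))
print(kept.sum(), "of", len(roots), "hairs survive culling")
```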


The creation of The Maelstrom required full-on computational fluid dynamics. Even the biggest 32-gig machines couldn't properly convey what was required. © Disney Enterprises Inc. All rights reserved.

3D Environments

1. The Maelstrom from Pirates of the Caribbean: At World's End

This leap in CG water far exceeded even ILM's exemplary work last year on Poseidon. Indeed, as VFX Supervisor John Knoll concedes, they had to rethink their initial approach. "The assumption at the beginning was, given that the Maelstrom was this very large environment that takes on this whirlpool shape, I thought you could sculpt a funnel shape and then put an ocean surface shader on it for the sequence. But it was pretty apparent from the first test that we weren't going to get enough visual complexity. This really required full-on computational fluid dynamics. That's a function of resolution, computational power and memory. Even with the biggest 32-gig machines, you couldn't properly convey what was required. What we needed were a few optimization tricks to get the maximum amount of detail."

As a result, former Stanford researcher Frank Losasso Petterson, who helped build the university's fluid sim engine that has assisted ILM, was hired on staff to create the Maelstrom. "We needed to simulate a lot more water and for the Maelstrom sequence there were a lot more shots that had to be done," explains the fluid sim expert. "It became clear early on that the resolution needed was above and beyond anything we or anyone else had done before. With respect to R&D, we needed a way of getting all that visual detail on the surface that director Gore [Verbinski] wanted and at the same time be able to run in a short enough time to get a few iterations. The first couple of simulations that we ran in the traditional, Poseidon way, in parallel and on the newest machines, revealed that we would've needed another 10x the amount of time to attain the required resolution. So it became clear that we needed some technological changes.

"The Stanford engine allows us to run a basic photorealistic simulation. However, it's been extended at ILM to allow for greater control for art direction and rendering. So I've developed a lot of tools to make that work on top of the basic engine, as well as other tools in order to make the engine more accessible to a lot more people. One of the new tools focused on manipulating data, so that if the director wants it to go a little faster or spin faster in certain areas, we could handle this in a quick turnaround. That led to us having to deal with other challenges, such as manipulating the data of water underneath a ship that is interacting with this water. So we developed technology where we could add in the ship interaction after running the general overall water motion, which had a lot of advantages. Not only could we manipulate the overall water motion and then add the weight afterwards but we could also run a single simulation for half the shots and then another simulation for the rest of the shots. We could also run individual simulations for highly detailed shots with greater resolution.

"One technique that we used was... to simulate the Maelstrom as though it were flat and basically perturb gravity... because the surface is what matters. Instead of sloping down, the gravity was really pointing inward. And then part of the post process tools that we have would be to deform the surface into the shape that we wanted. All the art direction is being established while we run these shots. All these advancements made feasible what otherwise would've taken 40-45 weeks to render. And with parallelism across 40 processors, you take that down to a few weeks. And then running the simulation in a flat domain, as I've just described, brought that down to a few days."

As Knoll explains, the ability to composite two simulations at different resolutions and then apply deformations and overriding animations enabled Verbinski to achieve important visual clarity. "It was important to Gore to really read that the water spins faster and faster as you go farther down into the Maelstrom," Knoll adds. "The story points required you to see certain moments clearly: When the Flying Dutchman and the Black Pearl are on opposite sides of the Maelstrom and the Flying Dutchman cuts down to a lower latitude into the faster water, it takes this faster track to close the gap and pull in right behind the Black Pearl. But when we actually ran a fluid simulation with all the correct mathematical calculations, it may have been physically correct but it wasn't dramatically correct. Because when you framed up a shot, Gore didn't think the water lower in frame was moving sufficiently faster than the water at the top of the frame to convey his story points."


The final battle in Spider-Man 3 consisted of 350 vfx shots and represented the most difficult sequence. © 2007 Columbia Pictures Industries Inc. All rights reserved. 

2. The Final Battle from Spider-Man 3

According to Digital Effects Supervisor Peter Nofz, the final battle consisted of 350 vfx shots and represented the most difficult sequence. "The reason the final battle is so big is that the location in New York didn't quite exist the way Sam wanted, so they needed a construction building and a sand pit in there, so we needed to recreate this final battle area completely in CG. And getting three or four characters in every shot wasn't easy. It raises the bar right there; it means there's much more animation, much more effects and much more destruction. And obviously when things change [as a result of last minute reshoots], it's never a good thing for us because it means we have to partially start over, so there are all these continuity repercussions. But this time there were so many more elements that we needed to keep track of."

As a result, VFX Supervisor Scott Stokdyk says they were ready for the final battle and whatever last minute changes were necessary in resolving the outcome. "The symbiotic goo team took on that challenge, and the way that [FX Team Co-Lead] Ryan Laney and CG Supervisor Dave Seager built those tools made it very efficient for them to respond to changes. That's not to say that changes late in the game aren't hard, but the team was prepared for that and for the way Sam has worked on the last two movies. It involved the convergence of almost all our pipelines: sand, goo, character and environment. So the final battle was almost a miniature show. Because of that, I designated from the start one CG Supervisor, Francisco DeJesus, to plot out the master plan for the sequence. When I was in New York shooting, getting as many generic plates as possible as well as acquiring the environment, Francisco analyzed what could be used and also drove a lot of the development of the giant Sandman, and then once the shots really started rolling in and being turned over, we started splitting the work among other CG supervisors.

"One thing, in particular, that we did that I've been wanting to do for this franchise is pick a real location and replicate it, because there's definitely an urban planning element to the placement of buildings. But in every environment that was fully synthetic in Spider-Man 2, it was an environment that didn't exist in the real world. Here we had the benefit of matching a real world location exactly so we could intermix real location photography with CG. And, for the final battle, although we put another CG building under construction there, we were able to go and take real photography of the location, shoot real people there and give Sam [Raimi] the flexibility to do what he wanted in post."

3. Battle One from 300

"We worked on the whole of Battle One," explains Animal Logic VFX Supervisor Kirsty Millar, "which is the first time the Spartans clash with the Persians, This sequence is comprised of 176 vfx shots and runs to around eight minutes.

"The Spartans train all of their lives for combat and strategically use the landscape as part of their fighting tactics. In Battle One, they first arrange themselves inside a narrow canyon known as the 'hotgates,' so, although the Spartans are vastly outnumbered, the first wave of Persian infantry effectively has only one row of around nine Persians fighting one row of nine Spartans. This is the part of the sequence we called the 'hack n' slash' -- lots of fast cuts of hand-to-hand combat. The Spartans start to gain the upper hand and push the remaining Persian infantry over the cliffs of Thermopylae and into the ocean below. The Persian archers then launch a volley of millions of arrows that 'blot out the sun,' so the Spartans have to 'fight in the shade.' The Spartans 'tuck-tail,' crouching beneath their shields as the arrows bounce harmlessly around them. The Persians then send in the Cavalry, which the Spartans meet by forming a pointed phalanx in a narrow road bordered by steep cliffs, falling down to the ocean on one side. This again effectively reduces the huge Persian army to one-on-one combat, as only around four horses can fit on this narrow road. Needless to say, the Spartans defeat the last wave and live to fight another day.

"The most complex shot for us was hg036_081, also known as the 'Crazy Horse' shot," she continues. "We called it this because Zack [Snyder] had the idea to use the same sort of camera rig that was used in the film 'Crazy Horse', which used a beam splitter to provide a synchronous pos and matte. Zack wanted to take it a step further and simultaneously record three angles, a wide, mid and close, through the same viewfinder. The three angles could then be 'nested' to make digital zooms. The action was recorded at high-speed, with lots of motion- interpolated speed ramping in post to highlight the action. There were technical issues with the camera rig on the day so a fairly low-tech compromise was arrived at. They simply bolted three cameras together on a dolly. The resulting slight offset of the angles meant we had to morph between each in order to re-align the framing. We tracked the wide angle and worked out the camera offsets of the mid and close, then rendered the 3D environment with the angle changes incorporated. We added Leo's CG spear and CG sword extension, lots of CG spear tips and ends, digital doubles in the background, CG debris, blood, a CG leg being hacked off, light rays, dust and atmos. The final shot was about 1,700 frames long.

"The sheer volume of footage for this shot, plus the number of CG elements to go into it, with the additional complication of multiple speed ramps throughout, meant this was the shot we felt was the most likely to use a huge amount of resources and generally go awry. We also wanted to keep the edit flexible, so that Zack could change the timings if he wanted this helped to guide our approach. Senior Compositor Tony Cole oversaw the shot with 3D Lead Andrew Jackson and 3D TD Clinton Downs working out a very efficient and flexible pipeline. In the end, it ran incredibly smoothly and turned out to be a stunning moment on screen."


Beowulf had the largest vfx crew assembled at Sony Pictures Imageworks, with a team of nearly 450. With continuous fire and water, the film exceeds even the Spider-Man franchise in terms of the wide range of effects. 

4. The Opening from Beowulf

This was the largest vfx crew assembled at Sony Pictures Imageworks, with a team of nearly 450. VFX Supervisor Jerome Chen says it exceeds even the Spider-Man franchise in terms of the wide range of effects. "Usually we do fire or water continuously. Here we had to do all of it, including scenes where Beowulf fights sea monsters during a storm at sea. We did full water simulation and additional particle work for interaction and characters in the water: the rain, the storm and mist blowing off..."

And the environments had to have the same level of detail as the characters so they wouldn't stand out. "Part of the challenge is designing dynamic sets and creating believable textures...there was always a wealth of photographic detail that we could get," Chen offers. "We always want to incorporate some level of reality in the textures. We would find photos of stone and castle walls and incorporate that into environments...

"When we introduce Beowulf, it's in a raging storm. You can see the boat in the distance and it comes plunging down 100 feet over a wave and comes up and smacks right into the camera. The original idea was to break it up with a conversation, and then it was decided to do it all in one shot that was a minute long, which meant more effects work. So now we had to do this budgetary analysis, which proved that the complexity of the shot made it five times more difficult. But you couldn't argue with it. And afterward, [director Robert Zemeckis] said the result is beautiful, it's unique and he just hopes the world's ready."


A lot of CG math and bending ones and zeros went into emulating how food looks when it's cooking in Ratatouille. © Disney/Pixar.

5. The Kitchen from Ratatouille

As Brad Bird suggests, it all pretty much started with the food and how to take advantage of the new subsurface scattering at Pixar: "We had consultants who were gourmet cooks give an overview of not only how food looks but also how things are set up in the kitchen. It's not remotely real but gives the fantasy a footing. There was a lot of effort put into even how food looks when you're preparing it. What causes sauce to curl around when people are stirring it? A lot of this is CG math and bending ones and zeros to emulate something that's organic."

To achieve greater independent control between contrast and saturation, they came up with a new illumination model at Pixar. "One of the chronic problems for the industry is that darks get very gray and muddy in CGI," explains Effects Supervisor Apurva Shah. "We dealt with it in the past by adding colored shadows or fill light. But it always seems to look painted on and not as organic. People are very careful in picking these local colors, but then you are hostage to what happens to those colors in the illumination model. And so we did some work so you retain more of that saturation in these dark colors."
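One hedged way to picture the kind of control Shah is talking about (an illustration, not Pixar's illumination model): as the light level on a surface drops, push the base colour's saturation back up instead of letting it drift toward gray.

```python
import colorsys

def shade(base_rgb, light, saturation_boost=0.5):
    """Toy shading tweak: as the light level drops, boost the colour's
    saturation so darks stay rich instead of going gray and muddy."""
    h, s, v = colorsys.rgb_to_hsv(*base_rgb)
    darkness = 1.0 - max(0.0, min(1.0, light))
    s = min(1.0, s + saturation_boost * darkness * s)
    return colorsys.hsv_to_rgb(h, s, v * max(0.0, min(1.0, light)))

brick_red = (0.55, 0.18, 0.12)
print(shade(brick_red, 1.0))   # fully lit: hue and saturation unchanged
print(shade(brick_red, 0.2))   # in shadow: darker but more saturated, less muddy
```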

From an effects perspective, cooking was also a challenge for Shah. "In the movie, cooking is very tightly integrated with the animation. For example, chopping. You have this very tight feedback loop with the chef's knife essentially coming down on the cutting board and chopping something up. There's obviously nothing actually there, so what we decided to do was, rather than craft the performance with all these constraints, we wrote a chopping system that analyzes the cutting planes the knife was generating, cuts up the model post-animation based on where the knife is coming down, and then rigid-body simulates that to the actual motion."
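The core of such a chopping system is classifying geometry against the knife's cutting plane; a real tool would then rebuild the cut surfaces and hand the pieces to a rigid body solver. The sketch below only shows the plane-classification step, on a hypothetical point cloud.

```python
import numpy as np

def chop(points, plane_point, plane_normal):
    """Split a point set by the knife's cutting plane: returns the two 'pieces'.
    A production system would cut the polygonal model and simulate the pieces;
    this only demonstrates the signed-distance classification."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    side = (points - plane_point) @ n
    return points[side >= 0], points[side < 0]

# Hypothetical carrot as a cloud of points, chopped by a vertical knife plane at x = 0.3.
carrot = np.random.default_rng(2).uniform([0.0, -0.02, 0.0], [0.6, 0.02, 0.04], size=(500, 3))
tip, stub = chop(carrot, plane_point=np.array([0.3, 0.0, 0.0]),
                 plane_normal=np.array([1.0, 0.0, 0.0]))
print(len(tip), "points on one side,", len(stub), "on the other")
```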

Meanwhile, having dealt with large quantities of CG water in Finding Nemo, and even with rapids, a sewer and a kitchen sink in Ratatouille, they also needed to create a lot of small bodies of liquid related to cooking. These range from a glass of wine to a bowl being whisked or a pot of soup. "We had to actually do a lot of work post simulation to massage the shapes," Shah says. "Basically we worked on the front end with the simulator itself. We found the right parameter set, which took a lot of testing, initially. These default parameter sets tended to want to work better for larger bodies of water. We used two different simulators for the movie. For liquids, we used the simulator originally written for Nemo but with modifications, and we also used a public domain simulator, Stanford's PhysBAM. Another key was we wrote our own mesher, like a surfacer, that would generate not only a liquid surface but also a parameterization for it. One of the things we found that we needed to really sell these smaller bodies of water was to texture them. We worked in-house to come up with parameterization for these surfaces."
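A toy stand-in for that mesher idea (not Pixar's surfacer): bin the liquid particles into a height field and return a UV parameterization alongside it, so the reconstructed surface can be textured.

```python
import numpy as np

def particles_to_surface(particles, res=32, extent=1.0):
    """Toy 'mesher': bin liquid particles into a height field and hand back
    both the surface heights and a UV parameterization for texturing."""
    grid = np.zeros((res, res))
    counts = np.zeros((res, res))
    ij = np.clip(((particles[:, :2] / extent + 0.5) * res).astype(int), 0, res - 1)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), particles[:, 2])
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
    heights = np.where(counts > 0, grid / np.maximum(counts, 1), 0.0)
    u, v = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res), indexing="ij")
    return heights, np.stack([u, v], axis=-1)

# Hypothetical soup particles sloshing in a pot.
soup = np.random.default_rng(3).uniform([-0.5, -0.5, 0.0], [0.5, 0.5, 0.1], size=(5000, 3))
heights, uvs = particles_to_surface(soup)
print(heights.shape, uvs.shape)   # (32, 32) surface plus per-cell UVs
```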

Raindrops even posed a challenge in the opening sequence establishing Remy's rural environment outside of Paris. When they looked at artwork early on, they realized that the raindrops weren't just rendered as little strokes. "They didn't have a straight edge, so we used that," Shah adds. "It's a compound image made up of little particles. When the raindrops hit the river, we wrote some extra programming to create some extra undulation in the water because it doesn't disappear from a rat's perspective -- it has fallout. So you have a little simulation there that solves that connection. It just so happened that it rained here in the area, so we went out and shot rain hitting the leaves, and it happened to be the same level of rain, which was a lucky coincidence."

Bill Desowitz is editor of VFXWorld.
