'Inspired 3D': Constructing the 'Inspired' Character — Part 2

Continuing our excerpts from the Inspired 3D series, Tom Capizzi presents an in-depth character construction tutorial.

All images from Inspired 3D Modeling and Texture Mapping by Tom Capizzi, series edited by Kyle Clark and Michael Ford. Reprinted with permission.


Read Constructing the Inspired Character Part 1.

This excerpt is the next in a number of adaptations from the new Inspired 3D series published by Premier Press. Comprised of four titles and edited by Kyle Clark and Michael Ford, these books are designed to provide animators and curious moviegoers with tips and tricks from Hollywood veterans. The following is excerpted from Modeling & Texture Mapping.

Subdivision Modeling Techniques

Subdivision modeling techniques are used to take a low-resolution polygonal object and increase the resolution using a smoothing algorithm to create a high-resolution model. Several methods work quite well to accomplish this task.

Polygon smoothing is conceptually the simplest type of subdivision modeling. The original polygonal model (Figure 17) is defined as the low-resolution cage, and the higher-resolution geometry is created directly from it (Figure 18). You can use subdivision steps to determine the final resolution of the resultant model. As a rule, the resolution should begin with a single subdivision and increase from there based on the needs of the model. The entire model can be subdivided, or selected faces can be subdivided.

[Figures 17 & 18] The original low-resolution polygonal cage (left), and the higher-resolution geometry created from it (right).
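The chapter contains no scripts, but for readers working in Maya (the package whose tools are named throughout this excerpt), a hypothetical polygon smooth with a controllable number of subdivision steps might look like the short sketch below. The object name characterHead and the face range are placeholders, not names from the actual model.

import maya.cmds as cmds

# Smooth the whole low-resolution cage with a single subdivision step first,
# then raise the division count only if the model needs it.
cmds.polySmooth('characterHead', divisions=1)

# Alternatively, subdivide only selected faces (for example, a patch of faces
# around a detail area) and leave the rest of the cage untouched.
cmds.polySmooth('characterHead.f[120:140]', divisions=1)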

[Figures 23 & 24] Additional geometry added at the corner.

[Figures 25 & 26] A model with two rows of controlling polygons.

3. In Figure 23, additional polygons were added at the corner. This allowed the smoothing operation to behave more predictably in Figure 24.

4. In Figure 25, additional rows of polygons were added along the edges. Notice how the highlights on the edges are confined to the two rows. In order to control flashing, a large face on a polygonal model that transitions into a smaller face must be separated by two rows of polygons. Figure 26 shows how the additional rows give the smoothing operation more control.
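As a rough illustration of that rule, the sketch below (again assuming Maya; the cube, edge index and weights are placeholders) inserts two controlling rows next to one edge of a box before smoothing it.

import maya.cmds as cmds

# Build a simple box to stand in for a large face that transitions into a corner.
cube = cmds.polyCube(width=4, height=4, depth=4, name='cornerDemo')[0]

# Insert two edge loops close to one edge; the edge index and weights are
# placeholders and depend on the actual topology of the model.
cmds.polySplitRing(cube + '.e[4]', weight=0.9)
cmds.polySplitRing(cube + '.e[4]', weight=0.95)

# With two extra rows hugging the edge, a single smoothing step keeps the
# corner tight instead of rounding it away.
cmds.polySmooth(cube, divisions=1)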

The areas of the face that normally require additional work are the ears (Figures 27 and 28), the eyes (Figures 29 and 30), and the nose and mouth (Figures 31 and 32). The details range from major reconstruction to the simple addition of a polygon row to sharpen an area just a small amount.

[Figures 27 & 28] The low-resolution ear (left), and the high-resolution ear (right).

[Figures 29 & 30] The low-resolution eye (left), and the high-resolution eye (right).

[Figures 31 & 32] The low-resolution nose and mouth (left), and the high-resolution nose and mouth (right).

Cleanup

Once the polygons have been split, sculpted, merged, deleted and manipulated into the model that is going to be smoothed, certain cleanup tools should be used. In fact, these tools should be used every time the model is previewed with the subdivision method required for the model. The first tools to use are merge vertices and merge multiple edges; these simplify the unseen entities that may be creating problems.

To do a final check on the model, you can use the polygon cleanup tool. Use this tool carefully: inspect the model thoroughly before accepting the results, because this tool can cause major problems in otherwise usable models.
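In Maya terms, a first pass along those lines might look like the following sketch. The object name and merge tolerance are placeholders, and the final cleanup is best run interactively from its options window so the results can be inspected before they are accepted.

import maya.cmds as cmds

# Merge vertices that sit on top of each other within a small tolerance;
# collapsing the doubled vertices also removes the duplicate edges between them.
cmds.polyMergeVertex('characterHead', distance=0.001)

# Run the polygon cleanup tool from its options window afterward, and check
# the model before keeping the operation, as noted above.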

Hair

The original sketch had a baseball cap on the head of the character. This was an attempt to avoid what became a difficult process: turning many layers of NURBS surfaces into hair. Many computer-generated characters use layers of NURBS surfaces to create hair. This character was supposed to be a young guy who did not pay careful attention to hair care, so the hairstyle had to be loose.

[Figures 33-35] The first hair proposal (left), and the final hair (center). A close-up of the eye reveals some of the way the eye was built (right).

The real story here is the difference between the hair that was originally proposed and what finally appeared on the character.

Figures 33 and 34 show some of the progression from long hair to the relatively clean-cut look.
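The chapter does not show how those layers of NURBS surfaces are built. One common approach, sketched below on the assumption of Maya, is to loft a pair of profile curves into a thin surface and then duplicate and offset that surface to build up layers; every curve point, name and offset here is a placeholder.

import maya.cmds as cmds

# Two simple profile curves running from the hairline toward the back of the head.
top = cmds.curve(point=[(0, 10.0, 2), (0, 10.5, 0), (0, 10.0, -2)], degree=2)
bottom = cmds.curve(point=[(2, 9.5, 2), (2, 10.0, 0), (2, 9.5, -2)], degree=2)

# Loft the curves into a single hair shell, then duplicate and offset the
# shell a few times to build up the layered look described above.
shell = cmds.loft(top, bottom, name='hairLayer1')[0]
for i in range(2, 5):
    layer = cmds.duplicate(shell, name='hairLayer%d' % i)[0]
    cmds.move(0, -0.2 * i, 0, layer, relative=True)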

Eyes

The eyes were built from three NURBS spheres nested inside each other:

1. A clear outer layer (the cornea). This layer is simply a clear, reflective ball that surrounds the rest of the eye.

2. A colored interior layer (the iris). This layer has a recessed, or concave, area around the colored part of the iris that reacts to light. When light is directed from above the eye, the iris picks up additional reflection beneath the pupil. This is physiologically incorrect, but it has become an accepted way for eyes to appear on computer-generated characters. The characters in the Pixar and PDI films have their eyes constructed in a similar manner. The opening for the pupil is simply a hole, with a cluster that controls its diameter, allowing the size of the pupil to be animated.

3. A black inner layer (the pupil). This layer is adapted to fit the iris. The shader on this layer is a black surface shader that emits no light whatsoever.
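A bare-bones version of that three-sphere construction might look like the sketch below, again assuming Maya; the radii, names and CV range are placeholders rather than the values used on the actual character.

import maya.cmds as cmds

# 1. Clear outer layer (the cornea): a slightly larger, reflective NURBS sphere.
cornea = cmds.sphere(radius=1.02, name='cornea')[0]

# 2. Colored interior layer (the iris): a sphere whose front CVs are later
#    pulled inward to form the recessed, concave area around the iris.
iris = cmds.sphere(radius=1.0, name='iris')[0]

# 3. Black inner layer (the pupil): a smaller sphere fitted behind the iris opening.
pupil = cmds.sphere(radius=0.95, name='pupil')[0]

# A cluster on the CVs that ring the pupil opening lets the animator scale the
# hole open or closed; the CV range is a placeholder that depends on how the
# sphere is oriented.
pupil_cluster = cmds.cluster(iris + '.cv[0:3][0:7]')[1]
cmds.scale(1.2, 1.2, 1.2, pupil_cluster)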

Teeth, Gums and Tongue

These were simple models. The teeth are simple NURBS spheres that have been flattened to resemble teeth, and the gums are surfaces that have been sculpted to accept the simple teeth.

The tongue is a half sphere that has been modified to resemble a tongue. All of these items are NURBS constructions because this type of model is relatively easy to create, map and animate. Even though these parts are important, this chapter does not need to cover them in detail.
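As a rough sketch of those flattened NURBS spheres, again assuming Maya and with every dimension a placeholder:

import maya.cmds as cmds

# A single tooth: a NURBS sphere squashed into a flat, slightly narrow shape.
tooth = cmds.sphere(radius=0.5, name='tooth1')[0]
cmds.scale(0.6, 1.0, 0.4, tooth)

# The tongue: a half sphere (sweep limited to 180 degrees), flattened and
# stretched along its length.
tongue = cmds.sphere(radius=1.0, startSweep=0, endSweep=180, name='tongue')[0]
cmds.scale(1.0, 0.3, 1.8, tongue)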

To learn more about character modeling and other topics of interest to animators, check out Inspired 3D Modeling and Texture Mapping by Tom Capizzi; series edited by Kyle Clark and Michael Ford: Premier Press, 2002. 266 pages with illustrations. ISBN 1-931841-49-7. ($59.99) Read more about all four titles in the Inspired series and read Part 3.


Tom Capizzi (left), Kyle Clark (center) and Mike Ford (right).

Tom Capizzi is a technical director at Rhythm & Hues Studios. He has teaching experience at such respected schools as Center for Creative Studies in Detroit, Academy of Art in San Francisco and Art Center College of Design in Pasadena. He has been in film production in L.A. as a modeling and lighting technical director on many feature productions, including Dr. Doolittle 2, The Flintstones: Viva Rock Vegas, Stuart Little, Mystery Men, Babe 2: Pig in the City and Mouse Hunt.

Series editor Kyle Clark is a lead animator at Microsoft's Digital Anvil Studios and co-founder of Animation Foundation. He majored in film, video and computer animation at USC and has since worked on a number of feature, commercial and game projects. He has also taught at various schools, including San Francisco Academy of Art College, San Francisco State University, UCLA School of Design and Texas A&M University.

Michael Ford, series editor, is a senior technical animator at Sony Pictures Imageworks and co-founder of Animation Foundation. A graduate of UCLA's School of Design, he has since worked on numerous feature and commercial projects at ILM, Centropolis FX and Digital Magic. He has lectured at the UCLA School of Design, USC, DeAnza College and San Francisco Academy of Art College.