Convergence at VIEW Conference

Marco Consoli converges on the VIEW Conference in Torino, Italy, to discover a bit about the future of visual effects and animation.

The VIEW Conference focuses on how entertainment and the arts use the same high-end tech tools. The game engine developed for Crysis (above) was also used to create a virtual simulation for urban planning in Nice and Cannes. © EA.

Convergence. This word came to my mind while attending the eighth edition of the VIEW Conference, a festival about virtual reality held in Torino, Italy, from November 6 to 9. The idea is that, over the last few years, different media have been drawn more and more to the virtual world, driven by computer simulations and computer graphics: movies, videogames, architecture, fine arts and even marketing now take advantage of the same high-end technology, so that there's not much difference between watching a previsualization clip made for the visual effects of a Hollywood blockbuster and a similar movie made to sell a municipality on a new square or bridge. Another example? The videogame developer Crytek built a game engine called CryENGINE 2 for its new game Crysis, while the French architecture firm IMAGTP showed at the conference how it used the same engine to create a virtual simulation for urban planning in Nice and Cannes. At the same time, James Cameron is using a similar videogame engine to move a camera inside the virtual set of his next movie, Avatar, due for release in 2009.

Beowulf and the Uncanny Valley

One of the most intriguing talks was Parag Havaldar's "Creating Compelling Character Animation," about the great work behind the upcoming, long-awaited Beowulf. Parag is the lead R&D engineer at Sony Imageworks, and he showed an enchanted audience the motion-capture process developed for the new Bob Zemeckis flick. The technology, called performance capture, is based on capturing the movements of the actors' bodies, faces and hands, and it was used previously on Zemeckis' The Polar Express and Gil Kenan's Monster House. Besides being scanned to give the visual effects technicians the data and visual reference to create the geometry of their digital figures, Angelina Jolie, Anthony Hopkins and their colleagues acted on a sound stage wearing markers, filmed by 260 Vicon cameras; the resulting motion data were used as the backbone for their digital counterparts on screen.
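To give a rough idea of how raw marker data becomes character motion, here is a minimal Python sketch of the general concept, not Imageworks' actual pipeline: joint positions are estimated from clusters of markers and collected into per-frame skeletal data. The marker layout, joint names and the simple centroid solver are all illustrative assumptions.

```python
import numpy as np

def solve_joint(marker_positions):
    """Estimate a joint position as the centroid of its surrounding markers."""
    return np.mean(marker_positions, axis=0)

def retarget(frame_markers, joint_to_markers):
    """Build one frame of skeletal motion data from raw marker positions."""
    return {joint: solve_joint(frame_markers[marker_ids])
            for joint, marker_ids in joint_to_markers.items()}

# Example: three hypothetical markers taped around a single 'elbow' joint.
frame_markers = np.array([[0.41, 1.02, 0.10],
                          [0.43, 1.05, 0.12],
                          [0.40, 1.00, 0.14]])
joint_to_markers = {"elbow_L": [0, 1, 2]}
print(retarget(frame_markers, joint_to_markers))
```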

Sony Imageworks' Parag Havaldar talked about the work behind Beowulf. He showed an enchanted audience the process of motion capture developed for the film. © 2007 by Paramount Pictures and Shangri-La Entertainment, LLC. All Rights Reserved.

Listening to Havaldar's lecture, we learned a few things: although the technology has improved a lot in recent years, there is still no fully automatic motion-capture system able to create a realistic digital character without the refinement of talented animators. The most difficult things to capture are the face and the subtlety of human expressions: Havaldar showed that for Beowulf all the possible expressions were built up from 64 basic poses of the Facial Action Coding System developed by scientists Paul Ekman and Wallace Friesen. Watching the results on screen, we can say that hardware and software have improved a lot since The Polar Express, giving the characters a more human-like appearance. Even the zombie effect that afflicted Tom Hanks' character in that movie has been corrected, thanks to an advanced use of electro-oculography to capture eye movements.
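To picture how a library of basic poses can be combined into full expressions, here is a minimal blendshape-style sketch in Python; the tiny three-vertex "mesh," the pose names and the linear mixing are illustrative assumptions, not the studio's actual facial system.

```python
import numpy as np

# Neutral face mesh (three vertices, purely illustrative).
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.5, 1.0, 0.0]])

# Each basic pose is stored as a per-vertex offset from the neutral face.
poses = {
    "brow_raise":      np.array([[0.0, 0.05, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 0.0]]),
    "lip_corner_pull": np.array([[0.02, 0.0, 0.0], [-0.02, 0.0, 0.0], [0.0, 0.0, 0.0]]),
}

def expression(weights):
    """Combine weighted pose offsets on top of the neutral mesh."""
    face = neutral.copy()
    for name, weight in weights.items():
        face += weight * poses[name]
    return face

# A subtle smile with slightly raised brows.
print(expression({"brow_raise": 0.3, "lip_corner_pull": 0.8}))
```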

Yet despite the efforts made by Havaldar and his team to achieve realistic characters, watching a short clip in which Angelina Jolie, as the mother of Grendel, promises eternal glory to Ray Winstone's virtual, muscular Beowulf, there were moments when I had a strange feeling of uneasiness. So I greatly appreciated the next lecture, by Peter Plantec, a journalist, artist and long-time contributor to VFXWorld, about "The Art of Crossing the Uncanny Valley." Basically, he explained that the more a virtual character pretends to be real, the more we look for flaws, because the subconscious part of our brain automatically wants to protect us from lies and sends a continuous, deep message of danger that pushes us into the Uncanny Valley, where we can feel only repulsion for the character. In Peter's opinion, this will change in the future, when we'll be more accustomed to virtual humans. But to achieve a result that avoids this feeling of discomfort, he explained, three things are needed: psychology, because the people creating these digital characters have to understand human perception; artistry, because motion capture must be tweaked by animators; and a further improvement in the technology to capture even the most subtle expression and detail.

Virtual Sets, Concept Illustrations and a Glimpse of Avatar

Another very interesting meeting took place on November 8 with Tino Schaedler, a German architect who worked as a digital set designer on movies such as Charlie and the Chocolate Factory, V for Vendetta and the upcoming The Golden Compass. In a world where technology develops very rapidly, the older generation of production designers often feels uncomfortable using digital tools, so Tino works to help them achieve their vision through the use of computer graphics. More and more art directors and production designers are relying on animation software to create entirely virtual environments for their movies, or to craft digital models used to build the real sets, and Tino usually works producing previs movies like the one Tim Burton used to visualize the scene in Charlie and the Chocolate Factory where the candy boat floats down the chocolate river inside the tunnel.

One concern in Tino's job is that, with so many different professionals using computers on the same movie production, it is often hard to know who has the final word on certain issues. For example, who is responsible for creating a virtual environment and discussing it with the director: the production designer or the visual effects supervisor? In a very interesting chat about his work, Tino explained that, in the future, production designers will be able to visit and discuss digital sets with directors and cinematographers thanks to a motion-capture stage and a videogame engine, in a way similar to what James Cameron is doing on the Avatar set. Actors perform on a sound stage, where their motion is recorded. Then the director goes to another MoCap sound stage with a handheld monitor whose motion is captured as well; through that monitor he is able to watch the performance of the digital characters in the virtual set, powered by a videogame engine, while recording the motion of the virtual camera at the same time. Sounds interesting, but it's a little weird, isn't it?
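To make the virtual-camera idea a bit more concrete, here is a minimal Python sketch of the loop as I understand it: each frame, the captured position and orientation of the handheld monitor drive a camera in the engine scene, and that camera move is recorded. The Camera and Scene classes are hypothetical stand-ins, not any real engine's API.

```python
class Camera:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.rotation = (0.0, 0.0, 0.0)   # Euler angles in degrees

class Scene:
    def render(self, camera, character_poses):
        # A real engine would draw the virtual set and characters here.
        print(f"render from {camera.position}, {len(character_poses)} characters")

def virtual_camera_session(scene, mocap_stream):
    """Drive the virtual camera from the captured monitor motion, frame by frame."""
    camera = Camera()
    recorded_camera_track = []
    for frame in mocap_stream:                     # one dict per captured frame
        camera.position = frame["monitor_position"]
        camera.rotation = frame["monitor_rotation"]
        scene.render(camera, frame["character_poses"])
        recorded_camera_track.append((camera.position, camera.rotation))
    return recorded_camera_track

# One illustrative frame of captured data.
stream = [{"monitor_position": (2.0, 1.6, -3.0),
           "monitor_rotation": (0.0, 15.0, 0.0),
           "character_poses": [{"root": (0.0, 0.0, 0.0)}]}]
print(virtual_camera_session(Scene(), stream))
```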

A corollary to Schaedler's lecture was James Clyne's speech about conceptual design. Clyne is a well-known and much-appreciated conceptual illustrator who has worked five times with Steven Spielberg: the look of the futuristic cars in Minority Report is his work, as is the look of the frozen Earth at the end of A.I.: Artificial Intelligence. Clyne, whose talent is also being used on Avatar, remarked on the difficulty of conceiving his sketches when he has to find inspiration in just a few lines of a script, where the description of environments or objects is usually very sparse. He also added that even though he usually works in Photoshop or Painter to provide 2D images to the art department, every illustration must be created with further development in 3D in mind, which means keeping accurate proportions for every object.

EA's Glenn Entis cited NaturalMotion's physics engine, which LucasArts is using to create the videogame Indiana Jones, as a big leap forward toward producing more realistic experiences in gaming. © 2006 Lucasfilm Ent. Co. Ltd.

The Future of Gaming Between Art and Technology

Gaming was also a topic discussed at the VIEW Conference. As I said before, videogame engines are of great interest to the visual effects industry for previsualization, and in architecture, where they are used as simulators to create digital environments that can easily be explored and evaluated by clients. Luckily, in Torino I had the opportunity to chat with Glenn Entis, chief visual and technical officer of the gigantic studio Electronic Arts. Talking about the future of videogames, I asked him which aspect most needs development among gameplay, graphics and artificial intelligence. He told me that the most important thing will be how new technologies help game designers create their games in a simple way, citing the example of NaturalMotion's physics engine -- which LucasArts is using to create the new videogame Indiana Jones -- as something that is quite a big leap forward in terms of producing more realistic experiences in gaming, but also something that the LucasArts team had difficulty controlling. "When you combine physics and animation, you can get some interesting things," Entis told me, "but you very rarely get what you want, because animation is not simply a matter of physical correctness." That statement made me think that this must be why Danny Dimian, CG supervisor at Sony Imageworks, who was invited to talk about the digital waves in Surf's Up, told me that there are usually problems when visual effects people try to create realistic animations, since directors often want an effect that is exactly the opposite of what happens in the real world, because it is more interesting from an artistic point of view.
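To illustrate the kind of trade-off Entis was describing, here is a minimal Python sketch in which each joint blends a keyframed pose with a physics-simulated one; the joint names, numbers and simple linear blend are illustrative assumptions rather than anyone's production setup. The point is that the "right" blend weight is an artistic choice, not a physical one.

```python
def blend_pose(keyframed, simulated, weight):
    """Linearly blend two poses; weight=0 is pure animation, weight=1 is pure physics."""
    return {joint: tuple(k + weight * (s - k)
                         for k, s in zip(keyframed[joint], simulated[joint]))
            for joint in keyframed}

# Per-joint rotations in degrees (illustrative values only).
keyframed = {"spine": (0.0, 10.0, 0.0), "head": (0.0, 5.0, 0.0)}   # animator's performance
simulated = {"spine": (12.0, -3.0, 4.0), "head": (25.0, -8.0, 2.0)}  # ragdoll reaction

# Mostly animation with a touch of physics keeps the performance readable.
print(blend_pose(keyframed, simulated, 0.25))
```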

The clash between art and technology is what I talked about with Joseph Olin, president of the Academy of Interactive Arts & Sciences, who was in Torino to present a speech about digital entertainment and culture. When I remarked that something commonly perceived as highly technological is very often also considered artistically poor (this is why videogames, at least in Italy, are not yet considered a form of art, and why many cinephiles are live-action purists who dismiss visual effects and computer graphics as artificial), Olin told me that interactive entertainment, as a medium, is very young in terms of finding ways to express what artistic people want to communicate to an audience. "There are film critics who say that videogames aren't art because it's the user that determines the experience," Olin told me. "And my answer is that it's untrue, because it's the game designer that determines all the experiences the player can have." He added that videogames are a form of art, as games like Okami clearly show. The challenges for the future are, in Olin's opinion, related to introducing more moral and serious issues into videogames, and to using new technologies, such as new software for artificial intelligence, to create better and more realistic gaming experiences.

Ratatouille was the subject of fascinating talks by Pixar's Jessica McMackin and Alex Harvill.

Behind the Magic of a CG Cartoon and an Aerial Dogfight

Even though I had already written about Ratatouille and Shrek the Third for some Italian magazines, and already knew a lot about the making of those two CG movies, I was truly fascinated by the lectures of Lucia Modesto, technical director at DreamWorks Animation, and of Jessica McMackin, technical director at Pixar. Their movies are so nicely crafted, at least in terms of the beauty of the computer-generated images, that sometimes the audience forgets there's a lot of hard work behind a nice mouse or a disgusting ogre walking around and saying funny lines on the big screen. Just to give an example of the complexity behind a movie like Ratatouille, I could mention technical director Alex Harvill's talk about the creation of the Paris skyline in the sequence where Remy climbs from the sewer to the rooftop and finds Auguste Gusteau's restaurant: a wonderfully vivid digital matte painting that is the result of a digital drawing, plus some 3D geometry for the buildings and bridges, plus visual effects to create lights, shadows and water. Another impressive piece of digital craftsmanship was explored in the presentation by Mohsen Mousavi, lead crowd-FX technical director at Pixomondo Studio, who showed how it was possible to create believable aerial dogfights for the Red Baron movie on a low budget, using just good historical visual references for the biplanes and an extraordinary team of animators (and not a single real plane or any motion-capture technology).
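To picture how such a layered shot might be assembled, here is a minimal Python sketch that composites a painted backdrop, a geometry render and an effects pass with a simple "over" operation; the tiny images and values are purely illustrative, not Pixar's actual process.

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a foreground over a background using its alpha matte."""
    return fg_rgb * fg_alpha[..., None] + bg_rgb * (1.0 - fg_alpha[..., None])

# Tiny 2x2-pixel stand-ins for the real layers.
painting  = np.full((2, 2, 3), 0.55)            # painted Paris backdrop
geometry  = np.full((2, 2, 3), 0.30)            # rendered buildings and bridges
geo_alpha = np.array([[1.0, 0.0], [1.0, 0.0]])  # where geometry covers the painting
fx_pass   = np.full((2, 2, 3), 0.10)            # lights, shadows and water glints
fx_alpha  = np.full((2, 2), 0.2)

frame = over(geometry, geo_alpha, painting)     # buildings over the painting
frame = over(fx_pass, fx_alpha, frame)          # effects pass on top
print(frame)
```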

Marco Consoli is a freelance journalist who writes about movies, animation and videogames for the highly regarded Italian newspaper Corriere della Sera and magazines such as Ciak, Jack and L'Espresso. He's very passionate about visual effects and has been writing about them for 10 years. He is 36 and lives between Milan and Venice -- but he supports Florence's football team, Fiorentina.
