On site: creating lifelike characters in Pixar movies

Toy Story, the first full-length computer-generated animated feature film (released 1995), established itself as a visual benchmark for computer graphics hardware and software development. Soon after the film's debut, graphics chip makers wanted to know how they could compute Toy-Story-quality imagery on a PC; game developers wanted to know how they could deliver Toy-Story-quality animation on game consoles; and robotics researchers wanted to know how they could build artificial intelligence into their machines to achieve Toy-Story-quality lifelike characters.

As we at Pixar tried to answer these questions, we also sought to create scenes even more complex, images more wondrous, and characters more fluid. For A Bug's Life (released 1998), we extended our lighting and shading methodology to depict the transparency and back-lighting of an insect world. We developed new methods for modeling and animating large crowds of characters. And we embraced the use of subdivision surfaces to provide more flexible and organic characters. Toy Story 2 (released November 1999) leveraged these developments, depicting the Toy Story world with far more detailed sets, visually richer texturing, and more sophisticated design and animation of human characters.

But any claim that the answers to these questions lie in more processing power, bandwidth, and memory obscures the more interesting truth. That's why we focus here on how, and why, Pixar animators have made Buzz, Woody, Flik, and many other characters so lifelike.

As supervisor of shading and visual effects on the original Toy Story, Tom Porter led a group of technical artists working on all surface appearances in the film, along with certain visual effects outside the mainstream of Pixar's character-animation process. Back in 1995, Pixar used single-processor 150MHz SGI Indigo2 machines with 64MB of memory for