Virtual humans for animation, ergonomics, and simulation

The last few years have seen great maturation in the computation speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. We first describe the state of the art, then focus on the particular approach taken at the University of Pennsylvania with the Jack system. Various aspects of real-time virtual humans are considered, such as appearance and motion, interactive control, autonomous action, gesture, attention, locomotion, and multiple individuals. The underlying architecture consists of a sense-control-act structure that permits reactive behaviors to be locally adaptive to the environment, and a "PaT-Net" parallel finite-state machine controller that can drive virtual humans through complex tasks. Finally, we argue for a deep connection between language and animation and describe current efforts to link them through the JackMOO extension to lambdaMOO.
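The sense-control-act loop with parallel finite-state-machine control described above can be illustrated with a small sketch. The Python below is a hypothetical toy, not the Jack or PaT-Net implementation: the FSM class, the sense/run helpers, and the locomotion/attention behaviors are invented here purely to show the shape of the idea, in which several state machines run in parallel each tick, read the same sensed view of the world, and react locally to what they perceive.

```python
# Illustrative sketch only (assumed names; not the actual Jack/PaT-Net code):
# a sense-control-act loop driving two finite-state machines in parallel.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class FSM:
    """One parallel network: a current state, per-state actions, transition rules."""
    name: str
    state: str
    transitions: Dict[str, Callable[[dict], str]]   # state -> rule picking next state
    actions: Dict[str, Callable[[dict], None]]      # state -> act callback

    def step(self, world: dict) -> None:
        self.actions[self.state](world)                   # act in the current state
        self.state = self.transitions[self.state](world)  # control: choose next state


def sense(world: dict) -> dict:
    """Stand-in for perception: here the sensed view is just the world state."""
    return world


def run(networks: List[FSM], world: dict, ticks: int) -> None:
    """Sense-control-act loop: every tick, each machine sees the same snapshot
    and takes one step, so behaviors adapt locally to the environment."""
    for _ in range(ticks):
        view = sense(world)
        for net in networks:   # "parallel" in the interleaved, per-tick sense
            net.step(view)
        world["t"] += 1


if __name__ == "__main__":
    world = {"t": 0, "obstacle_near": False}

    # Attention machine: flags an obstacle at tick 2 (a stand-in for sensing).
    attend = FSM(
        name="attention",
        state="scan",
        transitions={"scan": lambda w: "scan"},
        actions={"scan": lambda w: w.update(obstacle_near=(w["t"] == 2))},
    )

    # Locomotion machine: walks until an obstacle is sensed, then stops.
    walk = FSM(
        name="locomotion",
        state="walk",
        transitions={
            "walk": lambda w: "stop" if w["obstacle_near"] else "walk",
            "stop": lambda w: "walk" if not w["obstacle_near"] else "stop",
        },
        actions={
            "walk": lambda w: print(f"t={w['t']}: stepping forward"),
            "stop": lambda w: print(f"t={w['t']}: standing still"),
        },
    )

    run([attend, walk], world, ticks=5)
```

A real controller of this kind is far richer than this toy, but the loop has the same shape the abstract describes: sense the environment, let each network decide locally, then act.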
