Generating Human Motion by Symbolic Reasoning

This paper describes work on applying AI planning methods to generate human body motion for animation. It is based on the observation that although we do not know how the body actually controls the massively redundant degrees of freedom of its joints in a given situation, the appropriateness of specific behavior for particular conditions can be axiomatized at a gross level from commonsense observation. Given the motion axioms (rules), the planner's task is to find a discrete sequence of intermediate postures of the body via goal-reduction reasoning over the rules, together with a procedure that discovers specific collision-avoidance constraints, such that any two consecutive postures are related by primitive motions of the feet, the pelvis, the torso, the head, the hands, or other body parts. Our planner also accounts for the continuous nature of body motion by exploiting execution-time feedback. Planning decisions are made in the task space, where our elementary spatial intuition is preserved as far as possible, dropping down to the joint-space formulation typical of robot motion planning only when absolutely necessary. We claim that our work is the first serious attempt to use an AI planning paradigm for the animation of human body motion.
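The core mechanism the abstract names, goal-reduction reasoning over motion rules, can be sketched as a recursive expansion of goals into primitive body motions. The following is a minimal illustrative sketch only; the rule names, goal names, and primitive motions are hypothetical and are not the paper's actual axioms, which also interleave collision-avoidance constraint discovery and execution-time feedback.

```python
# Hypothetical goal-reduction rules: each rule expands a high-level goal
# into an ordered list of subgoals. Goals with no rule are treated as
# primitive motions of body parts (feet, pelvis, torso, hands, ...).
RULES = {
    "reach-object-on-table": ["walk-to-table", "orient-torso", "extend-hand"],
    "walk-to-table": ["step-feet", "shift-pelvis"],
}

def reduce_goal(goal, rules=RULES):
    """Recursively expand a goal into a flat sequence of primitive motions."""
    if goal not in rules:            # no reduction rule: goal is primitive
        return [goal]
    plan = []
    for subgoal in rules[goal]:      # expand subgoals left to right
        plan.extend(reduce_goal(subgoal, rules))
    return plan

print(reduce_goal("reach-object-on-table"))
# ['step-feet', 'shift-pelvis', 'orient-torso', 'extend-hand']
```

The resulting flat sequence corresponds to the "discrete sequence of intermediate postures" the abstract describes, with each primitive motion taking the body from one posture to the next.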
