Multimodal Pedagogical Planning for 3D Learning Environments: A Unified Framework

ZHANG, WEI. Multimodal Pedagogical Planning for 3D Learning Environments: A Unified Framework. (Under the direction of Dr. James C. Lester and Dr. R. Michael Young.)

Pedagogical planning lies at the heart of knowledge-based learning environments. In recent years, multimodality and authoring have become key issues in the creation of learning environments. The purpose of this research has been to design, implement, and evaluate a multimodal pedagogical planning system and a multimodal pedagogical authoring system.

First, we designed and implemented a multimodal pedagogical planning system for 3D learning environments. In a 3D learning environment, the student can cooperate with an animated agent to solve a problem or study a new concept. The agent communicates with the student through natural language narration accompanied by synchronized gestures, while 3D animation and camera motions provide a vivid and engaging setting. To control the agent's utterances, the agent's gestures, and the 3D multimedia presentation, we designed a multimodal pedagogical planning system built on a coordinated distributed planning structure. The system contains a Pedagogical Planner, which serves as a global planner, and three coordinated distributed planners: an Agent Utterance Planner, an Agent Gesture Planner, and a Camera and Animation Planner. The planners synchronize the agent's verbal and physical behaviors at phrase boundaries. We implemented the multimodal pedagogical planning system in the PhysViz 3D learning environment for circuit experiments.

Second, we designed and implemented a multimodal pedagogical authoring system based on the multimodal pedagogical planning architecture. Authoring is one of the most difficult problems faced by ITS research: because of their complexity, ITSs are notoriously difficult to author. However, because of the modularity of the multimodal pedagogical planning architecture we developed, we hypothesized that our model of multimodal communication could serve as the basis for an ITS authoring environment. The multimodal pedagogical authoring system can be used to author pedagogical plans, including plans that describe the agent's utterances, the agent's gestures, camera motions, and animations. We empirically evaluated the authoring architecture in conjunction with the PhysViz 3D learning environment by studying non-technical subjects' ability to modify learning activities for PhysViz. In this informal evaluation, 10 subjects used our multimodal pedagogical authoring system and answered a questionnaire. The results of the questionnaire suggest that most subjects without advanced knowledge of computer science can build 3D learning environments with the help of our authoring system.
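The abstract describes a coordinated distributed planning structure: a global Pedagogical Planner that delegates to an Agent Utterance Planner, an Agent Gesture Planner, and a Camera and Animation Planner, with verbal and physical behaviors synchronized at phrase boundaries. The Python sketch below illustrates one way such a structure could be organized. It is not the dissertation's implementation; the Phrase data structure, the example goal "explain-series-circuit", and the gesture, shot, and animation labels are all illustrative assumptions.

    # Minimal sketch of a coordinated distributed planning structure.
    # Planner names follow the abstract; everything else is assumed for illustration.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Phrase:
        """One spoken phrase; phrase boundaries are the synchronization points."""
        text: str
        gesture: str = "rest"          # set by the Agent Gesture Planner
        camera_shot: str = "default"   # set by the Camera and Animation Planner
        animation: str = "none"


    class AgentUtterancePlanner:
        def plan(self, goal: str) -> List[Phrase]:
            # Hypothetical decomposition of a pedagogical goal into phrases.
            if goal == "explain-series-circuit":
                return [Phrase("In a series circuit,"),
                        Phrase("the same current flows through every component,"),
                        Phrase("so adding a resistor dims both bulbs.")]
            return [Phrase(f"Let's look at {goal}.")]


    class AgentGesturePlanner:
        def plan(self, phrases: List[Phrase]) -> None:
            # Attach a deictic or beat gesture to each phrase.
            for i, p in enumerate(phrases):
                p.gesture = "point-at-circuit" if i == 0 else "beat"


    class CameraAndAnimationPlanner:
        def plan(self, phrases: List[Phrase]) -> None:
            # Choose a camera shot and a 3D animation for each phrase.
            for p in phrases:
                p.camera_shot = "close-up-on-circuit"
                p.animation = "highlight-current-flow"


    class PedagogicalPlanner:
        """Global planner coordinating the three distributed planners."""
        def __init__(self):
            self.utterance = AgentUtterancePlanner()
            self.gesture = AgentGesturePlanner()
            self.camera = CameraAndAnimationPlanner()

        def plan(self, goal: str) -> List[Phrase]:
            phrases = self.utterance.plan(goal)   # 1. verbal behavior
            self.gesture.plan(phrases)            # 2. physical behavior
            self.camera.plan(phrases)             # 3. cinematography / animation
            return phrases                        # synchronized on phrase boundaries


    if __name__ == "__main__":
        for p in PedagogicalPlanner().plan("explain-series-circuit"):
            print(f"[{p.camera_shot} | {p.animation}] ({p.gesture}) {p.text}")

In this sketch the global planner owns the control flow while each distributed planner annotates the shared phrase sequence, which is one simple way to realize phrase-boundary synchronization across modalities.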
