Deriving Explanations from Qualitative Models

Qualitative computer simulations have great potential for teaching people to understand and interact with their physical environment. A prerequisite for realizing that potential is that these simulations can be explained to humans in ways that they comprehend. Preferably, these explanations should be generated from the qualitative models that underlie the simulations, to avoid having to handcraft the explanations for every new domain. The research described in this paper addresses exactly that problem. It combines two lines of earlier research: representing qualitative models, GARP [2], and didactic discourse planning, DDP [13]. All qualitative models represented in GARP can be questioned by students, using an as yet limited set of questions, which are answered by a generic didactic discourse planner. The overall interaction between student and system is guided by a ‘mental tour’ through the successive states of the simulation (the ‘envisionment graph’). At each state, several questions can be asked. These questions are linked to ‘information needs’, the topics of discourse. On the basis of these topics, the discourse planner plans sequences of utterances, taking into account such things as the student’s beliefs and the state of the discourse process.
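To make the pipeline concrete, the following is a minimal sketch, not the GARP or DDP implementation, of how a question posed at a state of the envisionment graph could be mapped to an information need, which the planner then turns into a sequence of utterances filtered against the student’s assumed beliefs. All class names, the question-to-need mapping, and the example state are hypothetical illustrations.

```python
from dataclasses import dataclass, field


@dataclass
class State:
    """One state in the envisionment graph of a qualitative simulation."""
    name: str
    quantities: dict                      # e.g. {"temperature": "steady"}
    successors: list = field(default_factory=list)


@dataclass
class InformationNeed:
    """A topic of discourse derived from a student question at a given state."""
    topic: str
    state: State


# Hypothetical mapping from question types to discourse topics.
QUESTION_TO_NEED = {
    "what-happens-next": "transition to successor states",
    "why-value": "account of the quantities in this state",
}


def plan_discourse(need: InformationNeed, student_beliefs: set) -> list:
    """Plan a sequence of utterances for one information need,
    skipping facts the student is assumed to already believe."""
    utterances = []
    for quantity, direction in need.state.quantities.items():
        fact = f"In state {need.state.name}, {quantity} is {direction}."
        if fact not in student_beliefs:
            utterances.append(fact)
    utterances.append(f"This addresses the topic: {need.topic}.")
    return utterances


if __name__ == "__main__":
    # Hypothetical example state during the 'mental tour'.
    boiling = State("boiling", {"temperature": "steady",
                                "amount of liquid": "decreasing"})
    need = InformationNeed(QUESTION_TO_NEED["why-value"], boiling)
    for utterance in plan_discourse(need, student_beliefs=set()):
        print(utterance)
```

The sketch only illustrates the flow from state to question to information need to utterances; it leaves out the discourse-state bookkeeping and the didactic strategies that a full planner such as DDP would apply.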