Artificial intelligence and simulation (tutorial session)

The term "simulation" is often interpreted quite narrowly: as a way of making predictions by running a behavioral model to answer questions of the form "What-if...?". The major impact of Artificial Intelligence (AI) research on simulation is to encourage the use of additional kinds of modeling based on inferencing, reasoning, search methods, and representations that have been developed in AI. This natural--though long overdue--extension of simulation can produce behavioral models that answer questions beyond "What-if...?". The result is sometimes referred to as "Knowledge-Based Simulation". This tutorial presents some of the major concepts of Artificial Intelligence and illustrates their applicability to simulation using examples drawn from recent Knowledge-Based Simulation research. It focuses on the present state of the art, current problems and limitations, and future directions and possibilities.

1. A BRIEF OVERVIEW OF AI

Artificial Intelligence (AI) has defined many of the frontiers of computer science since the 1950s [Klahr and Waterman 1986]. It is a vast, loosely defined area encompassing various aspects of pattern recognition and image processing, natural language and speech processing, robotics, symbolic computation, automated reasoning, expert systems, neural nets, and a host of other disciplines. Throughout its history, AI has been concerned with problems whose solution seemed impossible using conventional computer science. This attempt to make computers intelligent has two distinct motivations, referred to here as modeling and engineering. The modeling approach seeks to model the way humans (or other intelligent beings) perform tasks that require intelligence: it attempts to identify problems that require intelligence and to elucidate the mechanisms we employ in our own solutions of those problems.
The engineering approach, in contrast, is concerned with producing systems that solve useful problems, regardless of whether those problems require intelligence or whether their solutions involve mechanisms parallel to our own. In practice these two approaches are often merged or even confused; but the distinction is useful for understanding the differing emphases of different AI efforts and the various roles of AI in modeling (and of modeling in AI).

The modeling approach to AI has a psychological or philosophical premise: can we use computers to build models of how we believe intelligence works? That is, given a conceptual theory of intelligence, can we embody that theory in a computer model? Computer models ideally make such theories concrete, allowing them to be tested, validated, and refined much more effectively than if they remained purely conceptual. The modeling approach to AI therefore views the implementation of computerized models as a primary technique for advancing our understanding of intelligence. In addition, the implemented models often suggest novel mechanisms that may in turn become part of new conceptual theories. For example, it is this kind of "metaphor feedback" that has led to the popular conception of the brain as a computer. Insights gained from AI models (both from their failures and their successes) have contributed to revisions of theories in areas ranging from linguistics to cognitive psychology.

The engineering approach to AI has a different premise: computers are not organisms, so why not use them to their own best advantage to solve useful problems, without worrying about whether they solve them the way we would? This approach focuses on solving interesting and useful problems rather than on defining or understanding intelligence. In practice, it works symbiotically with the modeling approach; if a model fails to work, engineering may suggest a solution.
While such solutions are often ad hoc, they may nevertheless reveal fundamental flaws or alternative possibilities in the conceptual theory that produced the model, thereby suggesting revisions to the theory. Whenever the engineering approach succeeds in solving a problem (or even approaches success), its results tend to be appropriated by conventional computer science or engineering, so that AI often receives no credit for the eventual solution. This leads to the common misconception that AI never solves any problem; a more constructive interpretation is that this represents successful technology transfer.

The lack of distinction between these two motivations often leads to confusion about how AI research should be evaluated and judged. Ideally, research stemming from a modeling motivation should be judged according to the insight it produces into how natural intelligence works; mechanisms designed for such AI programs should be evaluated with respect to how closely they parallel and illuminate their corresponding biological mechanisms. The behavior of such programs should be judged according to how well they mimic the behavior of humans (or other intelligent entities), rather than how well they solve particular problems. In contrast, research stemming from an engineering motivation should be judged solely according to its problem-solving performance; mechanisms designed for these AI programs should be evaluated according to standard software engineering principles. Unfortunately, since the two motivations tend to be combined and confused, AI programs are often evaluated and judged by whichever criterion provides the more generous answer: poor problem-solving performance is excused on the grounds that a program provides interesting modeling insights, whereas ad hoc models are excused in the interests of performance.
Some AI programs simultaneously attempt to justify their poor performance on the basis of their modeling while justifying their ad hoc models on the basis of their performance. On the other hand, many AI programs have produced interesting modeling insights, many have achieved excellent problem-solving performance, and some have even combined the two.

AI has made many contributions to computer science and software engineering. Often the problems that AI attacks are also attacked from other quarters of computer science, and it is not always easy to assign credit for the solutions that eventually emerge. AI has had at least some part in producing--or is currently attempting to produce--a number of advances that have direct bearing on simulation and modeling. These include the object-oriented programming paradigm, demons, dynamic planning, goal-directed heuristic search, spreading-activation search, taxonomic inference (as implemented by class/subclass, or "IS-A", inheritance hierarchies), forward and backward chaining, qualitative reasoning, truth maintenance, proof procedures for formal logic, simulated annealing, neural nets, and the representation of spatial and temporal phenomena, uncertainty, plans, goals, beliefs, and so-called "deep structures". The following sections outline some of the most important areas of overlapping research and cross-fertilization between AI and simulation.

2. AI AND SIMULATION

In any discussion of AI and simulation, the term "simulation" must be freed from the confines of its own tradition, where it often denotes a very limited form of modeling. There is a strong tendency in simulation circles to view simulation narrowly as a way of making