Representing Mental Events (or the Lack Thereof)

This paper focuses on the level of granularity at which representations of the mental world should be placed. That is, if one wishes to represent thinking about the self, about the states and processes of reasoning, at what level of detail should one attempt to declaratively capture the contents of thought? Some claim that a set of just two mental primitives is sufficient to represent human utterances concerning verbs of thought, such as "I forgot her name." Alternatively, many in the artificial intelligence community have built systems that record elaborate traces of reasoning, keep track of knowledge dependencies or inferences, or encode extensive metaknowledge about the structure of internal rules and defaults. The position taken here is that the overhead involved in maintaining a complete trace of mental behavior and knowledge structures is intractable and does not reflect the capacities that humans actually possess. Rather, a system should capture enough detail to represent a common set of reasoning failures. I represent a number of examples at this level of granularity and describe what such representations offer an intelligent system. This capacity enables a system to reason about itself so that it can learn from its reasoning failures, changing its background knowledge to avoid repeating them. Two primitives are not sufficient for this task.
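To make the intended middle level of granularity concrete, the sketch below is my own Python illustration, not the paper's notation: the names MentalEvent, ReasoningFailure, and learn_from_failure are hypothetical. It records a single retrieval failure ("I forgot her name") as a small declarative structure, enough to diagnose and repair the implicated background knowledge, without keeping a complete inference trace or full dependency network.

```python
# A minimal sketch (illustrative only) of a declaratively represented
# reasoning failure at a coarse granularity: the attempted mental event,
# the expectation, the outcome, and the suspected faulty knowledge.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class MentalEvent:
    """A coarse-grained mental event, e.g. a memory retrieval attempt."""
    process: str                  # e.g. "retrieve", "infer", "expect"
    goal: str                     # what the reasoner was trying to produce
    result: Optional[str] = None  # what actually came back, if anything


@dataclass
class ReasoningFailure:
    """A failure record: expectation, observation, and candidate culprits."""
    event: MentalEvent
    expected: str                       # what should have happened
    observed: Optional[str]             # what did happen (None ~ forgetting)
    suspected_knowledge: List[str] = field(default_factory=list)


def learn_from_failure(failure: ReasoningFailure,
                       background: Dict[str, str]) -> None:
    """Failure-driven repair: revise only the background-knowledge items
    implicated by the failure record, not a full trace of reasoning."""
    for item in failure.suspected_knowledge:
        if item in background:
            background[item] = (
                f"revised after failing on goal '{failure.event.goal}'"
            )


# Example: a retrieval failure recorded without any trace of inference.
forgetting = ReasoningFailure(
    event=MentalEvent(process="retrieve", goal="name-of-acquaintance"),
    expected="a name retrieved from memory",
    observed=None,
    suspected_knowledge=["index-on-acquaintance-names"],
)

kb = {"index-on-acquaintance-names": "weak association"}
learn_from_failure(forgetting, kb)
```

In this toy version, repair touches only the knowledge items named in the failure record, which is the kind of bounded self-modification the abstract contrasts with maintaining full reasoning traces or dependency networks.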
