Rationality-Guided AGI as Cognitive Systems

Ahmed Abdel-Fattah, Tarek R. Besold, Helmar Gust, Ulf Krumnack, Martin Schmidt, Kai-Uwe Kühnberger ({ahabdelfatta | tbesold | hgust | krumnack | martisch | kkuehnbe}@uni-osnabrueck.de)
Institute of Cognitive Science, University of Osnabrück, Albrechtstr. 28, 49076 Osnabrück, Germany

Pei Wang (pei.wang@temple.edu)
Department of Computer and Information Sciences, College of Science & Technology, Temple University, 1805 N. Broad Street, Philadelphia, PA 19122 USA

Abstract

The integration of artificial intelligence (AI) within cognitive science (CogSci) necessitates further elaboration on, and modeling of, several indispensable cognitive criteria. We approach this issue by emphasizing the close relation between artificial general intelligence (AGI) and CogSci, and by discussing, in particular, "rationality" as one such indispensable criterion. We give arguments evincing that normative models of human-like rationality are vital in AGI systems, where the treatment of deviations from traditional rationality models is also necessary. After conceptually addressing our rationality-guided approach, two case-study systems, NARS and HDTP, are discussed, explaining how allegedly "irrational" behaviors can be treated within the respective frameworks.

Keywords: Rationality; intelligence; AGI; HDTP; NARS

Motivations and Background

For more than five decades, artificial intelligence (AI) has been a promising field of research on modeling human intelligence. The success of projects like IBM's Watson (Ferrucci et al., 2010), for instance, raises hopes of achieving not only language intelligence but also human-level inference mechanisms, and paves the way for solving more baffling tasks. However, AI has turned into a vague, unspecific term, in particular because of the tremendous number of applications that belong, in fact, to seemingly orthogonal directions. Philosophers, psychologists, anthropologists, computer scientists, linguists, and even science fiction writers have disparate ideas as to what AI is (or should be). The challenge becomes more obvious when AI is viewed from a CogSci perspective, where the focus is mainly on explaining general cognitive mechanisms, not only on how one or another intelligence task can be solved by a computer. We think that, from a CogSci perspective, the kind of intelligence characterizing classical AI problems is not yet exhaustive enough. Solutions to most of the problems are not cognitively inspired: they neither consider essential cognitive mechanisms (or general intelligence results) nor show the biological plausibility of the solutions.

Artificial general intelligence (AGI) refers to a research direction that takes AI back to its original goals of confronting the more difficult issues of human-level intelligence as a whole. Current AGI research explores all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies (Baum, Hutter, & Kitzelmann, 2010). Here, we approach cognition in AGI systems by particularly promoting "rationality" as one such indispensable criterion, and analyze some divergent, sometimes seemingly irrational, behaviors of humans.

In this article, our goal is twofold. First, we concern ourselves with explicitly locating ideas from AGI within CogSci. Second, we give a conceptual account of some principles in normative rationality-guided approaches. After explaining our approach at a general level, we show how two cognitively inspired systems, namely NARS and HDTP, have the potential to handle (ir)rationality. We conclude with some remarks and future speculations.

Why AGI?

In current AGI research, there are approaches following different paths, including those (1) inspired by the structure of the human brain or the behavior of the human mind, (2) driven by practical demands in problem solving, or (3) guided by rational principles in information processing. We are concerned with the last of these, which has at least three essential advantages. One advantage of the rationality-guided approach, from an AGI perspective, is that it is less bound to exactly reproducing human faculties on a functional level. Another is that it gives AI the possibility of being established in a way similar to other disciplines, offering a theoretical explanation of intelligence as a process that can be realized both in biological systems and in computational devices. The third advantage is that it is not limited to a specific domain or problem.

Rationality

The term rationality is used in a variety of ways across disciplines. In CogSci, rationality usually refers to the way a cognitive agent deliberatively (and attentively) behaves according to a specific normative theory. The prototypical instance of cognitive agents that can show rational behavior is humans, who so far are also the ultimate exemplar of generally intelligent agents. When modeling intelligence, it is reasonable to take into account not only the remarkable abilities of humans with respect to rational behavior, but also the apparent deficiencies that show up in certain tasks.
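A well-studied instance of such a deficiency is the conjunction fallacy (Tversky & Kahneman, 1983), cited below: in the famous "Linda" task, most subjects judge the conjunction "Linda is a bank teller and is active in the feminist movement" as more probable than the single statement "Linda is a bank teller", although for any events $A$ and $B$ probability theory requires (an illustrative worked inequality, added here for concreteness):

```latex
% The conjunction rule: a conjunction can never be more probable
% than either of its conjuncts, since A \cap B \subseteq A and
% A \cap B \subseteq B.
P(A \cap B) \le P(A)
\qquad\text{and}\qquad
P(A \cap B) \le P(B)
```

Judgments violating this inequality count as "irrational" relative to the probabilistic norm, and it is precisely this kind of deviation that a normative, rationality-guided AGI system has to account for.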
References

[1] Boicho Kokinov. Analogy in decision-making, social interaction, and emergent rationality. Behavioral and Brain Sciences, 2003.
[2] Sylvia Weber Russell. The Structure-Mapping Engine: Algorithm and Examples (Book), 1992.
[3] Pei Wang et al. The Assumptions on Knowledge and Resources in Models of Rationality, 2011.
[4] A. Tversky et al. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment, 1983.
[5] Kai-Uwe Kühnberger et al. An Argument for an Analogical Perspective on Rationality & Decision-Making, 2011.
[6] Douglas R. Hofstadter et al. A logic of categorization. Journal of Experimental & Theoretical Artificial Intelligence, 2002.
[7] G. Gigerenzer. Rationality for Mortals: How People Cope with Uncertainty, 2008.
[8] Allen Newell et al. SOAR: An Architecture for General Intelligence. Artificial Intelligence, 1987.
[9] L. Cosmides et al. The Adapted Mind: Evolutionary Psychology and the Generation of Culture, 1992.
[10] Charles Kemp et al. Bayesian models of cognition, 2008.
[11] Jonathan Evans et al. Logic and human reasoning: An assessment of the deduction paradigm. Psychological Bulletin, 2002.
[12] Brian Falkenhainer et al. The Structure-Mapping Engine, 2003.
[13] Jennifer Chu-Carroll et al. Building Watson: An Overview of the DeepQA Project. AI Magazine, 2010.
[14] Brian Falkenhainer et al. The Structure-Mapping Engine: Algorithm and Examples. Artificial Intelligence, 1989.
[15] Kai-Uwe Kühnberger et al. Rationality and General Intelligence. AGI, 2011.
[16] Keith Stenning et al. Human Reasoning and Cognitive Science, 2008.
[17] Pei Wang et al. Rigid Flexibility: The Logic of Intelligence, 2006.
[18] P. Wason et al. Natural and contrived experience in a reasoning problem, 1971.
[19] Kai-Uwe Kühnberger et al. Rationality Through Analogy: Towards a Positive Theory and Implementation of Human-Style Rationality, 2012.
[20] Angela Schwering et al. Syntactic principles of heuristic-driven theory projection. Cognitive Systems Research, 2009.
[21] G. Miller et al. Cognitive science. Science, 1981.
[22] Pei Wang et al. Formalization of Evidence: A Comparative Study. Journal of Artificial General Intelligence, 2009.
[23] Ariel Rubinstein et al. A Course in Game Theory, 1995.
[24] Niki Pfeifer et al. A Probability Logical Interpretation of Fallacies, 2008.
[25] Kai-Uwe Kühnberger et al. Metaphors and heuristic-driven theory projection (HDTP). Theoretical Computer Science, 2006.
[26] C. Lebiere et al. The Atomic Components of Thought, 1998.
[27] P. Thagard et al. Coherence in Thought and Action, 2000.
[28] L. Cosmides et al. Cognitive adaptations for social exchange, 1992.
[29] S. Alexander Haslam et al. I Think, Therefore I Err?, 2007.