A theory of the graceful complexification of concepts and their learnability

Conceptual complexity is assessed with a multi-agent system that is tested experimentally. In this model, where each agent represents a working-memory unit, concept learning is an inter-agent communication process that builds common knowledge out of distributed knowledge. Our hypothesis is that a concept’s level of difficulty is determined by the difficulty of the corresponding multi-agent communication protocol. Three versions of the model, which differ in how they compute entropy, are tested and compared to Feldman’s model (Nature, 2000), in which logical complexity (i.e., the maximal Boolean compression of the disjunctive normal form) is taken to be the best possible measure of conceptual complexity. All three versions proved superior to Feldman’s model; the serial version explains 5.5 additional points of variance in adult inter-concept performance.

Computational complexity theories (Johnson, 1990; Lassaigne & Rougemont, 1996) measure complexity in terms of the computational load associated with executing a program. In this approach, called the structural approach, problems are grouped into classes according to the machine time and space required by the algorithms that solve them. A program is a function or a combination of functions; with a view to developing psychological models, it can be likened to a concept, especially when the output y of y = f(x) is confined to the values 0 and 1. A neighboring perspective (Delahaye, 1994), aimed at describing the complexity of objects rather than at solving problems, is useful for distinguishing the “orderless, irregular, random, chaotic” kind of complexity (a quantity called algorithmic complexity, algorithmic randomness, algorithmic information content, or Chaitin-Kolmogorov complexity; Chaitin,
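
To make Feldman’s benchmark concrete, here is a minimal sketch (not the authors’ code) of the logical-complexity measure: the number of literals in a maximally compressed disjunctive normal form of a concept, given its positive examples. It assumes the SymPy library; the function name boolean_complexity and the feature symbols x0, x1, … are illustrative.

```python
# Sketch of Feldman's (2000) logical complexity: literal count of a
# minimal disjunctive normal form (DNF) of a Boolean concept.
# Assumes SymPy; SOPform performs Quine-McCluskey minimization.
from sympy import symbols, SOPform, And, Or

def boolean_complexity(n_features, positive_examples):
    """Number of literals in a minimal DNF of the concept.

    positive_examples: iterable of 0/1 tuples of length n_features,
    listing the objects that belong to the concept.
    """
    variables = symbols(f"x0:{n_features}")      # x0, x1, ..., x{n-1}
    minterms = [list(example) for example in positive_examples]
    minimal_dnf = SOPform(variables, minterms)   # minimized sum of products

    # Count literal occurrences (a variable or its negation) term by term.
    terms = minimal_dnf.args if isinstance(minimal_dnf, Or) else (minimal_dnf,)
    return sum(
        len(term.args) if isinstance(term, And) else 1
        for term in terms
    )

# Two 2-feature examples: XOR needs 4 literals (x0·¬x1 + ¬x0·x1),
# a single-feature concept needs only 1 (x0).
print(boolean_complexity(2, [(0, 1), (1, 0)]))  # -> 4
print(boolean_complexity(2, [(1, 0), (1, 1)]))  # -> 1
```

Minimizing within DNF matches the phrasing above; Feldman’s measure allows any logically equivalent Boolean formula, so for some concepts further compression is possible and this count is only an upper bound.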

[1] William M. Smith, et al. A Study of Thinking, 1956.

[2] Jacques Ferber, et al. A meta-model for the analysis and design of organizations in multi-agent systems, 1998, Proceedings of the International Conference on Multi Agent Systems.

[3] Yuri Gurevich, et al. Logic in Computer Science, 1993, Current Trends in Theoretical Computer Science.

[4] Akihiro Nozaki, et al. Anno's Hat Tricks, 1985.

[5] Ronald Fagin, et al. Reasoning about Knowledge, 1995.

[6] Brian A. Davey, et al. An Introduction to Lattices and Order, 1989.

[7] A. Newell. SOAR as a unified theory of cognition: Issues and explanations, 1992, Behavioral and Brain Sciences.

[8] Elizabeth A. Kendall, et al. A Methodology for Developing Agent Based Systems, 1995, DAI.

[9] S. Phillips, et al. Processing capacity defined by relational complexity: implications for comparative, developmental, and cognitive psychology, 1998, Behavioral and Brain Sciences.

[10] O. G. Selfridge, et al. Pandemonium: a paradigm for learning, 1988.

[11] K. Fischer. A theory of cognitive development: The control and construction of hierarchies of skills, 1980.

[12] Jacob Feldman. Minimization of Boolean complexity in human concept learning, 2000, Nature.

[13] R. Axelrod. The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, 1998.

[14] Allen Newell. Production Systems: Models of Control Structures, 1973.

[15] Francis M. Crinella, et al. Brain mechanisms and intelligence: Psychometric g and executive function, 1999.

[16] T. Shallice, et al. Task Switching: A PDP Model, 2001.

[17] J. Delahaye. Information, complexité et hasard, 1994.

[18] John H. Holland, et al. Induction: Processes of Inference, Learning, and Discovery, 1987, IEEE Expert.

[19] Allen Newell, et al. Human Problem Solving, 1973.

[20] Randal E. Bryant. Graph-Based Algorithms for Boolean Function Manipulation, 1986, IEEE Transactions on Computers.

[21] R. Shepard, et al. Learning and memorization of classifications, 1961.

[22] John Harris, et al. Handbook of mathematics and computational science, 1998.

[23] N. Cowan. The magical number 4 in short-term memory: A reconsideration of mental storage capacity, 2001, Behavioral and Brain Sciences.

[24] Richard Granger. Unified Theories of Cognition, 1991, Journal of Cognitive Neuroscience.

[25] R. Keith Sawyer, et al. Artificial Societies, 2003.

[26] M. Levine. Hypothesis behavior by humans during discrimination learning, 1966, Journal of Experimental Psychology.

[27] Charles H. Bennett, et al. On the nature and origin of complexity in discrete, homogeneous, locally-interacting systems, 1986.

[28] L. E. Bourne. Knowing and Using Concepts, 1970.

[29] W. R. Garner. The Processing of Information and Structure, 1974.

[30] Gregory J. Chaitin. Information, Randomness and Incompleteness: Papers on Algorithmic Information Theory, 2nd edition, 1987, World Scientific Series in Computer Science.

[31] J. Feldman. A catalog of Boolean concepts, 2003.

[32] J. H. Steiger. Tests for comparing elements of a correlation matrix, 1980.

[33] J. Ross Quinlan. Induction of Decision Trees, 1986, Machine Learning.

[34] M. Gell-Mann. The Quark and the Jaguar: Adventures in the Simple and the Complex, 1994.

[35] A. Baddeley. The psychology of memory, 1976.

[36] J. Fodor. Concepts: a potboiler, 1994, Cognition.