The Epsilon State Count

Terminology and definitions are proposed for the application of systems concepts to Adaptive Behavior research. The definition of a system can be applied to agents and environments in several ambiguous ways. For this reason, a distinction is introduced between a mechanistic perspective and a functional perspective. For both perspectives a methodology is proposed for estimating the number of states of an agent, yielding the Mechanistic Epsilon State Count and the Functional Epsilon State Count, respectively. In addition, a similar means is proposed for counting the number of states required to perform a particular task in an environment: the Task Epsilon State Count. Importantly, the methodology explicitly looks for and reports the relation between the number of states and their effect on performance. This methodology provides a uniform language to describe and compare different agents and environments, and for this reason it may afford valuable comparisons between different Adaptive Behavior studies.
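The abstract does not reproduce the counting procedure itself, but the general idea of an epsilon state count (the number of internal states of an agent that remain distinguishable at a tolerance epsilon) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the greedy merging rule, the toy 2-D state trajectory, and all function names are hypothetical and are not the paper's actual methodology.

```python
import numpy as np

def epsilon_state_count(states, eps):
    """Illustrative count: greedily merge state vectors that lie within
    eps of an already-kept representative; the number of representatives
    left over serves as a crude 'epsilon state count'."""
    reps = []
    for s in states:
        if not any(np.linalg.norm(s - r) <= eps for r in reps):
            reps.append(s)
    return len(reps)

def epsilon_profile(states, eps_values):
    """Report how the count varies with eps, mirroring the abstract's point
    that the state count should be examined as a function of resolution."""
    return {eps: epsilon_state_count(states, eps) for eps in eps_values}

# Hypothetical internal-state trajectory: three tight clusters in 2-D,
# standing in for an agent whose dynamics visit three effective states.
rng = np.random.default_rng(0)
trajectory = np.concatenate([
    rng.normal(loc=c, scale=0.05, size=(50, 2))
    for c in ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])
])

profile = epsilon_profile(trajectory, [0.01, 0.5, 2.0])
# A very small eps resolves noise as many states; an intermediate eps
# recovers the three clusters; a very large eps collapses everything to one.
```

The profile, rather than any single count, is the object of interest here: it shows how the apparent number of states depends on the chosen tolerance, which is the relation the proposed methodology explicitly reports.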
