Robots and representations

The design of complex interactive robots inherently yields a form of representation — an interactive form. Interactive representation is, arguably, the foundational form of representation from which all others are derived. It constitutes the emergence of representational truth value for the system itself, a criterion not addressed in the current literature.

There is a form of representation that arises naturally in the design of complex interactive systems — robots. This form arguably constitutes an emergence of the fundamental form of representation, out of which increasingly complex forms are constructed and derived. Furthermore, this form of representation naturally satisfies an essential metaepistemological criterion for original representation: system-detectable truth value. No alternative approach to representation in the current literature addresses this criterion. Recognizing and exploiting the emergence of this form of representation in robotics and dynamic systems is a rich frontier for exploration.

In standard artificial intelligence and cognitive science models, inputs are received and processed, and, perhaps, outputs emitted. The critical consideration that arises in robot design is the possibility of a closure of this sequence of processes, such that robot outputs influence subsequent inputs via the environment and, therefore, influence subsequent internal states and processes in the robot. That is, the critical consideration is the closure of input, processing, and output to include full interaction, not just action.

This simple closure introduces several important possibilities. In particular, possible internal states that might be consequent on some action or course of action can be functionally indicated in the robot. Because such possible consequent states will depend, in part, on the environment, those states, or some one of them, may or may not actually occur — the environment may or may not yield the appropriate input(s), in response to the output(s), to induce those indicated states in the robot. If none of those indicated states is entered by the robot, then the indications are false, and are falsified for the robot. The error in such indications is detectable by and for the system itself.

In effect, to indicate such internal states as consequent on particular interactions on the part of the robot is to implicitly predicate of that environment whatever properties are sufficient to support those indications. It is to anticipate that the environment will in fact respond as indicated, if the interaction is engaged in. Some environments will possess a sufficiency of those properties, and will yield one of the indicated states, while other environments will not possess such properties, and will not yield any of the indicated states. For those environments that do not yield an indicated state, to set up such an indication is to set up an implicit predication, an anticipation, that is false, and potentially falsifiable by the system (Bickhard, 1993, in press; Bickhard & Terveen, 1995).

The possibility of error, and especially of system-detectable error, is a fundamental metaepistemological criterion for representation. Whatever representation is, it must be capable of some sort of truth value. Conversely, something is a representation for a particular system only if it is capable of some sort of truth value for that system.
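As a purely illustrative sketch of this closure and of system-detectable falsification, consider the following minimal Python loop. The environment, the robot's states, and every name used here (Environment, Robot, interact, and so on) are assumptions introduced for illustration; they are not part of the original account:

    import random

    class Environment:
        """Toy environment: whether the anticipated input arrives depends on it, not on the robot."""
        def respond(self, output):
            return "fly-caught" if random.random() < 0.5 else "nothing"

    class Robot:
        def __init__(self, env):
            self.env = env
            self.state = "idle"

        def transition(self, sensed):
            # Simplified internal dynamics: the next internal state depends on the input received.
            return "fed" if sensed == "fly-caught" else "hungry"

        def interact(self, action, indicated_states):
            # Closure of the loop: output -> environment -> input -> next internal state.
            sensed = self.env.respond(action)
            self.state = self.transition(sensed)
            if self.state not in indicated_states:
                # The implicit predication about the environment was false,
                # and that falsification is detected by the system itself.
                print(f"indication falsified: {action!r} did not yield any of {indicated_states}")
            return self.state

    robot = Robot(Environment())
    robot.interact("flick-tongue", indicated_states={"fed"})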
The criterion of truth value for the system itself is critical because many states and conditions and phenomena are representational — can have truth value — but only for some user or designer or observer outside of the system, not for the system itself (Bickhard, 1993; Bickhard & Terveen, 1995). Moderately complex robots, then, naturally involve a form of representation that is representational for the robot, not just for an observer or analyst or designer or user of the robot.

This claim generates five questions: 1) How can notions such as ‘indication’ in the above discussion be made good in a functional manner in a robot, without committing a logical circularity by presupposing the very representationality that is allegedly being modeled? 2) Why would it be useful for a robot to have such representations of interactive potentialities? 3) How could such a notion of representation possibly be adequate to “normal” representational and cognitive phenomena such as representation of objects; representation of abstractions, such as numbers; language; perception; rationality; and so on? (I will only outline an answer to the first of these questions, referring the others to other sources.) 4) On what basis would a robot set up such indications? And 5) How does this model of representation relate to contemporary research in artificial intelligence, cognitive science, connectionism, and robotics? (My responses to this question, too, will obviously be abbreviated.)

The Functional Story

First, I need to address the question of how interactive representation could be implemented without presupposing representation. All that is needed are some architectural principles, adequate to the model, that are themselves strictly functional — not representational. This is, in fact, rather simple. The indicated internal outcome states for an interaction function like final states in an automaton recognizer, but for an automaton that emits outputs to an interactive environment (Bickhard, 1980a). The indication of such states can be implemented with pointers — a pointer, say, to some location that will contain a “1” in the state being indicated. This is certainly not the only architecture that will implement the notions required, but it does suffice. To indicate the interaction itself, upon which the indications of final states are based, requires only a pointer to the subsystem — perhaps the subroutine or interactive recognizer — that would engage in those interactions. So, a pointer to a subsystem, together with a pointer or pointers to final states associated with that subsystem, suffices for the implicit predication of interactive representation, but none of these pointers themselves are or require representation. Insofar as there is representation here, it is genuinely emergent in the architectural organization.
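A minimal sketch of this pointer-based machinery follows. The flag store, the particular names (Indication, StateStore, interaction_subsystem), and the example final states are assumptions for illustration only; the point is that nothing here carries representational content, since an indication is just a pointer to a subsystem plus pointers to the locations that would hold a "1" in the indicated final states:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Indication:
        subsystem: Callable[[], None]   # pointer to the interactive subsystem (e.g., a subroutine)
        final_state_flags: List[str]    # pointers (here, keys) to indicated final-state locations

    @dataclass
    class StateStore:
        flags: Dict[str, int] = field(default_factory=dict)

        def reached(self, keys: List[str]) -> bool:
            # An indicated final state counts as entered when its location holds a 1.
            return any(self.flags.get(k, 0) == 1 for k in keys)

    store = StateStore()

    def interaction_subsystem():
        # In a real robot this would engage an interaction with the environment;
        # here it simply marks one possible final-state location.
        store.flags["final-state-A"] = 1

    indication = Indication(subsystem=interaction_subsystem,
                            final_state_flags=["final-state-A", "final-state-B"])
    indication.subsystem()                               # engage the indicated interaction
    print(store.reached(indication.final_state_flags))  # True: the indication is borne out

If no flagged location ends up holding a 1 after the interaction, the indication is falsified in exactly the sense discussed above.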
The Usefulness of Interactive Representations

Choice. Why would it be useful for a robot to have such indications? For two reasons. First, if there are multiple interactions possible in a particular environment, the indicated internal outcomes of those interactions can be used in selecting which interaction to actually engage in (Bickhard, 1997b). A frog seeing a fly might set up indications of the possibility of tongue-flicking-and-eating, while a frog seeing the shadow of a hawk might set up indications of the possibility of jumping in the water. A frog seeing both needs some way to decide, and internal outcome indications provide a basis for such a decision — e.g., select the interaction with the indicated outcomes that have the highest priority relative to current set-points. (Note that if the relevant outcomes are presumed to be represented, rather than indicated — as must be the case if the outcomes are considered to be external outcomes in the environment — then there is a circularity involved in using such notions to model representation.)

Error. The second reason why such indications might be useful is that they create the possibility of error, and — most importantly — the possibility of the detection of error by the system. Detection of error, in turn, can be useful for guiding heuristics and strategies of interaction, and for evoking and guiding learning processes. Any general form of learning, in fact, requires such system detection of error (Bickhard & Terveen, 1995). In slogan form: only anticipations can be falsified; therefore only anticipations can be learned.

On the Adequacy of Interactive Representation

Interactive, or robotic, representation might seem adequate for the kinds of interactive properties that interactive indications will implicitly predicate of the environment, but there are many other things to be represented that do not, prima facie, look like interactive properties. To make good on claims of the adequacy of interactive representation as a general form of representation would require a programmatic treatment of many or most of these representational phenomena. There isn’t space to even begin that explication here (see, for example, Bickhard, 1980a, 1980b, 1992, 1993, in press, forthcoming; Bickhard & Campbell, 1992; Bickhard & Richie, 1983; Bickhard & Terveen, 1995; Campbell & Bickhard, 1986, 1992), but I will outline an approach to the interactive representation of physical objects in order to indicate that this is at least a plausible programme.

Complexities of Interactive Indications. Before addressing objects per se, I need to outline some forms of complexity that can be involved in interactive indications. The first is that there may be multiple interactive possibilities indicated at a given time. The second is that interactive indications can be conditionalized on each other: interaction A with possible outcome Q might be indicated, and, if A is engaged in and Q is in fact obtained, then interaction B with possible outcome R becomes possible. There are other kinds of complications possible, but branchings and conditionalized iterations of interactive indications will suffice for briefly addressing the problem of object representation.

Webs. Branchings and conditional iterations yield the possibility of interactive indications forming potentially complex webs or nets of indications. In effect, the whole of such a web is indicated as currently possible, but actually reaching some parts of the web will be contingent on perhaps many intermediate interactions and outcomes.

Objects. Some sub-networks in such a complex web may have two critical properties: 1) A subnet may be closed in the sense that, if any part of it is reachable — possible — then all parts of it are. That is, all points (p
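The conditionalized indications and closed sub-networks just described can also be given a minimal functional sketch. The graph below, the cup-handling example, and the reading of "closed" as mutual reachability among the subnet's points are illustrative assumptions rather than claims from the paper:

    from itertools import permutations

    # Each node is a current interactive situation; each edge is a conditionalized
    # indication: (interaction, indicated outcome, situation that then becomes possible).
    web = {
        "facing-cup":  [("turn", "handle-seen", "handle-view"),
                        ("lift", "weight-felt", "holding-cup")],
        "handle-view": [("turn", "rim-seen", "facing-cup")],
        "holding-cup": [("put-down", "contact-lost", "facing-cup")],
    }

    def reachable(web, start):
        # All situations reachable from `start` by chaining conditional indications.
        seen, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            for _interaction, _outcome, nxt in web.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    def is_closed_subnet(web, nodes):
        # Closed in the relevant sense: every point of the subnet is reachable from every other.
        return all(b in reachable(web, a) for a, b in permutations(nodes, 2))

    print(is_closed_subnet(web, {"facing-cup", "handle-view", "holding-cup"}))  # True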

[1]  J. Piaget The construction of reality in the child , 1954 .

[2]  Rita Nolan A Theory of Content and Other Essays , 1992 .

[3]  R. L. Campbell,et al.  Clearing the ground: Foundational questions once again , 1992 .

[4]  C. Campbell,et al.  On Being There , 1965 .

[5]  Mark H. Bickhard,et al.  Piaget on Variation and Selection Models: Structuralism, Logical Necessity, and Interactivism , 1988 .

[6]  Wade Troxell,et al.  Interactivism: a Functional Model of Representation for Behavior-Based Systems , 1995, ECAL.

[7]  R. Kirk Language, Thought, and Other Biological Categories , 1985 .

[8]  Mark H. Bickhard,et al.  Interactivism and genetic epistemology , 1989 .

[9]  Robin J. Evans,et al.  Towards a theory of cognition under a new control paradigm , 1992 .

[10]  C. Pollard,et al.  Center for the Study of Language and Information , 2022 .

[11]  Mark H. Bickhard,et al.  Levels of representationality , 1998, J. Exp. Theor. Artif. Intell..

[12]  James H. Moor,et al.  Knowledge and the Flow of Information. , 1982 .

[13]  Ulrich Nehmzow,et al.  Using Motor Actions for Location Recognition , 1991 .

[14]  R. Hepburn,et al.  BEING AND TIME , 2010 .

[15]  P. Churchland A neurocomputational perspective , 1989 .

[16]  Allen Newell,et al.  Physical Symbol Systems , 1980, Cogn. Sci..

[17]  Wade Troxell,et al.  INTELLIGENT BEHAVIOR IN MACHINES EMERGING FROM A COLLECTION OF INTERACTIVE CONTROL STRUCTURES , 1995, Comput. Intell..

[18]  Jerome A. Feldman,et al.  Connectionist models and their implications , 1988 .

[19]  M. Martin White Queen Psychology and Other Essays for Alice , 1995 .

[20]  Benjamin Kuipers,et al.  Navigation and Mapping in Large Scale Space , 1988, AI Mag..

[21]  Mark H. Bickhard Emergence of Representation in Autonomous Agents , 1997, Cybern. Syst..

[22]  David E. Rumelhart,et al.  The architecture of mind: a connectionist approach , 1989 .

[23]  R. A. Brooks,et al.  Intelligence without Representation , 1991, Artif. Intell..

[24]  Barry Loewer,et al.  Meaning in mind : Fodor and his critics , 1993 .

[25]  Benjamin Kuipers,et al.  A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations , 1991, Robotics Auton. Syst..

[26]  William P. Alston,et al.  Knowledge and the Flow of Information , 1985 .

[27]  Mark H. Bickhard,et al.  Developmental aspects of expertise: Rationality and generalization , 1996, J. Exp. Theor. Artif. Intell..

[28]  C. A. Hooker,et al.  Reason, regulation, and realism : toward a regulatory systems theory of reason and evolutionary epistemology , 1999 .

[29]  Mark H. Bickhard,et al.  Knowing Levels and Developmental Stages , 1986 .

[30]  Randall Beer,et al.  Intelligence as Adaptive Behavior , 1990 .

[31]  James L. McClelland,et al.  Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations , 1986 .

[32]  J. Fodor,et al.  How direct is visual perception?: Some reflections on Gibson's “ecological approach” , 1981, Cognition.

[33]  Mark H. Bickhard,et al.  Foundational issues in artificial intelligence and cognitive science - impasse and solution , 1995, Advances in psychology.

[34]  Lynn Andrea Stein,et al.  Imagination and situated cognition , 1991, J. Exp. Theor. Artif. Intell..

[35]  Mark H. Bickhard,et al.  Representational content in humans and machines , 1993, J. Exp. Theor. Artif. Intell..

[36]  P. Smolensky On the proper treatment of connectionism , 1988, Behavioral and Brain Sciences.

[37]  Maurice Merleau-Ponty Phenomenology of Perception , 1964 .

[38]  E. Gibson,et al.  On the Nature of Representation: A Case Study of James Gibson's Theory of Perception , 1983 .

[39]  Rolf George,et al.  The Semantic Tradition from Kant to Carnap , 1996 .

[40]  James L. McClelland,et al.  Parallel Distributed Processing: Explorations in the Microstructure of Cognition : Psychological and Biological Models , 1986 .

[41]  R A Brooks,et al.  New Approaches to Robotics , 1991, Science.

[42]  James L. McClelland,et al.  Phenomenology of perception. , 1978, Science.

[43]  R. L. Campbell,et al.  Topologies of learning and development , 1996 .

[45]  Erich Prem,et al.  Grounding and the Entailment Structure in Robots and Artificial Life , 1995, ECAL.

[46]  Randall D. Beer,et al.  Computational and dynamical languages for autonomous agents , 1996 .

[47]  Ruth Garrett Millikan,et al.  White Queen Psychology and Other Essays for Alice. , 1984 .

[48]  Mark H. Bickhard,et al.  Is Cognition an Autonomous Subsystem , 1997 .

[49]  Mark H. Bickhard,et al.  Cognition, convention, and communication , 1980 .

[50]  Mark H. Bickhard Troubles with Computationalism , 1996 .

[51]  M. Litch,et al.  On Explaining Behavior , 2000 .

[52]  Mark H. Bickhard,et al.  Some foundational questions concerning language studies: With a focus on categorial grammars and model-theoretic possible worlds semantics , 1992 .

[53]  Jean-Arcady Meyer,et al.  Robots and Representations , 1998 .

[54]  Ulrich Nehmzow,et al.  Mapbuilding using self-organising networks in “really useful robots” , 1991 .