Software Agents with Concerns of their Own

We claim that it is possible to build artificial software agents for which their actions, and the world they inhabit, have first-person or intrinsic meanings. The first-person or intrinsic meaning of an entity to a system is defined as its relation to the system's goals and capabilities, given the properties of the environment in which the system operates. For a system to develop first-person meanings, therefore, it must see itself as a goal-directed actor facing limitations and opportunities determined by its own capabilities and by the properties of its environment. The first part of the paper discusses this claim in the context of arguments against, and proposals addressing, the development of computer programs with first-person meanings. A set of definitions is also presented, most importantly the concepts of cold and phenomenal first-person meanings. The second part of the paper presents preliminary proposals and achievements, resulting from actual software implementations, within a research approach that aims to develop software agents that intrinsically understand their actions and what happens to them. As a result, an agent with no a priori notion of its goals, its capabilities, or the properties of its environment acquires all these notions by observing itself in action. The cold first-person meanings of the agent's actions, and of what happens to it, are then defined in terms of these acquired notions. Although this does not solve the full problem of first-person meanings, the proposed approach and preliminary results give us some confidence in addressing the problems yet to be considered, in particular the phenomenal aspect of first-person meanings.
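The mechanism sketched in the abstract, an agent that starts with no model of its own capabilities, acquires one by observing itself in action, and then grounds the "cold" first-person meaning of an action in that action's relation to its goals, can be illustrated with a minimal sketch. All names and the scoring scheme below are hypothetical illustrations, not the paper's actual implementation:

```python
from collections import defaultdict

class SelfObservingAgent:
    """Illustrative agent with no a priori model of its own capabilities.

    It acquires a capability model purely by observing the effects of its
    own actions, then rates the cold first-person 'meaning' of an action by
    its relation to the agent's current goal. This is a hypothetical sketch;
    the paper does not specify this implementation.
    """

    def __init__(self):
        # Acquired capability model: action -> state variables it was
        # observed to change. Empty at start (no a priori notions).
        self.effects = defaultdict(set)

    def observe(self, action, state_before, state_after):
        # Self-observation: record which state variables the action changed.
        for var in state_before:
            if state_before[var] != state_after[var]:
                self.effects[action].add(var)

    def cold_meaning(self, action, goal_vars):
        # An action is meaningful to the agent insofar as it affects
        # variables its goal depends on (a crude stand-in for "relation
        # to the system's goals and capabilities").
        if not self.effects[action]:
            return 0.0
        relevant = self.effects[action] & set(goal_vars)
        return len(relevant) / len(self.effects[action])

agent = SelfObservingAgent()
agent.observe("push", {"pos": 0, "light": 0}, {"pos": 1, "light": 0})
agent.observe("flip", {"pos": 1, "light": 0}, {"pos": 1, "light": 1})
print(agent.cold_meaning("flip", {"light"}))  # 1.0: flip affects only goal-relevant state
print(agent.cold_meaning("push", {"light"}))  # 0.0: push affects no goal variable
```

The key design point mirrored here is that the capability model (`effects`) is built entirely from the agent's own observed history rather than given by the designer, so whatever "meaning" the agent assigns is defined in its own acquired terms.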
