Engineering Societies in the Agents World III

Our thesis is that an agent is autonomous only if he is capable, within a non-predictable environment, of balancing two forms of rationality: one that, given goals and preferences, enables him to select the best course of action (means-ends); the other that, given current achievements and capabilities, enables him to adapt preferences and future goals. We propose the basic elements of an economic model that should explain how and why this balance is achieved; in particular, we underline that an agent's capabilities can often be considered as partially sunk investments. This leads an agent, while choosing, to consider not just the value generated by the achievement of a goal, but also the value lost through the non-use of existing capabilities. We propose that, under particular conditions, an agent, in order to be rational, may be led to perform a rationalization process of justification that changes preferences and goals according to his current state and available capabilities. Moreover, we propose that such behaviour could offer a new perspective on the notion of autonomy and on the social process of coordination.

1 Rationality in Traditional Theories of Choice

Traditional theories of choice are based upon the paradigm that choosing means deciding the best course of action in order to achieve a goal [31]. Goals are generally considered as given or, at least, they are selected through an exogenous preference function which assigns an absolute value to each possible state of the world [29]. Potential goals, once ordered according to preferences, are selected by comparing each absolute value with the cost of its achievement. In particular, the agent will commit to the goal that maximizes the difference between the absolute benefit of the goal and the cost of using the capabilities needed to achieve it.
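As an illustration, the classical means-ends selection rule just described can be sketched in a few lines of code. The goals, benefits, and costs below are hypothetical, chosen only to make the computation concrete; this is a sketch of the decision rule, not a proposed implementation.

```python
# Classical means-ends selection: commit to the goal that maximizes
# (absolute benefit - cost of the capabilities needed to achieve it).
# All names and numbers here are hypothetical, for illustration only.

goals = {
    "write_report": {"benefit": 10.0, "cost": 6.0},  # net value 4.0
    "learn_tool":   {"benefit": 8.0,  "cost": 3.0},  # net value 5.0
    "do_nothing":   {"benefit": 0.0,  "cost": 0.0},  # net value 0.0
}

def select_goal(goals):
    """Return the goal with maximal net value (benefit - cost)."""
    return max(goals, key=lambda g: goals[g]["benefit"] - goals[g]["cost"])

print(select_goal(goals))  # "learn_tool": its net value 5.0 is the maximum
```

Note that the preference function here is exogenous: the agent only chooses among given goals with given values, which is exactly the executive autonomy discussed next.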
This means-ends paradigm subtends a type of rationality that March defines as anticipatory, causative, and consequential, since an agent anticipates the consequences of his actions through knowledge of cause-effect relationships [9][24]. Here, as underlined by Castelfranchi, autonomy is viewed in the restrictive sense of executive autonomy: the only discretionality the agent possesses concerns the way in which a goal is to be achieved, not which kind of goal should be preferable; in this sense, even if an agent selects a goal, he is unable to direct the criteria of the selection. The interest of the agent is always traceable to that of the designer and, as Steels concludes with reference to artificial agents, "AI systems built using the classical approach are not autonomous, although they are automatic . . . these systems can never step outside the boundaries of what was foreseen by the designer because they cannot change their own behaviour in a fundamental way." [34]. Sometimes, as we will propose, autonomy and rationality lie in the possibility of changing our mind about what is good and what is bad on the basis of current experience; basically, this is equivalent to the possibility of deciding not just how to achieve a goal, but rather which goal to achieve and, moreover, which one is preferable.

1 In this paper we intentionally draw no distinction between artificial and human agents, but rather discuss the concept of agent in general.

2 Another Perspective on Rationality: Ex-post

[1] Hal R. Arkes et al. The psychology of waste. 1996.

[2] R. Conte et al. Cognitive and social action. 1995.

[3] Barry M. Staw et al. Attribution of the "causes" of performance: A general alternative interpretation of cross-sectional research on organizations. 1975.

[4] James G. March et al. A primer on decision making: How decisions happen. 1994.

[5] R. Thaler. Quasi Rational Economics. 1991.

[6] R. Frank. Microeconomics and Behavior. 1991.

[7] Cristiano Castelfranchi et al. Guarantees for autonomy in cognitive agent architecture. 1995, ECAI Workshop on Agent Theories, Architectures, and Languages.

[8] Eldar Shafir et al. Reason-based choice. 1993, Cognition.

[9] H. Arkes et al. The psychology of sunk cost. 1985.

[10] Hersh Shefrin et al. Beyond greed and fear: Understanding behavioral finance and the psychology of investing. 2000.

[11] Eric J. Johnson et al. Behavioral decision research: A constructive processing perspective. 1992.

[12] R. Daft et al. Toward a model of organizations as interpretation systems. 1984.

[13] H. Arkes et al. The sunk cost and Concorde effects: Are humans less rational than lower animals? 1999.

[14] Barry M. Staw et al. Knee-deep in the Big Muddy: A study of escalating commitment to a chosen course of action. 1976.

[15] James G. March et al. How decisions happen in organizations. 1991, Hum. Comput. Interact.

[16] D. Soman. The mental accounting of sunk time costs: Why time is not like money. 2001.

[17] Luc Steels. When are robots intelligent autonomous agents? 1995, Robotics Auton. Syst.

[18] Donald Nute. Counterfactuals. 1975, Notre Dame J. Formal Log.

[19] Michael X Cohen et al. A garbage can model of organizational choice. 1972.

[20] Joel Brockner et al. Face-saving and entrapment. 1981.

[21] Barry M. Staw et al. Understanding behavior in escalation situations. 1989, Science.

[22] D. Johnstone. The "reverse" sunk cost effect and explanations rational and irrational. 2000.

[23] H. Simon. Reason in Human Affairs. 1984.

[24] Jon Doyle et al. See Infer Choose Do Perceive Act. 2009.

[25] Roberta Ferrario et al. Counterfactual reasoning. 2001, CONTEXT.

[26] Fausto Giunchiglia et al. Local Models Semantics, or contextual reasoning = locality + compatibility. 1998, KR.

[27] Stephanie Newport et al. Effects of absolute and relative sunk costs on the decision to persist with a course of action. 1991.

[28] Chiara Ghidini et al. Contextual reasoning distilled. 2000, J. Exp. Theor. Artif. Intell.