A Multi-Category Theory of Intention

Henny Admoni (henny@cs.yale.edu) and Brian Scassellati (scaz@cs.yale.edu)
Department of Computer Science, Yale University, 51 Prospect Street, New Haven, CT 06511 USA

Abstract

People excel at attributing intentionality to other agents, whether in simple scenarios such as shapes moving in two dimensions or complex scenarios such as people interacting. We note that intentionality attributions seem to fall into two categories: low-level intentionality, in which an observer has a theory of mind about an agent, and high-level intentionality, in which an observer believes the agent has a theory of mind about something else. We introduce the terms L-intentionality and H-intentionality to refer to these attributions, respectively, and describe this division using examples from previous research. Social robots provide a particularly good platform for evaluating the presence of different types of intentionality, and we discuss how robots can help distinguish the relationship between H- and L-intentionality, based on a number of possible models that we enumerate. We conclude by highlighting some interesting questions about intentionality in general and the interplay between H- and L-intentionality in particular.

Keywords: intention; animacy; computer model; human-robot interaction; robotics

Introduction

Much research in psychology has focused on people's ability, and eagerness, to attribute intentions and animacy to simple shapes based on motion. From Michotte's (1963) and Heider and Simmel's (1944) experiments with animacy and intention to recent work decomposing intentional actions such as chasing (Gao, Newman, & Scholl, 2009), psychologists have found that intention attributions to moving shapes appear to be immediate and irresistible.
Animacy is often observed in a display of simple shapes when the motion in the display cannot be explained as ordinary inanimate motion, for instance when speed and direction change without direct contact with other objects (Tremoulet & Feldman, 2000).

At the same time, evidence shows that people attribute intentions based on high-level behavioral evaluations as well. For instance, 18-month-old toddlers can recognize and imitate intentional actions performed by adults, even if those actions are unsuccessful (Meltzoff, 1995). By preschool age, children begin to represent others' beliefs, even when those beliefs are mistaken, in order to correctly predict a person's intentional action (Wellman, Cross, & Watson, 2001). In adults, neurological evidence indicates that a particular brain region, the superior temporal sulcus, is sensitive to whether people's motions are consistent or inconsistent with their purported intentions (Pelphrey, Morris, & McCarthy, 2004).

While abundant evidence demonstrates people's attributions of intentionality, the types of attributions they make seem to differ. Cues that prompt intention attributions come in two categories: low-level perceptual cues, such as motion, and high-level cues that must be reasoned about, such as facial expression. To distinguish between intentions cued in these different ways, we introduce two novel terms, referring to intention attributions made from low-level cues as L-intentionality and to attributions made from high-level cues as H-intentionality. To date, little work has explored such categorical differences of intentionality. In the Types of Intentionality section, we use examples from previously published research to define our hypothesis that L-intentionality and H-intentionality are separate kinds of intention attributions.

Robotics has provided a valuable experimental platform for testing perceptions of intentionality.
Because robots are extremely flexible (in terms of appearance, motions, sounds, and so on), researchers can manipulate specific variables of a human-robot interaction to test specific features of intentionality attributions. In the Social Robots as Experimental Platforms section, we describe past work with robots and other computational models of intentionality, and we discuss the benefits social robotics can offer intentionality research.

The next section, Models of Intentionality, enumerates possible models for the relationship between H-intentionality and L-intentionality based on the hypothesis that these are distinct observations. We describe what each model implies about real-world intentionality attributions, and we note how each model can be tested to confirm or refute our hypothesis.

We conclude the paper by discussing some likely starting points for research on the different categories of intention and by describing some interesting questions about intentionality that have yet to be addressed.

Types of Intentionality

In this paper, we define an intentional action as a goal-directed action that is performed deliberately. Intentionality is the capacity to express or perform intentional actions. A theory of mind for other agents enables us to attribute intentionality to those agents (Leslie, 1987; Baron-Cohen, 1995), an ability that develops early in life (Meltzoff, 1995). Note that for our purposes, animacy and intentionality are strongly correlated: it is impossible to attribute animacy without the presence of intentional, goal-directed behavior (Tremoulet & Feldman, 2006).

In this section, we treat L-intentionality and H-intentionality as distinct but related categories, each defined by how an observer perceives and recognizes intentionality.
To put the categorical difference simply, L-intentionality in an agent involves an observer having a theory of mind for that agent; H-intentionality involves an observer believing that the agent has a theory of mind for something else (Figure 1).
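The low-level motion cue described above, a change in speed or direction without contact with another object (Tremoulet & Feldman, 2000), can be captured in a minimal detector. The sketch below is illustrative only: the trajectory format, contact bookkeeping, and numeric thresholds are our assumptions, not parameters taken from the literature.

```python
import math

def animacy_cue(trajectory, contact_frames=(), speed_thresh=0.5, angle_thresh=0.3):
    """Flag frames where an object's speed or heading changes sharply
    without any contact event -- the kind of self-propelled motion that
    tends to elicit L-intentionality attributions.

    trajectory: list of (x, y) positions sampled at uniform intervals.
    contact_frames: indices of velocity samples coinciding with contact.
    The thresholds are illustrative placeholders.
    """
    flagged = []
    # Velocity between consecutive position samples.
    vels = [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
    for i in range(1, len(vels)):
        (vx1, vy1), (vx2, vy2) = vels[i - 1], vels[i]
        s1, s2 = math.hypot(vx1, vy1), math.hypot(vx2, vy2)
        speed_change = abs(s2 - s1)
        # Heading change, guarding against zero-speed samples.
        if s1 > 0 and s2 > 0:
            cos_a = max(-1.0, min(1.0, (vx1 * vx2 + vy1 * vy2) / (s1 * s2)))
            heading_change = math.acos(cos_a)
        else:
            heading_change = 0.0
        # A sudden change unexplained by contact is a candidate animacy cue.
        if (speed_change > speed_thresh or heading_change > angle_thresh) \
                and i not in contact_frames:
            flagged.append(i)
    return flagged
```

On a straight constant-speed path the detector stays silent; a sharp unexplained turn is flagged as a candidate animacy cue, while the same turn coinciding with a contact event is not.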

References

Alami, R., et al. (2005). A methodological approach relating the classification of gesture to identification of human intent in the context of human-robot interaction. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN).

Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.

Dickinson, A., et al. (1990). The intentionality of animal action.

Frith, U., et al. (2001). Theory of mind. Current Biology.

Gao, T., Newman, G. E., & Scholl, B. J. (2009). The psychophysics of chasing: A case study in the perception of animacy. Cognitive Psychology.

Hanebeck, U. D., et al. (2007). Tractable probabilistic models for intention recognition based on expert knowledge. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Hart, J. W., et al. (2010). No fair!! An interaction with a cheating robot. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI).

Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology.

Kanda, T., et al. (2009). Nonverbal leakage in robots: Communication of intentions through seemingly unintentional behavior. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

Knobe, J., et al. (2005). Theory of mind and moral cognition: Exploring the connections. Trends in Cognitive Sciences.

Kragic, D., et al. (2006). Layered HMM for motion intention recognition. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Leslie, A. M. (1987). Pretense and representation: The origins of "theory of mind." Psychological Review.

Meltzoff, A. N. (1995). Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children. Developmental Psychology.

Michotte, A. (1963). The perception of causality. New York: Basic Books.

Pelphrey, K. A., Morris, J. P., & McCarthy, G. (2004). Grasping the intentions of others: The perceived intentionality of an action influences activity in the superior temporal sulcus during social perception. Journal of Cognitive Neuroscience.

Rutter, M., et al. (1998). Understanding intention in normal development and in autism.

Scassellati, B., et al. (1999). How to build robots that make friends and influence people. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Scassellati, B., et al. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics.

Scholl, B. J., & Tremoulet, P. D. (2000). Perceptual causality and animacy. Trends in Cognitive Sciences.

Tenenbaum, J. B., et al. (2005). Bayesian models of human action understanding. In Advances in Neural Information Processing Systems (NIPS).

Tremoulet, P. D., & Feldman, J. (2000). Perception of animacy from the motion of a single object. Perception.

Tremoulet, P. D., & Feldman, J. (2006). The influence of spatial context and the role of intentionality in the interpretation of animacy from motion. Perception & Psychophysics.

Tremoulet, P. D., et al. (2008). The attribution of mental architecture from motion: Towards a computational theory.

Weinstein, A., et al. (2011). Perception of intentions and mental states in autonomous virtual agents. In Proceedings of the Annual Conference of the Cognitive Science Society (CogSci).

Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development.