Using Intrinsic Complexity of Turn-Taking Games to Predict Participants' Reaction Times

Jakub Szymanik (jakub.szymanik@gmail.com), Institute for Logic, Language and Computation, University of Amsterdam
Ben Meijering (b.meijering@rug.nl), Rineke Verbrugge (L.C.Verbrugge@rug.nl), Institute of Artificial Intelligence, University of Groningen

Abstract

We study structural properties of a turn-based game called the Marble Drop Game, which is an experimental paradigm designed to investigate higher-order social reasoning. We show that the cognitive complexity of game trials, measured with respect to reaction time, can be predicted by looking at the structural properties of the game instances. In order to do this, we define complexity measures of finite dynamic two-player games based on the number of alternations between the game players and on the pay-off structure. Our predictions of reaction times and reasoning strategies, based on the theoretical analysis of the complexity of Marble Drop game instances, are compared to subjects' actual reaction times. This research illustrates how formal methods of logic and computer science can be used to identify the inherent complexity of cognitive tasks. Such analyses can be located between Marr's computational and algorithmic levels.

Keywords: cognitive difficulty; strategic games; higher-order social reasoning; theory of mind

Introduction

In recent years, questions have been raised about the applicability of logic and computer science to model cognitive phenomena (see, e.g., Frixione, 2001; Stenning and Van Lambalgen, 2008; Van Rooij, 2008). One of the trends has been to apply formal methods to study the complexity of cognitive tasks in various domains, for instance: syllogistic reasoning (Geurts, 2003), problem solving (Gierasimczuk et al., 2012), and natural language semantics (Szymanik and Zajenkowski, 2010).
It has been argued that, with respect to its explanatory power, such analysis can be located between Marr's (1983) computational and algorithmic levels. More recently, there has also been a trend to focus on similar questions regarding social cognition, more specifically, theory of mind. In particular, higher-order reasoning of the form 'I believe that Ann knows that Peter thinks . . . ' has become an attractive topic for logical analysis (Verbrugge, 2009). Here, the logical investigations often go hand in hand with game theory (see, e.g., Osborne and Rubinstein, 1994). In this context, one of the common topics among researchers in logic and game theory has been backward induction (BI), the process of reasoning backwards from the end of the game to determine a sequence of optimal actions (Van Benthem, 2002). Backward induction can be understood as an inductive algorithm defined on a game tree. The BI algorithm tells us which sequence of actions will be chosen by agents that want to maximize their own payoffs, assuming common knowledge of rationality. In game-theoretic terms, backward induction is a common method for determining subgame-perfect equilibria in the case of finite extensive-form games.¹

Games have been extensively used to design experimental paradigms aimed at studying social cognition (Camerer, 2003), recently with a particular focus on higher-order social cognition: the matrix game (Hedden and Zhang, 2002), the race game (Gneezy et al., 2010; Hawes et al., 2012), the road game (Flobbe et al., 2008; Raijmakers et al., 2013), and the Marble Drop Game (henceforth, MDG) (Meijering et al., 2010, 2011, 2012). All the mentioned paradigms are in fact game-theoretically equivalent: they are all finite extensive-form games that can be solved by applying BI. As an example, in this paper we will consider the MDG (see Fig. 1).
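The backward-induction procedure described above can be made concrete in a few lines of code. The following is a minimal sketch, not the paper's Marble Drop implementation: the `Node` structure, the example game, and its payoffs are illustrative assumptions. Each internal node records which player moves there; solving a node means solving every subgame below it first, then letting the player to move pick the subgame whose outcome is best for them, which is exactly the common-knowledge-of-rationality assumption.

```python
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class Node:
    """A node in a finite two-player extensive-form game tree."""
    player: Optional[int] = None            # 0 or 1 at internal nodes; None at leaves
    payoffs: Optional[Tuple[int, int]] = None  # (payoff to player 0, payoff to player 1) at leaves
    children: List["Node"] = field(default_factory=list)

def backward_induction(node: Node) -> Tuple[int, int]:
    """Return the payoff pair reached when, at every node, the player
    to move chooses the continuation maximizing their own payoff."""
    if not node.children:                   # leaf: outcome is fixed
        return node.payoffs
    # Solve each subgame first, then let the current player choose
    # among the resulting outcomes.
    outcomes = [backward_induction(child) for child in node.children]
    return max(outcomes, key=lambda p: p[node.player])

# Illustrative two-move game: player 0 moves first, then player 1.
game = Node(player=0, children=[
    Node(player=1, children=[Node(payoffs=(3, 1)), Node(payoffs=(0, 4))]),
    Node(payoffs=(2, 2)),
])

# In the left subgame, player 1 prefers (0, 4) over (3, 1); anticipating
# this, player 0 stops early and secures (2, 2).
print(backward_induction(game))  # → (2, 2)
```

The example shows why the order of reasoning matters: player 0's best move cannot be determined before player 1's response in the left subgame has been solved, which is the alternation structure the complexity measures in this paper are built on.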
Many studies have indicated that application of higher-order social reasoning among adults is far from optimal (see, e.g., Hedden and Zhang, 2002; Verbrugge and Mol, 2008). However, Meijering et al. (2010, 2011) report near-ceiling performance of subjects when their reasoning processes are facilitated by, for example, step-wise training. Still, an eye-tracking study of subjects solving the game suggests that backward induction is not necessarily the only strategy used (Meijering et al., 2012).

We still do not know exactly what reasoning strategies² subjects apply when playing these kinds of dynamic extensive-form games. One way to use formal methods to study this question has been proposed by Ghosh et al. (2010) and Ghosh and Meijering (2011): formulate all reasoning strategies in a logical language, and compare ACT-R models based on each reasoning strategy with a subject's actual performance in a sequence of games, based on reaction times, accuracy, and eye-tracking data. This corresponds to a study between the computational and algorithmic levels of Marr's (1983) hierarchy.

¹ Backward induction is a generalization of the minimax algorithm for extensive-form games; the subgame-perfect equilibrium is a refinement of the Nash equilibrium, introduced to exclude equilibria with implausible threats (Osborne and Rubinstein, 1994).
² The term 'strategy' is used here more broadly than in game theory, where it is just a partial function from the set of histories (sequences of events) at each stage of the game to the set of actions of the player when it is supposed to make a move. We are interested in human reasoning strategies that can be used to solve the cognitive problems posed by the game.
References

[1] Sanjeev Arora et al. Computational Complexity: A Modern Approach, 2009.
[2] H. van Rijn et al. The Facilitative Effect of Context on Second-Order Social Reasoning, 2010.
[3] Rineke Verbrugge et al. Learning to Apply Theory of Mind, 2008, J. Log. Lang. Inf.
[4] Rineke Verbrugge et al. I Do Know What You Think I Think: Second-Order Theory of Mind in Strategic Games Is Not That Difficult, 2011, CogSci.
[5] Iris van Rooij et al. The Tractable Cognition Thesis, 2008, Cogn. Sci.
[6] Sujata Ghosh et al. On combining cognitive and formal modeling: A case study involving strategic reasoning, 2011.
[7] R. Baayen et al. Mixed-effects modeling with crossed random effects for subjects and items, 2008.
[8] Rineke Verbrugge et al. Logic Meets Cognition: Empirical Reasoning in Games, 2010, MALLOW.
[9] R. Verbrugge. The Facts Matter, and so do Computational Models, 2009.
[10] Han L. J. van der Maas et al. Logical and psychological analysis of deductive mastermind, 2012, ESSLLI Logic & Cognition Workshop.
[11] Rineke Verbrugge et al. Children's Application of Theory of Mind in Reasoning and Language, 2008, J. Log. Lang. Inf.
[12] Aldo Rustichini et al. Experience and insight in the Race game, 2010.
[13] Marcello Frixione. Tractable Competence, 2001, Minds and Machines.
[14] Jakub Szymanik et al. Comprehension of Simple Quantifiers: Empirical Evaluation of a Computational Model, 2010, Cogn. Sci.
[15] Ariel Rubinstein et al. A Course in Game Theory, 1995.
[16] T. Hedden et al. What do you think I think you think?: Strategic reasoning in matrix games, 2002, Cognition.
[17] Jakub Szymanik et al. Computational complexity of polyadic lifts of generalized quantifiers in natural language, 2010.
[18] Maartje E. J. Raijmakers et al. Children's strategy use when playing strategic games, 2012, Synthese.
[19] David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, 2009.
[20] Johan van Benthem. Extensive Games as Process Models, 2002, J. Log. Lang. Inf.
[21] Keith Stenning et al. Human Reasoning and Cognitive Science, 2008.
[22] Bart Geurts. Reasoning with quantifiers, 2003, Cognition.
[23] Aldo Rustichini et al. Experience and Abstract Reasoning in Learning Backward Induction, 2011, Front. Neurosci.
[24] N. Taatgen et al. What Eye Movements Can Tell about Theory of Mind in a Strategic Game, 2012, PLoS ONE.
[25] C. Camerer. Behavioral Game Theory: Experiments in Strategic Interaction, 2003.