How effective are presidential candidates at engaging viewers during debates? To answer this question, we designed a mobile app and conducted a large-scale national study of individual college students' real-time reactions to the first general election debate of 2012. Previous studies have relied either on real-time but small-sample individual dial reactions or on large-scale public reactions to debates in their entirety, after the fact, and without consideration of specific statements or events within the debates. By contrast, our approach allowed us to collect moment-by-moment data from a large and diverse group of participants in natural settings. The resulting data make it possible to answer questions previously believed to be outside the bounds of systematic inquiry. Here, we explain the method and provide some key findings that illustrate the payoff of our approach. Our study suggests that collecting large-scale, real-time data is feasible and valuable for advancing research on a host of public opinion phenomena.

Amber E. Boydstun is an assistant professor of political science at the University of California–Davis, Davis, CA, USA. Rebecca A. Glazier is an assistant professor of political science at the University of Arkansas at Little Rock, Little Rock, AR, USA. Matthew T. Pietryka is an assistant professor of political science at Florida State University, Tallahassee, FL, USA. Philip Resnik is a professor in the Department of Linguistics and the Institute for Advanced Computer Studies at the University of Maryland, College Park, MD, USA.

The authors are deeply indebted to the hundreds of instructors and thousands of students who participated in the 2012 React Labs: Educate project. Timothy Jurka's masterful skills helped make the project possible in the first place. All-night postdebate data analysis sessions were made possible—and hilarious—by Timothy Jurka, Debra Leiter, Jack Reilly, and Michelle Schwarze.
The authors thank Ben Highton for very helpful comments on the manuscript and Drew Stephens for his invaluable technical support. Funding support for this project was provided by a Presidential Studies Center grant from the University of Arkansas at Little Rock. *Address correspondence to Amber E. Boydstun, University of California–Davis, One Shields Ave., Davis, CA 95616, USA; e-mail: aboydstun@ucdavis.edu.

Real-Time Reactions to a 2012 Presidential Debate. Public Opinion Quarterly, Vol. 78, Special Issue, 2014, pp. 330–343. doi:10.1093/poq/nfu007

Presidential debates serve a singular role in U.S. elections. Debates uniquely provide candidates unmediated access to a large and diverse audience (Trent and Friedenberg 2008), including marginally attentive citizens (Pfau 2003) and undecided voters (Geer 1988) who use debates to learn about the candidates (Blais and Perrella 2008; Holbrook 1999; Lemert 1993). Indeed, debates are the most visible, widely watched events of a presidential campaign (Benoit, Hansen, and Verser 2003; Schroeder 2008). Yet, despite the importance of debates, we know little about exactly which candidate cues tend to resonate positively with viewers and, just as important, which cues provoke negative affect. Examining the effects of debate cues requires the ability to track a large sample of viewers' responses to debates in real time in a natural environment. Toward this aim, we designed a mobile app for use during the first 2012 debate, providing real-time reactions with a level of scale and detail not previously possible. Here, we describe the method we developed and its implementation, along with several key findings that illustrate its value over existing methods.

Studying Debate Reactions

Past debate research, although impressive in many ways, has been unable to measure the effect of specific candidate messages on individual attitudes.
Most mainstream polls collect aggregate data only after a debate has finished (e.g., Holbrook 1999; Shaw 1999), making individual-level conclusions impossible. And most large-scale individual-level research on debates also relies on postdebate evaluations (e.g., Abramowitz 1978; Geer 1988; Hillygus and Jackman 2003; Steeper 1978). Whether surveys are cross-sectional (e.g., Lanoue 1992; Sigelman and Sigelman 1984) or panel designs (e.g., Kraus and Smith 1977; Tsfati 2003), the data cannot differentiate between the effects of the debate itself and other influences, such as media coverage of the debates (Brubaker and Hanson 2009; Fridkin et al. 2007). Moreover, these studies cannot isolate which candidate messages are influencing viewers. Recent work indicates that researchers cannot trust survey respondents to self-report accurately even whether they watched the debate (Prior 2012). Thus, while past research has contributed greatly to our understanding of debate effects (Bartels 2006; Benoit, Hansen, and Verser 2003; Geer 1988; Holbrook 1999), scholars have often been reduced to educated guesswork about which specific candidate cues produce these effects. A handful of innovative studies have used dial testing to collect real-time data but have been limited by the costs and logistical complications associated with specialized hardware, small sample sizes (Kirk and Schill 2011; McKinney, Kaid, and Robertson 2001), and other challenges to external validity, such as artificial focus-group settings (Ramanathan et al. 2010) and, in the case of Kirk and Schill's landmark study (2011), priming from the CNN moderator (Moore 2008).¹ Furthermore, dials provide poor measures of participant engagement. Participants are often repeatedly reminded to respond, and a dial can simply be maintained at a non-midpoint position.
Dials can thus differentiate between degrees of favorability and unfavorability but cannot tell us reliably when a cue has engaged citizens enough to evoke a response.

Collecting Debate Viewer Responses via Mobile App

Our app brings together traditional survey methodology with the moment-by-moment data characteristic of dial-test methods, but it runs on mobile devices, making it possible to utilize a much larger participant pool. Access is via the mobile device's browser; thus, no "app store" download is required, and the app can be used from any smartphone, tablet, or computer. As figure 1 illustrates, four reactions are available: Agree, Disagree, Spin, and Dodge (we consider only the first two here, leaving Spin and Dodge reactions for later analysis). To register a reaction, the user taps (or clicks) the person to whom they are reacting, followed by a reaction button. All reactions therefore include both a target (Moderator, Obama, or Romney, order randomized by participant) and a reaction type (Agree, Disagree, Spin, or Dodge), making clear precisely how and to whom a debate viewer is reacting. Viewers' ability to react on their own initiative allows us to track not only participants' affect but also when they have passed a minimal threshold of effort to take action—even action as small as a click. If a candidate can get a viewer to click—analogous to other forms of minimal political engagement (Shulman 2009; White 2010)—it may represent the first rung in a "ladder of engagement" (Karpf 2010, 16) leading to more substantive mobilization. This mobile-app methodology allows us to collect data from a large and diverse group of debate viewers reacting in their natural environments outside the lab (e.g., in their own homes or at debate-viewing parties). Responses are viewer initiated and virtually instantaneous, thereby allowing us to capture and analyze unmediated viewer reactions as opposed to digested opinions (Brubaker and Hanson 2009; Fridkin et al.
2007; Tsfati 2003).
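The tap-to-react design described above maps naturally onto a simple event log: every reaction is a (target, reaction type, timestamp) triple, and aggregating triples into short time bins yields the moment-by-moment series that can be aligned with specific debate statements. A minimal sketch in Python — the field names, validation, and 30-second bin width are our illustrative assumptions, not the app's actual implementation:

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical event record: the two-step tap sequence (target, then
# reaction button) yields one timestamped (target, reaction) pair.
TARGETS = ("Moderator", "Obama", "Romney")
REACTIONS = ("Agree", "Disagree", "Spin", "Dodge")

@dataclass(frozen=True)
class Reaction:
    user_id: str
    target: str        # whom the viewer is reacting to
    reaction: str      # how the viewer is reacting
    timestamp: float   # seconds since the debate began

    def __post_init__(self):
        if self.target not in TARGETS:
            raise ValueError(f"unknown target: {self.target}")
        if self.reaction not in REACTIONS:
            raise ValueError(f"unknown reaction: {self.reaction}")

def bin_reactions(events, bin_seconds=30):
    """Count (target, reaction) pairs per time bin, producing the
    moment-by-moment series that can be matched to debate cues."""
    bins = {}
    for e in events:
        b = int(e.timestamp // bin_seconds)
        bins.setdefault(b, Counter())[(e.target, e.reaction)] += 1
    return bins
```

Because reactions are viewer initiated, the per-bin counts carry two signals at once: the Agree/Disagree balance (affect) and the raw volume of clicks (engagement), which a dial held at a fixed position cannot provide.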
References

[1] Arthur H. Miller et al., "Front-Page News and Real-World Cues: A New Look at Agenda-Setting by the Media," 1980.
[2] D. Hillygus et al., "Voter Decision Making in Election 2000: Campaign Effects, Partisan Activation, and the Clinton Legacy," 2003.
[3] Sophia Möller et al., "Political Campaign Communication: Principles and Practices," 2016.
[4] J. Krosnick et al., "Attitude Importance and the Accumulation of Attitude-Relevant Knowledge in Memory," Journal of Personality and Social Psychology, 2005.
[5] A. L. McGill et al., "Are Political Opinions Contagious? An Investigation on the Effects of Seating Position and Prior Attitudes on Moment-to-Moment Evaluations During the Presidential Debates," 2010.
[6] Christopher Wlezien et al., "On the Salience of Political Issues: The Problem with 'Most Important Problem'," 2005.
[7] Stuart W. Shulman, "The Case Against Mass E-mails: Perverse Incentives and Low Quality Public Participation in U.S. Federal Rulemaking," 2009.
[8] B. Jones et al., "Agendas and Instability in American Politics," 1993.
[9] Jack M. McLeod et al., "Issues and Images," 1983.
[10] D. Shaw, "A Study of Presidential Campaign Event Effects from 1952 to 1992," The Journal of Politics, 1999.
[11] L. Sigelman et al., "Judgments of the Carter-Reagan Debate: The Eyes of the Beholders," 1984.
[12] Gary Hanson et al., "The Effect of Fox News and CNN's Postdebate Commentator Analysis on Viewers' Perceptions of Presidential Candidate Performance," 2009.
[13] L. L. Kaid et al., "The Front-Runner, Contenders, and Also-Rans," 2001.
[14] Rita Kirk et al., "A Digital Agora: Citizen Participation in the 2008 Presidential Debates," 2011.
[15] W. Riker et al., "The Strategy of Rhetoric: Campaigning for the American Constitution," 1996.
[16] Christopher Wlezien et al., "Distinguishing Between Most Important Problems and Issues," 2011.
[17] A. Abramowitz, "The Impact of a Presidential Debate on Voter Rationality," 1978.
[18] James B. Lemert, "Do Televised Presidential Debates Help Inform Voters?," 1993.
[19] R. Niemi et al., "Determinants of State Economic Perceptions," 1999.
[20] "Reality Bites: News Exposure and Economic Opinion," 1997.
[21] John W. Kingdon, "Agendas, Alternatives, and Public Policies," 1984.
[22] Amber E. Boydstun et al., "Colleague Crowdsourcing: A Method for Fostering National Student Engagement and Large-N Data Collection," PS: Political Science & Politics, 2014.
[23] Judith S. Trent et al., "Political Campaign Communication: Principles and Practices," 1983.
[24] Amber E. Boydstun et al., "Playing to the Crowd: Agenda Control in Presidential Debates," 2013.
[25] E. E. Schattschneider, "The Semisovereign People: A Realist's View of Democracy in America," 1960.
[26] Marc J. Hetherington et al., "The Media's Role in Forming Voters' National Economic Evaluations in 1992," 1996.
[27] M. Prior, "Who Watches Presidential Debates? Measurement Problems in Campaign Effects Research," 2012.
[28] Alan Schroeder, "Presidential Debates: Forty Years of High-Risk TV," 2000.
[29] Nicholas A. Valentino et al., "Elements of Reason: Who Says What? Source Credibility as a Mediator of Campaign Advertising," 2000.
[30] K. Fridkin et al., "Capturing the Power of a Campaign Event: The 2004 Presidential Debate in Tempe," The Journal of Politics, 2007.
[31] Thomas M. Holbrook, "Political Learning from Presidential Debates," 1999.
[32] Larry M. Bartels, "Priming and Persuasion in Presidential Campaigns," 2006.
[33] John G. Geer et al., "The Effects of Presidential Debates on the Electorate's Preferences for Candidates," 1988.
[34] B. Jones et al., "The Politics of Attention: How Government Prioritizes Problems," 2006.
[35] D. Shaw et al., "Agenda Setting Function of Mass Media," 1972.
[36] Shanto Iyengar et al., "Selective Exposure to Campaign Communication: The Role of Anticipated Agreement and Issue Public Membership," 2015.
[37] D. J. Lanoue et al., "One That Made a Difference: Cognitive Consistency, Political Knowledge, and the 1980 Presidential Debate," 1992.
[38] William L. Benoit et al., "A Meta-Analysis of the Effects of Viewing U.S. Presidential Debates," 2003.
[39] Thomas E. Nelson et al., "Media Framing of a Civil Liberties Conflict and Its Effect on Tolerance," American Political Science Review, 1997.
[40] D. Karpf, "Online Political Mobilization from the Advocacy Group's Perspective: Looking Beyond Clicktivism," 2010.
[41] Lynn Vavreck, "The Message Matters: The Economy and Presidential Campaigns," 2009.
[42] Andrea M. L. Perrella et al., "Systemic Effects of Televised Candidates' Debates," 2008.
[43] Amber E. Boydstun et al., "Agenda Control in the 2008 Presidential Debates," 2013.