U.S. Public Opinion on the Governance of Artificial Intelligence

Artificial intelligence (AI) has widespread societal implications, yet social scientists are only beginning to study public attitudes toward the technology. Existing studies find that the public's trust in institutions can play a major role in shaping the regulation of emerging technologies. Using a large-scale survey (N=2000), we examined Americans' perceptions of 13 AI governance challenges as well as their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI. While Americans perceive all of the AI governance challenges to be important for tech companies and governments to manage, they place only low to moderate trust in these institutions to manage AI applications responsibly.
