The ABCs of Assured Autonomy

Each passing day seems to bring new instances of routine tasks being automated and of artificial intelligence or machine learning algorithms being applied in new domains. Autonomous systems are a diverse class of technologies, ranging from AI-driven natural language processing and image recognition to closed-loop control systems for aircraft autopilots. Coincident with this explosion in autonomy is a steady drumbeat of stories about AI failures, ranging from tragedies resulting in bystander deaths to humorous examples of video-game flaws being exploited. Widespread deployment and use of autonomous systems will depend on society trusting that these systems perform as expected. While bias from incomplete training data is a well-trodden avenue for reducing AI failures, it is unclear whether such bias is a sufficient or merely a necessary condition for loss of trust. What else may be needed for public assurance in autonomous systems? Here we identify three features of diverse autonomous systems that serve as a foundation for assured autonomy: the accuracy with which the algorithm senses and perceives its environment in a manner relatable to humans; the reduction of bias arising both from training data and from the algorithm itself; and the complexity of the algorithmic process, in terms of the ability to reverse engineer its decision-making. Building from this foundation, future autonomous systems can begin to reverse the loss of trust now emerging around these technologies.
