A Human-Centered Approach to Interactive Machine Learning

The interactive machine learning (IML) community aims to augment humans' ability to learn and make decisions over time through the development of automated decision-making systems. This interaction represents a collaboration between multiple intelligent systems: humans and machines. A lack of appropriate consideration for the humans involved can lead to problematic system behavior and to issues of fairness, accountability, and transparency. This work presents a human-centered approach to applying IML methods, intended as a practical guide for AI practitioners who incorporate human factors in their work. These practitioners are responsible for the health, safety, and well-being of the humans who interact with their systems. This obligation of responsibility to the public means acting with integrity, honesty, and fairness, and abiding by applicable legal statutes. With these values and principles in mind, we as a research community can better achieve the collective goal of augmenting human ability. This guide aims to support the many responsible decisions necessary throughout the iterative design, development, and dissemination of IML systems.
