Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation

Growing interest in eXplainable Artificial Intelligence (XAI) aims to make AI and machine learning more understandable to human users. However, most existing work focuses on new algorithms rather than on usability, practical interpretability, and efficacy for real users. In this vision paper, we propose a new research area of eXplainable AI for Designers (XAID), aimed specifically at game designers. By focusing on a specific user group and their needs and tasks, we propose a human-centered approach that helps game designers co-create with AI/ML techniques through XAID. We illustrate our initial XAID framework through three use cases, each of which requires an understanding of both the innate properties of the AI techniques and the users’ needs, and we identify key open challenges.
