Jichen Zhu | Anushay Furqan | Sebastian Risi | Chelsea M. Myers | Evan Freed | Luis Fernando Laris Pardo
[1] Perry R. Cook, et al. Human model evaluation in interactive supervised learning, 2011, CHI.
[2] Qian Yang, et al. Grounding Interactive Machine Learning Tool Design in How Non-Experts Actually Build Models, 2018, Conference on Designing Interactive Systems.
[3] Matt J. Kusner, et al. Counterfactual Fairness, 2017, NIPS.
[4] Junmo Kim, et al. Learning Not to Learn: Training Deep Neural Networks With Biased Data, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[6] Andrew Zisserman, et al. Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings, 2018, ECCV Workshops.
[7] Jichen Zhu, et al. Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation, 2018, IEEE Conference on Computational Intelligence and Games (CIG).
[8] Carrie J. Cai, et al. The effects of example-based explanations in a machine learning interface, 2019, IUI.
[9] Adam Tauman Kalai, et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, 2016, NIPS.
[10] Peter A. Flach, et al. Counterfactual Explanations of Machine Learning Predictions: Opportunities and Challenges for AI Safety, 2019, SafeAI@AAAI.
[11] Qian Yang, et al. Machine Learning as a UX Design Material: How Can We Imagine Beyond Automation, Recommenders, and Reminders?, 2018, AAAI Spring Symposia.
[12] Kim Halskov, et al. UX Design Innovation: Challenges for Working with Machine Learning as a Design Material, 2017, CHI.
[13] Daniela Rus, et al. Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure, 2019, AIES.
[14] Mireia Ribera, et al. Can we do better explanations? A proposal of user-centered explainable AI, 2019, IUI Workshops.
[15] Li Chen, et al. Cluster-Based Visual Abstraction for Multivariate Scatterplots, 2018, IEEE Transactions on Visualization and Computer Graphics.
[16] Hany Farid, et al. The accuracy, fairness, and limits of predicting recidivism, 2018, Science Advances.
[17] Ross Maciejewski, et al. Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics, 2019, IEEE Transactions on Visualization and Computer Graphics.
[18] Alistair A. Young, et al. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017, MICCAI 2017.
[19] Haiyi Zhu, et al. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders, 2019, CHI.
[20] Daniel Jurafsky, et al. Word embeddings quantify 100 years of gender and ethnic stereotypes, 2017, Proceedings of the National Academy of Sciences.
[21] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[22] Ariel Shamir, et al. Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes, 2019, CHI.
[23] Rich Caruana, et al. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation, 2018, AIES.
[24] Jieyu Zhao, et al. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, 2017, EMNLP.
[25] Eric Horvitz, et al. Addressing bias in machine learning algorithms: A pilot study on emotion recognition for intelligent systems, 2017, IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO).
[26] Seth Flaxman, et al. European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation", 2016, AI Magazine.
[27] Maya Cakmak, et al. Power to the People: The Role of Humans in Interactive Machine Learning, 2014, AI Magazine.
[28] Jichen Zhu, et al. Interactive Visualizer to Facilitate Game Designers in Understanding Machine Learning, 2019, CHI Extended Abstracts.
[29] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, arXiv.
[30] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, Journal of Machine Learning Research.
[31] Antitza Dantcheva, et al. Mitigating Bias in Gender, Age and Ethnicity Classification: A Multi-task Convolution Neural Network Approach, 2018, ECCV Workshops.
[32] Wei Chen, et al. ScatterNet: A Deep Subjective Similarity Model for Visual Analysis of Scatterplots, 2020, IEEE Transactions on Visualization and Computer Graphics.
[33] Arvind Satyanarayan, et al. The Building Blocks of Interpretability, 2018, Distill.
[34] Qian Yang, et al. The Role of Design in Creating Machine-Learning-Enhanced User Experience, 2017, AAAI Spring Symposia.
[35] Mohan S. Kankanhalli, et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, 2018, CHI.