Samuel Bassetto | Garrick Cabour | Élise Ledoux | Andrés Morales
[1] Tetiana Shmelova, et al. Artificial Intelligence in Aviation Industries: Methodologies, Education, Applications, and Opportunities, 2020.
[2] Qinggang Meng, et al. An End-to-End Steel Surface Defect Detection Approach via Fusing Multiple Hierarchical Features, 2020, IEEE Transactions on Instrumentation and Measurement.
[3] Gary Klein, et al. Naturalistic Decision Making, 2008, Hum. Factors.
[4] Gary Klein, et al. Metrics for Explainable AI: Challenges and Prospects, 2018, ArXiv.
[5] Yunhui Yan, et al. A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects, 2013.
[6] Moustafa Zouinar. Évolutions de l’Intelligence Artificielle : quels enjeux pour l’activité humaine et la relation Humain‑Machine au travail ? [Evolutions of Artificial Intelligence: what are the stakes for human activity and the human–machine relationship at work?], 2020, Activites.
[7] Nicholas Ross Milton, et al. Knowledge Acquisition in Practice: A Step-by-step Guide, 2007.
[8] Jun Akatsuka, et al. Illuminating Clues of Cancer Buried in Prostate MR Image: Deep Learning and Expert Approaches, 2019, Biomolecules.
[9] Asaf Shabtai, et al. When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures, 2019, 2020 International Joint Conference on Neural Networks (IJCNN).
[10] S. Haberman, et al. An Investigation of the Fit of Linear Regression Models to Data from an SAT® Validity Study, 2011.
[11] Samuel Bassetto, et al. Case Study: A Semi-Supervised Methodology for Anomaly Detection and Diagnosis, 2019, 2019 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM).
[12] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[13] Marieke M. M. Peeters, et al. Hybrid collective intelligence in a human–AI society, 2020, AI & SOCIETY.
[14] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[15] Johanna D. Moore. Explanation in Expert Systems: A Survey, 1988.
[16] Robert O. Briggs, et al. Machines as teammates: A research agenda on AI in team collaboration, 2020, Inf. Manag.
[17] Robertas Damaševičius, et al. Gamification of a Project Management System, 2014, ACHI 2014.
[18] Nancy J. Cooke, et al. Understanding human-robot teams in light of all-human teams: Aspects of team interaction and shared cognition, 2020, Int. J. Hum. Comput. Stud.
[19] Kenneth D. Forbus, et al. Representing, Running, and Revising Mental Models: A Computational Model, 2018, Cogn. Sci.
[20] Gerald Matthews, et al. Super-machines or sub-humans: Mental models and trust in intelligent autonomous systems, 2021.
[21] Kai Puolamäki, et al. Interpreting Classifiers through Attribute Interactions in Datasets, 2017, ArXiv.
[22] Ina Wagner, et al. Studies of Work ‘in the Wild’, 2021, Computer Supported Cooperative Work (CSCW).
[23] William J. Clancey, et al. The Epistemology of a Rule-Based Expert System - A Framework for Explanation, 1981, Artif. Intell.
[24] Yu He, et al. PGA-Net: Pyramid Feature Fusion and Global Context Attention Network for Automated Surface Defect Detection, 2020, IEEE Transactions on Industrial Informatics.
[25] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[26] Vincent G. Duffy, et al. Towards augmenting cyber-physical-human collaborative cognition for human-automation interaction in complex manufacturing and operational environments, 2020, Int. J. Prod. Res.
[27] Samuel Bassetto, et al. A Work-Centered Approach for Cyber-Physical-Social System Design: Applications in Aerospace Industrial Inspection, 2021, ArXiv.
[28] D. Clifton, et al. DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence, 2021, Nature Medicine.
[29] Danah Boyd, et al. Fairness and Abstraction in Sociotechnical Systems, 2019, FAT.
[30] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[31] Eric D. Ragan, et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems, 2018, ACM Trans. Interact. Intell. Syst.
[32] Shubham Rathi, et al. Generating Counterfactual and Contrastive Explanations using SHAP, 2019, ArXiv.
[33] Jeffrey M. Bradshaw, et al. Tomorrow’s Human–Machine Design Tools: From Levels of Automation to Interdependencies, 2018.
[34] Dan Boneh, et al. Adversarial Training and Robustness for Multiple Perturbations, 2019, NeurIPS.
[35] Gary Klein, et al. Macrocognition: From Theory to Toolbox, 2016, Front. Psychol.
[36] Raian Ali, et al. Personalising Explainable Recommendations: Literature and Conceptualisation, 2020, WorldCIST.
[37] S. Kolassa. Two Cheers for Rebooting AI: Building Artificial Intelligence We Can Trust, 2020.
[38] Lena Osterhagen, et al. Evaluation of Human Work, 2016.
[39] William J. Clancey, et al. Principles of Explanation in Human-AI Systems, 2021, ArXiv.
[40] Katia Sycara, et al. Deep learning, transparency, and trust in human robot teamwork, 2021, Trust in Human-Robot Interaction.
[41] Michael W. Boyce, et al. Situation Awareness-Based Agent Transparency, 2014.
[42] David B. Kaber, et al. Cognitive Engineering and Decision Making: An Overview and Future Course, 2007.
[43] Fahimeh Rajabiyazdi, et al. A Review of Transparency (seeing-into) Models, 2020, 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC).
[44] Julie A. Shah, et al. A Situation Awareness-Based Framework for Design and Evaluation of Explainable AI, 2020, EXTRAAMAS@AAMAS.
[45] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[46] Kevin B. Bennett, et al. Human Interaction with an "Intelligent" Machine, 1987, Int. J. Man Mach. Stud.
[47] Karen Yeung, et al. Recommendation of the Council on Artificial Intelligence (OECD), 2020, International Legal Materials.
[48] Alexander Binder, et al. Unmasking Clever Hans predictors and assessing what machines really learn, 2019, Nature Communications.
[49] The role of interdependence in trust, 2021.
[50] Sondoss Elsawah, et al. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models, 2015, Journal of Environmental Management.
[51] Jure Leskovec, et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction, 2016, KDD.
[52] K. J. Vicente, et al. Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work, 1999.
[53] Alun D. Preece, et al. Stakeholders in Explainable AI, 2018, ArXiv.
[54] Jessie Y. C. Chen, et al. Human–Agent Teaming for Multirobot Control: A Review of Human Factors Issues, 2014, IEEE Transactions on Human-Machine Systems.
[55] Lars Niklasson, et al. G-REX: A Versatile Framework for Evolutionary Data Mining, 2008, 2008 IEEE International Conference on Data Mining Workshops.
[56] Nancy J. Cooke, et al. Knowledge Elicitation, 2003.
[57] Ibrahim Habli, et al. Artificial intelligence in health care: accountability and safety, 2020, Bulletin of the World Health Organization.
[58] Alimohammad Shahri, et al. Towards a Code of Ethics for Gamification at Enterprise, 2014, PoEM.
[59] H. B. Timmerman, et al. What is task analysis?, 1951, Bulletin of the Medical Library Association.
[60] Mireia Ribera, et al. Can we do better explanations? A proposal of user-centered explainable AI, 2019, IUI Workshops.
[61] Keith Case, et al. A variability taxonomy to support automation decision-making for manufacturing processes, 2019, Production Planning & Control.
[62] S. Mor-Yosef, et al. Ranking the Risk Factors for Cesarean: Logistic Regression Analysis of a Nationwide Study, 1990, Obstetrics and Gynecology.
[63] Mingyan Liu, et al. Generating Adversarial Examples with Adversarial Networks, 2018, IJCAI.
[64] William J. Clancey, et al. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI, 2019, ArXiv.
[65] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv, 1702.08608.
[66] Maxine Mackintosh, et al. Machine intelligence in healthcare—perspectives on trustworthiness, explainability, usability, and transparency, 2020, npj Digital Medicine.
[67] Samuel Bassetto, et al. Extending System Performance Past the Boundaries of Technical Maturity: Human-Agent Teamwork Perspective for Industrial Inspection, 2021, Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021).
[68] Olivia Wu, et al. How methodological frameworks are being developed: evidence from a scoping review, 2020, BMC Medical Research Methodology.
[69] Mani B. Srivastava, et al. Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning, 2018, 2018 21st International Conference on Information Fusion (FUSION).
[70] Jan Muntermann, et al. A method for taxonomy development and its application in information systems, 2013, Eur. J. Inf. Syst.
[71] Dietmar Jannach, et al. A systematic review and taxonomy of explanations in decision support and recommender systems, 2017, User Modeling and User-Adapted Interaction.
[72] F. Cabitza, et al. The proof of the pudding: in praise of a culture of real-world validation for medical artificial intelligence, 2019, Annals of Translational Medicine.