Amir-Hossein Karimi | Gilles Barthe | Borja Balle | Isabel Valera
[1] Chris Russell, et al. Efficient Search for Diverse Coherent Explanations, 2019, FAT.
[3] Edsger W. Dijkstra, et al. A constructive approach to the problem of program correctness, 1968.
[4] Daniel Kroening, et al. Decision Procedures - An Algorithmic Point of View, 2008, Texts in Theoretical Computer Science. An EATCS Series.
[5] M. Gario, et al. PySMT: a Solver-Agnostic Library for Fast Prototyping of SMT-Based Algorithms, 2015.
[6] Luciano Floridi, et al. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 2017.
[7] Paul Voigt, et al. The EU General Data Protection Regulation (GDPR), 2017.
[8] Albert Oliveras, et al. On SAT Modulo Theories and Optimization Problems, 2006, SAT.
[9] Nikolaj Bjørner, et al. Z3: An Efficient SMT Solver, 2008, TACAS.
[10] Fabrizio Silvestri, et al. Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking, 2017, KDD.
[11] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[12] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[13] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[14] Stefan Rüping, et al. Learning interpretable models, 2006.
[15] Yang Liu, et al. Actionable Recourse in Linear Classification, 2018, FAT.
[16] Alex Alves Freitas, et al. Comprehensible classification models: a position paper, 2014, SKDD.
[17] Chandan Singh, et al. Definitions, methods, and applications in interpretable machine learning, 2019, Proceedings of the National Academy of Sciences.
[18] David Gunning, et al. DARPA's explainable artificial intelligence (XAI) program, 2019, IUI.
[19] Adrian Weller, et al. Challenges for Transparency, 2017, ArXiv.
[20] Lauretta O. Osho, et al. Axiomatic Basis for Computer Programming, 2013.
[21] I-Cheng Yeh, et al. The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients, 2009, Expert Syst. Appl.
[22] Cormac Flanagan, et al. Avoiding exponential explosion: generating compact verification conditions, 2001, POPL '01.
[23] Cynthia Rudin, et al. Please Stop Explaining Black Box Models for High Stakes Decisions, 2018, ArXiv.
[24] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[25] Timon Gehr, et al. An abstract domain for certifying neural networks, 2019, Proc. ACM Program. Lang.
[26] Min Wu, et al. Safety Verification of Deep Neural Networks, 2016, CAV.
[27] Andrew D. Selbst, et al. Big Data's Disparate Impact, 2016.
[29] Robert W. Floyd, et al. Assigning Meanings to Programs, 1993.
[30] Martin Wattenberg, et al. The What-If Tool: Interactive Probing of Machine Learning Models, 2019, IEEE Transactions on Visualization and Computer Graphics.
[31] Tiziana Margaria, et al. Tools and algorithms for the construction and analysis of systems: a special issue for TACAS 2017, International Journal on Software Tools for Technology Transfer.
[32] Roberto Sebastiani, et al. Optimization in SMT with LA(Q) Cost Functions, 2012.
[33] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017, CAV.
[34] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[35] M. Wegman, et al. Global value numbers and redundant computations, 1988, POPL '88.
[36] Mark N. Wegman, et al. Efficiently computing static single assignment form and the control dependence graph, 1991, ACM TOPLAS.