[1] K. Crawford,et al. Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms , 2013 .
[2] S. S. Sane,et al. Preprocessing Technique for Discrimination Prevention in Data Mining , 2014 .
[3] Scott Kushner,et al. The freelance translation machine: Algorithmic culture and the invisible industry , 2013, New Media Soc..
[4] Amanda Askell,et al. AI Safety Needs Social Scientists , 2019, Distill.
[5] Krishna P. Gummadi,et al. The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making , 2016 .
[6] Johannes Gehrke,et al. Accurate intelligible models with pairwise interactions , 2013, KDD.
[7] Solon Barocas,et al. Ten simple rules for responsible big data research , 2017, PLoS Comput. Biol..
[8] Tobias Matzner. Why privacy is not enough: privacy in the context of "ubiquitous computing" and "big data" , 2014, J. Inf. Commun. Ethics Soc..
[9] Bernd Carsten Stahl,et al. Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation , 2018, IEEE Security & Privacy.
[10] John D. Lee,et al. Trust in Automation: Designing for Appropriate Reliance , 2004 .
[11] Douglas Walton,et al. A new dialectical theory of explanation , 2004 .
[12] K C Klauer,et al. On belief bias in syllogistic reasoning. , 2000, Psychological review.
[13] Jakob Arnoldi,et al. Computer Algorithms, Market Manipulation and the Institutionalization of High Frequency Trading , 2016 .
[14] Filippo A. Raso,et al. Artificial Intelligence & Human Rights: Opportunities & Risks , 2018 .
[15] K. Mosier,et al. Human Decision Makers and Automated Decision Aids: Made for Each Other? , 1996 .
[16] D. Walton. A Dialogue System Specification for Explanation , 2011 .
[17] Matteo Turilli,et al. The ethics of information transparency , 2009, Ethics and Information Technology.
[18] D. Kahneman. Thinking, Fast and Slow , 2011 .
[19] David Beer. Algorithms: Shaping Tastes and Manipulating the Circulations of Popular Culture , 2013 .
[20] Alan Bundy,et al. Preparing for the future of Artificial Intelligence , 2016, AI & SOCIETY.
[21] Nizan Geslevich Packin. Algorithmic Decision-Making: The Death of Second Opinions? , 2019 .
[22] Danah Boyd,et al. Fairness and Abstraction in Sociotechnical Systems , 2019, FAT.
[23] Luciano Floridi,et al. From What to How. An Overview of AI Ethics Tools, Methods and Research to Translate Principles into Practices , 2019, ArXiv.
[24] Tim Miller,et al. A Grounded Interaction Protocol for Explainable Artificial Intelligence , 2019, AAMAS.
[25] Mary E. Thomson,et al. The relative influence of advice from human experts and statistical methods on forecast adjustments , 2009 .
[26] Michael Veale,et al. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making , 2018, CHI.
[27] Robert Seyfert,et al. What are algorithmic cultures , 2016 .
[28] Alex Pentland,et al. Fair, Transparent, and Accountable Algorithmic Decision-making Processes , 2017, Philosophy & Technology.
[29] Bart Custers,et al. Responsibly Innovating Data Mining and Profiling Tools: A New Approach to Discrimination Sensitive and Privacy Sensitive Attributes , 2014 .
[30] Cynthia Rudin,et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead , 2018, Nature Machine Intelligence.
[31] Xiao Huang,et al. Multi-label Adversarial Perturbations , 2018, 2018 IEEE International Conference on Data Mining (ICDM).
[32] Inioluwa Deborah Raji,et al. Model Cards for Model Reporting , 2018, FAT.
[33] David Beer,et al. The social power of algorithms , 2017, The Social Power of Algorithms.
[34] Douglas Walton,et al. Some Artificial Intelligence Tools for Argument Evaluation: An Introduction , 2016 .
[35] M. Bar-Hillel. The base-rate fallacy in probability judgments. , 1980 .
[36] Yang Liu,et al. Actionable Recourse in Linear Classification , 2018, FAT.
[37] Gary Klein,et al. Explaining Explanation, Part 2: Empirical Foundations , 2017, IEEE Intelligent Systems.
[38] Malte Ziewitz. Governing Algorithms , 2016 .
[39] Felix Feldmann. Measuring Machine Learning Model Interpretability , 2018 .
[40] Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power , 2019 .
[41] Filip Karlo Dosilovic,et al. Explainable artificial intelligence: A survey , 2018, 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[42] Cynthia Rudin,et al. Supersparse linear integer models for optimized medical scoring systems , 2015, Machine Learning.
[43] Raja Parasuraman,et al. Effects of Imperfect Automation on Decision Making in a Simulated Command and Control Task , 2007, Hum. Factors.
[44] Mariarosaria Taddeo,et al. How AI can be a force for good , 2018, Science.
[45] Alexandra Chouldechova,et al. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments , 2016, Big Data.
[46] Kevin Swingler,et al. The Perils of Ignoring Data Suitability - The Suitability of Data used to Train Neural Networks Deserves More Attention , 2011, IJCCI.
[47] Adrian Weller,et al. Challenges for Transparency , 2017, ArXiv.
[48] Alan Rubel,et al. Four ethical priorities for neurotechnologies and AI , 2017, Nature.
[49] R. Stuart Geiger,et al. Bots, bespoke, code and the materiality of software platforms , 2014 .
[50] Francesca Rossi,et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations , 2018, Minds and Machines.
[51] Kush R. Varshney,et al. Increasing Trust in AI Services through Supplier's Declarations of Conformity , 2018, IBM J. Res. Dev..
[52] Pablo J. Boczkowski,et al. The Relevance of Algorithms , 2013 .
[53] Alexander Wendt,et al. On constitution and causation in International Relations , 1998, Review of International Studies.
[54] Eitan Wilf,et al. Toward an Anthropology of Computer-Mediated, Algorithmic Forms of Sociality , 2013, Current Anthropology.
[55] L J Skitka,et al. Automation bias: decision making and performance in high-tech cockpits. , 1997, The International journal of aviation psychology.
[56] Jordan Crandall,et al. Precision + Guided + Seeing , 2006 .
[57] Jordan Crandall,et al. The Geospatialization of Calculative Operations , 2010 .
[58] James L. Szalma,et al. A Meta-Analysis of Factors Influencing the Development of Trust in Automation , 2016, Hum. Factors.
[59] Jeanna Neefe Matthews,et al. Algorithmic accountability: a primer , 2018 .
[60] Douglas Walton,et al. Speech Acts and Burden of Proof in Computational Models of Deliberation Dialogue , 2016 .
[61] Philip M. Napoli. Automated Media: An Institutional Theory Perspective on Algorithmic Media Production and Consumption , 2014 .
[62] Daniel G. Goldstein,et al. Manipulating and Measuring Model Interpretability , 2018, CHI.
[63] Cynthia Rudin,et al. This Looks Like That: Deep Learning for Interpretable Image Recognition , 2018 .
[64] Reuben Binns,et al. Fairness in Machine Learning: Lessons from Political Philosophy , 2017, FAT.
[65] Balázs Bodó,et al. Tackling the Algorithmic Control Crisis – the Technical, Legal, and Ethical Challenges of Research into Algorithmic Agents , 2018 .
[66] Heiko Wersing,et al. Mitigating Concept Drift via Rejection , 2018, ICANN.
[67] Carlos Guestrin,et al. Model-Agnostic Interpretability of Machine Learning , 2016, ArXiv.
[68] Luciano Floridi,et al. Prolegomena to a White Paper on an Ethical Framework for a Good AI Society , 2018 .
[69] Silvia Chiappa,et al. Path-Specific Counterfactual Fairness , 2018, AAAI.
[70] Timnit Gebru,et al. Datasheets for datasets , 2018, Commun. ACM.
[71] John L. Faundeen,et al. Developing Criteria to Establish Trusted Digital Repositories , 2017, Data Sci. J..
[72] Chris Russell,et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR , 2017, ArXiv.
[73] Julia Rubin,et al. Fairness Definitions Explained , 2018, 2018 IEEE/ACM International Workshop on Software Fairness (FairWare).
[74] Chandan Singh,et al. Definitions, methods, and applications in interpretable machine learning , 2019, Proceedings of the National Academy of Sciences.
[75] Cynthia Rudin,et al. Optimized Scoring Systems: Toward Trust in Machine Learning for Healthcare and Criminal Justice , 2018, Interfaces.
[76] Abdallah Arioua,et al. Formalizing Explanatory Dialogues , 2015, SUM.
[77] Daan Kolkman,et al. Transparent to whom? No algorithmic accountability without a critical audience , 2018, Information, Communication & Society.
[78] M. Veale,et al. Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For , 2017 .
[79] Quanshi Zhang,et al. Visual interpretability for deep learning: a survey , 2018, Frontiers of Information Technology & Electronic Engineering.
[80] S. Jasanoff,et al. Future Imperfect: Science, Technology, and the Imaginations of Modernity , 2015 .
[81] Chandan Singh,et al. Definitions, methods, and applications in interpretable machine learning , 2019, Proceedings of the National Academy of Sciences.
[82] Göran Bolin,et al. Heuristics of the algorithm: Big Data, user interpretation and institutional translation , 2015 .
[83] Floris Bex,et al. Combining explanation and argumentation in dialogue , 2016, Argument Comput..
[84] Nuno Lourenço,et al. Fairness and Transparency of Machine Learning for Trustworthy Cloud Services , 2018, 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W).
[85] Zoran Bosni,et al. Detecting concept drift in data streams using model explanation , 2018 .
[86] Franco Turini,et al. Meaningful Explanations of Black Box AI Decision Systems , 2019, AAAI.
[87] Thomas McCarthy,et al. The Operation Called Verstehen: Towards a Redefinition of the Problem , 1972, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association.
[88] Daniel Kahneman,et al. Evaluation by Moments: Past and Future , 2002 .
[89] Douglas Walton,et al. The Use of Argument Maps as an Assessment Tool in Higher Education , 2016 .
[90] Freddy Lécué,et al. Explainable AI: The New 42? , 2018, CD-MAKE.
[91] Solon Barocas,et al. The Intuitive Appeal of Explainable Machines , 2018 .
[92] Jure Leskovec,et al. Human Decisions and Machine Predictions , 2017, The quarterly journal of economics.
[93] Solon Barocas,et al. Problem Formulation and Fairness , 2019, FAT.
[94] Cathy O'Neil,et al. Conscientious Classification: A Data Scientist's Guide to Discrimination-Aware Classification , 2017, Big Data.
[95] Yuekai Sun,et al. Debiasing representations by removing unwanted variation due to protected attributes , 2018, ArXiv.
[96] Krishna P. Gummadi,et al. Fairness Constraints: Mechanisms for Fair Classification , 2015, AISTATS.
[97] D. Dittrich,et al. The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research , 2012 .
[98] I. Manokha,et al. Surveillance, Panopticism, and Self-Discipline in the Digital Age , 2018, Surveillance & Society.
[99] A. Tversky,et al. The framing of decisions and the psychology of choice. , 1981, Science.
[100] A. Tutt. An FDA for Algorithms , 2016 .
[101] Min Kyung Lee. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management , 2018, Big Data Soc..
[102] Berkeley J. Dietvorst,et al. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err , 2014, Journal of experimental psychology. General.
[103] Michael Winikoff,et al. Debugging Agent Programs with Why?: Questions , 2017, AAMAS.
[104] Jon M. Kleinberg,et al. Inherent Trade-Offs in the Fair Determination of Risk Scores , 2016, ITCS.
[105] Mohan S. Kankanhalli,et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda , 2018, CHI.
[106] Carlos D. Castillo,et al. Improving Network Robustness against Adversarial Attacks with Compact Convolution , 2017, ArXiv.
[107] Renu T Bali,et al. Artificial intelligence (AI) in healthcare and biomedical research: Why a strong computational/AI bioethics framework is required? , 2019, Indian journal of ophthalmology.
[108] Charles S. Taber,et al. Motivated Skepticism in the Evaluation of Political Beliefs , 2006 .
[109] Joaquín B. Ordieres Meré,et al. Comparison of Data Preprocessing Approaches for Applying Deep Learning to Human Activity Recognition in the Context of Industry 4.0 , 2018, Sensors.
[110] John Schulman,et al. Concrete Problems in AI Safety , 2016, ArXiv.
[111] Raja Chatila,et al. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems , 2019, Robotics and Well-Being.
[112] Adrian Weller,et al. Transparency: Motivations and Challenges , 2019, Explainable AI.
[113] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[114] Franco Turini,et al. DCUBE: discrimination discovery in databases , 2010, SIGMOD Conference.
[115] S. C. Olhede,et al. The growing ubiquity of algorithms in society: implications, impacts and innovations , 2018, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[116] Daniel S. Weld,et al. Intelligible Artificial Intelligence , 2018, ArXiv.
[117] Louise Amoore. Doubtful algorithms: of machine learning truths and partial accounts , 2018 .
[118] Pinar Alper,et al. Provenance-enabled stewardship of human data in the GDPR era , 2018 .
[119] Saif Mohammad,et al. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems , 2018, *SEMEVAL.
[120] Quan Z. Sheng,et al. Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey , 2019 .
[121] K. Crawford,et al. Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice , 2019 .
[122] Bruce G. Coury,et al. Status or Recommendation: Selecting the Type of Information for Decision Aiding , 1990 .
[123] Daniel S. Weld,et al. The challenge of crafting intelligible intelligence , 2018, Commun. ACM.
[124] Fabio Roli,et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization , 2017, AISec@CCS.
[125] Carl Ginet,et al. In Defense of a Non-Causal Account of Reasons Explanations , 2008 .
[126] Dietrich Manzey,et al. Misuse of automated decision aids: Complacency, automation bias and the impact of training experience , 2008, Int. J. Hum. Comput. Stud..
[127] Lex Gill,et al. Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System , 2018 .
[128] David Warde-Farley,et al. Adversarial Perturbations of Deep Neural Networks , 2016 .
[129] Raja Parasuraman,et al. Complacency and Bias in Human Use of Automation: An Attentional Integration , 2010, Hum. Factors.
[130] Toon Calders,et al. Three naive Bayes approaches for discrimination-free classification , 2010, Data Mining and Knowledge Discovery.
[131] S. Eckstein. The Belmont Report: ethical principles and guidelines for the protection of human subjects of research , 2003 .
[132] Trevor Darrell,et al. Attentive Explanations: Justifying Decisions and Pointing to the Evidence , 2016, ArXiv.
[133] Don A. Moore,et al. Organizational Behavior and Human Decision Processes , 2019 .
[134] Harini Suresh,et al. A Framework for Understanding Unintended Consequences of Machine Learning , 2019, ArXiv.
[135] Eldar Shafir,et al. Choosing versus rejecting: Why some options are both better and worse than others , 1993, Memory & cognition.
[136] A. Cavoukian,et al. Privacy by Design: essential for organizational accountability and strong business practices , 2010 .
[137] Geoffrey I. Webb,et al. Analyzing concept drift and shift from sample data , 2018, Data Mining and Knowledge Discovery.
[138] Daniel Neyland,et al. On Organizing Algorithms , 2015 .
[139] Krishna P. Gummadi,et al. On Fairness, Diversity and Randomness in Algorithmic Decision Making , 2017, ArXiv.
[140] D. Citron. Technological Due Process , 2007 .
[141] Taina Bucher,et al. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms , 2017, The Social Power of Algorithms.
[142] Peter Brusilovsky,et al. Designing Explanation Interfaces for Transparency and Beyond , 2019, IUI Workshops.
[143] Chris Reed,et al. How should we regulate artificial intelligence? , 2018, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[144] Nathan Srebro,et al. Equality of Opportunity in Supervised Learning , 2016, NIPS.
[145] Carlos Eduardo Scheidegger,et al. Certifying and Removing Disparate Impact , 2014, KDD.
[146] Ryan Calo,et al. There is a blind spot in AI research , 2016, Nature.
[147] Emily M. Bender,et al. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science , 2018, TACL.
[148] Daniel Neyland,et al. Algorithmic IF … THEN rules and the conditions and consequences of power , 2017, The Social Power of Algorithms.
[149] Cynthia Rudin,et al. Please Stop Explaining Black Box Models for High Stakes Decisions , 2018, ArXiv.
[150] Carlos Guestrin,et al. Anchors: High-Precision Model-Agnostic Explanations , 2018, AAAI.
[151] Matt J. Kusner,et al. When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness , 2017, NIPS.
[152] Alun D. Preece,et al. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems , 2018, ArXiv.
[153] Andrea Vedaldi,et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[154] Benjamin Edwards,et al. Adversarial Robustness Toolbox v0.2.2 , 2018, ArXiv.
[155] Giovanni Comandé,et al. Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation , 2017 .
[156] John Cheney-Lippold,et al. A New Algorithmic Identity , 2011 .
[157] Suresh Venkatasubramanian,et al. A comparative study of fairness-enhancing interventions in machine learning , 2018, FAT.
[158] Daniel Neyland,et al. Bearing Account-able Witness to the Ethical Algorithmic System , 2016 .
[159] Jun Zhao,et al. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions , 2018, CHI.
[160] Charles Taylor,et al. Interpretation and the sciences of man , 1973 .
[161] Franco Turini,et al. A Survey of Methods for Explaining Black Box Models , 2018, ACM Comput. Surv..
[162] Rick Salay,et al. Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262 , 2018, ArXiv.
[163] Miroslav Dudík,et al. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? , 2018, CHI.
[164] Margo I. Seltzer,et al. Learning Certifiably Optimal Rule Lists , 2017, KDD.
[165] Ryan Calo,et al. Artificial Intelligence Policy: A Primer and Roadmap , 2017 .
[166] Nuria Oliver,et al. The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good , 2016, ArXiv.
[167] Natasha Dow Schüll,et al. Self in the Loop: Bits, Patterns, and Pathways in the Quantified Self , 2018, A Networked Self and Human Augmentics, Artificial Intelligence, Sentience.
[168] Gerhard Weikum,et al. Fides: Towards a Platform for Responsible Data Science , 2017, SSDBM.
[169] R. Kitchin,et al. Thinking critically about and researching algorithms , 2014, The Social Power of Algorithms.
[170] Or Biran,et al. Explanation and Justification in Machine Learning: A Survey , 2017 .
[171] Steve Whittaker,et al. Progressive Disclosure: Designing for Effective Transparency , 2018, ArXiv.
[172] Luciano Floridi,et al. Transparent, explainable, and accountable AI for robotics , 2017, Science Robotics.
[173] Joshua A. Kroll. The fallacy of inscrutability , 2018, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[174] Louise Amoore,et al. Algorithmic Life: Calculative Devices in the Age of Big Data , 2015 .
[175] Juliana Freire,et al. Provenance and scientific workflows: challenges and opportunities , 2008, SIGMOD Conference.
[176] Ahmed Hosny,et al. The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards , 2018, Data Protection and Privacy.
[177] Bart Custers,et al. Data Dilemmas in the Information Society: Introduction and Overview , 2013, Discrimination and Privacy in the Information Society.
[178] Krishna P. Gummadi,et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment , 2016, WWW.
[179] Gianclaudio Malgieri,et al. Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations , 2019, International Data Privacy Law.
[180] Jure Leskovec,et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction , 2016, KDD.
[181] Steve F. Anderson. Technologies of Vision: The War Between Data and Images , 2017 .
[182] Louise Amoore,et al. Securing with algorithms: Knowledge, decision, sovereignty , 2017 .
[183] Francesco Bonchi,et al. Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining , 2016, KDD.
[184] J. Dijck. Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology , 2014 .
[185] Zachary C. Lipton,et al. Troubling Trends in Machine Learning Scholarship , 2018, ACM Queue.
[186] B. Anderson. Preemption, precaution, preparedness: Anticipatory action and future geographies , 2010 .
[187] Klaus-Dieter Althoff,et al. A Preliminary Survey of Explanation Facilities of AI-Based Design Support Approaches and Tools , 2018, LWDA.
[188] R. Gonzales. Dark matters: on the surveillance of blackness , 2016 .
[189] Hannah Lebovits. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor , 2018, Public Integrity.
[190] Yochanan E. Bigman,et al. People are averse to machines making moral decisions , 2018, Cognition.
[191] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[192] Jichen Zhu,et al. Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation , 2018, 2018 IEEE Conference on Computational Intelligence and Games (CIG).
[193] Marijn Janssen,et al. The challenges and limits of big data algorithms in technocratic governance , 2016, Gov. Inf. Q..
[194] Valentina Zantedeschi,et al. Efficient Defenses Against Adversarial Attacks , 2017, AISec@CCS.
[195] Peter A. Flach,et al. Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant , 2018, IJCAI.
[196] Kate Crawford,et al. The Anxieties of Big Data , 2014 .
[197] Andrew Zisserman,et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps , 2013, ICLR.
[198] Indrė Žliobaitė,et al. Measuring discrimination in algorithmic decision making , 2017 .
[199] Markus Zoppelt,et al. Attacks on Machine Learning: Lurking Danger for Accountability , 2019, SafeAI@AAAI.
[200] Ruth McNally,et al. Living Multiples: How Large-scale Scientific Data-mining Pursues Identity and Differences , 2013 .
[201] Ted Striphas. Algorithmic culture , 2015 .
[202] Maria Helen Murphy,et al. Algorithmic surveillance: the collection conundrum , 2017 .
[203] John Langford,et al. A Reductions Approach to Fair Classification , 2018, ICML.
[204] Amina Adadi,et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) , 2018, IEEE Access.
[205] Federica Russo,et al. Critical data studies: An introduction , 2016, Big Data Soc..
[206] A. Tversky,et al. Judgment under Uncertainty: Heuristics and Biases , 1974, Science.
[207] Frank A. Pasquale,et al. The Scored Society: Due Process for Automated Predictions , 2014 .
[208] Zachary Chase Lipton. The mythos of model interpretability , 2016, ACM Queue.
[209] Scott A. Hale,et al. Challenges and frontiers in abusive content detection , 2019, Proceedings of the Third Workshop on Abusive Language Online.
[210] Michael W. Boyce,et al. Situation Awareness-Based Agent Transparency , 2014 .
[211] Franco Turini,et al. Open the Black Box Data-Driven Explanation of Black Box Decision Systems , 2018, ArXiv.
[212] Ross J. Anderson,et al. The collection, linking and use of data in biomedical research and health care: ethical issues , 2015 .
[213] Andreas Holzinger,et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery , 2019, The international journal of medical robotics + computer assisted surgery : MRCAS.
[214] Miriam A. M. Capretz,et al. Machine Learning With Big Data: Challenges and Approaches , 2017, IEEE Access.
[215] S. Noble. Algorithms of Oppression: How Search Engines Reinforce Racism , 2018 .
[216] A. Tversky,et al. Judgment under Uncertainty: Heuristics and Biases , 1974, Science.
[217] L. Ross,et al. Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence , 1979 .
[218] Cynthia Rudin,et al. Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions , 2017, AAAI.
[219] Aws Albarghouthi,et al. Fairness-Aware Programming , 2019, FAT.
[220] Ivan Leudar,et al. Explaining in conversation: towards an argument model. , 1992 .
[221] Gary Marcus,et al. Deep Learning: A Critical Appraisal , 2018, ArXiv.
[222] Galit Shmueli,et al. To Explain or To Predict? , 2010, 1101.0891.
[223] Laurel Eckhouse,et al. Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment , 2018, Criminal Justice and Behavior.
[224] Michele Willson,et al. Algorithms (and the) everyday , 2017, The Social Power of Algorithms.
[225] Arvind Narayanan,et al. Semantics derived automatically from language corpora contain human-like biases , 2016, Science.
[226] Kush R. Varshney,et al. Optimized Pre-Processing for Discrimination Prevention , 2017, NIPS.
[227] Kipling D. Williams,et al. Social Loafing: A Meta-Analytic Review and Theoretical Integration , 1993 .
[228] Tal Z. Zarsky,et al. The Trouble with Algorithmic Decisions , 2016 .
[229] Luciano Floridi,et al. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation , 2017 .
[230] Lalana Kagal,et al. Explaining Explanations: An Overview of Interpretability of Machine Learning , 2018, 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[231] Indrė Žliobaitė. Measuring discrimination in algorithmic decision making , 2017 .
[232] Michael Veale,et al. Algorithms that remember: model inversion attacks and data protection law , 2018, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[233] Toon Calders,et al. Data preprocessing techniques for classification without discrimination , 2011, Knowledge and Information Systems.
[234] Nadine B. Sarter,et al. Supporting Trust Calibration and the Effective Use of Decision Aids by Presenting Dynamic System Confidence Information , 2006, Hum. Factors.
[235] Muhammad Shafique,et al. A Roadmap Toward the Resilient Internet of Things for Cyber-Physical Systems , 2018, IEEE Access.
[236] T. Gilovich,et al. How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life , 1991 .
[237] Benoît Frénay,et al. Interpretability of machine learning models and representations: an introduction , 2016, ESANN.
[238] Mark O. Riedl,et al. Automated rationale generation: a technique for explainable AI and its effects on human perceptions , 2019, IUI.
[239] Toniann Pitassi,et al. Fairness through awareness , 2011, ITCS '12.
[240] P. Todd,et al. Simple Heuristics That Make Us Smart , 1999 .
[241] Yalin E. Sagduyu,et al. Spectrum Data Poisoning with Adversarial Deep Learning , 2018, MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM).
[242] Uri Shalit,et al. Learning Representations for Counterfactual Inference , 2016, ICML.
[243] Michael A. Rupp,et al. Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management , 2016, Hum. Factors.
[244] Dietrich Manzey,et al. Human Redundancy in Automation Monitoring: Effects of Social Loafing and Social Compensation , 2007 .
[245] Andrew Guthrie Ferguson,et al. Policing Predictive Policing , 2016 .
[246] Mark Latonero. Governing Artificial Intelligence: upholding human rights & dignity , 2018 .
[247] Avanti Shrikumar,et al. Learning Important Features Through Propagating Activation Differences , 2017, ICML.
[248] Ben Anderson,et al. Security and the future: Anticipating the event of terror , 2010 .
[249] Louise Amoore. Cloud geographies , 2018, Cognitive Code.
[250] Gary S Collins,et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): Explanation and Elaboration , 2015, Annals of Internal Medicine.
[251] Mike Ananny,et al. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability , 2018, New Media Soc..
[252] Monika Jena,et al. A Study on WEKA Tool for Data Preprocessing, Classification and Clustering , 2013 .
[253] Kush R. Varshney,et al. On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products , 2016, Big Data.
[254] Elias Bareinboim,et al. Fairness in Decision-Making - The Causal Explanation Formula , 2018, AAAI.
[255] G. Wright,et al. Explanation and understanding , 1971 .
[256] A. Tversky,et al. On the psychology of prediction , 1973 .
[257] Lee A. Bygrave,et al. The Right Not to Be Subject to Automated Decisions Based on Profiling , 2017 .
[258] Been Kim,et al. Towards A Rigorous Science of Interpretable Machine Learning , 2017, 1702.08608.
[259] Brad Boehmke,et al. Interpretable Machine Learning , 2019 .
[260] Bo An,et al. Data Poisoning Attacks on Multi-Task Relationship Learning , 2018, AAAI.
[261] Christian Sandvig,et al. Infrastructure studies meet platform studies in the age of Google and Facebook , 2018, New Media Soc..
[262] R. Binns,et al. Algorithmic Accountability and Public Reason , 2017, Philosophy & Technology.
[263] Tim Miller,et al. Explanation in Artificial Intelligence: Insights from the Social Sciences , 2017, Artif. Intell..
[264] Linda G. Pierce,et al. The Perceived Utility of Human and Automated Aids in a Visual Detection Task , 2002, Hum. Factors.
[265] Nadine B. Sarter,et al. Supporting Decision Making and Action Selection under Time Pressure and Uncertainty: The Case of In-Flight Icing , 2001, Hum. Factors.
[266] Nicholas Diakopoulos,et al. Algorithmic Accountability , 2015 .
[267] Douglas Walton,et al. Dialogical Models of Explanation , 2007, ExaCt.
[268] Mariarosaria Taddeo,et al. The ethics of algorithms: Mapping the debate , 2016, Big Data Soc..
[269] Avi Feller,et al. Algorithmic Decision Making and the Cost of Fairness , 2017, KDD.
[270] Chris Russell,et al. Explaining Explanations in AI , 2018, FAT.
[271] Turan Paksoy,et al. Artificial Intelligence, Robotics and Autonomous Systems in SCM , 2020 .
[272] Jun Sakuma,et al. Fairness-Aware Classifier with Prejudice Remover Regularizer , 2012, ECML/PKDD.
[273] David Weinberger,et al. Accountability of AI Under the Law: The Role of Explanation , 2017, ArXiv.
[274] Jenna Burrell,et al. How the machine ‘thinks’: Understanding opacity in machine learning algorithms , 2016 .
[275] John M. Cinnamon,et al. Social Injustice in Surveillance Capitalism , 2017 .
[276] Mike Ananny,et al. Toward an Ethics of Algorithms , 2016 .
[277] Neville Moray. Monitoring, complacency, scepticism and eutactic behaviour , 2003 .
[278] Adam Tauman Kalai,et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings , 2016, NIPS.
[279] Jure Leskovec,et al. The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables , 2017, KDD.
[280] Veronika Alexander,et al. Why trust an algorithm? Performance, cognition, and neurophysiology , 2018, Comput. Hum. Behav..
[281] L. Floridi,et al. Data ethics , 2021, Effective Directors.
[282] Martin Bichler,et al. Responsible Data Science , 2017, Bus. Inf. Syst. Eng..
[283] Andrew D. Selbst,et al. Big Data's Disparate Impact , 2016 .
[284] Barbara Hammer,et al. Interpretable machine learning with reject option , 2018, Autom..
[285] Raja Parasuraman,et al. Performance Consequences of Automation-Induced 'Complacency' , 1993 .
[286] Corinne Cath. Governing artificial intelligence: ethical, legal and technical opportunities and challenges , 2018, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[287] David Sands,et al. Data Minimisation: a Language-Based Approach (Long Version) , 2016, ArXiv.
[288] Adrian Mackenzie,et al. The production of prediction: What does machine learning want? , 2015 .
[289] Adrian Mackenzie. Machine learning and genomic dimensionality : from features to landscapes , 2015 .
[290] Astrid Mager. Algorithmic Ideology: How Capitalist Society Shapes Search Engines , 2011 .
[291] Adrian Mackenzie,et al. Codes and Codings in Crisis , 2011 .
[292] Kalina Bontcheva,et al. Corpus Annotation through Crowdsourcing: Towards Best Practice Guidelines , 2014, LREC.
[293] Krishna P. Gummadi,et al. Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning , 2018, AAAI.
[294] Silvia Mollicchi,et al. Flatness versus depth: A study of algorithmically generated camouflage , 2017 .
[295] Toniann Pitassi,et al. Learning Fair Representations , 2013, ICML.
[296] Matt J. Kusner,et al. Counterfactual Fairness , 2017, NIPS.
[297] Daniel A. Keim,et al. The Role of Uncertainty, Awareness, and Trust in Visual Analytics , 2016, IEEE Transactions on Visualization and Computer Graphics.
[298] Wei Dai,et al. Improving Data Quality through Deep Learning and Statistical Models , 2018, ArXiv.
[299] Marcello D’Agostino,et al. Introduction: the Governance of Algorithms , 2018, Philosophy & Technology.