Kristina Lerman | Fred Morstatter | Aram Galstyan | Ninareh Mehrabi | Nripsuta Saxena
[1] C. E. Gehlke, et al. Certain Effects of Grouping upon the Size of the Correlation Coefficient in Census Tract Material, 1934.
[2] C. Blyth. On Simpson's Paradox and the Sure-Thing Principle, 1972.
[3] E. Phelps. The Statistical Theory of Racism and Sexism, 1972.
[4] Ray Marshall, et al. The Economics of Racial Discrimination: A Survey, 1974.
[5] P. Bickel, et al. Sex Bias in Graduate Admissions: Data from Berkeley, 1975, Science.
[6] Clinton L. Doggett, et al. The Equal Employment Opportunity Commission, 1990.
[7] Helen Nissenbaum, et al. Bias in computer systems, 1996, TOIS.
[8] Willy E. Rice. Race, Gender, “Redlining,” and the Discriminatory Access to Loans, Credit, and Insurance: An Historical and Empirical Analysis of Consumers Who Sued Lenders and Insurers in Federal and State Courts, 1950-1995, 1996.
[9] Helen Nissenbaum, et al. Defining the Web: The Politics of Search Engines, 2000, Computer.
[10] Amitabha Mukerjee, et al. Multi-objective Evolutionary Algorithms for the Risk-Return Trade-off in Bank Loan Management, 2002.
[11] David B. Mustard. Reexamining Criminal Behavior: The Importance of Omitted Variable Bias, 2003, Review of Economics and Statistics.
[12] Philipp Koehn, et al. Europarl: A Parallel Corpus for Statistical Machine Translation, 2005, MTSUMMIT.
[13] Kevin A. Clarke. The Phantom Menace: Omitted Variable Bias in Econometric Research, 2005.
[14] Manel Capdevila Capdevila, et al. La reincidència en el delicte en la justícia de menors [Criminal recidivism in juvenile justice], 2006.
[15] Eszter Hargittai, et al. Whose Space? Differences Among Users and Non-Users of Social Network Sites, 2007, J. Comput. Mediat. Commun.
[16] S. Riegg, et al. Causal Inference and Omitted Variable Bias in Financial Aid Research: Assessing Solutions, 2008.
[17] Marwan Mattar, et al. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments, 2008.
[18] Stanislas Leibler, et al. Simpson's Paradox in a Synthetic Microbial System, 2009, Science.
[19] Ben Y. Zhao, et al. User interactions in social networks and their implications, 2009, EuroSys '09.
[20] Toon Calders, et al. Classifying without discriminating, 2009, 2009 2nd International Conference on Computer, Control and Communication.
[21] F. Kamiran, et al. Classification with no discrimination by preferential sampling, 2010.
[22] Toon Calders, et al. Three naive Bayes approaches for discrimination-free classification, 2010, Data Mining and Knowledge Discovery.
[23] Michael McCarthy, et al. The Routledge Handbook of Corpus Linguistics, 2010.
[24] Toon Calders, et al. Data preprocessing techniques for classification without discrimination, 2011, Knowledge and Information Systems.
[25] S. Danziger, et al. Extraneous factors in judicial decisions, 2011, Proceedings of the National Academy of Sciences.
[26] Jun Sakuma, et al. Fairness-Aware Classifier with Prejudice Remover Regularizer, 2012, ECML/PKDD.
[27] Lauren A. Rivera, et al. Hiring as Cultural Matching, 2012.
[28] Toniann Pitassi, et al. Fairness through awareness, 2011, ITCS '12.
[29] Huan Liu, et al. Is the Sample Good Enough? Comparing Data from Twitter's Streaming API with Twitter's Firehose, 2013, ICWSM.
[30] D. Borsboom, et al. Simpson's paradox in psychological science: a practical guide, 2013, Front. Psychol.
[31] Salvatore Ruggieri, et al. A multidisciplinary survey on discrimination analysis, 2013, The Knowledge Engineering Review.
[32] Faisal Kamiran, et al. Explainable and Non-explainable Discrimination in Classification, 2013, Discrimination and Privacy in the Information Society.
[33] Dong Nguyen, et al. "How Old Do You Think I Am?" A Study of Language and Age in Twitter, 2013, ICWSM.
[34] Josep Domingo-Ferrer, et al. A Methodology for Direct and Indirect Discrimination Prevention in Data Mining, 2013, IEEE Transactions on Knowledge and Data Engineering.
[35] Ning Wang, et al. Assessing the bias in samples of large online networks, 2014, Soc. Networks.
[36] Kristina Lerman, et al. Leveraging Position Bias to Improve Peer Recommendation, 2014, PLoS ONE.
[37] Mona N. Fouad, et al. Enhancing minority participation in clinical trials (EMPaCT): Laying the groundwork for improving minority clinical trial accrual, 2014.
[38] Ting Wang, et al. Why Amazon's Ratings Might Mislead You: The Story of Herding Effects, 2014, Big Data.
[39] Zeynep Tufekci, et al. Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls, 2014, ICWSM.
[40] Carlos Eduardo Scheidegger, et al. Certifying and Removing Disparate Impact, 2014, KDD.
[41] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[42] S. Ruggieri, et al. Causal Discrimination Discovery Through Propensity Score Analysis, 2016, ArXiv.
[43] Lu Zhang, et al. On Discrimination Discovery Using Causal Networks, 2016, SBP-BRiMS.
[44] Cathy O'Neil, et al. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 2016, Vikalpa: The Journal for Decision Makers.
[45] Peter Szolovits, et al. Genetic Misdiagnoses and the Potential for Health Disparities, 2016, The New England journal of medicine.
[46] Christopher T. Lowenkamp, et al. False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. and It's Biased against Blacks", 2016.
[47] Loren G. Terveen, et al. "Blissfully Happy" or "Ready to Fight": Varying Interpretations of Emoji, 2016, ICWSM.
[48] Dan Cosley, et al. Averaging Gone Wrong: Using Time-Aware Analyses to Better Understand Behavior, 2016, WWW.
[49] Max Welling, et al. The Variational Fair Autoencoder, 2015, ICLR.
[50] Adam Tauman Kalai, et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, 2016, NIPS.
[51] Nathan Srebro, et al. Equality of Opportunity in Supervised Learning, 2016, NIPS.
[52] Lu Zhang, et al. Situation Testing-Based Discrimination Discovery: A Causal Inference Approach, 2016, IJCAI.
[53] Thorsten Joachims, et al. Recommendations as Treatments: Debiasing Learning and Evaluation, 2016, ICML.
[54] Krishna P. Gummadi, et al. The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making, 2016.
[55] Lu Zhang, et al. Anti-discrimination learning: a causal modeling-based framework, 2017, International Journal of Data Science and Analytics.
[56] Matt J. Kusner, et al. Counterfactual Fairness, 2017, NIPS.
[57] Kush R. Varshney, et al. Optimized Pre-Processing for Discrimination Prevention, 2017, NIPS.
[58] Lu Zhang, et al. A Causal Framework for Discovering and Removing Direct and Indirect Discrimination, 2016, IJCAI.
[59] Tom LaGatta, et al. Conscientious Classification: A Data Scientist's Guide to Discrimination-Aware Classification, 2017, Big Data.
[60] Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments, 2016, Big Data.
[61] Krishna P. Gummadi, et al. Fairness Constraints: Mechanisms for Fair Classification, 2015, AISTATS.
[62] Avi Feller, et al. Algorithmic Decision Making and the Cost of Fairness, 2017, KDD.
[63] Krishna P. Gummadi, et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, 2016, WWW.
[64] Nathan Srebro, et al. Learning Non-Discriminatory Predictors, 2017, COLT.
[65] Jieyu Zhao, et al. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, 2017, EMNLP.
[66] Kristina Lerman, et al. Computational social scientist beware: Simpson's paradox in behavioral data, 2017, J. Comput. Soc. Sci.
[67] D. Sculley, et al. No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World, 2017, ArXiv:1711.08536.
[68] David Danks, et al. Algorithmic Bias in Autonomous Systems, 2017, IJCAI.
[69] Alexandra Chouldechova, et al. Does mitigating ML's disparate impact require disparate treatment?, 2017, ArXiv.
[70] Lu Zhang, et al. Achieving Non-Discrimination in Data Release, 2016, KDD.
[71] William Welser, et al. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence, 2017.
[72] Seth Neel, et al. A Convex Framework for Fair Regression, 2017, ArXiv.
[73] Arvind Narayanan, et al. Semantics derived automatically from language corpora contain human-like biases, 2016, Science.
[74] Jon M. Kleinberg, et al. Inherent Trade-Offs in the Fair Determination of Risk Scores, 2016, ITCS.
[75] Bernhard Schölkopf, et al. Avoiding Discrimination through Causal Reasoning, 2017, NIPS.
[76] C. Sudlow, et al. Comparison of Sociodemographic and Health-Related Characteristics of UK Biobank Participants With Those of the General Population, 2017, American journal of epidemiology.
[77] Jon M. Kleinberg, et al. On Fairness and Calibration, 2017, NIPS.
[78] Jieyu Zhao, et al. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods, 2018, NAACL.
[79] Kristina Lerman, et al. Using Simpson's Paradox to Discover Interesting Patterns in Behavioral Data, 2018, ICWSM.
[80] Lise Getoor, et al. Fairness in Relational Domains, 2018, AIES.
[81] Yiannis Kompatsiaris, et al. Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification, 2018, WWW.
[82] Timnit Gebru, et al. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 2018, FAT.
[83] Seth Neel, et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness, 2017, ICML.
[84] Adam Tauman Kalai, et al. Decoupled Classifiers for Group-Fair and Efficient Machine Learning, 2017, FAT.
[85] Ayanna M. Howard, et al. The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity, 2017, Science and Engineering Ethics.
[86] Reuben Binns, et al. Fairness in Machine Learning: Lessons from Political Philosophy, 2017, FAT.
[87] James Zou, et al. AI can be sexist and racist — it's time to make it fair, 2018, Nature.
[88] Ricardo Baeza-Yates, et al. Bias on the web, 2018, Commun. ACM.
[89] Rayid Ghani, et al. Aequitas: A Bias and Fairness Audit Toolkit, 2018, ArXiv.
[90] Premkumar Natarajan, et al. Unsupervised Adversarial Invariance, 2018, NeurIPS.
[91] Mohit Singh, et al. The Price of Fair PCA: One Extra Dimension, 2018, NeurIPS.
[92] Esther Rolf, et al. Delayed Impact of Fair Machine Learning, 2018, ICML.
[93] Andy Way, et al. Getting Gender Right in Neural Machine Translation, 2019, EMNLP.
[94] Barbara E. Engelhardt, et al. How algorithmic confounding in recommendation systems increases homogeneity and decreases utility, 2017, RecSys.
[95] Lu Zhang, et al. FairGAN: Fairness-aware Generative Adversarial Networks, 2018, 2018 IEEE International Conference on Big Data (Big Data).
[96] Alexandra Chouldechova, et al. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions, 2018, FAT.
[97] Boi Faltings, et al. Non-Discriminatory Machine Learning through Convex Fairness Criteria, 2018, AAAI.
[98] Alexandra Chouldechova, et al. The Frontiers of Fairness in Machine Learning, 2018, ArXiv.
[99] Aaron Rieke, et al. Help wanted: an examination of hiring algorithms, equity, and bias, 2018.
[100] Ilya Shpitser, et al. Fair Inference on Outcomes, 2017, AAAI.
[101] Hany Farid, et al. The accuracy, fairness, and limits of predicting recidivism, 2018, Science Advances.
[102] Emily M. Bender, et al. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science, 2018, TACL.
[103] Rachel Rudinger, et al. Gender Bias in Coreference Resolution, 2018, NAACL.
[104] Filippo Menczer, et al. How algorithmic popularity bias hinders or promotes quality, 2017, Scientific Reports.
[105] Aditya Krishna Menon, et al. The cost of fairness in binary classification, 2018, FAT.
[106] M. Kearns, et al. Fairness in Criminal Justice Risk Assessments: The State of the Art, 2017, Sociological Methods & Research.
[107] Matt J. Kusner, et al. Causal Reasoning for Algorithmic Fairness, 2018, ArXiv.
[108] Julia Rubin, et al. Fairness Definitions Explained, 2018, 2018 IEEE/ACM International Workshop on Software Fairness (FairWare).
[109] Rob Brekelmans, et al. Invariant Representations without Adversarial Training, 2018, NeurIPS.
[110] Zeyu Li, et al. Learning Gender-Neutral Word Embeddings, 2018, EMNLP.
[111] Kristina Lerman, et al. Can you Trust the Trend? Discovering Simpson's Paradoxes in Social Data, 2018, WSDM.
[112] Silvia Chiappa, et al. A Causal Bayesian Networks Viewpoint on Fairness, 2018, Privacy and Identity Management.
[113] Blake Lemoine, et al. Mitigating Unwanted Biases with Adversarial Learning, 2018, AIES.
[114] Rachel K. E. Bellamy, et al. AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias, 2018, ArXiv.
[115] Lu Zhang, et al. Fairness-aware Classification: Criterion, Convexity, and Bounds, 2018, ArXiv.
[116] Ahmed Hosny, et al. The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards, 2018, Data Protection and Privacy.
[117] Emilia Gómez, et al. Why Machine Learning May Lead to Unfairness: Evidence from Risk Assessment for Juvenile Justice in Catalonia, 2019, ICAIL.
[118] Nisheeth K. Vishnoi, et al. Stable and Fair Classification, 2019, ICML.
[119] Miroslav Dudík, et al. Fair Regression: Quantitative Definitions and Reduction-based Algorithms, 2019, ICML.
[120] David C. Parkes, et al. How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness, 2018, AIES.
[121] Seth Neel, et al. An Empirical Study of Rich Subgroup Fairness for Machine Learning, 2018, FAT.
[122] M. Ghassemi, et al. Can AI Help Reduce Disparities in General Medical and Mental Health Care?, 2019, AMA journal of ethics.
[123] Toniann Pitassi, et al. Flexibly Fair Representation Learning by Disentanglement, 2019, ICML.
[124] Madeleine Udell, et al. Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved, 2018, FAT.
[125] Marta R. Costa-jussà, et al. Equalizing Gender Bias in Neural Machine Translation with Word Embeddings Techniques, 2019, Proceedings of the First Workshop on Gender Bias in Natural Language Processing.
[126] Catherine E. Tucker, et al. Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads, 2019, Manag. Sci.
[127] Inioluwa Deborah Raji, et al. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products, 2019, AIES.
[128] Chandler May, et al. On Measuring Social Biases in Sentence Encoders, 2019, NAACL.
[129] Harini Suresh, et al. A Framework for Understanding Unintended Consequences of Machine Learning, 2019, ArXiv.
[130] David C. Parkes, et al. Fairness without Harm: Decoupled Classifiers with Preference Guarantees, 2019, ICML.
[131] Inioluwa Deborah Raji, et al. Model Cards for Model Reporting, 2018, FAT.
[132] Carlos Castillo, et al. Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries, 2019, Front. Big Data.
[133] Xintao Wu, et al. Causal Modeling-Based Discrimination Discovery and Removal: Criteria, Bounds, and Algorithms, 2019, IEEE Transactions on Knowledge and Data Engineering.
[134] Ilya Shpitser, et al. Learning Optimal Fair Policies, 2018, ICML.
[135] Silvia Chiappa, et al. Path-Specific Counterfactual Fairness, 2018, AAAI.
[136] Yoav Goldberg, et al. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them, 2019, NAACL-HLT.
[137] F. Anders, et al. Yule-Simpson's paradox in Galactic Archaeology, 2019, Monthly Notices of the Royal Astronomical Society.
[138] Danah Boyd, et al. Fairness and Abstraction in Sociotechnical Systems, 2019, FAT.
[139] John R. Smith, et al. Diversity in Faces, 2019, ArXiv.
[140] William L. Hamilton, et al. Compositional Fairness Constraints for Graph Embeddings, 2019, ICML.
[141] Krzysztof Onak, et al. Scalable Fair Clustering, 2019, ICML.
[142] Shikha Bordia, et al. Identifying and Reducing Gender Bias in Word-Level Language Models, 2019, NAACL.
[143] Catherine Tucker. Algorithmic bias? An empirical study into apparent gender-based discrimination in the display of STEM career ads, 2019.
[144] Nripsuta Ani Saxena. Perceptions of Fairness, 2019, AIES.
[145] Nanyun Peng, et al. Debiasing Community Detection: The Importance of Lowly Connected Nodes, 2019, 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM).
[146] Luís C. Lamb, et al. Assessing gender bias in machine translation: a case study with Google Translate, 2018, Neural Computing and Applications.
[147] Ben Hutchinson, et al. 50 Years of Test (Un)fairness: Lessons for Machine Learning, 2018, FAT.
[148] Kamesh Munagala, et al. Proportionally Fair Clustering, 2019, ICML.
[149] Richard S. Zemel, et al. Understanding the Origins of Bias in Word Embeddings, 2018, ICML.
[150] Ryan Cotterell, et al. Gender Bias in Contextualized Word Embeddings, 2019, NAACL.
[151] Christopher Joseph Pal, et al. Towards Standardization of Data Licenses: The Montreal Data License, 2019, ArXiv.
[152] Luca Oneto, et al. Taking Advantage of Multitask Learning for Fair Classification, 2018, AIES.
[153] Phebe Vayanos, et al. Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making, 2019, AAAI.
[154] Daniela Rus, et al. Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure, 2019, AIES.
[155] Silvia Chiappa, et al. Wasserstein Fair Classification, 2019, UAI.
[156] Kristina Lerman, et al. A Geometric Solution to Fair Representations, 2020, AIES.
[157] Nanyun Peng, et al. Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition, 2019, HT.
[158] Yishay Mansour, et al. Efficient candidate screening under multiple tests and implications for fairness, 2019, FORC.
[159] Fred Morstatter, et al. Statistical Equity: A Fairness Classification Objective, 2020, ArXiv.
[160] Timnit Gebru, et al. Datasheets for datasets, 2018, Commun. ACM.
[161] Fred Morstatter, et al. Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources, 2021, EMNLP.
[162] Fred Morstatter, et al. Attributing Fair Decisions with Attention Interventions, 2021, TRUSTNLP.