Value-laden disciplinary shifts in machine learning

As machine learning models are increasingly used for high-stakes decision-making, scholars have sought to intervene to ensure that such models do not encode undesirable social and political values. However, little attention has thus far been given to how values influence the machine learning discipline as a whole. How do values shape what the discipline focuses on and the way it develops? If undesirable values are at play at the level of the discipline, then intervening on particular models will not suffice to address the problem; interventions at the disciplinary level are required instead. This paper analyzes the discipline of machine learning through the lens of philosophy of science. We develop a conceptual framework for evaluating the process through which types of machine learning models (e.g., neural networks, support vector machines, graphical models) become predominant. The rise and fall of model-types is often framed as objective progress. However, such disciplinary shifts are more nuanced. First, we argue that the rise of a model-type is self-reinforcing: it influences the way model-types are evaluated. For example, the rise of deep learning was entangled with a greater focus on evaluations in compute-rich and data-rich environments. Second, the way model-types are evaluated encodes loaded social and political values. For example, a greater focus on evaluations in compute-rich and data-rich environments encodes values about the centralization of power, privacy, and environmental concerns.