Perceptual bias and technical metapictures: critical machine vision as a humanities challenge

Many critical investigations of machine vision focus almost exclusively on dataset bias and on fixing datasets by introducing ever more diverse sets of images. We propose that machine vision systems are inherently biased not only because they rely on biased datasets but also because their perceptual topology, their specific way of representing the visual world, gives rise to a new class of bias that we call perceptual bias. Concretely, we define perceptual topology as the set of inductive biases in a machine vision system that determines its capability to represent the visual world. Perceptual bias, then, describes the difference between the assumed “ways of seeing” of a machine vision system, that is, our reasonable expectations regarding its way of representing the visual world, and its actual perceptual topology. We show how perceptual bias affects the interpretability of machine vision systems in particular, by means of a close reading of a visualization technique called “feature visualization”. We conclude that both dataset bias and perceptual bias need to be considered in the critical analysis of machine vision systems, and we propose to understand critical machine vision as an important transdisciplinary challenge, situated at the interface of computer science and visual studies/Bildwissenschaft.
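
The close reading mentioned above concerns “feature visualization”, the family of techniques that synthesize, by optimization, the image a given unit of a network responds to most strongly. The following is a minimal sketch of the underlying mechanism, activation maximization, assuming PyTorch and torchvision are available; the model (GoogLeNet/InceptionV1), the layer (inception4c), and the channel index are illustrative placeholders, not choices made in the paper.

```python
import torch
import torchvision.models as models

# Load InceptionV1 (GoogLeNet) pretrained on ImageNet; the specific model,
# layer, and channel below are illustrative choices only.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.eval()

# Optimize an input image (starting from noise) so that one channel of an
# intermediate layer activates as strongly as possible ("activation maximization").
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

activations = {}

def store_activation(module, inputs, output):
    activations["target"] = output

# inception4c is an arbitrary intermediate block; channel 0 is an arbitrary unit.
handle = model.inception4c.register_forward_hook(store_activation)
channel = 0

for step in range(256):
    optimizer.zero_grad()
    model(image)
    # Maximize the mean activation of the chosen channel by minimizing its negative.
    loss = -activations["target"][0, channel].mean()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range

handle.remove()
# `image` now approximates what the chosen unit "responds to"; in practice,
# regularizers (jitter, blurring, frequency penalties) are needed to obtain
# legible visualizations.
```

The optimized image is precisely the kind of “technical metapicture” at issue: a picture produced by the system about its own way of seeing, whose interpretability depends on the system's perceptual topology.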
