Introduction

Convolutional neural networks (CNNs) trained as classifiers learn by associating visual inputs (e.g., photographs of objects) with appropriate output labels (e.g., "crow", "dog", "car"). These complex models, which contain millions of weights, are the state-of-the-art in machine vision, rivaling humans in object recognition tasks (LeCun, Bengio, & Hinton, 2015; Krizhevsky, Sutskever, & Hinton, 2012). What these networks learn displays some commonalities with human learning (Kubilius, Bracci, & de Beeck, 2016; Lake, Zaremba, Fergus, & Gureckis, 2015). Furthermore, the layers in these networks have been related to neural activity along the ventral stream (Khaligh-Razavi & Kriegeskorte, 2014; Yamins & DiCarlo, 2016).

The similarity spaces created by these models at various network layers allow us to draw parallels with the brain's neural coding schemes (Guest & Love, 2017). At earlier layers, networks display similarity spaces that reflect the high-level categories found in the input space; e.g., lions and tigers are more similar to one another than to mopeds. At more advanced layers, similarity structure tends to break down such that representations of different object categories become orthogonal.

Can these networks also shed light on how non-human animals categorize? CNNs can be used to determine at what level of representation (i.e., at what network layer) animals are coding similarities between images. For example, are animals learning regularities at a very low level, close to the pixels in the image, or are they seizing upon more abstract shape features? In this contribution, we address this question by examining data from pigeons trained to categorize images of cardiograms as normal or abnormal. Pigeons are excellent at classifying visual stimuli (Bhatt, Wasserman, Reynolds, & Knauss, 1988). For example, pigeons trained to discriminate between medical images of nor
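To make the layer-wise comparison concrete, the sketch below shows one common way to extract activations from different layers of a pretrained CNN and compute a similarity space over a set of images. It is a minimal sketch under stated assumptions, not the implementation used in this work: it assumes PyTorch and torchvision (>= 0.13), and the choice of network (VGG-16), layer indices, and image file names is purely illustrative.

# Minimal sketch (not the authors' code): extract one layer's activations
# for a set of images and compute a pairwise similarity matrix.
# Assumes PyTorch and torchvision >= 0.13; VGG-16 and the layer indices
# below are illustrative choices, not the model used in this paper.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def layer_activations(image_paths, layer_index):
    """Return an (n_images, n_features) matrix of activations taken from
    one layer of the convolutional stack."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            # Run the convolutional stack only up to the chosen layer.
            for i, module in enumerate(model.features):
                x = module(x)
                if i == layer_index:
                    break
            feats.append(x.flatten())
    return torch.stack(feats)

def similarity_matrix(feats):
    """Pairwise cosine similarity between image representations."""
    feats = F.normalize(feats, dim=1)
    return feats @ feats.T

# Usage (hypothetical file names): compare an early layer with a late one
# to see at which level of representation category structure emerges.
# early = similarity_matrix(layer_activations(["img1.jpg", "img2.jpg"], 5))
# late  = similarity_matrix(layer_activations(["img1.jpg", "img2.jpg"], 28))

Similarity matrices of this kind, computed at different depths, can then be compared against behavioral similarity judgments (whether human or pigeon) to ask which level of representation best accounts for the observed categorization.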
[1] Brandon M. Turner, et al. Approaches to Analysis in Model-based Cognitive Neuroscience. Journal of Mathematical Psychology, 2017.
[2] Olivia Guest, et al. What the success of brain imaging implies about the neural code. bioRxiv, 2016.
[3] Jonas Kubilius, et al. Deep Neural Networks as a Computational Model for Human Shape Sensitivity. PLoS Computational Biology, 2016.
[4] Michael L. Mack, et al. Dynamic updating of hippocampal object representations reflects new conceptual knowledge. Proceedings of the National Academy of Sciences, 2016.
[5] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 2012.
[6] Nikolaus Kriegeskorte, et al. Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology, 2014.
[7] Gyslain Giguère, et al. Limits in decision making arise from limits in memory retrieval. Proceedings of the National Academy of Sciences, 2013.
[8] Wojciech Zaremba, et al. Deep Neural Networks Predict Category Typicality Ratings for Images. CogSci, 2015.
[9] Elizabeth A. Krupinski, et al. Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images. PLoS ONE, 2015.
[10] Edward A. Wasserman, et al. Conceptual Behavior in Pigeons: Categorization of Both Familiar and Novel Examples From Four Classes of Natural and Artificial Stimuli. 1988.
[11] Michael L. Mack, et al. Decoding the Brain's Algorithm for Categorization from Its Neural Implementation. Current Biology, 2013.
[12] Bradley C. Love, et al. The Algorithmic Level Is the Bridge Between Computation and Brain. Topics in Cognitive Science, 2015.
[13] Guigang Zhang, et al. Deep Learning. International Journal of Semantic Computing, 2016.