Seeing different objects in different ways: Measuring ventral visual tuning to sensory and semantic features with dynamically adaptive imaging

A key challenge of object recognition is achieving a balance between selectivity for relevant features and invariance to irrelevant ones. Computational and cognitive models predict that optimal selectivity for features will differ by object, and here we investigate whether this is reflected in visual representations in the human ventral stream. We describe a new real-time neuroimaging method, dynamically adaptive imaging (DAI), that enabled measurement of neural selectivity along multiple feature dimensions in the neighborhood of single referent objects. The neural response evoked by a referent was compared to that evoked by 91 naturalistic objects using multi-voxel pattern analysis. Iteratively, the objects evoking the most similar responses were selected and presented again, to converge upon a subset that characterizes the referent's "neural neighborhood." This was used to derive the feature selectivity of the response. For three different referents, we found strikingly different selectivity, both in individual features and in the balance of tuning to sensory versus semantic features. Additional analyses placed a lower bound on the number of distinct activation patterns present. The results suggest that either the degree of specificity available for object representation in the ventral stream varies by class, or that different objects evoke different processing strategies.
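Below is a minimal sketch of the kind of adaptive selection loop the abstract describes, assuming Pearson correlation as the pattern-similarity measure and a fixed halving schedule. The function names, the stopping rule, and the simulated data are illustrative placeholders rather than the study's actual implementation, in which the retained objects would be re-presented and their responses re-measured on each iteration of scanning.

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two multi-voxel response patterns."""
    return np.corrcoef(a, b)[0, 1]

def adaptive_neighborhood(referent_pattern, candidate_patterns,
                          keep_fraction=0.5, stop_size=8):
    """Iteratively narrow a candidate set toward a referent's 'neural neighborhood'.

    referent_pattern   : 1-D array of voxel responses to the referent object
    candidate_patterns : dict mapping object name -> 1-D voxel response array
    keep_fraction      : fraction of the most similar objects retained per iteration
    stop_size          : stop once the candidate set has shrunk to this size
    """
    candidates = dict(candidate_patterns)
    while len(candidates) > stop_size:
        # Rank candidates by how similar their evoked pattern is to the referent's.
        ranked = sorted(candidates,
                        key=lambda name: pattern_similarity(referent_pattern,
                                                            candidates[name]),
                        reverse=True)
        # Keep the most similar subset; in the experiment these objects
        # would be shown again and their patterns re-estimated.
        n_keep = max(stop_size, int(len(ranked) * keep_fraction))
        candidates = {name: candidates[name] for name in ranked[:n_keep]}
    return list(candidates)

# Example with simulated data: 91 candidate objects, 200 voxels.
rng = np.random.default_rng(0)
referent = rng.standard_normal(200)
candidates = {f"object_{i}": rng.standard_normal(200) for i in range(91)}
print(adaptive_neighborhood(referent, candidates))
```

The returned subset stands in for the referent's neural neighborhood; feature selectivity could then be derived by relating the sensory and semantic feature values of the retained objects to their similarity to the referent.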
