Comment on Ryder's SINBAD Neurosemantics: Is Teleofunction Isomorphism the Way to Understand Representations?

The merit of the SINBAD model is to provide an explicit mechanism showing how the cortex may come to develop detectors that respond to correlated properties and therefore correspond to the sources of these correlations. Here I argue that, contrary to the article, SINBAD neurosemantics does not need to rely on teleofunctions to solve the problem of misrepresentation. A number of difficulties for teleofunction theories of content are reviewed, and an alternative theory, based on categorization performance and statistical relations, is argued to provide a better account, one closer to the practice of neuroscience and to powerful intuitions about swampkinds and about broad versus narrow content.

The SINBAD model is useful in showing how a neural-type model is able to bootstrap itself, learning to develop specific detectors that respond to correlated properties. Since mental representations mediate our ability to categorize and identify substances (natural kinds, individuals, etc.; Millikan, 1998), which are characterized by correlated properties, the model provides an excellent illustration of how the brain may come to develop mental representations.

At present, however, it is not obvious that the specific SINBAD algorithm is the one utilized by the cortex. Although evidence for backpropagating action potentials into the dendrites exists, the SINBAD algorithm requires a type of non-local information (dividing by the number of dendrites) that may be difficult to realise in biological neurons, especially if the number of dendrites changes due to growth or degeneration (a toy sketch of this non-locality is given at the end of this comment).

Furthermore, the model needs to be tested for its ability to learn the categories (corresponding to real or natural kinds) that form the basis of human representation. Ideally, the network's learning profile should be shown to mimic that of infants, and its learning power should be tested against experimental data. Special attention should be given to human and animal limitations in category learning: an algorithm that can learn categories characterized by functions too complex for humans and animals is unlikely to offer an adequate mechanism for cortical representations. (I doubt, for example, that human observers can predict the solar planetary system from the mere observation of the planets' trajectories, i.e., without computers and
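To make the non-locality worry flagged above concrete, the following is a minimal numerical sketch, assuming a drastically simplified cell with one scalar weight per dendrite. It is not Ryder's actual SINBAD learning rule; the number of dendrites, the noise level, and the update rule are illustrative assumptions. The only point it is meant to display is that an update in which each dendrite chases the somatic average presupposes dividing by the current number of dendrites.

    import numpy as np

    # Toy caricature (not Ryder's rule) of a single SINBAD-style pyramidal cell:
    # each dendrite carries one scalar weight and receives a noisy view of a
    # common hidden source. All parameter values are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_dendrites = 4
    learning_rate = 0.05
    weights = rng.normal(size=n_dendrites)
    print("initial weights:", weights)

    for _ in range(300):
        source = rng.normal()                                  # shared source of correlation
        inputs = source + 0.1 * rng.normal(size=n_dendrites)   # each dendrite's noisy view
        contributions = weights * inputs                       # per-dendrite contribution

        # Somatic output as the *average* of the dendritic contributions.
        # This is the non-local step: the division by n_dendrites presupposes
        # a global quantity (the current number of dendrites) that no single
        # dendrite has local access to, and that changes with growth or
        # degeneration.
        soma = contributions.sum() / n_dendrites

        # Each dendrite nudges its weight so that its own contribution tracks
        # the somatic average; the weights thereby drift toward a common value.
        weights += learning_rate * (soma - contributions) * inputs

    print("final weights:  ", weights)

The sketch is only meant to show where the factor 1/n_dendrites enters: it appears in every dendrite's update, which is the kind of global bookkeeping that seems hard to attribute to an individual dendrite whose neighbours may be added or pruned over time.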