We explore one aspect of meaning, the identification of matching concepts across systems (e.g., people, theories, or cultures). We present a computational algorithm called ABSURDIST (Aligning Between Systems Using Relations Derived Inside Systems for Translation) that uses only within-system similarity relations to find between-system translations. While illustrating the sufficiency of within-system relations to account for translating between systems, simulations of ABSURDIST also indicate synergistic interactions between intrinsic, within-system information and extrinsic information.

Conceptual Meaning and Translation

There have been two major answers to the question of how our concepts have meaning. The first answer is that concepts' meanings depend on their connection to the external world (Harnad, 1990). By this account, the concept Dog means what it does because our perceptual apparatus can identify features that characterize, if not define, dogs. Dog is characterized by features that are either perceptually given, or can be reduced to features that are perceptually given. This will be called the "external grounding" account of conceptual meaning. The second answer is that concepts' meanings depend on their connections to each other (Markman & Stillwell, 2001; Saussure, 1915/1959). By this account, Dog's meaning depends on Cat, Domesticated, and Loyal, and in turn, these concepts depend on other concepts, including Dog. The dominant metaphor here is of a conceptual web in which concepts all mutually influence each other (Quine & Ullian, 1970). A concept can mean something within a network of other concepts but not by itself. This will be called the "conceptual web" account.

The goal of this article is to argue for the synergistic integration of conceptual web and externally grounded accounts of conceptual meaning. However, in pursuing this argument, we will first argue for the sufficiency of the conceptual web account for a particular task associated with conceptual meaning. Then, we will show how the conceptual web account can be ably supplemented by external grounding to establish meanings more successfully than either method could by itself.

Our point of departure for exploring conceptual meaning will be a highly idealized and purposefully simplified version of a conceptual translation task. Consider two individuals, Joan and John, each of whom possesses a number of concepts. Suppose further that we would like some way to tell that Joan and John both have a concept of, say, Mushroom. Joan and John may not have exactly the same concept of Mushroom. John may believe mushrooms grow from seeds whereas Joan believes they grow from spores. More generally, Joan and John will differ in the rest of their conceptual networks because of their different experiences and levels of expertise. Still, it seems desirable to say that Joan's and John's Mushroom concepts correspond to one another.

We will describe a network that translates between concepts in two systems, placing, for example, Joan's and John's Mushroom concepts in correspondence with each other. Translation across systems is generally desirable and specifically necessary in order to say things like "John's concept of mushrooms is less informed than Joan's." Fodor and Lepore have taken the existence of this kind of translation as a challenge to conceptual web accounts of meaning (Fodor & Lepore, 1992).
By Fodor and Lepore's interpretation, if a concept's meaning depends on its role within the larger system, and if there are some differences between the systems, then the concept's meaning would be different in the two systems. A natural way to try to salvage the conceptual web account is to argue that determining corresponding concepts across systems does not require the systems to be identical, but only similar. However, Fodor (Fodor, 1998; Fodor & Lepore, 1992) insists that the notion of similarity is not adequate to establish that Joan and John both possess a Mushroom concept. Fodor argues that "saying what it is for concepts to have similar, but not identical contents presupposes a prior notion of beliefs with similar but not identical concepts" (Fodor, 1998, p. 32).

The ABSURDIST Algorithm for Cross-System Translation

We will now present a simple neural network called ABSURDIST (Aligning Between Systems Using Relations Derived Inside Systems for Translation) that finds conceptual correspondences across two systems (two people, two time slices of one person, two scientific theories, two developmental age groups, two language communities, etc.) using only inter-conceptual similarities, not conceptual identities, as input. Thus, ABSURDIST will take as input two systems of concepts in which every concept of a system is defined exclusively in terms of its dissimilarities to other concepts in the same system. Laakso and Cottrell (2000) describe another neural network model that uses similarity relations within two systems to compare the similarity of the systems. ABSURDIST produces as output a set of correspondences indicating which concepts from System A correspond to which concepts from System B. These correspondences serve as the basis for understanding how the systems can communicate with each other without the assumption made by Fodor (1998) that the two systems have exactly the same concepts. The existence of ABSURDIST provides evidence against Fodor's argument that similarities between people's concepts are an insufficient basis for determining that two people share an equivalent concept. ABSURDIST is not a complete model of conceptual meaning or translation. Our point is that even if the only relation between concepts in a system were simply similarity, this would still suffice to find translations of the concepts in different systems.

Elements $A_1, \ldots, A_m$ belong to System A, while elements $B_1, \ldots, B_n$ belong to System B. $C_t(A_q, B_x)$ is the activation, at time $t$, of the unit that represents the correspondence between the $q$th element of A and the $x$th element of B. There will be $m \cdot n$ correspondence units, one for each possible pair of corresponding elements between A and B. In the current example, every element represents one concept in a system. The activation of a correspondence unit is bounded between 0 and 1, with a value of 1 indicating a strong correspondence between the associated elements, and a value of 0 indicating strong evidence that the elements do not correspond. Correspondence units evolve dynamically over time according to

$$
C_{t+1}(A_q, B_x) =
\begin{cases}
C_t(A_q, B_x) + N\big(C_t(A_q, B_x)\big)\,\big(\max - C_t(A_q, B_x)\big)\,L & \text{if } N\big(C_t(A_q, B_x)\big) \ge 0 \\
C_t(A_q, B_x) + N\big(C_t(A_q, B_x)\big)\,\big(C_t(A_q, B_x) - \min\big)\,L & \text{otherwise.}
\end{cases}
\tag{1}
$$
If $N(C_t(A_q, B_x))$, the net input to the unit that links the $q$th element of A and the $x$th element of B, is positive, then the unit's activation increases as a function of the net input, a squashing function that limits activation to an upper bound of $\max = 1$, and a learning rate $L$ (set to 1). If the net input is negative, then activations are limited by a lower bound of $\min = 0$. The net input is defined as

$$
N\big(C_t(A_q, B_x)\big) = \alpha E(A_q, B_x) + \beta R(A_q, B_x) - \chi I(A_q, B_x),
\tag{2}
$$

where the $E$ term is the external similarity between $A_q$ and $B_x$, $R$ is their internal similarity, $I$ is the inhibition against placing $A_q$ and $B_x$ into correspondence that is supplied by other developing correspondence units, and $\alpha + \beta + \chi = 1$. When $\alpha = 0$, correspondences between A and B are based solely on the similarities among the elements within a system, as proposed by a conceptual web account. The amount of excitation to a unit based on within-system relations is given by

$$
R(A_q, B_x) = S\big(D(A_q, A_r),\, D(B_x, B_y)\big)
$$
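To make the update rule concrete, here is a minimal Python sketch of one ABSURDIST iteration implementing Equations (1) and (2). Because this excerpt breaks off in the middle of the definition of $R$, the particular similarity function $S$, the weighting of $R$ by the current correspondence activations, its normalization, and the row/column form of the inhibition term $I$ are our assumptions rather than the published specification; the squashing update and the $\alpha$, $\beta$, $\chi$ weighting follow the equations above.

```python
import numpy as np

def absurdist_step(C, dist_A, dist_B, E, alpha=0.0, beta=0.8, chi=0.2,
                   L=1.0, max_act=1.0, min_act=0.0):
    """One synchronous update of the m x n correspondence matrix C.

    C       : current activations C_t(A_q, B_x), values in [0, 1]
    dist_A  : m x m within-system dissimilarities D(A_q, A_r)
    dist_B  : n x n within-system dissimilarities D(B_x, B_y)
    E       : m x n external similarities (ignored when alpha = 0)
    """
    m, n = C.shape
    C_next = C.copy()
    for q in range(m):
        for x in range(n):
            # R: within-system support (the R term of Equation 2). Distances from
            # A_q to the other A elements are compared with distances from B_x to
            # the other B elements, weighted by the current activation of the
            # corresponding unit. S and the normalization are assumed forms.
            R = 0.0
            for r in range(m):
                for y in range(n):
                    if r == q or y == x:
                        continue
                    S = 1.0 - abs(dist_A[q, r] - dist_B[x, y])  # assumed similarity of distances
                    R += S * C[r, y]
            R /= max(max(m, n) - 1, 1)

            # I: inhibition from competing units in the same row and column
            # (pressure toward one-to-one mappings); the exact form is assumed.
            I = (C[q, :].sum() - C[q, x] + C[:, x].sum() - C[q, x]) / max(m + n - 2, 1)

            # Equation (2): net input, with alpha + beta + chi = 1.
            N = alpha * E[q, x] + beta * R - chi * I

            # Equation (1): squashing update toward max_act or min_act.
            if N >= 0:
                C_next[q, x] = C[q, x] + N * (max_act - C[q, x]) * L
            else:
                C_next[q, x] = C[q, x] + N * (C[q, x] - min_act) * L
    return np.clip(C_next, min_act, max_act)
```

Iterating this update from small initial activations until the matrix stops changing, and then reading off the highest-activation unit in each row, yields the translation: for example, the unit linking Joan's and John's Mushroom concepts should win its row even though the two conceptual systems are only similar, not identical.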
References

Block, N. (1987). Advertisement for a semantics for psychology.

Burgess, C., & Lund, K. (1998). The dynamics of meaning in memory.

de Saussure, F. (1915/1959). Course in general linguistics.

Edelman, S. (1999). Representation and recognition in vision.

Eliasmith, C., & Thagard, P. (2001). Integrating structure and meaning: A distributed model of analogical mapping. Cognitive Science.

Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence.

Fodor, J. A. (1998). Concepts: Where cognitive science went wrong.

Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.

Hummel, J. E., & Holyoak, K. J. (1997). Distributed representations of structure: A theory of analogical access and mapping.

Laakso, A., & Cottrell, G. W. (2000). Content and cluster analysis: Assessing representational similarity in neural systems.

Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge.

Lenat, D. B., & Feigenbaum, E. A. (1987). On the thresholds of knowledge. Proceedings of the International Workshop on Artificial Intelligence for Industrial Applications.

Markman, A. B., & Stillwell, C. H. (2001). Role-governed categories. Journal of Experimental & Theoretical Artificial Intelligence.

Quine, W. V., & Ullian, J. S. (1970). The web of belief.