Can NLP Systems be a Cognitive Black Box? (Is Cognitive Science Relevant to AI Problems?)

This paper considers whether or not the internals of NLP systems can be a black box with respect to the modeling of how humans process language, in answer to the question “Is cognitive science relevant to AI problems?”. Is it sufficient to model input/output behavior using computational techniques which bear little resemblance to human language processing, or is it necessary to model the internals of human language processing behavior in NLP systems? The basic conclusion is that it is important to look inside the black box of the human language processor and to model that behavior at a lower level of abstraction than input/output behavior. The development of functional NLP systems may actually be facilitated, not hindered, by the adoption of cognitive constraints on how humans process language. The relevance of this position for the symposium is considered and some suggestions for moving forward are presented.

NLP Systems as a Black Box

Natural Language Processing (NLP) is a quintessential AI Hard Problem. For an NLP system to be successful, it must mimic human behavior at the level of input and output. Otherwise, successful communication with humans will not be achieved. Unlike many other AI systems, performing better than humans is not a desirable outcome (although performing as well as expert communicators is). The key question is whether or not human-like input and output can be achieved using computational mechanisms that bear little resemblance to what cognitive science and cognitive psychology tell us is going on inside the head of humans when they process language. Can NLP systems be a cognitive black box (is cognitive science relevant to AI problems)?

To date, research in the development of functional NLP systems has largely adopted the black box approach—assuming that modeling the “internals” of human language processing is undesirable and, hopefully, unnecessary. On this view, it is undesirable because it would impose severe constraints on the development of functional NLP systems, and, besides, the basics of how humans process language have not been sufficiently worked out to support computational implementation. The NLP problem is too hard for us to make progress if we accept the constraints on human language processing proposed by cognitive science researchers—especially given the many conflicting hypotheses they have put forward—and try to populate the black box with cognitively plausible systems.

But there is considerable evidence to suggest that AI researchers ignore the constraints of cognitive science and cognitive psychology at their own peril. The advent of the parallel distributed processing (PDP) strain of connectionism (Rumelhart and McClelland, 1986) was in part the product of cognitive science researchers focused on modeling the “internals” of human cognitive and perceptual behavior. Connectionist researchers highlighted many of the shortcomings of symbolic AI, arguing that they resulted from an inappropriate cognitive architecture, and proposing alternatives that delved inside the black box of cognition and perception, trying to specify what a cognitive system would be composed of at a level of abstraction somewhere above the neuronal level, but definitely inside the black box. The confrontation between connectionism and symbolic AI has largely subsided, and many of the connectionist claims have been shown to be overstated (cf. Pinker and Prince, 1988), but it seems clear that connectionist models do capture important elements of perception and cognition, particularly lower-level phenomena. Many AI researchers are now working to integrate connectionist layers into hybrid symbolic/subsymbolic systems (Sun and Alexandre, 1997) to capture the symbolic irregularities that connectionist systems revealed and that purely symbolic systems cannot easily model.

Within NLP, the problems of noisy input, lexical and grammatical ambiguity, and non-literal use of language call out for the adoption of techniques explored in connectionist and statistical systems, many of which come out of research in cognitive science. For example, Latent Semantic Analysis (LSA) (Landauer & Dumais, 1997), a statistical data-reduction technique being used by psycholinguists to capture the latent (i.e., non-explicit) meaning of words as a multi-dimensional vector, offers hope for solving previously intractable problems in meaning representation—especially the problem of determining similarity of meaning without assuming discrete word senses. If the LSA approach is successful, then an avalanche of NLP research on word sense disambiguation that is based on the identification of discrete word senses will need to be revisited.
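To make the flavor of the LSA approach concrete, the following sketch builds a small word-by-document count matrix, reduces it with a truncated singular value decomposition, and compares word meanings by the cosine similarity of the resulting vectors. This is only an illustration of the style of technique: the toy vocabulary, the toy documents, and the Python/NumPy implementation are assumptions of this example, not part of the cited work, which operates over much larger corpora.

# Illustrative LSA-style sketch: word meanings as vectors derived from a
# word-by-document count matrix via truncated SVD. All data here are invented
# toy examples for illustration only.
import numpy as np

vocab = ["bank", "money", "loan", "river", "water", "shore"]
documents = [
    "the bank approved the loan and moved the money",
    "money was deposited at the bank",
    "the loan required money from the bank",
    "the river water rose over the shore",
    "we walked along the river shore",
    "the water near the bank of the river was cold",
]

# Word-by-document counts (rows = words, columns = documents).
counts = np.array(
    [[doc.split().count(w) for doc in documents] for w in vocab], dtype=float
)

# Truncated SVD: keep k latent dimensions; each word becomes a k-dimensional
# vector whose direction reflects its contexts of use rather than a discrete sense.
k = 2
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
word_vectors = U[:, :k] * S[:k]  # one row per vocabulary word

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

vec = {w: word_vectors[i] for i, w in enumerate(vocab)}
print("bank ~ money:", round(cosine(vec["bank"], vec["money"]), 3))
print("bank ~ river:", round(cosine(vec["bank"], vec["river"]), 3))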
Psycholinguistically motivated symbolic resources like WordNet—a computational implementation of a large-scale mental lexicon—are also being adopted by many AI researchers (Fellbaum, 1998). The primary use of WordNet within the AI community is as a tool for scaling up NLP systems, without necessarily adopting the psycholinguistic principles on which it is based and without making claims of cognitive plausibility for the resulting systems. Interestingly, George Miller, the cognitive scientist leading the development of WordNet, laments the fact that WordNet is not being used more extensively by the psycholinguistic community. However, since psycholinguists are not usually concerned with the development of large-scale systems, and since WordNet, like many other computational implementations of cognitive theories, has had to make some admittedly non-psychological choices, it has not had a major impact on that community.
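As a small illustration of the kind of lexical knowledge NLP systems draw from WordNet, the sketch below lists the senses WordNet records for a word and walks up the hypernym (is-a) hierarchy of one of them. The use of Python and the NLTK interface to WordNet is an assumption of this example; neither is mentioned in the text above.

# Illustrative sketch of consulting WordNet as a large-scale lexical resource.
# Assumes NLTK is installed; the WordNet data must be downloaded once.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# Enumerate the senses (synsets) WordNet lists for a word, with their glosses.
for synset in wn.synsets("bank", pos=wn.NOUN):
    print(synset.name(), "-", synset.definition())

# Climb the hypernym (is-a) hierarchy from one sense toward more general
# concepts, the kind of taxonomic knowledge systems use for generalization.
sense = wn.synset("bank.n.02")  # typically the financial-institution sense
chain = sense.hypernym_paths()[0]
print(" -> ".join(s.name() for s in chain))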
As these examples attempt to show, AI researchers have historically paid attention—if somewhat reluctantly—to the major trends in cognitive science and will continue to do so. But AI researchers must adapt the products of cognitive science to their own research agenda—the development of intelligent, functional computer systems. AI researchers are unlikely to spend years of research exploring specific cognitive phenomena; such research does not lead to the development of functional systems. However, given the complexity of the systems they are building, AI researchers should seek awareness of advances in cognitive science that point the way toward computational implementation of more humanly plausible systems.

An awareness of cognitive science research is especially important for the development of functional NLP systems. The search space of possible solutions to the development of NLP systems is huge (perhaps infinite). To date, most systems have been searching the part of that space consistent with our basic understanding of computation and symbol manipulation. But if philosophers like Searle (1980) and cognitive scientists like Harnad (1990, 1992) are right, ungrounded symbol systems will only prove suitable for solving a limited range of problems. Ultimately, our symbol systems will need to be grounded if they are to display the full range of human behavior and intelligence. Philosophers like Prinz (2002) and psychologists like Barsalou (1999) and Zwaan (2004) are exploring the implications of the perceptual grounding of symbols, and their research could well have important implications for NLP systems. Their research may open up additional subspaces in the search space for solutions to NLP problems that are ripe with interesting possibilities to be explored. To the extent that research in cognitive science is able to focus the search for solutions on fruitful paths and to prune the search tree by eliminating non-productive branches, it could actually facilitate the development of functional systems.

This search argument hinges on the assumption that cognitively implausible systems are unlikely to be able to mimic human input/output behavior in a domain as complex and human-centric as language processing. It is the assumption that NLP systems should not be cognitive black boxes (that cognitive science is relevant to AI problems), and that we are more likely to be successful in developing NLP systems by modeling the human cognitive behavior inside the box than by applying computational mechanisms that only attempt to mimic input/output behavior.

Cognitive and Computational Constraints

Having argued for the adoption of cognitive constraints in the development of large-scale functional NLP systems, it must be admitted that there are few successes to date, and very few researchers are even engaged in this line of research. An obvious way to apply cognitive constraints is to develop NLP systems within a cognitive architecture like ACT-R (Anderson et al., 2004; Anderson and Lebiere, 1998) or Soar (Rosenbloom et al., 1993). NL-Soar (Lehman et al., 1995) is one of the very few NLP systems developed in a cognitive architecture. NL-Soar was used in the TacAir-Soar project (Laird et al., 1998) to provide natural language capabilities to synthetic agents that participated as enemy aircraft in a Tactical Air simulation. NL-Soar and TacAir-Soar were among the first successful uses of a cognitive architecture to build functional agents with language capabilities. However, during the course of the TacAir-Soar project, cognitive plausibility was deemphasized in the interest of developing a functional system within the time constraints of the project. Even within a cognitive architecture it is possible to build cognitively implausible systems.

The AutoTutor system (Graesser et al., 2001) is another example of an NLP system influenced by cognitive science research. AutoTutor is an intelligent tutor that helps students learn how to solve physics problems. Although AutoTutor is not implemented in a cognitive architecture, it is based on extensive psycholinguistic research in discourse processing (Graesser et al., 2003), and it makes use of LSA to assess the meaning of student responses that cannot be fully processed by the higher-level language understanding component. A key feature of AutoTutor is the integration
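As a closing illustration of what applying cognitive constraints within a cognitive architecture involves at the mechanism level, the sketch below implements a single recognize-act cycle over condition-action rules and a declarative working memory; this is the basic control structure that architectures like ACT-R and Soar elaborate with many additional, cognitively motivated constraints. The rules, the representations, and the Python implementation are invented for this example and are not drawn from NL-Soar, TacAir-Soar, or AutoTutor.

# Toy production-system sketch: condition-action rules fire against a
# declarative working memory until quiescence. Invented for illustration;
# not the mechanism of any system cited in the text.

# Working memory holds simple facts as tuples.
working_memory = {("word", 1, "the"), ("word", 2, "dog"), ("word", 3, "barks")}

LEXICON = {"the": "det", "dog": "noun", "barks": "verb"}

def lexical_access(wm):
    """If a word has no lexical category yet, retrieve one from the lexicon."""
    new = set()
    for _, pos, form in [f for f in wm if f[0] == "word"]:
        if ("cat", pos, LEXICON[form]) not in wm:
            new.add(("cat", pos, LEXICON[form]))
    return new

def build_np(wm):
    """If a determiner is immediately followed by a noun, build an NP chunk."""
    new = set()
    for _, pos, _cat in [f for f in wm if f[0] == "cat" and f[2] == "det"]:
        if ("cat", pos + 1, "noun") in wm and ("np", pos, pos + 1) not in wm:
            new.add(("np", pos, pos + 1))
    return new

rules = [lexical_access, build_np]

# Recognize-act cycle: fire all rules, add their results, stop at quiescence.
changed = True
while changed:
    additions = set().union(*(rule(working_memory) for rule in rules))
    changed = not additions.issubset(working_memory)
    working_memory |= additions

for fact in sorted(working_memory):
    print(fact)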

References

[1] Jerry T. Ball et al. A Cognitively Plausible Model of Language Comprehension. 2004.
[2] Azriel Rosenfeld et al. Computer Vision and Image Processing. 1992.
[3] T. Landauer et al. A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge. 1997.
[4] C. Lebiere et al. The Atomic Components of Thought. 1998.
[5] J. Prinz. Furnishing the Mind: Concepts and Their Perceptual Basis. 2004.
[6] Christiane Fellbaum et al. Book Reviews: WordNet: An Electronic Lexical Database. 1999, Computational Linguistics.
[7] John R. Searle et al. Minds, brains, and programs. 1980, Behavioral and Brain Sciences.
[8] N. Cowan. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. 2001, Behavioral and Brain Sciences.
[9] Thomas G. Bever et al. Sentence Comprehension: The Integration of Habits and Rules. 2001.
[10] Rolf A. Zwaan. The Immersed Experiencer: Toward an Embodied Theory of Language Comprehension. 2003.
[11] Stevan Harnad. The Symbol Grounding Problem. 1999, arXiv.
[12] S. Pinker et al. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. 1988, Cognition.
[13] Mark Huckvale. 10 Things Engineers Have Discovered about Speech Recognition. 1997.
[14] Deb Roy et al. Grounded Situation Models for Robots: Bridging Language, Perception, and Action. 2005.
[15] John R. Anderson et al. An integrated theory of the mind. 2004, Psychological Review.
[16] Margaret A. Boden et al. The Philosophy of Artificial Intelligence. 1990, Oxford Readings in Philosophy.
[17] Stephani Foraker et al. Memory structures that subserve sentence comprehension. 2003.
[18] L. Barsalou et al. Whither structured representation? 1999, Behavioral and Brain Sciences.
[19] S. Harnad. Connecting Object to Symbol in Modeling Cognition. 1992.
[20] Louis ten Bosch et al. How Should a Speech Recognizer Work? 2005, Cognitive Science.
[21] Stuart M. Rodgers et al. Integrating ACT-R and Cyc in a large-scale model of language comprehension for use in intelligent agents. 2004.
[22] Arthur C. Graesser et al. Introduction to the Handbook of Discourse Processes. 2003.
[23] John E. Laird et al. The Soar Papers: Research on Integrated Intelligence. 1993.
[24] Mark Huckvale et al. Opportunities for re-convergence of engineering and cognitive science accounts of spoken word recognition. 1998.
[25] G. Seagrim. Furnishing the Mind. 1980.
[26] A. Graesser et al. Handbook of Discourse Processes. 2003.
[27] James L. McClelland. Parallel Distributed Processing. 2005.
[28] Arthur C. Graesser et al. Intelligent Tutoring Systems with Conversational Dialogue. 2001, AI Magazine.