DECIDER-1: A System that Chooses Among Different Types of Acts

This paper examines a programmed model (called DECIDER-1) that 1) recognizes scenes of things, among which are a) objects and b) words that form commands (or questions, or other types of statements), 2) recognizes the import of these commands, 3) decides whether to obey a command, and then 4) uses the command to guide the consequent actions, along with any necessary perceptual search. It uses the same mechanisms to handle a) the perceptual processes involved in recognizing objects and describing scenes, b) the linguistic processes involved in parsing sentences and understanding their meaning, and c) the retrieval processes needed to access pertinent facts in memory. This is in sharp contrast to most of today's systems, which receive a command through one channel, to be "understood" by a special-purpose set of routines, and perceive their environment through an entirely different channel. DECIDER-1 continues to characterize patterns, parse symbol strings, and access facts implied by input questions until this search through the memory net sufficiently implies an action; it then executes that action. Possible actions include Answering, Describing, Finding, Moving, and Naming.
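
The following is a minimal sketch of the decision loop described above: a single stream of recognized things (objects and command words) accumulates evidence through a memory net until some action is sufficiently implied, and that action is then executed. The memory-net encoding, the additive scoring, the threshold value, and all identifiers here are illustrative assumptions, not details taken from DECIDER-1.

```python
# Illustrative sketch only: the memory-net representation, scoring, and
# threshold are assumptions, not details from the DECIDER-1 paper.
from dataclasses import dataclass, field

ACTIONS = ("Answer", "Describe", "Find", "Move", "Name")

@dataclass
class MemoryNet:
    # Hypothetical encoding: each recognized token (object or word) is
    # linked to the actions it lends support to, with a weight.
    links: dict = field(default_factory=dict)  # token -> {action: weight}

    def implications(self, token):
        return self.links.get(token, {})

def decide(scene_tokens, net, threshold=5.0):
    """Accumulate evidence for actions until one is sufficiently implied.

    scene_tokens: stream of recognized things (objects and command words),
    standing in for the output of the system's perceptual mechanisms.
    """
    evidence = {a: 0.0 for a in ACTIONS}
    for token in scene_tokens:
        for action, weight in net.implications(token).items():
            evidence[action] += weight
        best = max(evidence, key=evidence.get)
        if evidence[best] >= threshold:   # action sufficiently implied
            return best, evidence
    return None, evidence                 # nothing implied strongly enough

# Usage sketch with a hypothetical command "PICKUP RED BLOCK"
net = MemoryNet(links={
    "PICKUP": {"Move": 3.0, "Find": 2.0},
    "BLOCK":  {"Find": 2.5, "Name": 1.0},
    "RED":    {"Find": 1.0},
})
action, scores = decide(["PICKUP", "RED", "BLOCK"], net)
print(action, scores)  # -> "Find" once its accumulated weight crosses the threshold
```

The point of the sketch is the single loop: perceptual tokens, parsed words, and retrieved facts all feed the same evidence-accumulation mechanism, rather than being handled by separate special-purpose channels.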
