Preventing False Inferences

1 Introduction

In cooperative man-machine interaction, it is taken as necessary that a system truthfully and informatively respond to a user's question. It is not, however, sufficient. In particular, if the system has reason to believe that its planned response might lead the user to draw an inference that it knows to be false, then it must block it by modifying or adding to its response. The problem is that a system neither can nor should explore all conclusions a user might possibly draw: its reasoning must be constrained in some systematic and well-motivated way. Such cooperative behavior was investigated in [5], in which a modification of Grice's Maxim of Quality is proposed:

Grice's Maxim of Quality: Do not say what you believe to be false or for which you lack adequate evidence.

Joshi's Revised Maxim of Quality: If you, the speaker, plan to say anything which may imply for the hearer something that you believe to be false, then provide further information to block it.

This behavior was studied in the context of interpreting certain definite noun phrases. In this paper, we investigate this revised principle as applied to question answering. In particular, the goals of the research described here are to:

1. characterize tractable cases in which the system as respondent (R) can anticipate the possibility of the user/questioner (Q) drawing false conclusions from its response, and can hence alter or expand its response so as to prevent it happening;

2. develop a formal method for computing the projected inferences that Q may draw from a particular response, identifying those factors whose presence or absence catalyzes the inferences;

3. enable the system to generate modifications of its response that can defuse possible false inferences and that may provide additional useful information as well.

Before we begin, it is important to see how this work differs from our related work on responding when the system notices a discrepancy between its beliefs and those of its user [7, 8, 9, 18]. For example, if a user asks "How many French students failed CSE121 last term?", he shows that he believes, inter alia, that the set of French students is non-empty, that there is a course CSE121, and that it was given last term. If the system simply answers "None", he will assume the system concurs with these beliefs, since the answer is consistent with them. Furthermore, he may conclude that French students do rather well in a difficult course. But this may be a false conclusion if the system doesn't hold all of those beliefs (e.g., it doesn't know of any French students). Thus while the system's assertion "No French students failed CSE121 last term" is true, it has misled the user (1) into believing that it concurs with the user's beliefs and (2) into drawing additional false conclusions from its response.³

The differences between this related work and the current enterprise are that:

1. It is not assumed in the current enterprise that there is any overt indication that the domain beliefs of the user are in any way at odds with those of the system.

2. In our related work, the user draws a false conclusion from what is said because the presuppositions of the response are not in accord with the system's beliefs (following a nice analysis in [10]). In the current enterprise, the user draws a false conclusion from what is said because the system's response behavior is not in accord with the user's expectations. It may or may not also be the case that the user's domain beliefs are at odds with the system's.

Notes:
1. This work is partially supported by NSF Grants MCS 81-07290, MCS 83-05221, and IST 83-11400.
2. At present visiting the Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104.
3. It is a feature of Kaplan's CO-OP system [7] that it points out the discrepancy by saying "I don't know of any French students".
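To make the intended behavior concrete, the following minimal sketch (in Python) illustrates the kind of check the Revised Maxim of Quality calls for, using the French-students example above: before returning a literally true answer, the respondent looks for presuppositions of the question that it does not itself share and expands the answer to block the false inference. This is an illustrative assumption of ours, not a mechanism described in the paper; the names Presupposition and plan_response, and the wording of the qualification, are hypothetical.

# Sketch: block a likely false inference by qualifying a literally true answer.
# All names and the qualification wording are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Presupposition:
    description: str      # e.g. "there are any French students"
    held_by_system: bool  # does the system itself believe this?


def plan_response(direct_answer: str, presuppositions: list[Presupposition]) -> str:
    """Return the direct answer, expanded to block inferences the system believes false."""
    blockers = [p.description for p in presuppositions if not p.held_by_system]
    if not blockers:
        return direct_answer
    # A bare answer would let the questioner assume the system shares all of the
    # question's presuppositions, so qualify the answer explicitly.
    return direct_answer + "; however, I do not know that " + " or that ".join(blockers)


if __name__ == "__main__":
    # The example above: "How many French students failed CSE121 last term?"
    presups = [
        Presupposition("there are any French students", held_by_system=False),
        Presupposition("CSE121 was given last term", held_by_system=True),
    ]
    print(plan_response("None", presups))
    # -> None; however, I do not know that there are any French students

The point of the sketch is only the shape of the computation: the respondent's answer is a function of both the direct answer and the mismatch between the question's presuppositions and the system's own beliefs.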

References

[1] Bonnie L. Webber et al. Varieties of User Misconceptions: Detection and Correction. IJCAI, 1983.

[2] E. Prince. Topicalization, Focus-Movement, and Yiddish-Movement: A Pragmatic Differentiation. 1981.

[3] Sandra Carberry et al. Tracking User Goals in an Information-Seeking Environment. AAAI, 1983.

[4] Candace L. Sidner et al. Focusing in the comprehension of definite anaphora. 1986.

[5] Barbara J. Grosz et al. The representation and use of focus in dialogue understanding. 1977.

[6] Raymond Reiter. A Logic for Default Reasoning. Artif. Intell., 1987.

[7] Kathleen F. McCoy. Correcting misconceptions: What to say when the user is mistaken. CHI '83, 1983.

[8] Eric Mays et al. Failures in Natural Language Systems: Applications to Data Base Query Systems. AAAI, 1980.

[9] Bonnie L. Webber et al. Living Up to Expectations: Computing Expert Responses. HLT, 1984.

[10] J. Meigs et al. WHO Technical Report. The Yale Journal of Biology and Medicine, 1954.

[11] Michael Brady et al. Cooperative Responses From a Portable Natural Language Database Query System. 1983.

[12] Gregory Ward et al. A pragmatic analysis of Epitomization: Topicalization it's not. 1983.

[13] Scott Weinstein et al. Providing a Unified Account of Definite Noun Phrases in Discourse. ACL, 1983.

[14] J. Allen. Recognizing intentions from natural language utterances. 1982.

[15] M. Brady et al. Recognizing Intentions From Natural Language Utterances. 1983.

[16] Julia Hirschberg et al. User Participation in the Reasoning Processes of Expert Systems. AAAI, 1982.

[17] Michael Brady et al. Computational Models of Discourse. 1983.

[18] Samuel Jerrold Kaplan et al. Cooperative responses from a portable natural language data base query system. 1979.