Some causal models are deeper than others
Abstract The effort within AI to improve the robustness of expert systems has led to increasing interest in 'deep' reasoning, that is, representing and reasoning about the knowledge that underlies the 'shallow' knowledge of traditional expert systems. One view holds that deep reasoning is equivalent to causal reasoning. By analyzing the causal reasoning of a particular medical AI system, we show that this view is naive. Specifically, we show that causal networks omit information relating structure to behavior, and that this information is needed for deeper reasoning. Our conclusion is that deepness is relative to the phenomena of interest: whether the representation describes the properties and relationships that mediate interactions among the phenomena, and whether the reasoning method takes this information into account.