Some causal models are deeper than others

Abstract

The effort within AI to improve the robustness of expert systems has led to increasing interest in ‘deep’ reasoning, that is, representing and reasoning about the knowledge that underlies the ‘shallow’ knowledge of traditional expert systems. One view holds that deep reasoning is equivalent to causal reasoning. By analyzing the causal reasoning of a particular medical AI system, we show that this view is naive. Specifically, we show that causal networks omit information relating structure to behavior, and that this information is needed for deeper reasoning. We conclude that deepness is relative to the phenomena of interest: a representation is deep with respect to those phenomena to the extent that it describes the properties and relationships that mediate interactions among them, and the reasoning method takes this information into account.