Analyzing Inconsistencies in Probabilistic Conditional Knowledge Bases using Continuous Inconsistency Measures

Probabilistic conditional logic is a knowledge representation formalism that uses probabilistic conditionals (if-then rules) to model uncertain and incomplete information. By applying the principle of maximum entropy, one can reason with a set of probabilistic conditionals in an information-theoretically optimal way, provided that the set is consistent. As in other fields of knowledge representation, consistency of probabilistic conditional knowledge bases is hard to ensure as their size grows or when multiple sources contribute pieces of information. In this paper, we discuss the problem of analyzing and measuring inconsistencies in probabilistic conditional logic by investigating inconsistency measures that support the knowledge engineer in maintaining a consistent knowledge base. An inconsistency measure assigns a numerical value to the severity of an inconsistency and can be used to guide the restoration of consistency. Previous work on measuring inconsistency considers only qualitative logics and is not apt for quantitative logics, because it assesses the severity of an inconsistency without taking the probabilities of the conditionals into account. Here, we investigate continuous inconsistency measures, which allow for a more fine-grained and continuous measurement.