What Awareness Isn't: A Sentential View of Implicit and Explicit Belief

In their attempt to model and reason about the beliefs of agents, artificial intelligence (AI) researchers have borrowed from two different philosophical traditions regarding the folk psychology of belief. In one tradition, belief is a relation between an agent and a proposition, that is, a propositional attitude. Formal analyses of propositional attitudes are often given in terms of a possible-worlds semantics. In the other tradition, belief is a relation between an agent and a sentence that expresses a proposition (the sentential approach). The arguments for and against these approaches are complicated, confusing, and often obscure and unintelligible (at least to this author). Nevertheless, both sides have strong supporters, not only in the philosophical arena (where one would expect it) but also in AI. In the latter field, some proponents of possible-worlds analysis have attempted to remedy what appears to be its biggest drawback, namely the assumption that an agent believes all the logical consequences of his or her beliefs (the problem of logical omniscience). Drawing on initial work by Levesque, Fagin and Halpern define a logic of general awareness that superimposes elements of the sentential approach on a possible-worlds framework. The result, they claim, is an appropriate model for resource-limited believers. We argue that this is a bad idea: the resulting logic amounts to a more complicated version of the sentential approach. In concluding, we cannot refrain from adding to the debate about the utility of possible-worlds analyses of belief.
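
For readers unfamiliar with the constructions at issue, a brief sketch may help. It follows the usual presentation of Fagin and Halpern's logic rather than anything quoted above, and the operator names (L_i for implicit belief, A_i for awareness, X_i for explicit belief) are the conventional ones, assumed here for illustration. In a possible-worlds (Kripke) semantics, belief is closed under logical consequence, which is the drawback mentioned above:

\[
  L_i\varphi \wedge L_i(\varphi \rightarrow \psi) \rightarrow L_i\psi,
  \qquad\text{and from } \models \varphi \text{ infer } \models L_i\varphi.
\]

In the logic of general awareness, each agent i is assigned at each world w an arbitrary set of sentences \(\mathcal{A}_i(w)\) (the sentential ingredient), and explicit belief is implicit belief filtered through awareness:

\[
  (M,w) \models A_i\varphi \iff \varphi \in \mathcal{A}_i(w),
  \qquad
  X_i\varphi \equiv L_i\varphi \wedge A_i\varphi.
\]

Because \(\mathcal{A}_i(w)\) is an unconstrained set of sentences, X_i inherits none of the closure properties of L_i; the contention of this paper is that the awareness set then does all the representational work, just as a belief set does in the sentential approach.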