Undirected behavior without unbounded search

The idea that defines the very heart of “traditional” AI is due to John McCarthy: his imagined ADVICE-TAKER [McCarthy 1959] was a system that would decide how to act (in part) by running formal reasoning procedures over a body of explicitly represented knowledge. The system would not so much be programmed for specific tasks as told what it needed to know and expected to infer the rest for itself. Knowledge and advice would be given declaratively, allowing the system to operate in an undirected manner, choosing what pieces of knowledge to apply when they appeared situationally appropriate. This vision contrasts sharply with that of the traditional programmed computer system, where what information is needed, and when, is anticipated in advance and embedded directly into the control structure of the program.

Although this vision has probably been the primary determinant of the course of AI since the late 1950s, many people have come to the conclusion that the approach does not scale up to handle realistic (“real-world”) settings. Logic and formal reasoning, it is said by some, succeed only on toy problems and should be utterly abandoned as the basis for intelligent systems. Recently we have seen a flurry of very different bottom-up approaches to AI, such as insect-like “subsumption” architectures, connectionist models, and others. Each of these has been hailed as a bold and radical departure, but each is in fact a somewhat nostalgic return to the good old pre-McCarthy days of trying to build brains from the bottom up.

But it is possible to take a less reactionary position: perhaps the problem is an overly naive version of the logicist approach, one that leans on unbounded search and weak methods (such as full-resolution theorem-proving). In fact, there are numerous examples of systems that handle real-world-scale problems based very firmly on the McCarthy philosophy.
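One concrete alternative to unguided theorem-proving is to restrict the representation language so that inference becomes tractable: for propositional Horn rules, for example, entailment can be computed by a simple forward-chaining fixed point rather than by general resolution. The following is a minimal sketch of that idea; the rule base and atom names are invented for illustration and are not drawn from any particular system:

```python
# Toy forward-chaining engine for propositional Horn rules.
# Because the language is restricted to Horn clauses, entailment is
# decidable by a simple fixed-point loop -- no unbounded search needed.

def forward_chain(rules, facts):
    """rules: list of (premises, conclusion) pairs; facts: initial atoms.
    Returns the set of all atoms derivable from the facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only if all its premises are known
            # and it would add something new.
            if conclusion not in known and set(premises) <= known:
                known.add(conclusion)
                changed = True
    return known

# Illustrative (invented) rule base:
rules = [
    (["bird", "healthy"], "can_fly"),
    (["penguin"], "bird"),
    (["can_fly", "migratory"], "migrates"),
]

derived = forward_chain(rules, ["penguin", "healthy", "migratory"])
# derived now contains "bird", "can_fly", and "migrates" as well.
```

Note the trade: the Horn restriction gives up expressive power (no disjunctive conclusions, no negation), but in exchange the fixed-point computation terminates quickly, rather than invoking a theorem prover of unbounded cost.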
AI has succeeded in delivering working systems that are clear descendants of McCarthy’s original vision, systems that customers well outside of AI and CS want and use. The key element in the success of these (many) AI systems is their ability to constrain search. By taking advantage of domain and problem constraints, they end up not requiring the full generality of unbounded search or unguided theorem-proving. This happens in a variety of ways: in some cases, the representation language used is expressively limited enough to restrict the power needed to perform inferences; in others, domain and task knowledge is used to guide the search, for example by annotat-