The PEAK Project

The PEAK project at the University of Alberta is a radical attempt to understand world knowledge in terms of a minimal ontology of sensori-motor experience. The acronym PEAK stands for Predictive Empirical Abstract Knowledge. Experience is defined as the time sequence of low-level signals passing back and forth between the AI agent and its world at some relatively fast rate, say 100 times a second. The signals passing from the world to the agent are termed sensations, and those passing from the agent to the world are termed actions. For concreteness, time is taken to be discrete. The minimal ontology is then exactly these three things: sensations, actions, and time steps. The PEAK project explores the hypothesis that all empirical knowledge can be precisely characterized as predictions about the relationships among these three things, without reference to any other concepts or entities except insofar as they themselves can be precisely characterized in terms of the minimal ontology.

The primary challenge to the PEAK hypothesis is the mismatch between low-level experience and human-level world knowledge as we normally think of it. The gap between even relatively simple concepts, such as that of a book or a chair, and low-level 100-times-a-second experience can seem immense. Thus the PEAK project is appropriately focused on the issue of abstraction. Its primary objective is to stretch our imagination through examples and implemented systems until bridging the abstraction gap seems possible and plausible.

Grounding knowledge in experience is extremely challenging, but may bring an equally extreme benefit. Representing knowledge in terms of experience enables it to be compared with experience: knowledge imparted by human experts can be verified or disproved by this comparison, existing knowledge can be tuned, and new knowledge can be created (learned). The overall effect is that the AI agent may be able to take much more responsibility for maintaining and organizing its knowledge.

The ability of an AI system to self-verify its knowledge is indeed a substantial benefit. While a large amount of knowledge is a great strength of AI systems, it is also a great weakness. The problem is that as knowledge bases grow they become brittle and difficult to maintain. There arise inconsistencies in the terminology used by different people or at
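
To make the minimal ontology concrete, the following is a small sketch under the assumption that experience can be modeled as a discrete-time sequence of action-sensation pairs, and that a piece of knowledge can be expressed as a prediction checkable against that sequence. The names (Step, Prediction, verify) and the particular form a prediction takes here are illustrative choices, not part of the PEAK project itself.

from dataclasses import dataclass
from typing import Callable, List, Sequence

Sensation = float   # low-level signal passing from the world to the agent each step
Action = float      # low-level signal passing from the agent to the world each step

@dataclass
class Step:
    # One time step of experience: the action emitted and the sensation received.
    action: Action
    sensation: Sensation

@dataclass
class Prediction:
    # A piece of empirical knowledge stated only in terms of the minimal ontology:
    # whenever `condition` holds of the experience so far, the sensation `horizon`
    # steps later is expected to equal `expected(history)`, within `tolerance`.
    # (This representation is an assumption made for illustration.)
    condition: Callable[[Sequence[Step]], bool]
    horizon: int
    expected: Callable[[Sequence[Step]], Sensation]
    tolerance: float = 0.05

def verify(prediction: Prediction, experience: List[Step]) -> float:
    # Compare the prediction against recorded experience and return the fraction
    # of applicable cases in which it held (nan if it never applied).
    hits, total = 0, 0
    for t in range(len(experience) - prediction.horizon):
        history = experience[: t + 1]
        if not prediction.condition(history):
            continue
        total += 1
        actual = experience[t + prediction.horizon].sensation
        if abs(actual - prediction.expected(history)) <= prediction.tolerance:
            hits += 1
    return hits / total if total else float("nan")

A prediction stated this way commits to specific sensations at specific future time steps, so a stream of recorded experience is enough to confirm, refute, or tune it without appeal to any concepts outside the minimal ontology.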