On intelligence as memory

On Intelligence by Jeff Hawkins with Sandra Blakeslee has been inspirational for nonscientists as well as for some of our most distinguished biologists, as can be seen from the website (http://www.onintelligence.org). The book is engagingly written as a first-person memoir of one computer engineer’s search for enlightenment on how human intelligence is computed by our brains. The central insight is important: much of our intelligence comes from the ability to recognize complex situations and to predict their possible outcomes. There is something fundamental about the brain and neural computation that makes us intelligent, and AI should be studying it.

Hawkins actually understates the power of human associative memory. Because of its massive parallelism and connectivity, the brain essentially reconfigures itself to be constantly sensitive to the current context and goals [1]. For example, when you are planning to buy a particular kind of car, you start noticing cars of that kind everywhere. The book is surely right that better AI systems would follow if we could develop programs that were more like human memory. For whatever reason, memory as such is no longer studied much in AI; the Russell and Norvig [3] text has a single index entry for memory, and that refers to semantics.

From a scientific AI/Cognitive Science perspective, however, the book fails to tackle most of the questions of interest. It is certainly true that vision, motor control, language, planning, learning, etc. involve recognizing new situations as similar to known ones, but memory alone does not address any of the core issues in these areas. Scientists (whether in AI or biology) study particular phenomena such as color, grammar, feedback, evidence combination, and neural development. The book should be read as suggesting how one essential component of intelligence might be realized by the human brain as we understand it.