Reply to Lam

Lighthill's division of Artificial Intelligence into three categories, Advanced Automation (A), Bridge Building (B) and Central Nervous System (C), is of dubious utility to anyone other than Lighthill himself. As an example, consider rule-based expert systems. Lam and Lighthill would place these firmly in Category A. Thus placed, their indisputable success cannot be used to defend mainstream AI (Category B) against its detractors. Yet production rules have a very mixed ancestry. Their mathematical origins can be traced back to Emil Post's canonical systems, but they have also seen application and embellishment in computer science (e.g. in the analysis of algorithms) and in linguistics (e.g. in Chomsky's work on formal grammar). Newell and Simon then used this formalism as a basis for modelling human problem solving, and developed techniques of protocol analysis that contributed to today's knowledge elicitation practices. Undoubtedly this work inspired the subsequent efforts at Stanford that led to DENDRAL and MYCIN. Meanwhile, Forgy's work on RETE pattern matching made rule interpreters run in reasonable time, and his OPS architecture provided conflict resolution strategies that made programs follow more anthropomorphic lines of reasoning. That, and not "the accessibility of high-speed computation", is why XCON and XSEL can handle 20,000-30,000 rules gracefully.

The point is that rule-based systems owe substantial intellectual debts to advances in mathematics (A), computer science (A), linguistics (C) and psychology (C). More than that, however, they owe a debt to the despised Bridge Builders (B) who put these ideas together and made them work. If Newell and Simon aren't mainstream AI heroes, then who is? One could repeat this analysis for many other significant developments since 1972, such as neural networks and inductive learning programs. Lam seems determined to misunderstand the synergy of mathematics, computer science, engineering, AI and psychology that gave rise to current work on parallel distributed processing, and it suits his purposes to underestimate its importance. He is also careful to omit any reference to the substantial progress in other areas of machine learning that was made in the 1980s and is now well documented. Acknowledging any of these achievements would undermine his assertion that AI is always "assisted", and that this assistance constitutes a redefinition of AI.

In fact, the goals of AI have changed surprisingly little in the last 20 years, and the distinction between "pure" and "assisted" AI is a red herring, because the line cannot be drawn consistently. How would you distinguish between "pure" and "assisted" human intelligence, given that we all have parents, read books, take courses, accept advice, use machines, and so on?

Unfortunately, Establishment "experts" are not well placed to understand interdisciplinary developments. Intellectual disciplines are not created, partitioned or destroyed by Research Councils; they are formed by patterns of interaction between individual scientists in an international forum of information exchange. Lighthill's attempt to fashion an emerging research area in his own image revealed his own limitations far more than it revealed the limitations of AI.