Statistical approaches to Artificial Intelligence are behind most of the field's success stories of the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things. As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence?

Machine Intelligence, AD 1958

On November 23rd, 1958, a diverse group of scientists from all around the world and from many disciplines gathered near London for a conference that lasted four days and involved about 200 people. The topic was: can machines think? The conference was called "On the Mechanisation of Thought Processes", and its proceedings encapsulate the zeitgeist of those days and give us a chance to reflect on the achievements and directions of research in Machine Intelligence.

That group of engineers, biologists and mathematicians represented both the early ideas of Cybernetics and the newly emerging ideas of Artificial Intelligence. They were brought together by the common vision that mental processes can be created in machines. Their conviction was that natural intelligence could be understood in the light of the laws of science, a position spelled out in Alan Turing's 1948 paper "Intelligent Machinery" [11]. They also believed that it could be reproduced in artefacts. Their common goals were clearly stated: understanding intelligent behaviour in natural systems and creating it in machines.

The key challenges were identified and named in the Preface of the proceedings: "This symposium was held to bring together scientists studying artificial thinking, character and pattern recognition, learning, mechanical language translation, biology, automatic programming, industrial planning and clerical mechanisation. It was felt that a common theme in all these fields was 'the mechanisation of thought processes' and that an interchange of ideas between these specialists would be very valuable".

A further look at the two volumes of the proceedings reveals a general organisation that is still found in modern meetings in this area. Sessions were devoted to: General Principles; Automatic Programming; Mechanical Language Translation; Speech Recognition; Learning in Machines; Implications for Biology; Implications for Industry.

The list of participants included both members of the Cybernetics movement (from both the UK Ratio Club and the US Macy Conferences) and exponents of the newly growing AI movement. It included Frank Rosenblatt (inventor of the Perceptron); Arthur Samuel (author of the first machine learning program, a checkers player); Marvin Minsky (one of the founding fathers of AI); Oliver Selfridge (inventor of the Pandemonium architecture, a paradigm for modern agent systems); John McCarthy (inventor of LISP, and of the name Artificial Intelligence); Donald MacKay (cyberneticist); and Warren McCulloch (co-author of the first mathematical model of the neuron).
References

[1] D. Haussler et al., "A hidden Markov model that finds genes in E. coli DNA", Nucleic Acids Research, 1994.
[2] P. Husbands et al., "The Ratio Club: a hub of British cybernetics", 2008.
[3] N. Cristianini et al., An Introduction to Support Vector Machines, 2000.
[4] V. N. Vapnik, The Nature of Statistical Learning Theory, Statistics for Engineering and Information Science, 2000.
[5] R. L. Mercer et al., "The Mathematics of Statistical Machine Translation: Parameter Estimation", Computational Linguistics, 1993.
[6] R. A. Brooks, "Intelligence without Representation", Artificial Intelligence, 1991.