Introduction to Machine Learning

The goal of machine learning is to program computers to use example data or past experience to solve a given problem. Many successful applications of machine learning already exist, including systems that analyze past sales data to predict customer behavior, optimize robot behavior so that a task can be completed using minimum resources, and extract knowledge from bioinformatics data.

Introduction to Machine Learning is a comprehensive textbook on the subject, covering a broad array of topics not usually included in introductory machine learning texts. To present a unified treatment of machine learning problems and solutions, it discusses methods from many fields, including statistics, pattern recognition, neural networks, artificial intelligence, signal processing, control, and data mining. All learning algorithms are explained so that the student can easily move from the equations in the book to a computer program.

The text covers such topics as supervised learning, Bayesian decision theory, parametric methods, multivariate methods, multilayer perceptrons, local models, hidden Markov models, assessing and comparing classification algorithms, and reinforcement learning. New to the second edition are chapters on kernel machines, graphical models, and Bayesian estimation; expanded coverage of statistical tests in a chapter on the design and analysis of machine learning experiments; case studies available on the Web (with downloadable results for instructors); and many additional exercises. All chapters have been revised and updated.

Introduction to Machine Learning can be used by advanced undergraduates and graduate students who have completed courses in computer programming, probability, calculus, and linear algebra. It will also be of interest to engineers in the field who are concerned with the application of machine learning methods. The book is part of the Adaptive Computation and Machine Learning series.
