Improving an Adaptive Image Interpretation System by Leveraging

Automated image interpretation is an important task in numerous applications, ranging from security systems to remote-sensing-based natural resource inventories. Recently, a second-generation adaptive, machine-learned image interpretation system (ADORE) has shown expert-level performance in several challenging domains. Its extension, MR ADORE, aims to remove the last vestiges of human intervention still present in the original design of ADORE. Both systems treat image interpretation as a sequential decision-making process guided by a machine-learned heuristic value function. This paper employs a new leveraging algorithm for regression (RESLEV) to improve the learnability of the heuristics in MR ADORE. Experiments show that RESLEV improves the system's performance when the base learners are weak. Further analysis reveals a difference between regression and decision-making problems and suggests an interesting research direction.
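Since the abstract describes leveraging as a way to strengthen weak base regressors when learning the heuristic value function, a small sketch may help fix ideas. RESLEV's actual update rule is not given here, so the following is only a generic residual-fitting illustration in the gradient-boosting style; the function names (leverage_regressors, predict_ensemble), the choice of shallow decision trees as base learners, and all parameter values are assumptions for illustration.

```python
# A minimal sketch of leveraging for regression, assuming a
# residual-fitting scheme in the style of gradient boosting
# (Friedman, 2001). RESLEV's actual update rule is not specified
# in this abstract; names and parameters here are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def leverage_regressors(X, y, n_rounds=50, step=0.1, max_depth=2):
    """Build an ensemble of weak regressors, each fit to the
    residuals left by the combined predictions so far."""
    ensemble = []
    residuals = np.asarray(y, dtype=float).copy()
    for _ in range(n_rounds):
        base = DecisionTreeRegressor(max_depth=max_depth)  # weak base learner
        base.fit(X, residuals)
        ensemble.append((step, base))
        residuals -= step * base.predict(X)  # shrink what remains to explain
    return ensemble


def predict_ensemble(ensemble, X):
    """The leveraged prediction is the weighted sum of base outputs."""
    return sum(s * b.predict(X) for s, b in ensemble)
```

In MR ADORE's setting, X would encode features of states along the interpretation sequence and y the target heuristic values; the leveraged ensemble would then stand in for a single weak regressor as the learned value function.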
