The missing information principle in computer vision

Central problems in the field of computer vision are learning object models from examples, classification, and localization of objects. In this paper we motivate the use of a classical statistical approach to these problems: the missing information principle. Based on this general technique we derive the Expectation-Maximization (EM) algorithm and deduce statistical methods for learning objects from invariant features using Hidden Markov Models, and from non-invariant features using Gaussian mixture density functions. The derived training algorithms also cover the problem of learning 3D objects from two-dimensional views. Furthermore, it is shown how the position and orientation of a three-dimensional object can be computed. The paper concludes with experimental results.
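To make the EM idea concrete, the following is a minimal sketch of EM for a one-dimensional Gaussian mixture density: the E-step computes posterior responsibilities of each component for each observation (the "missing information"), and the M-step re-estimates weights, means, and variances from them. The function name `em_gmm_1d`, the quantile-based initialisation, and all numerical details are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def em_gmm_1d(data, k=2, iters=50):
    """Fit a k-component 1D Gaussian mixture to `data` via EM (illustrative sketch)."""
    n = len(data)
    sorted_d = sorted(data)
    # Initialise means at evenly spaced quantiles; unit variances, uniform weights.
    means = [sorted_d[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    vars_ = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, vars_)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: re-estimate parameters from the responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            vars_[j] = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, data)) / nj
            vars_[j] = max(vars_[j], 1e-6)  # guard against variance collapse
            weights[j] = nj / n
    return weights, means, vars_

# Toy data: two well-separated clusters around 0 and 10.
rng = random.Random(1)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(10.0, 1.0) for _ in range(200)])
weights, means, vars_ = em_gmm_1d(data, k=2)
print(sorted(means))
```

On this toy data the estimated means converge near the two cluster centres; each EM iteration is guaranteed not to decrease the data likelihood, which is the property the missing information principle exploits.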