Disparity space: A parameterisation for Bayesian triangulation from multiple cameras

Estimating the position of an object from camera observations is a key requirement in many computer vision and robotics applications. In sensor fusion applications, where data from multiple observations and cameras are integrated over time and the uncertainty of the state estimate must be quantified, a Bayesian approach is more suitable than the usual maximum likelihood approach. This paper presents a method for Bayesian triangulation from multiple arbitrarily oriented cameras. Because Bayesian triangulation in the world co-ordinate frame is complicated by the large uncertainty in the object's distance from the camera, we instead adopt a different parameterisation, which we call disparity space. We compare our approach with an alternative parameterisation known as inverse depth. Our simulation results demonstrate better estimation accuracy when using disparity space.
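The intuition behind the choice of parameterisation can be sketched with a standard stereo disparity model, in which a point at depth Z maps to pixel coordinates and a disparity d = fb/Z. The paper's own definition of disparity space appears in later sections; the focal length, baseline, noise level, and function names below are illustrative assumptions, not values or code from the paper. The sketch shows why Gaussian measurement noise that is well behaved in disparity induces a skewed, biased depth distribution in the world frame:

```python
import numpy as np

# Hypothetical camera parameters (illustrative only, not from the paper).
f = 500.0   # focal length [px]
b = 0.12    # stereo baseline [m]

def to_disparity_space(X, Y, Z):
    """Map a 3D point in the camera frame to (u, v, d): pixel coords + disparity."""
    return np.array([f * X / Z, f * Y / Z, f * b / Z])

def to_world(u, v, d):
    """Invert the mapping: recover (X, Y, Z) from (u, v, d)."""
    Z = f * b / d
    return np.array([u * Z / f, v * Z / f, Z])

# A point 10 m away has a small true disparity (fb/Z = 6 px here),
# so its recovered depth is very sensitive to disparity noise.
u, v, d = to_disparity_space(1.0, 0.5, 10.0)

# Gaussian noise in disparity space (0.5 px std dev, an assumed value)
# stays Gaussian there, but the induced depth distribution is skewed.
rng = np.random.default_rng(0)
d_noisy = d + rng.normal(0.0, 0.5, size=100_000)
d_noisy = d_noisy[d_noisy > 0]          # keep physically valid samples
Z_noisy = f * b / d_noisy

print("true depth: 10.0 m")
print(f"mean recovered depth: {Z_noisy.mean():.2f} m (biased upward)")
print(f"depth skewness: {((Z_noisy - Z_noisy.mean())**3).mean() / Z_noisy.std()**3:.2f}")
```

By Jensen's inequality the mean of fb/d exceeds fb divided by the mean of d, so the Monte Carlo depth estimate is biased away from the camera and heavily right-skewed; a Gaussian posterior maintained directly over world co-ordinates would model this poorly, which is the difficulty the disparity-space parameterisation is intended to avoid.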