Distributed Coding of Spherical Images with Jointly Refined Decoding

This work addresses the coding of 3-dimensional scenes captured by distributed vision sensors with catadioptric cameras. Spherical images avoid the distortion induced by the common Euclidean (planar) assumption in the representation of the plenoptic function. We consider low-complexity encoding of the sensor outputs, in a framework where the cameras can be placed anywhere in the scene and the sensors do not communicate with each other. Since multiple spherical images of the same scene are likely to provide a redundant representation, we propose assigning different compression ratios to different cameras in order to reduce the information overhead. The decoder performs a joint decoding of the multiple images via motion estimation, followed by joint refinement through consistent inverse quantization. We finally show that, even in the absence of any information about the scene or the positions of the cameras, the proposed scheme outperforms independent encoding of the spherical images, especially at low coding rates.
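The refinement step by consistent inverse quantization can be illustrated in a simplified scalar setting. This is only a sketch of the general principle, not the paper's actual scheme (which operates on spherical images after motion compensation): each coarsely quantized description of the same value implies a quantization interval, and a consistent reconstruction is taken from the intersection of those intervals. All function names below are illustrative.

```python
def quant_interval(index, step):
    # Uniform mid-tread quantizer: index i covers
    # [(i - 0.5) * step, (i + 0.5) * step).
    return (index - 0.5) * step, (index + 0.5) * step

def consistent_dequantize(indices_steps):
    """Jointly reconstruct one value from several (index, step) descriptions
    by intersecting the quantization intervals they imply."""
    lo = max(quant_interval(i, s)[0] for i, s in indices_steps)
    hi = min(quant_interval(i, s)[1] for i, s in indices_steps)
    if lo > hi:
        # Inconsistent descriptions (e.g. due to motion-estimation errors):
        # fall back to the reconstruction of the finest quantizer alone.
        i, s = min(indices_steps, key=lambda t: t[1])
        return i * s
    # Midpoint of the consistent interval; never worse than the finest
    # description alone when the intervals genuinely overlap.
    return 0.5 * (lo + hi)

# Example: the value 0.37 coded with steps 0.1 (index 4) and 0.25 (index 1).
# The finest description alone reconstructs 0.4; intersecting the intervals
# [0.35, 0.45) and [0.125, 0.375) yields a tighter estimate.
joint = consistent_dequantize([(4, 0.1), (1, 0.25)])
```

Even the very coarse second description narrows the reconstruction interval, which is the intuition behind spending fewer bits on some cameras and recovering the loss at the joint decoder.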