Demo: Satellites in Our Pockets: An Object Positioning System Using Smartphones

We attempt to localize a distant object by looking at it through a smartphone. As an example use case, while driving on a highway entering New York, we want to look at one of the skyscrapers through the smartphone camera and compute its GPS location. While the problem would have been far more difficult five years ago, the growing number of sensors on smartphones, combined with advances in computer vision, has opened up important opportunities. We harness these opportunities through a system called Object Positioning System (OPS) [1], which achieves reasonable localization accuracy. Our core technique uses computer vision to create an approximate 3D structure of the object and camera, and uses mobile phone sensors to scale and rotate the structure to its absolute configuration. Then, by solving (nonlinear) optimizations on the residual (scaling and rotation) error, we ultimately estimate the object's GPS position.

We present a demonstration of OPS, a system to appear in the MobiSys 2012 main conference. The user is expected to bring the object of interest near the center of her viewfinder and take as few as four photographs; GPS and compass readings are recorded during the process. The photographs can be separated by a few steps from each other in any direction. OPS uses Structure from Motion to extract keypoints across the photographs and compute a 3D structure composed of the object and the camera locations. Finally, OPS minimizes errors in the GPS and compass readings with the help of the 3D structure to converge on the object's location, as sketched in the example below.
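To make the alignment step concrete, the following is a minimal sketch, not the authors' implementation: given camera positions recovered by Structure from Motion in an arbitrary frame, and the corresponding GPS fixes converted to a local east-north-up (ENU) frame in meters, we fit the similarity transform (scale, heading rotation, translation) that best maps the reconstructed frame onto the world frame via nonlinear least squares, then apply it to the reconstructed object point. All function and variable names here are illustrative assumptions, and the sketch omits the compass residuals that OPS additionally folds into its optimization.

```python
# Hypothetical sketch of the OPS alignment step (not the published code).
import numpy as np
from scipy.optimize import least_squares

def rot_z(theta):
    """Rotation about the vertical axis (heading) in a local ENU frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def residuals(params, sfm_cams, gps_enu):
    """Misfit between transformed SfM camera positions and GPS fixes."""
    log_scale, theta, tx, ty, tz = params
    s = np.exp(log_scale)                      # parameterize scale as exp() to keep it positive
    t = np.array([tx, ty, tz])
    pred = (s * (rot_z(theta) @ sfm_cams.T)).T + t
    return (pred - gps_enu).ravel()

def localize_object(sfm_cams, sfm_object, gps_enu):
    """
    sfm_cams:   (N, 3) camera positions from Structure from Motion (arbitrary frame)
    sfm_object: (3,)   reconstructed object point in the same frame
    gps_enu:    (N, 3) GPS fixes for the cameras, in local ENU meters
    Returns the estimated object position in the ENU frame.
    """
    x0 = np.zeros(5)                           # start: unit scale, no rotation, no shift
    fit = least_squares(residuals, x0, args=(sfm_cams, gps_enu))
    log_scale, theta, tx, ty, tz = fit.x
    s, t = np.exp(log_scale), np.array([tx, ty, tz])
    return s * (rot_z(theta) @ sfm_object) + t

if __name__ == "__main__":
    # Toy usage: four camera positions a few steps apart, as in the demo.
    rng = np.random.default_rng(0)
    sfm_cams = np.array([[0, 0, 0], [1, 0, 0], [2, 0.5, 0], [3, 0.2, 0]], float)
    true_s, true_th, true_t = 2.0, 0.3, np.array([10.0, 5.0, 0.0])
    gps_enu = (true_s * (rot_z(true_th) @ sfm_cams.T)).T + true_t
    gps_enu += rng.normal(scale=0.5, size=gps_enu.shape)   # simulated GPS noise
    sfm_object = np.array([1.5, 40.0, 3.0])                # distant building point
    print(localize_object(sfm_cams, sfm_object, gps_enu))
```

The recovered ENU coordinates can then be converted back to latitude and longitude relative to the chosen ENU origin; with only four photographs taken a few steps apart, the optimization is small enough to run on the phone itself.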