Abstract

Continuous innovation in automotive lighting technology poses the problem of how to assess new headlight systems. For car manufacturers, assessment is mostly relative: given a headlight system to be tested, how does it compare with another, possibly from a different supplier, in terms of features such as light intensity, homogeneity, or reach? This comparison is best performed dynamically, by asking experts to actually drive along a certain test track and later write down the visual impressions they remember. However, this procedure suffers from several drawbacks: comparisons cannot be repeated, are not retrospective, and cannot be properly shared with other people, since the only record is a paper form. To overcome these drawbacks, it is proposed to record, for each headlight system, a video sequence of what the driver sees, using a camera attached to the windshield. The problem then becomes how to compare a pair of such sequences. Two issues must be addressed: the temporal alignment, or synchronization, of the two sequences, and then the spatial alignment, or registration, of all corresponding frames. In this paper, a semiautomatic but fast procedure for the former and an automatic method for the latter are proposed. In addition, an alternative joint visualization of corresponding frames, called the bird's-eye view transform, is explored, and a simple fusion technique for better visualization of the headlight differences between two sequences is proposed. Results are provided for a number of headlights with different light sources and from several vehicle brands, in the form of both still images and video sequences.
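Both the frame-to-frame registration and the bird's-eye view transform mentioned above rely on mapping image points through a planar homography. As a minimal illustration (not the paper's implementation), the sketch below applies a 3x3 homography, in row-major nested-list form, to a single pixel coordinate; the matrix itself is a hypothetical example, since in practice it would be estimated from ground-plane correspondences such as lane markings or a calibration pattern.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through the 3x3 homography H (row-major nested lists).

    The point is lifted to homogeneous coordinates (x, y, 1), multiplied by H,
    and projected back by dividing by the third (homogeneous) component.
    """
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w


# Hypothetical example matrices, for illustration only:
# the identity leaves points unchanged; a uniform scale doubles coordinates.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
S2 = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]

print(apply_homography(I3, 10.0, 20.0))  # -> (10.0, 20.0)
print(apply_homography(S2, 10.0, 20.0))  # -> (20.0, 40.0)
```

A full bird's-eye view would apply such a mapping to every pixel of a frame (warping), with H chosen so that the road plane maps to a top-down metric grid.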