Special section on Advances in Machine Vision Beyond the Visible Spectrum (BVS)

It is our pleasure to welcome you to this special section of Computer Vision and Image Understanding on Advances in Machine Vision Beyond the Visible Spectrum (BVS). This special section highlights recent state-of-the-art research in this diverse and dynamic field, an emerging subset of computer vision and pattern recognition. BVS requires processing data from many types of sensors, including infrared, far-infrared, millimeter-wave, microwave, radar, sonar, LIDAR, and SAR sensors. The dynamics of technology "push" and "pull" in this field have resulted from increasing demand for this technology from defense, security, and commercial users. Our BVS community has grown in recent years, with larger participation and higher-quality submissions to our well-established IEEE CVPR workshop series on BVS (2004–2013). This special section was organized by a team of four guest editors (GEs) with diverse backgrounds in academia, industry, and federal research laboratories. We hope that it will enhance the visibility of the contributions from the BVS community and increase the number of subscribers to the CVIU journal from our community.

The first call for papers was announced in late 2011, with a submission deadline in June 2012, after the CVPR 2012 conference. We solicited both theoretical and application-oriented papers. In total, we received twenty-one full submissions from eight countries and involved more than fifty international reviewers in a review process spanning nine months. That so many papers were submitted, and that so many reviewers volunteered their time, is evidence of the increasing interest in the BVS field. After two rounds of rigorous reviews and a final check by the GEs, nine of the twenty-one submissions were accepted for this special section.
The first paper, by Bo Zheng, Ryo Ishikawa, Jun Takamatsu, Takeshi Oishi, and Katsushi Ikeuchi, entitled "A Coarse-to-fine IP-driven Registration for Pose Estimation from Single Ultrasound Image," proposes a new method for pose estimation from a single ultrasound image and presents a 3D registration method based on implicit polynomial (IP) models. The authors improve the robustness, accuracy, and computational efficiency of registration through a coarse-to-fine process that uses multiple IPs from low degree to high degree.

The second paper, by Tarek Elguebaly and Nizar Bouguila, entitled "Finite Asymmetric Generalized Gaussian Mixture Models Learning for Infrared Object Detection," introduces a multidimensional asymmetric generalized Gaussian mixture (AGGM) model for infrared object detection. The authors present an online learning algorithm for pedestrian detection in infrared images and also propose a multiple-target tracking (MTT) framework using AGGM, in which experiments demonstrate the importance of fusing visible and thermal images for MTT.