Special issue (part II) on parallel computing for real-time image processing

The performance requirements of image processing applications have continuously increased, especially when they are executed under real-time constraints. We organized this special issue on Parallel Computing for Real-Time Image Processing to present the current state of the art in parallel programming and the future trends in real-time image and video processing as related to parallel computing, as well as the real-time implementation of embedded image processing applications on parallel architectures, including multi-core platforms, GPUs and dedicated parallel architectures based on FPGAs. Due to the overwhelming number and wide scope of the submissions received for this special issue, and the resulting difficulty of finding expert reviewers, it was decided to publish the special issue in three parts. We are very grateful to the reviewers who provided valuable comments and suggestions to improve the quality of the accepted papers.

This third part of the special issue on Parallel Computing for Real-Time Image Processing presents five papers addressing different parallel applications, including the 2D wavelet transform, tone mapping, object recognition, face tracking and dynamic video applications. The real-time implementations of these applications use GPUs, parallel multi-core architectures and dedicated architectures based on FPGAs. Brief outlines of these papers are given below.

The first paper, by Franco et al., presents a GPU implementation of the 2D Fast Wavelet Transform (2D-FWT), based on a pair of quadrature mirror filters. It also investigates hardware improvements, including multicores on the CPU side, and exploits thread-level parallelism using OpenMP and Pthreads. Overall, the GPU exhibits better scalability and parallel performance on large-scale images, making it a solid alternative for computing the 2D-FWT compared with the thread-level methods run on emerging multi-core architectures.

The second paper, by Liu et al., proposes a multi-cue face tracking algorithm together with a supporting framework that uses a parallel multi-core processor and a graphics processing unit (GPU). The paper explores two parallel computing techniques to speed up the tracking process, especially the most computation-intensive observation steps. One is a multi-core parallel algorithm with a Map-Reduce thread model; the other is a GPU-based speed-up approach in which feature matching and particle weight computations are moved into GPU kernels. The results demonstrate that the proposed face tracking algorithm works robustly with cluttered backgrounds and varying illumination, that the multi-core parallel scheme achieves a 2–6x speed-up over the corresponding sequential algorithms, and that the GPU and co-processing schemes achieve an even greater speed-up of 8–12x.

The third paper, by Akil et al., presents a parallel GPU implementation of a real-time dynamic tone mapping operator. It describes a generic operator that may be used by any application; however, the goal of this work is to integrate the operator into the graphics rendering process of a car driving simulator, based on its real-time implementation. The tone mapping operator outputs a low dynamic range image from a high dynamic range input.
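To illustrate the kind of GPU kernels discussed in these papers, the following minimal sketch shows the horizontal pass of a separable 2D fast wavelet transform built from a quadrature mirror filter pair, as in the first paper. The Daubechies-4 coefficients, the periodic boundary handling and all identifiers are assumptions chosen for the example, not details of the authors' implementation.

```cuda
// Sketch: horizontal analysis pass of a separable 2D-FWT with a QMF pair.
// Assumptions: Daubechies-4 low-pass filter, periodic boundary extension.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define FLEN 4  // analysis filter length (Daubechies-4 assumed)

// Low-pass analysis filter; the matching high-pass QMF is derived in the kernel.
__constant__ float d_lo[FLEN] = {0.4829629131f, 0.8365163037f,
                                 0.2241438680f, -0.1294095226f};

// Each thread convolves one row position with the low-pass and high-pass
// filters and downsamples by two, writing one approximation and one detail
// coefficient. A column pass over the two subbands (not shown) completes
// one 2D decomposition level.
__global__ void fwt_rows(const float *in, float *lo, float *hi,
                         int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // output column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // row
    if (x >= width / 2 || y >= height) return;

    float a = 0.0f, d = 0.0f;
    for (int k = 0; k < FLEN; ++k) {
        float v = in[y * width + (2 * x + k) % width];  // periodic extension
        float h = d_lo[k];
        // Quadrature mirror relation: g[k] = (-1)^k * h[FLEN - 1 - k]
        float g = ((k & 1) ? -1.0f : 1.0f) * d_lo[FLEN - 1 - k];
        a += h * v;
        d += g * v;
    }
    lo[y * (width / 2) + x] = a;   // approximation subband
    hi[y * (width / 2) + x] = d;   // detail subband
}

int main()
{
    const int W = 512, H = 512;
    std::vector<float> img(W * H, 1.0f);              // dummy input image
    float *d_in, *d_lo_out, *d_hi_out;
    cudaMalloc(&d_in, W * H * sizeof(float));
    cudaMalloc(&d_lo_out, (W / 2) * H * sizeof(float));
    cudaMalloc(&d_hi_out, (W / 2) * H * sizeof(float));
    cudaMemcpy(d_in, img.data(), W * H * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((W / 2 + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    fwt_rows<<<grid, block>>>(d_in, d_lo_out, d_hi_out, W, H);
    cudaDeviceSynchronize();
    printf("row pass: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_in); cudaFree(d_lo_out); cudaFree(d_hi_out);
    return 0;
}
```

A vertical pass over the two subbands, applied in the same manner, completes one decomposition level; further levels recurse on the approximation subband.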
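In the same spirit, the particle weight computation that the second paper moves into a GPU kernel can be sketched with one thread per particle. The colour-histogram likelihood based on the Bhattacharyya coefficient, the bin count and the sigma parameter below are assumptions made for illustration; the paper's multi-cue observation model is richer than this.

```cuda
// Sketch: per-particle weight computation on the GPU.
// Assumptions: normalised colour histograms, Bhattacharyya-based likelihood.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define NBINS 64   // histogram bins per candidate region (assumed)

// One thread per particle: compare the candidate histogram against the
// reference face model and turn the distance into an unnormalised weight.
__global__ void particle_weights(const float *hists,  // nParticles x NBINS
                                 const float *ref,    // NBINS
                                 float *w,            // nParticles
                                 int nParticles, float sigma)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nParticles) return;

    float bc = 0.0f;                                  // Bhattacharyya coefficient
    for (int b = 0; b < NBINS; ++b)
        bc += sqrtf(hists[p * NBINS + b] * ref[b]);

    float dist2 = 1.0f - bc;                          // squared Bhattacharyya distance
    w[p] = expf(-dist2 / (2.0f * sigma * sigma));     // Gaussian-shaped weight
}

int main()
{
    const int N = 1024;                               // number of particles (assumed)
    std::vector<float> hists(N * NBINS, 1.0f / NBINS), ref(NBINS, 1.0f / NBINS);
    float *d_h, *d_r, *d_w;
    cudaMalloc(&d_h, N * NBINS * sizeof(float));
    cudaMalloc(&d_r, NBINS * sizeof(float));
    cudaMalloc(&d_w, N * sizeof(float));
    cudaMemcpy(d_h, hists.data(), N * NBINS * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_r, ref.data(), NBINS * sizeof(float), cudaMemcpyHostToDevice);

    particle_weights<<<(N + 255) / 256, 256>>>(d_h, d_r, d_w, N, 0.2f);
    cudaDeviceSynchronize();
    printf("weights: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_h); cudaFree(d_r); cudaFree(d_w);
    return 0;
}
```

Normalising the resulting weights afterwards amounts to a parallel reduction (a sum over all particles), which is a natural fit for both the GPU and the multi-core scheme.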