Special Issue on Video Analysis on Resource-Limited Systems

Camera sensors, embedded in surveillance cameras, webcams, mobile devices, wearable cameras, and so on, are becoming pervasive in our society. Even more cameras are expected to be deployed as smart sensors in the future, driven by decreasing cost and by important applications in safety and security, smart environments, gaming and entertainment, and visual communication. To automatically analyze this massively increasing volume of visual data, computer vision and video analysis techniques have received much attention in recent years.

In many real-world video analysis applications, however, the available resources are limited. One limitation is low-quality data (e.g., limited imaging resolution, sensor size, or frame rate), such as footage from surveillance cameras and videos captured by consumers with mobile or wearable cameras. Low-quality data can result from poor sensor performance, simple optics, motion blur, or environmental factors such as illumination. Because of constraints on video storage and transmission, captured videos are often compressed, which may further degrade quality, and sensors in the nonvisible spectrum (near infrared, far infrared, and so on) likewise produce low-resolution, noisy videos. Another limitation is processing power: mobile and wearable devices remain too constrained for traditional video analysis tasks.

There is a huge demand for video analysis and computer vision techniques on resource-limited systems, which could enable many practical applications: face identification for law enforcement, abnormal behavior detection for security, place/sign recognition for mapping and localization, and augmented reality on mobile phones, to name a few. However, video analysis on resource-limited systems is still an under-explored field.
Existing video analysis research focuses mainly on high-performance systems, which assume high-quality video data or powerful computing platforms. Video analysis on resource-limited systems poses many challenges. For example, how can representative visual features be extracted effectively from low-quality data? How can multiple low-resolution frames be fused for reliable recognition? How can vision algorithms be accelerated for use on mobile platforms? How can the degrading factors in low-quality data be mitigated? Existing techniques developed for high-performance systems must be adapted, or new approaches suitable for resource-limited systems must be found. This special issue seeks to present and highlight recent developments in the area of video analysis on resource-limited systems.