Compression using self-similarity-based temporal super-resolution for full-exposure-time video

To admit a sufficient amount of light to the image sensor, videos captured in poor lighting conditions typically have a low frame rate and a frame exposure time equal to the inter-frame period, commonly called full exposure time (FET). FET low-frame-rate videos are common in situations where lighting cannot be improved a priori for practical (e.g., large physical distance between the camera and the captured objects) or economic (e.g., long duration of nighttime surveillance) reasons. Previous work in computer vision has shown that content at a desired higher frame rate can be recovered (to some extent) from the captured FET video using self-similarity-based temporal super-resolution. From an end-to-end communication standpoint, however, a practical question remains: what is the most compact representation of the captured FET video at the encoder, given that a higher-frame-rate reconstruction is desired at the decoder? In this paper, we present a compression strategy in which, for a given target rate-distortion (RD) tradeoff, FET video frames at appropriate temporal resolutions are selected at the encoder and encoded using standard H.264 tools. At the decoder, temporal super-resolution is performed on the decoded frames to synthesize the desired high-frame-rate video. We formulate the selection of individual FET frames at different temporal resolutions as a shortest-path problem that minimizes the Lagrangian cost of the encoded sequence. We then propose a computation-efficient algorithm that exploits monotonicity in the predictor's temporal resolution to find the shortest path. Experiments show that our strategy outperforms a naïve alternative that encodes all FET frames as is and performs temporal super-resolution at the decoder, by up to 1.1 dB at the same bitrate.
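
To make the frame-selection step concrete, the following is a minimal sketch (not the paper's implementation) of casting the selection as a shortest path through a trellis whose states are (frame index, temporal resolution) and whose edge costs are Lagrangian costs J = D + lambda * R that depend on the predictor's temporal resolution. The cost function lagrangian_cost is a hypothetical placeholder, and the sketch uses plain dynamic programming rather than the paper's monotonicity-based speedup.

def select_temporal_resolutions(num_frames, resolutions, lagrangian_cost):
    """Return per-frame temporal resolutions minimizing the total Lagrangian cost.

    lagrangian_cost(t, r, r_prev) -> D + lambda * R for encoding frame t at
    temporal resolution r, predicted from a frame at resolution r_prev
    (r_prev is None for the first, intra-coded frame). Hypothetical interface.
    """
    # cost[r] = best total cost of a path ending with resolution r at the current frame
    cost = {r: lagrangian_cost(0, r, None) for r in resolutions}
    back = [{r: None for r in resolutions}]

    for t in range(1, num_frames):
        new_cost, back_t = {}, {}
        for r in resolutions:
            # choose the best predictor resolution for frame t at resolution r
            best_prev = min(resolutions,
                            key=lambda rp: cost[rp] + lagrangian_cost(t, r, rp))
            new_cost[r] = cost[best_prev] + lagrangian_cost(t, r, best_prev)
            back_t[r] = best_prev
        cost, back = new_cost, back + [back_t]

    # backtrack the shortest path from the cheapest terminal state
    r = min(cost, key=cost.get)
    path = [r]
    for t in range(num_frames - 1, 0, -1):
        r = back[t][r]
        path.append(r)
    return list(reversed(path))

Under these assumptions, the trellis has one column per FET frame and one node per candidate temporal resolution, so the dynamic program runs in O(num_frames * |resolutions|^2) cost evaluations; the monotonicity property described in the paper serves to prune predictor candidates and reduce this search.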