Synthesis of multiple video streams through multi-thread programming

In some scenarios, multiple video streams and pictures must be combined into a single video. In this paper, we develop an application that synthesizes two video streams and two picture channels. The application can be used for the combined display of multiple network video streams and for offline storage of synthesized video. Because single-threaded decoding of multiple video streams is time-consuming, we adopt a multithreaded producer-consumer pattern, which exploits the advantages of multi-core CPUs and improves efficiency. In our experiments, we compare single-threaded and multithreaded video synthesis; the multithreaded version is substantially faster.
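The producer-consumer pattern described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each producer thread stands in for a per-stream decoder (which in the paper would be backed by FFmpeg), emitting placeholder "frames" into a bounded queue, while the consumer pulls one frame from each stream and combines them. The stream count, frame count, and frame format are all hypothetical.

```python
import threading
import queue

NUM_FRAMES = 10  # hypothetical length of each stream


def decode_stream(stream_id, out_queue):
    # Producer: stands in for decoding one video stream.
    # A real decoder would read and decode packets here.
    for i in range(NUM_FRAMES):
        out_queue.put(f"stream{stream_id}-frame{i}")
    out_queue.put(None)  # sentinel: this stream is finished


def compose(queues):
    # Consumer: take one frame from each stream per step and
    # "synthesize" them into a single combined frame.
    combined = []
    while True:
        frames = [q.get() for q in queues]
        if any(f is None for f in frames):
            break
        combined.append(" + ".join(frames))
    return combined


# Bounded queues give back-pressure: fast decoders block
# instead of filling memory while the consumer catches up.
queues = [queue.Queue(maxsize=4) for _ in range(2)]
producers = [
    threading.Thread(target=decode_stream, args=(i, q))
    for i, q in enumerate(queues)
]
for t in producers:
    t.start()
result = compose(queues)
for t in producers:
    t.join()
```

Because decoding each stream runs in its own thread, the decoders proceed in parallel on a multi-core CPU while the consumer only pays the cost of combining frames, which is the source of the speedup the paper reports over a single-threaded loop.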
