Guest Editorial: Deep Learning for Multimedia Computing

The twenty papers in this special section provide a forum for recent advances in deep learning research of direct relevance to the multimedia community. Specifically, deep learning has produced algorithms that build deep nonlinear representations mimicking how the brain perceives and understands multimodal information, ranging from low-level signals such as images and audio to high-level semantic data such as natural language. For multimedia research, it is especially important to develop deep networks that capture the dependencies between different modalities of data, building joint deep representations across diverse modalities.