Learning from social media network

Recent years have witnessed the popularity of Web 2.0 content; examples include Flickr, YouTube, Facebook, MySpace, etc. The proliferation of such applications on the social web and social networks has produced a new type of multimedia content, termed "social media" here because it is created by people using highly accessible and scalable publishing technologies for sharing via the web. The intrinsic attribute of social media is to facilitate interactive information sharing, interoperability, and collaboration on the Internet. By virtue of this, web images, videos, and audio are generally accompanied by user-contributed contextual information such as tags, categories, titles, metadata, comments, and viewer ratings.

The massive amount of emerging social media data offers new opportunities for resolving long-standing challenges: How can video indexing and search benefit from shared videos and their metadata? How can we handle large-scale web videos by leveraging both video content and user-contributed information? Furthermore, this new medium also introduces many new and challenging research problems as well as many exciting real-world applications (e.g., social image search, social group recommendation, etc.).

This special issue is organized with the purpose of introducing novel research work on learning from social media networks, i.e., how to use learning technology to facilitate the analysis of social media networks and, in turn, enable applications in the Web 2.0 scenario to benefit from it. Submissions came from an open call for papers. With the assistance of professional referees, 14 papers were selected after at least two rounds of rigorous review. These papers cover a wide range of subtopics in learning from social media networks, including social image search, image and video concept detection, geo-information mining based on social media networks, social media data mining, and so on.

The first part of the special issue contains four papers on image and video concept detection. In the first paper, "Constructing Visual Tag Dictionary by Mining Community-Contributed Images", Wang et al. construct a corpus, named the visual tag dictionary, by mining community-contributed images. With this fully automatically constructed dictionary, tags and images are connected via visual words, which facilitates many applications such as tag-based image search, tag ranking, image annotation, and tag graph construction. In the second paper, "Exploring Multi-Modality Structure for Cross Domain Adaptation in Video Concept Annotation", Xu et al. leverage multi-modality knowledge generalized by auxiliary classifiers in the source domains to assist multi-graph optimization in the target domain for video concept annotation. The third paper, "Collaborative Visual Modeling for Automatic Image Annotation via Sparse Model Coding", focuses on exploiting the visual relatedness information among different