Waseda at TRECVID 2016: Ad-hoc Video Search

Waseda participated in the TRECVID 2016 Ad-hoc Video Search (AVS) task [1]. For the AVS task, we submitted four manually assisted runs. Our approach used the following processing steps: manually creating several search keywords based on the given query phrase, calculating a score for each concept using visual features, and combining the semantic concepts to obtain the final scores. Our best run achieved a mean Average Precision (mAP) of 17.7%, which was the highest among all submitted runs.

1 System Description

Our method consists of three steps:

1. Manually select several search keywords based on the given query phrase (Subsection 1.1).
2. Calculate a score for each concept using visual features (Subsection 1.2).
3. Combine the semantic concepts to obtain the final scores (Subsection 1.3).

1.1 Manual search keyword selection

Given a query phrase, we manually picked out the important keywords. For example, given the query phrase “any type of fountains outdoors”, we extracted the keywords “fountain” and “outdoor”. Here, we explicitly distinguished and from or; that is, given the query phrase “one or more people walking or bicycling on a bridge during daytime”, we created the new search query “people” and (“walking” or “bicycling”) and “bridge” and “daytime”. In this case, a video does not need to include both “walking” and “bicycling”; it is sufficient if either one is included.

1.2 Score calculation using visual features

In our submission, we extracted visual features from pre-trained convolutional neural networks (CNNs). First, we selected at most 10 frames from each shot at regular intervals, and the corresponding images were input to the CNN to obtain feature vectors from its hidden or output layers. These (at most 10) feature vectors were then merged into a single feature vector by element-wise max-pooling. We used a total of nine kinds of pre-trained models to calculate concept scores, as shown in Table 1.

1. TRECVID346
We extracted 1,024-dimensional vectors from the pool5 layer of the pre-trained GoogLeNet model [6], which was trained on the ImageNet database. We then trained a support vector machine (SVM) for each concept using the annotations provided by collaborative annotation [2]. The shot score for each concept was calculated as the distance to the hyperplane in the SVM model.
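
As a rough illustration of the per-shot feature extraction described in Subsection 1.2 (sampling at most 10 frames at regular intervals, running them through a pre-trained CNN, and merging the per-frame vectors by element-wise max-pooling), the following sketch uses OpenCV and torchvision's GoogLeNet (torchvision >= 0.13) as stand-ins for the actual pre-trained models; the helper name extract_shot_feature and the preprocessing settings are assumptions for illustration, not part of the original submission.

import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# Stand-in for the pre-trained GoogLeNet; replacing the final classifier
# with Identity exposes the 1024-dimensional pooled feature vector.
cnn = models.googlenet(weights="DEFAULT")
cnn.fc = torch.nn.Identity()
cnn.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_shot_feature(video_path, max_frames=10):
    """Sample at most `max_frames` frames at regular intervals from a shot
    and max-pool their CNN features into one vector (hypothetical helper)."""
    cap = cv2.VideoCapture(video_path)
    n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(n - 1, 0), num=min(max_frames, max(n, 1)), dtype=int)
    feats = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            feats.append(cnn(preprocess(rgb).unsqueeze(0)).squeeze(0).numpy())
    cap.release()
    if not feats:
        return None
    # Element-wise max-pooling over the (at most 10) per-frame vectors.
    return np.max(np.stack(feats), axis=0)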
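
The per-concept SVM scoring used for TRECVID346 (one SVM per concept, shots ranked by their distance to the separating hyperplane) could look roughly like the sketch below. The choice of scikit-learn's LinearSVC and the .npy file names are assumptions; decision_function returns the unnormalized signed distance w·x + b, which preserves the ranking of shots.

import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical training data: max-pooled shot features and per-concept
# binary labels derived from the collaborative annotation.
X_train = np.load("shot_features_train.npy")   # shape: (n_shots, 1024), assumed file
y_train = np.load("labels_one_concept.npy")    # shape: (n_shots,), 1 = concept present

svm = LinearSVC(C=1.0)
svm.fit(X_train, y_train)

# Score each test shot by its signed distance to the hyperplane;
# larger values indicate the concept is more likely present in the shot.
X_test = np.load("shot_features_test.npy")
concept_scores = svm.decision_function(X_test)
ranked_shots = np.argsort(-concept_scores)     # best-matching shots first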
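
Finally, to illustrate the and/or distinction drawn in Subsection 1.1, the sketch below evaluates a nested boolean keyword query over per-concept shot scores. Using min for "and" and max for "or" is only an assumption made for this illustration; the actual rule for combining concept scores is described in Subsection 1.3.

import numpy as np

def score_query(query, concept_scores):
    """Evaluate a nested and/or keyword query against per-concept shot scores.

    `query` is either a keyword string or a tuple ("and", ...)/("or", ...);
    `concept_scores` maps each keyword to an array of per-shot scores.
    The min/max semantics here is an assumption for this sketch only.
    """
    if isinstance(query, str):
        return concept_scores[query]
    op, *terms = query
    parts = np.stack([score_query(t, concept_scores) for t in terms])
    return parts.min(axis=0) if op == "and" else parts.max(axis=0)

# Example: "one or more people walking or bicycling on a bridge during daytime"
query = ("and", "people", ("or", "walking", "bicycling"), "bridge", "daytime")
rng = np.random.default_rng(0)
concept_scores = {k: rng.random(5) for k in
                  ["people", "walking", "bicycling", "bridge", "daytime"]}
final_scores = score_query(query, concept_scores)   # one score per shot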