It remains a pipe dream that personal AI assistants on phones and AR glasses could assist our daily lives by answering questions like ``how to adjust the date for this watch?'' and ``how to set its heating duration?'' (while pointing at an oven). The queries used in conventional tasks (e.g., Video Question Answering, Video Retrieval, Moment Localization) are often factoid and purely text-based. In contrast, we present a new task called Task-oriented Question-driven Video Segment Retrieval (TQVSR). Each of our questions is an image-box-text query that focuses on the affordance of items in our daily lives and expects relevant answer segments to be retrieved from a corpus of instructional video-transcript segments. To support the study of TQVSR, we construct a new dataset called AssistSR, designing novel guidelines to create high-quality samples. The dataset contains 3.2k multimodal questions on 1.6k video segments drawn from instructional videos on diverse everyday items. To address TQVSR, we develop a simple yet effective model called Dual Multimodal Encoders (DME) that significantly outperforms several baseline methods, while still leaving large room for future improvement. Moreover, we present detailed ablation analyses. Code and data are available at \url{https://github.com/StanLei52/TQVSR}.