Paper Citations

Junfei Liu, Zhiyi Ma, Hongjie Fan et al.,
2018 2nd International Conference on Imaging, Signal Processing and Communication (ICISPC)

With the development of image and vision technology, video data is emerging in large volumes. Even though a variety of methods have achieved excellent performance, how to quickly and accurately retrieve the v...

Wen Gao, Ling-Yu Duan, Alex ChiChung Kot et al.,
2017, IEEE Transactions on Multimedia

With emerging demand for large-scale video analysis, MPEG initiated the compact descriptor for video analysis (CDVA) standardization in 2014. Beyond handcrafted descriptors adopted by the current MPEG...

When detecting semantic concepts in video, much of the existing research in content-based classification uses keyframe information only. Particularly the combination between local features such as SIF...

X. Halkias, H. Glotin, Sébastien Paris et al.,
2012, MediaEval

We propose a violence detector based on the dynamics of new multi-scale local binary pattern histogram features (MSLBP), which generate a high-dimensional space (20,480 dimensions), trained on linear S...
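For orientation only, a minimal sketch of how multi-scale LBP histogram features can be fed to a linear SVM, assuming scikit-image and scikit-learn; the (P, R) scales, frame data, and labels are illustrative placeholders, not the descriptor configuration from this paper.

# Illustrative sketch only: multi-scale uniform-LBP histograms fed to a linear SVM.
# The (P, R) scales, frame data, and labels are placeholder assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def mslbp_histogram(gray_frame, scales=((8, 1), (16, 2), (24, 3))):
    # Concatenate one uniform-LBP histogram per (points, radius) scale.
    feats = []
    for points, radius in scales:
        lbp = local_binary_pattern(gray_frame, points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
frames = (rng.random((20, 64, 64)) * 255).astype(np.uint8)  # toy grayscale frames
labels = np.array([0, 1] * 10)                              # toy violent / non-violent labels
X = np.stack([mslbp_histogram(f) for f in frames])
clf = LinearSVC().fit(X, labels)
print(clf.decision_function(X[:3]))                         # per-frame decision scores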

Koichi Shinoda, Nakamasa Inoue et al.,
2018, ACM Multimedia

We propose a few-shot adaptation framework, which bridges zero-shot learning and supervised many-shot learning, for semantic indexing of image and video data. Few-shot adaptation provides robust param...

Hervé Bredin,
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

We deal with the issue of combining dozens of classifiers into a better one. Our first contribution is the introduction of the notion of communities of classifiers. We build a complete graph with one ...

Georges Quénot, Patrick Lambert, Alexandre Benoit et al.,
2012, ECCV Workshops

We deal with the issue of combining dozens of classifiers into a better one, for concept detection in videos. We compare three fusion approaches that share a common structure: they all start with a cl...

We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Ou...

Bernd Girod, Andre F. de Araújo, Jason Chaves et al.,
2015 IEEE International Conference on Image Processing (ICIP)

We address the challenge of using image queries to retrieve video clips from a large database. Using binarized Fisher Vectors as global signatures, we present three novel contributions. First, an asym...
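As a rough, generic illustration of matching with binarized global signatures (not this paper's asymmetric comparison scheme, and omitting the actual Fisher Vector computation), database clips can be ranked by Hamming distance to a sign-binarized query; the dimensions and random vectors below are assumptions.

# Toy sketch: sign-binarized global signatures ranked by Hamming distance.
# The 2048-dimensional random vectors stand in for real Fisher Vectors.
import numpy as np

def binarize(signature):
    # Keep only the sign of each component.
    return (signature > 0).astype(np.uint8)

rng = np.random.default_rng(1)
database = rng.standard_normal((1000, 2048))   # one signature per database clip
query = rng.standard_normal(2048)              # signature of the query image

db_bits, q_bits = binarize(database), binarize(query)
hamming = np.count_nonzero(db_bits != q_bits, axis=1)
top5 = np.argsort(hamming)[:5]
print("closest clips:", top5, "distances:", hamming[top5])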

Vasileios Mezaris, Milan Dojchinovski, Tomás Kliegr et al.,
2015, Multimedia Data Mining and Analytics

Visual concept detection is one of the most active research areas in multimedia analysis. The goal of visual concept detection is to assign to each elementary temporal segment of a video, a confidence...

B. Girod, A. F. de Araujo, F. Silveira et al.,
2012, TRECVID

Video search has become a very important tool, with the ever-growing size of multimedia collections. This work introduces our Video Semantic Indexing system. Our experiments show that Residual Vectors...

Andrei Bursuc, Titus B. Zaharia, Françoise J. Prêteux et al.,
2011 Seventh International Conference on Signal Image Technology & Internet-Based Systems

This paper tackles the issue of retrieving different instances of an object of interest within a given video document or in a video database. The principle consists of considering a semi-global image ...

This paper tackles the issue of retrieving different instances of an object of interest within a given video document or in a video database. The principle consists in considering a semi-global image ...

Hervé Le Borgne, Aymen Shabou, Nicolas Ballas et al.,
2012, TRECVID

This paper reports the experiments carried out for the semantic indexing (SIN) and the instance search (INS) tasks at TRECVID 2012. For the SIN task, we evaluated two recently proposed features with a...

Patrick Lambert, Alice Caplier, Alexandre Benoit et al.,
2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA)

This paper proposes to investigate the potential benefit of the use of low-level human vision behaviors in the context of high-level semantic concept detection. A large part of the current approaches ...

Marcus Liwicki, Andreas Dengel, Koichi Kise et al.,
2015, UbiComp/ISWC Adjunct

This paper presents an automatic video annotation method which utilizes the user's reading behaviour. Using a wearable eye tracker, we identify the video frames where the user reads a text document an...

Georges Quénot, Bahjat Safadi et al.,
2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)

This paper presents a set of improvements for SVM-based large scale multimedia indexing. The proposed method is particularly suited for the detection of many target concepts at once and for highly imb...
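As a generic, hedged sketch of one common ingredient (per-concept linear SVMs with class reweighting for imbalanced annotations), not the specific improvements proposed in this paper; the features and concept labels below are synthetic assumptions.

# Generic sketch: one linear SVM per target concept, with class_weight="balanced"
# to compensate for rare positive examples. All data below is synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.standard_normal((5000, 256))                     # shot-level feature vectors
concepts = {
    "airplane": (rng.random(5000) < 0.02).astype(int),   # ~2% positives
    "indoor":   (rng.random(5000) < 0.30).astype(int),   # ~30% positives
}

models = {name: LinearSVC(class_weight="balanced", C=1.0).fit(X, y)
          for name, y in concepts.items()}
scores = {name: m.decision_function(X[:3]) for name, m in models.items()}
print(scores)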

Jenny Benois-Pineau, Jean-François Dartigues, Julien Pinquier et al.,
2011, Multimedia Tools and Applications

This paper presents a method for indexing activities of daily living in videos acquired from wearable cameras. It addresses the problem of analyzing the complex multimedia data acquired from weara...

Patrick Lambert, Alexandre Benoit, Sabin Tiberius Strat et al.,
2013 11th International Workshop on Content-Based Multimedia Indexing (CBMI)

This paper investigates how the detection of diverse high-level semantic concepts (objects, actions, scene types, persons etc.) in videos can be improved by applying a model of the human retina. A lar...

This paper describes our participation in the TRECVID 2011 challenge [1]. This year, we focused on stacking fusion with a Domain Adaptation algorithm. In machine learning, Domain Adaptation deals with...