A Method for Visual Model Learning During Tracking
In this paper, we propose a new method for visual model learning during tracking. The algorithm learns an
object representation from a single view (one-shot) and adaptively extends a set of saliency filters whose
coefficients are extracted from different views of the environment. In addition, the algorithm fuses
already learned visual filters and derives new visual classifiers from them in order to obtain generalized
object concepts. We evaluate our method on tracked sequences produced by a visual bottom-up attention
model.
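The core idea of the abstract, one-shot initialization followed by adaptive extension and fusion of a filter set, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the filter extraction (a normalized correlation template), the extension threshold, and the mean-based fusion are all assumptions standing in for the paper's saliency filters and classifier derivation.

```python
import numpy as np

def extract_filter(patch):
    # Hypothetical "saliency filter": a zero-mean, unit-norm template
    # (stand-in for the filter coefficients described in the abstract).
    f = patch.astype(float)
    f -= f.mean()
    norm = np.linalg.norm(f)
    return f / norm if norm > 0 else f

def response(filt, patch):
    # Normalized cross-correlation between a filter and an object patch.
    p = patch.astype(float)
    p -= p.mean()
    norm = np.linalg.norm(p)
    return 0.0 if norm == 0 else float(np.sum(filt * p) / norm)

class OneShotModel:
    """One-shot learning with adaptive extension of a filter set."""

    def __init__(self, first_view, add_threshold=0.8):
        # One-shot: the model is initialized from a single view.
        self.filters = [extract_filter(first_view)]
        self.add_threshold = add_threshold

    def update(self, view):
        # Score the current view against every learned filter.
        best = max(response(f, view) for f in self.filters)
        # Adaptive extension: if no filter explains this view well,
        # extract a new filter from it and add it to the set.
        if best < self.add_threshold:
            self.filters.append(extract_filter(view))
        return best

    def fuse(self):
        # Fuse the learned filters into one generalized template
        # (simple averaging, a stand-in for the paper's fusion step).
        return np.mean(np.stack(self.filters), axis=0)
```

In use, the model starts from one exemplar patch and grows only when a tracked view is poorly matched, so the filter set stays compact while covering the object's appearance variation across views.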