These notes build on the SiamFC tracker and consider how to make the tracker respond in real time to temporal changes in target appearance, and how to suppress interference from background clutter. Although Siamese networks have shown great potential for matching-based trackers in balancing accuracy and speed, they still lag well behind classification-and-update-based trackers in handling appearance changes. The authors propose a dynamic Siamese network: via a fast transformation learning model, it effectively learns target appearance variation and background suppression online from historical frames, and an adaptive elementwise fusion of multi-layer deep features is also presented. Moreover, the proposed dynamic Siamese network can be trained directly on labeled video sequences, fully exploiting the spatio-temporal information of moving targets rather than training on two randomly sampled frames.

Abstract.

How to effectively learn temporal variation of target appearance, to exclude the interference of cluttered background, while maintaining real-time response, is an essential problem of visual object tracking.


Recently, Siamese networks have shown great potential of matching-based trackers in achieving balanced accuracy and beyond-real-time speed.

(Earlier deep-network-based trackers achieved good accuracy but ran slowly, while trackers based on correlation filters and other traditional methods were fast but performed worse. Siamese, matching-based trackers keep high accuracy while tracking very fast.)

However, they still have a big gap to classification and updating based trackers in tolerating the temporal changes of objects and imaging conditions.

(At this stage, Siamese trackers use only the first-frame target as the template and never update it, so when the target changes substantially over time, tracking performance may degrade.)

In this paper, we propose dynamic Siamese network, via a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames.


We then present elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features.


Unlike state-of-the-art trackers, our approach allows the usage of any feasible generally- or particularly-trained features, such as SiamFC and VGG.


More importantly, the proposed dynamic Siamese network can be jointly trained as a whole directly on the labeled video sequences, thus can take full advantage of the rich spatial temporal information of moving objects.


As a result, our approach achieves state-of-the-art performance on OTB-2013 and VOT-2015 benchmarks, while exhibiting superiorly balanced accuracy and real-time response over state-of-the-art competitors.


Framework

[Figure: overall framework of the dynamic Siamese network (DSiam)]

 

Dynamic Siamese Network

$$S_t = \mathrm{corr}\big(f(O_1),\ f(Z_t)\big) \tag{1}$$

This section briefly reviews the SiamFC tracker, where $O_1$ is the target given in the first frame and $Z_t$ is the search region of frame $t$, and then extends Eq. (1) into the Dynamic Siamese Network matching function:

$$S_t^l = \mathrm{corr}\big(V_{t-1}^l * f_l(O_1),\ W_{t-1}^l * f_l(Z_t)\big) \tag{2}$$

where $V_{t-1}^l$ encodes target appearance variation, $W_{t-1}^l$ performs background suppression on layer-$l$ features $f_l(\cdot)$, and $*$ denotes circular convolution. Both transformations are instances of regularized linear regression (RLR):

$$\min_{\mathbf{R}}\ \|\mathbf{R} * \mathbf{X} - \mathbf{Y}\|^2 + \lambda \|\mathbf{R}\|^2 \tag{3}$$

which has the closed-form frequency-domain solution

$$\mathbf{R} = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}(\mathbf{Y}) \odot \overline{\mathcal{F}(\mathbf{X})}}{\mathcal{F}(\mathbf{X}) \odot \overline{\mathcal{F}(\mathbf{X})} + \lambda}\right) \tag{4}$$
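Assuming same-sized, single-channel feature maps and circular operations throughout, the DSiam matching function can be sketched in numpy (all function names here are illustrative, not from the paper's code):

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution of two same-sized 2-D arrays via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def dsiam_response(V, f_o1, W, f_zt):
    """Single-channel sketch of S_t^l = corr(V * f(O_1), W * f(Z_t))."""
    tmpl = circ_conv(V, f_o1)   # variation-transformed template features
    srch = circ_conv(W, f_zt)   # background-suppressed search features
    # circular cross-correlation, also computed in the Fourier domain
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(tmpl)) * np.fft.fft2(srch)))
```

With $V$ and $W$ set to identity kernels this reduces to plain SiamFC-style correlation of the raw features.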

The authors compute $V_{t-1}^l$ and $W_{t-1}^l$ via RLR: given two variables $\mathbf{X}$ and $\mathbf{Y}$, the goal is to find the optimal linear transformation $\mathbf{R}$ that maps $\mathbf{X}$ as close as possible to $\mathbf{Y}$. This problem can be computed quickly in the frequency domain via the Fourier transform.
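A minimal numpy sketch of the frequency-domain RLR solution (the function name and the default regularizer value are illustrative choices, not the paper's exact settings):

```python
import numpy as np

def rlr_transform(X, Y, lam=1e-4):
    """Closed-form regularized linear regression in the Fourier domain:
    find R minimizing ||R * X - Y||^2 + lam * ||R||^2, with * = circular conv.
    Solution: R = F^-1( F(Y) . conj(F(X)) / (F(X) . conj(F(X)) + lam) )."""
    Xf = np.fft.fft2(X)
    Yf = np.fft.fft2(Y)
    Rf = (Yf * np.conj(Xf)) / (Xf * np.conj(Xf) + lam)
    return np.real(np.fft.ifft2(Rf))
```

Because the solve is just elementwise operations and two FFTs, it can be run per feature channel at every frame without hurting real-time speed.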

$$V_{t-1}^l = \arg\min_{\mathbf{V}} \big\|\mathbf{V} * f_l(O_1) - f_l(O_{t-1})\big\|^2 + \lambda_v \|\mathbf{V}\|^2$$

Target appearance variation V: after tracking through frame $t-1$, the target $O_{t-1}$ is obtained; $V_{t-1}^l$ learns the appearance transformation from the first-frame target to the target in frame $t-1$ (assuming the target's appearance varies smoothly) by regressing $f_l(O_1)$ to $f_l(O_{t-1})$ with RLR.

Background suppression W: the authors note that reducing interference from irrelevant background helps further improve tracking accuracy. In Fig. 3, $G_{t-1}$ denotes the raw image region of frame $t-1$ (background plus foreground), and $\bar{G}_{t-1}$ is the foreground map obtained by weighting $G_{t-1}$ with a Gaussian map centered on the target (the target image one hopes to recover after learning $W_{t-1}^l$); $W_{t-1}^l$ is then obtained by RLR, regressing $f_l(G_{t-1})$ to $f_l(\bar{G}_{t-1})$.
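The Gaussian-weighted foreground map $\bar{G}_{t-1}$ can be sketched as follows (a minimal numpy sketch; `gaussian_weight_map` and the `sigma_ratio` bandwidth are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

def gaussian_weight_map(h, w, sigma_ratio=0.25):
    """2-D Gaussian weight map peaking at the crop center."""
    ys = np.arange(h) - (h - 1) / 2.0
    xs = np.arange(w) - (w - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    return np.exp(-0.5 * ((yy / (sigma_ratio * h)) ** 2
                          + (xx / (sigma_ratio * w)) ** 2))

def foreground_map(G):
    """G_bar: Gaussian-weighted crop that keeps the centered target and
    attenuates surrounding background pixels."""
    return G * gaussian_weight_map(*G.shape)
```

Feeding $f_l(G_{t-1})$ and $f_l(\bar{G}_{t-1})$ into the RLR solve above yields $W_{t-1}^l$.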

Elementwise multi-layer fusion: describes how deep features from multiple layers are fused to localize the target more accurately. An elementwise weight map $\mathbf{W}^l$ is set for each layer's response map $\mathbf{S}_t^l$, and the final response map is $\mathbf{S}_t = \sum_l \mathbf{W}^l \odot \mathbf{S}_t^l$, where the weight maps satisfy an elementwise sum-to-one constraint.
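The fusion step can be sketched as follows (assuming precomputed per-layer response maps and weight maps of the same size; the function name is illustrative):

```python
import numpy as np

def fuse_responses(responses, weight_maps):
    """Elementwise multi-layer fusion: S = sum_l W^l . S^l,
    with weights normalized to sum to 1 at every spatial position."""
    W = np.stack(weight_maps).astype(float)
    W = W / W.sum(axis=0, keepdims=True)   # elementwise sum-to-one constraint
    S = np.stack(responses)
    return (W * S).sum(axis=0)
```

Each spatial position thus takes a convex combination of the per-layer responses, letting different image regions rely on different feature levels.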


References

1. Learning Dynamic Siamese Network for Visual Object Tracking (https://openaccess.thecvf.com/content_ICCV_2017/papers/Guo_Learning_Dynamic_Siamese_ICCV_2017_paper.pdf)

2. https://zhuanlan.zhihu.com/p/104948990

3. https://blog.csdn.net/aiqiu_gogogo/article/details/79429071

4. https://blog.csdn.net/fzp95/article/details/80895204

 
