Parameter-Efficient Tuning for Object Tracking by Migrating Pre-Trained Decoders
Ruijuan Zhang, Li Wang, Song Yang

Video object tracking has benefited from weights pre-trained on large-scale datasets. However, most trackers fully fine-tune all backbone parameters to adapt to tracking-specific representations, which makes inefficient use of parameter updates. In this paper, we explore whether a better balance can be struck between parameter efficiency and tracking performance while fully exploiting the advantage of weights trained on large-scale datasets. Our approach differs from the standard tracking paradigm in two main ways: (i) we freeze the pre-trained weights of the backbone and attach a dynamic adapter structure to every transformer block for tuning; (ii) we migrate the pre-trained decoder blocks into the tracking head for better generalization and localization. Extensive experiments are conducted both on mainstream challenging datasets and on datasets for special scenarios or targets, such as night-time scenes and transparent objects. Our experiments show that, with full use of pre-trained knowledge, a small number of tuned parameters can close the gap between the pre-trained representation and the tracking-specific representation, especially for large backbones, and can even yield better performance and generalization. For instance, our AdaDe-B256 tracker achieves 49.5 AUC on LaSOText, which contains 150 sequences.
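To make idea (i) concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of freezing a pre-trained transformer block and tuning only a small adapter attached to it. The abstract does not specify the adapter design, so this assumes a standard bottleneck adapter; all names here (Adapter, AdaptedBlock, bottleneck_dim) are hypothetical.

```python
# Illustrative sketch of parameter-efficient tuning via per-block adapters.
# Assumption: a ViT-style backbone whose blocks map (B, N, dim) -> (B, N, dim).
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)
        # Zero-init the up-projection so the adapter starts as an identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps a frozen pre-trained transformer block with a trainable adapter."""

    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # backbone weights stay frozen
        self.adapter = Adapter(dim)  # only these weights are tuned

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))
```

Because the up-projection is zero-initialized, each adapted block initially reproduces the frozen pre-trained representation exactly, and tuning departs from it only as far as the tracking loss requires; the optimizer would then be given only the adapter (and head) parameters.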