Abstract
Capitalizing on large pre-trained models for various downstream tasks of
interest has recently emerged as a promising paradigm. Due to ever-growing
model sizes, the standard task adaptation strategy of full fine-tuning
becomes prohibitively costly in terms of model training and storage.
This has led to a new research direction in parameter-efficient transfer
learning. However, existing attempts typically focus on downstream tasks in
the same modality as the pre-trained model (e.g., image understanding). This
is limiting because, for some modalities (e.g., video understanding), a
strong pre-trained model with sufficient knowledge is scarce or entirely
unavailable. In this work, we investigate a novel cross-modality transfer
learning setting, namely parameter-efficient image-to-video transfer
learning. To solve this problem, we propose a new Spatio-Temporal Adapter
(ST-Adapter) for parameter-efficient fine-tuning per video task. With a
built-in spatio-temporal reasoning capability in a compact design, ST-Adapter
enables a pre-trained image model without temporal knowledge to reason about
dynamic video content at a small (~8%) per-task parameter cost, requiring
approximately 20 times fewer updated parameters than previous work.
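To make the idea concrete, below is a minimal PyTorch-style sketch of how
such an adapter could be structured: a linear down-projection, a depthwise 3D
convolution for spatio-temporal mixing, and a linear up-projection inside a
residual connection. The abstract does not specify the architecture, so the
layout, names (STAdapter, bottleneck_dim), and default hyperparameters here
are illustrative assumptions rather than the official implementation (see the
repository linked at the end of the abstract).

```python
import torch
import torch.nn as nn

class STAdapter(nn.Module):
    """Illustrative sketch of a spatio-temporal bottleneck adapter.

    Assumed layout (not the official implementation): linear down-projection,
    depthwise 3D convolution over (time, height, width), activation, linear
    up-projection, all wrapped in a residual connection.
    """

    def __init__(self, dim: int, bottleneck_dim: int = 128,
                 kernel_size: tuple = (3, 3, 3)):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)
        # Depthwise 3D conv: groups == channels, so each channel mixes
        # information only across time and space, keeping parameters low.
        self.conv = nn.Conv3d(
            bottleneck_dim, bottleneck_dim, kernel_size,
            padding=tuple(k // 2 for k in kernel_size),
            groups=bottleneck_dim,
        )
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)

    def forward(self, x: torch.Tensor, t: int, h: int, w: int) -> torch.Tensor:
        # x: (batch, t*h*w, dim) token sequence from a frozen image backbone.
        b, n, _ = x.shape
        z = self.down(x)                               # (b, n, c)
        z = z.transpose(1, 2).reshape(b, -1, t, h, w)  # (b, c, t, h, w)
        z = self.act(self.conv(z))                     # spatio-temporal mixing
        z = z.reshape(b, -1, n).transpose(1, 2)        # back to (b, n, c)
        return x + self.up(z)                          # residual connection

# Usage: 8 frames of 14x14 patch tokens from a ViT-like backbone (hypothetical
# shapes chosen for illustration).
adapter = STAdapter(dim=768, bottleneck_dim=128)
tokens = torch.randn(2, 8 * 14 * 14, 768)
out = adapter(tokens, t=8, h=14, w=14)  # same shape as the input tokens
```

Since only these adapter weights would be updated while the image backbone
stays frozen, the per-task parameter cost remains a small fraction of the
full model.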
Extensive experiments on video action recognition tasks show that our
ST-Adapter can match or even outperform the strong full fine-tuning strategy
and state-of-the-art video models, whilst enjoying the advantage of parameter
efficiency. The code and model are available at
https://github.com/linziyi96/st-adapter.