Sampling Salient Clips from Video for Efficient Action Recognition


Bruno Korbar, Du Tran, Lorenzo Torresani

Facebook AI


While many action recognition datasets consist of collections of brief, trimmed videos each containing a relevant action, real-world videos (e.g., on YouTube) exhibit very different properties: they are often several minutes long, with brief relevant clips interleaved with segments of extended duration containing little change. Densely applying an action recognition system to every temporal clip within such videos is prohibitively expensive. Furthermore, as we show in our experiments, this results in suboptimal recognition accuracy, as informative predictions from relevant clips are outnumbered by meaningless classification outputs over long uninformative sections of the video. In this paper we introduce a lightweight "clip-sampling" model that can efficiently identify the most salient temporal clips within a long video. We demonstrate that the computational cost of action recognition on untrimmed videos can be dramatically reduced by invoking recognition only on these most salient clips. Furthermore, we show that this yields significant gains in recognition accuracy compared to analysis of all clips or randomly/uniformly selected clips. On Sports1M, our clip sampling scheme elevates the accuracy of an already state-of-the-art action classifier by 7% and reduces its computational cost by more than 15 times.


Qualitative examples

Here, we give a few examples of videos with their highest- and lowest-ranked clips according to SCSampler: the first three columns contain the three "most salient" clips for a given video, and the following three columns contain the three "least salient" clips.
[Clip grid: three top-ranked and three bottom-ranked clips per video]
Video 1: "Cycling"
Video 2: "Dog Agility"
Video 3: "Beach Volleyball"

Short talk

We are grateful to the ICCV 2019 reviewers and the area chair for selecting our work for an oral presentation at the main conference. If you missed it, please find the pre-recorded talk below:

Evaluate your model on SCSampler sorted clips

Please download the ranking list of clips for each video of the Kinetics-400 validation set, attached here. The CSV contains two columns: the first holds the basename of the video, and the second the presentation timestamp (PTS) of every clip in that video, sorted in descending order by SCSampler score. To evaluate your model on the top-k clips, simply select the first k clips from the list of PTSs. These PTS values are computed using torchvision's VideoReader backend with the "pts" option included. All overlapping 32-frame clips were extracted at 15 fps and subsequently ranked by our best SCSampler model. For details on feature extraction, please check the tooling in the video model zoo.
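As a minimal sketch, selecting the top-k clips per video from the ranking CSV could look like the following. This assumes one row per clip (video basename, then PTS), with each video's rows already sorted by descending SCSampler score as described above; the function name and the k parameter are our own, not part of the released tooling.

```python
import csv
from collections import defaultdict

def top_k_clips(csv_path, k=10):
    """Return {video_basename: [pts, ...]} with the k highest-ranked clips.

    Assumes the CSV has one row per clip, ordered by descending
    SCSampler score within each video, as in the released ranking list.
    """
    ranked = defaultdict(list)
    with open(csv_path, newline="") as f:
        for video, pts in csv.reader(f):
            if len(ranked[video]) < k:  # rows are pre-sorted by score
                ranked[video].append(int(pts))
    return dict(ranked)
```

The returned PTS values can then be passed to your video decoder (e.g., torchvision's VideoReader) to extract exactly those clips for evaluation.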