Video understanding is a challenging problem that requires reasoning about both spatial information (e.g., for objects in a scene, including their locations and relations) and temporal information for actions or events shown in a video. There are many video understanding applications and tasks, such as understanding the semantic content of web videos and robot perception. However, existing works, such as ViViT and TimeSFormer, densely process the video and require significant compute, especially as model size, video length, and resolution increase.
In "Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning", to be presented at CVPR 2023, we introduce a simple technique that turns a Vision Transformer (ViT) model image encoder into an efficient video backbone using sparse video tubes (learnable visual representations of samples from the video) to reduce the model's compute needs. This approach can seamlessly process both images and videos, which allows it to leverage both image and video data sources during training. This training further enables our sparse tubes ViT model to coalesce image and video backbones together to serve a dual role as either an image or video backbone (or both), depending on the input. We demonstrate that this model is scalable, can be adapted to large pre-trained ViTs without requiring full fine-tuning, and achieves state-of-the-art results across many video classification benchmarks.
Using sparse video tubes to sample a video, combined with a standard ViT encoder, leads to an efficient visual representation that can be seamlessly shared with image inputs.
Building a joint image-video backbone
Our sparse tube ViT uses a standard ViT backbone, consisting of a stack of Transformer layers, that processes video information. Previous methods, such as ViViT, densely tokenize the video and then apply factorized attention, i.e., the attention weights for each token are computed separately for the temporal and spatial dimensions. In the standard ViT architecture, self-attention is computed over the whole token sequence. When using videos as input, token sequences become quite long, which can make this computation slow. Instead, in the method we propose, the video is sparsely sampled using video tubes, which are 3D learnable visual representations of various shapes and sizes (described in more detail below) from the video. These tubes are used to sparsely sample the video using a large temporal stride, i.e., a tube kernel is only applied to a few locations in the video, rather than every pixel.
By sparsely sampling the video tubes, we can use the same global self-attention module, rather than factorized attention like ViViT. We experimentally show that the addition of factorized attention layers can hurt performance due to the uninitialized weights. This single stack of transformer layers in the ViT backbone also allows better sharing of the weights and improves performance. Sparse video tube sampling is done by using a large spatial and temporal stride that selects tokens on a fixed grid. The large stride reduces the number of tokens in the full network, while still capturing both spatial and temporal information and enabling the efficient processing of all tokens.
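To make the token savings concrete, here is a back-of-the-envelope comparison of dense tokenization versus sparse tube sampling. The input size, patch sizes, and strides below are hypothetical placeholders chosen for illustration, not the exact values used in the paper:

```python
# Hypothetical input: 64 frames at 224x224 resolution (illustrative values only).
frames, height, width = 64, 224, 224

# Dense tokenization (ViViT-style): 2x16x16 tubelets covering every pixel.
dense_tokens = (frames // 2) * (height // 16) * (width // 16)

# Sparse tube sampling: the same 16x16 spatial kernel applied with a large
# temporal stride (every 16th frame), plus fewer, coarser spatiotemporal tubes.
sparse_spatial_tokens = (frames // 16) * (height // 16) * (width // 16)
sparse_spatiotemporal_tokens = (frames // 16) * (height // 32) * (width // 32)
sparse_tokens = sparse_spatial_tokens + sparse_spatiotemporal_tokens

print(f"dense:  {dense_tokens} tokens")   # 6272 tokens
print(f"sparse: {sparse_tokens} tokens")  # 980 tokens
```

Because self-attention cost grows quadratically with sequence length, the roughly 6x reduction in tokens in this toy example translates into a much larger reduction in attention compute.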
Sparse video tubes
Video tubes are 3D grid-based cuboids that can have different shapes or categories and capture different information, with strides and starting locations that can overlap. In the model, we use three distinct tube shapes that capture: (1) only spatial information (resulting in a set of 2D image patches), (2) long temporal information (over a small spatial area), and (3) both spatial and temporal information equally. Tubes that capture only spatial information can be applied to both image and video inputs. Tubes that capture long temporal information or both temporal and spatial information equally are only applied to video inputs. Depending on the input video size, the three tube shapes are applied to the model multiple times to generate tokens.
A fixed position embedding, which captures the global location of each tube (including any strides, offsets, etc.) relative to all the other tubes, is applied to the video tubes. Different from the previously used learned position embeddings, this fixed one better enables sparse, overlapping sampling. Capturing the global location of each tube helps the model know where it came from, which is especially helpful when tubes overlap or are sampled from distant video locations. Next, the tube features are concatenated together to form a set of N tokens. These tokens are processed by a standard ViT encoder. Finally, we apply attention pooling to compress all the tokens into a single representation, which is input to a fully connected (FC) layer to make the classification (e.g., playing soccer, swimming, etc.).
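One way to realize the three tube shapes is with strided 3D convolutions, one per shape, applied sparsely to the input video. This is a minimal sketch; the kernel sizes, strides, and token dimension below are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

video = torch.randn(1, 3, 32, 224, 224)  # (batch, channels, frames, height, width)
dim = 768                                # token dimension of the ViT backbone

# (1) Spatial-only tube: a 1x16x16 kernel, i.e., standard 2D image patches.
#     This projection can be shared with image inputs.
spatial_tube = nn.Conv3d(3, dim, kernel_size=(1, 16, 16), stride=(16, 16, 16))

# (2) Long-temporal tube: many frames over a small spatial area.
temporal_tube = nn.Conv3d(3, dim, kernel_size=(16, 4, 4), stride=(16, 32, 32))

# (3) Spatiotemporal tube: spatial and temporal extent balanced.
st_tube = nn.Conv3d(3, dim, kernel_size=(8, 8, 8), stride=(16, 32, 32))

def tube_tokens(conv, x):
    """Apply a tube kernel sparsely and flatten the output grid into tokens."""
    feats = conv(x)                          # (batch, dim, T', H', W')
    return feats.flatten(2).transpose(1, 2)  # (batch, T'*H'*W', dim)

tokens = torch.cat([tube_tokens(t, video)
                    for t in (spatial_tube, temporal_tube, st_tube)], dim=1)
print(tokens.shape)  # torch.Size([1, 588, 768])
```

Note that the large strides are what keep the total token count small, even though the three tube shapes overlap on the same video.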
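The remaining pipeline can be sketched as follows, assuming a sinusoidal fixed position embedding indexed by each tube's global (time, y, x) center and a simple attention-pooling head; the helper names, embedding formula, and dimensions are illustrative stand-ins, not the paper's exact implementation:

```python
import math
import torch
import torch.nn as nn

dim, num_classes = 768, 400  # illustrative sizes (e.g., Kinetics-400 has 400 classes)

def fixed_position_embedding(centers, dim):
    """Sinusoidal embedding of each tube's global (t, y, x) center; a hypothetical
    stand-in for the fixed position embedding described above."""
    per_axis = dim // 3
    freqs = torch.exp(torch.arange(0, per_axis, 2) * (-math.log(10000.0) / per_axis))
    parts = []
    for axis in range(3):                              # t, y, x
        angles = centers[:, axis:axis + 1] * freqs     # (num_tokens, per_axis // 2)
        parts += [torch.sin(angles), torch.cos(angles)]
    emb = torch.cat(parts, dim=-1)
    return nn.functional.pad(emb, (0, dim - emb.shape[-1]))  # pad to full dim

class AttentionPoolHead(nn.Module):
    """Attention pooling into a single token, followed by a linear classifier."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, tokens):                         # tokens: (batch, N, dim)
        pooled, _ = self.attn(self.query.expand(tokens.shape[0], -1, -1), tokens, tokens)
        return self.fc(pooled.squeeze(1))              # (batch, num_classes)

# Tokens from the tube projections above; random placeholders stand in for the
# real tube features and their global (t, y, x) centers.
tokens = torch.randn(1, 588, dim)
centers = torch.rand(588, 3)
tokens = tokens + fixed_position_embedding(centers, dim)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True), num_layers=12)
logits = AttentionPoolHead(dim, num_classes)(encoder(tokens))
print(logits.shape)  # torch.Size([1, 400])
```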
Scaling video ViTs
The process of building video backbones is computationally intensive, but our sparse tube ViT model enables computationally efficient scaling of video models, leveraging previously trained image backbones. Since image backbones can be adapted into video backbones, large image backbones can be turned into large video backbones. More specifically, one can transfer the learned video feature representations from a small tube ViT to a large pre-trained image ViT and train the resulting model with video data for only a few steps, as opposed to a full training from scratch.
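The recipe can be sketched as below. This is a simplified, self-contained illustration under stated assumptions: the frozen transformer stands in for a large pre-trained image ViT, fresh tube projections stand in for the transferred representations from a small tube ViT, and dummy random batches replace a real video dataset. It is not the paper's exact transfer procedure:

```python
import torch
import torch.nn as nn

dim, num_classes = 768, 400

# Stand-in for a large pre-trained image ViT encoder; its weights stay frozen.
pretrained_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True), num_layers=12)
for p in pretrained_encoder.parameters():
    p.requires_grad = False

# Lightweight, trainable parts: the tube projection and a new classifier head.
tube_projection = nn.Conv3d(3, dim, kernel_size=(8, 8, 8), stride=(16, 32, 32))
classifier = nn.Linear(dim, num_classes)
optimizer = torch.optim.AdamW(
    list(tube_projection.parameters()) + list(classifier.parameters()), lr=1e-4)

# Train on video data for only a few steps (dummy batches here), rather than
# training the full video model from scratch.
for step in range(10):
    video = torch.randn(2, 3, 32, 224, 224)
    labels = torch.randint(0, num_classes, (2,))
    tokens = tube_projection(video).flatten(2).transpose(1, 2)   # (2, 98, dim)
    logits = classifier(pretrained_encoder(tokens).mean(dim=1))  # (2, num_classes)
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```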
Results
We evaluate our sparse tube ViT approach using the Kinetics-400 (shown below), Kinetics-600, and Kinetics-700 datasets and compare its performance to a long list of prior methods. We find that our approach outperforms all prior methods. Importantly, it outperforms all state-of-the-art methods trained jointly on image+video datasets.
Performance compared to several prior works on the popular Kinetics-400 video dataset. Our sparse tube ViT outperforms state-of-the-art methods.
Furthermore, we test our sparse tube ViT model on the Something-Something V2 dataset, which is commonly used to evaluate more dynamic activities, and also report that it outperforms all prior state-of-the-art approaches.
Performance on the Something-Something V2 video dataset.
Visualizing some learned kernels
It is interesting to understand what kind of rudimentary features are being learned by the proposed model. We visualize them below, showing both the 2D patches, which are shared for both images and videos, and the video tubes. These visualizations show the 2D or 3D information being captured by the projection layer. For example, in the 2D patches, various common features, like edges and colors, are detected, while the 3D tubes capture basic shapes and how they may change over time.
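A visualization of this kind can be produced directly from the projection-layer weights. The sketch below assumes the hypothetical `spatial_tube` convolution from the earlier example; with untrained weights the plots show noise, whereas a trained model exhibits the edge- and color-like patterns described above:

```python
import torch.nn as nn
import matplotlib.pyplot as plt

# Hypothetical 2D-patch tube projection (untrained here, for illustration only).
spatial_tube = nn.Conv3d(3, 768, kernel_size=(1, 16, 16), stride=(16, 16, 16))
weights = spatial_tube.weight.detach()                # (768, 3, 1, 16, 16)

fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i, ax in enumerate(axes):
    patch = weights[i, :, 0]                          # one (3, 16, 16) kernel
    patch = (patch - patch.min()) / (patch.max() - patch.min())  # scale to [0, 1]
    ax.imshow(patch.permute(1, 2, 0).numpy())
    ax.axis("off")
plt.show()
```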
Conclusions
We have presented a new sparse tube ViT, which can turn a ViT encoder into an efficient video model and can seamlessly work with both image and video inputs. We also showed that large video encoders can be bootstrapped from small video encoders and image-only ViTs. Our approach outperforms prior methods across several popular video understanding benchmarks. We believe that this simple representation can facilitate much more efficient learning with input videos, seamlessly incorporate either image or video inputs, and effectively eliminate the bifurcation of image and video models for future multimodal understanding.
Acknowledgements
This work is carried out by AJ Piergiovanni, Weicheng Kuo, and Anelia Angelova, who are now at Google DeepMind. We thank Abhijit Ogale, Luowei Zhou, Claire Cui, and our colleagues in Google Research for their helpful discussions, comments, and support.