Temporal DINO: A Self-supervised Video Strategy to Enhance Action Prediction

Izzeddin Teeti*, Rongali Sai Bhargav, Vivek Singh, Andrew Bradley, Biplab Banerjee, Fabio Cuzzolin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceedings published in a book › peer-review

Abstract

The emerging field of action prediction - the task of forecasting actions in a video sequence - plays a vital role in various computer vision applications such as autonomous driving, activity analysis and human-computer interaction. Despite significant advancements, accurately predicting future actions remains a challenging problem due to the high dimensionality, complex dynamics and uncertainties inherent in video data. Traditional supervised approaches require large amounts of labelled data, which is expensive and time-consuming to obtain. This paper introduces a novel self-supervised video strategy for enhancing action prediction inspired by DINO (self-distillation with no labels). The approach, named Temporal-DINO, employs two models: a 'student', which processes past frames, and a 'teacher', which processes both past and future frames and thus has a broader temporal context. During training, the teacher guides the student to learn future context from past frames alone. The strategy is evaluated on the ROAD dataset for the action prediction downstream task using 3D-ResNet, Transformer, and LSTM architectures. The experimental results show significant improvements in prediction performance across these architectures, with our method achieving an average improvement of 9.9 Precision Points (PP), which highlights its effectiveness in enhancing the backbones' capability of capturing long-term dependencies. Furthermore, our approach is efficient in terms of both the pretraining dataset size and the number of epochs required. This method overcomes limitations present in other approaches, including the consideration of various backbone architectures, addressing multiple prediction horizons, reducing reliance on hand-crafted augmentations, and streamlining the pretraining process into a single stage. These findings highlight the potential of our approach in diverse video-based tasks such as activity recognition, motion planning, and scene understanding.
Code can be found at https://github.com/IzzeddinTeeti/ssl-pred.
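The teacher-student setup described in the abstract can be sketched as a DINO-style distillation objective: the teacher's output distribution (computed from past and future frames) supervises the student's distribution (computed from past frames only). The NumPy sketch below is illustrative, under stated assumptions; the function names, temperatures, and shapes are hypothetical and do not reproduce the authors' implementation (see the repository linked above for that).

```python
import numpy as np

def softmax(x, temp):
    """Temperature-scaled softmax over the last axis."""
    z = x / temp
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temporal_distillation_loss(student_logits, teacher_logits,
                               student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between the teacher's targets (past + future frames)
    and the student's predictions (past frames only). In DINO-style
    training the teacher targets carry no gradient and the teacher's
    weights are an exponential moving average of the student's."""
    targets = softmax(teacher_logits, teacher_temp)
    log_preds = np.log(softmax(student_logits, student_temp))
    return float(-(targets * log_preds).sum(axis=-1).mean())

# Toy example: 4 clips, 8-dimensional projection-head outputs.
rng = np.random.default_rng(0)
student_out = rng.normal(size=(4, 8))  # student saw past frames only
teacher_out = rng.normal(size=(4, 8))  # teacher saw past + future frames
loss = temporal_distillation_loss(student_out, teacher_out)
```

Minimising this loss pushes the student, which never sees the future, to match the representation of a teacher that does, which is how the future context is distilled into the student's features.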

Original language: English
Title of host publication: Proceedings - 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3273-3283
Number of pages: 11
ISBN (Electronic): 9798350307443
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023 - Paris, France
Duration: 2 Oct 2023 - 6 Oct 2023

Publication series

Name: Proceedings - 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023

Conference

Conference: 2023 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2023
Country/Territory: France
City: Paris
Period: 2/10/23 - 6/10/23

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
