Most video captioning systems describe only a single event in a short video, yet natural videos often contain many events. We therefore tackle dense video captioning, which requires temporally localizing and captioning all events in untrimmed, minutes-long videos 🎞️.
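Roughly, the expected output for one video is a list of timed captions. A made-up example (times and text are illustrative, not from the paper):

```python
# Dense video captioning output for one untrimmed video: every event gets
# temporal boundaries plus a caption (values below are invented placeholders).
dense_captions = [
    {"start_sec": 3.2,   "end_sec": 18.7,  "caption": "A person chops onions on a cutting board."},
    {"start_sec": 18.7,  "end_sec": 55.0,  "caption": "The onions are fried in a pan with oil."},
    {"start_sec": 120.4, "end_sec": 171.9, "caption": "The finished dish is plated and served."},
]
```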
2/5
Avoiding any task-specific design, Vid2Seq predicts all event captions and boundaries by simply generating a single sequence of tokens from visual and speech inputs. Special time tokens are interleaved with the text sentences to temporally ground them in the video ⌛️.
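A minimal sketch (our own illustration, not the released code) of how such a target sequence can be built, assuming timestamps are quantized into a fixed number of relative time tokens and each event is written as <start> <end> caption:

```python
def time_token(t_sec: float, duration_sec: float, num_bins: int = 100) -> str:
    """Quantize an absolute timestamp into a relative time token <time_k>."""
    k = min(int(t_sec / duration_sec * num_bins), num_bins - 1)
    return f"<time_{k}>"

def build_target_sequence(events, duration_sec, num_bins=100):
    """events: list of (start_sec, end_sec, caption) tuples for one video."""
    parts = []
    for start, end, caption in sorted(events, key=lambda e: e[0]):
        parts += [time_token(start, duration_sec, num_bins),
                  time_token(end, duration_sec, num_bins),
                  caption]
    return " ".join(parts)

events = [(3.2, 18.7, "A person chops onions."),
          (18.7, 55.0, "The onions are fried in a pan.")]
print(build_target_sequence(events, duration_sec=180.0))
# <time_1> <time_10> A person chops onions. <time_10> <time_30> The onions are fried in a pan.
```

With one shared vocabulary of text and time tokens, a standard seq2seq decoder can handle localization and captioning jointly.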
3/5
Vid2Seq is pretrained on millions of unlabeled narrated videos by casting transcribed speech sentences 💬 and their timestamps ⏲️ as pseudo event captions and boundaries. Given visual inputs, the model learns to generate the speech sequence and to denoise a corrupted version of it.
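A rough sketch of how these two pretraining objectives could be set up (the span-corruption details, mask ratio, and helper names are our assumptions, not the paper's exact recipe):

```python
import random

def speech_to_sequence(asr_segments, duration_sec, num_bins=100):
    """Treat ASR sentences + timestamps as pseudo events, serialized with time tokens."""
    tok = lambda t: f"<time_{min(int(t / duration_sec * num_bins), num_bins - 1)}>"
    return " ".join(f"{tok(s['start'])} {tok(s['end'])} {s['text']}" for s in asr_segments)

def corrupt(tokens, mask_prob=0.05, span=5, seed=0):
    """T5-style span corruption: replace random token spans with sentinel tokens.
    Returns (corrupted input sequence, denoising target sequence)."""
    rng = random.Random(seed)
    corrupted, target, i, sid = [], [], 0, 0
    while i < len(tokens):
        if rng.random() < mask_prob:
            corrupted.append(f"<extra_id_{sid}>")
            target.append(f"<extra_id_{sid}> " + " ".join(tokens[i:i + span]))
            sid += 1
            i += span
        else:
            corrupted.append(tokens[i])
            i += 1
    return " ".join(corrupted), " ".join(target)

# Pseudo labels from an (invented) ASR transcript of a narrated video.
asr = [{"start": 3.2, "end": 18.7, "text": "so first we chop the onions"},
       {"start": 18.7, "end": 55.0, "text": "then fry them in a little oil"}]
speech_seq = speech_to_sequence(asr, duration_sec=180.0)   # target of the generative objective
noisy, denoising_target = corrupt(speech_seq.split())      # input/target of the denoising objective
```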
4/5
After finetuning, Vid2Seq achieves SoTA results on dense video captioning and video paragraph captioning benchmarks (YouCook2, ViTT, ActivityNet Captions). Vid2Seq also generalizes well to the few-shot setting and the standard video clip captioning task (MSR-VTT, MSVD).
5/5