Abstract
Generating grammatically and semantically correct captions is a challenging task in video
captioning. Captions generated by existing methods are either produced word by word,
ignoring grammatical structure, or miss key information from the input videos.
To address these issues, we
introduce a novel global-local fusion network with a Global-Local Fusion Block
(GLFB) that encodes and fuses features from different part-of-speech (POS)
components with visual-spatial features. We use novel combinations of POS
components, 'determiner + subject', 'auxiliary verb', 'verb', and
'determiner + object', to supervise the POS blocks Det + Subject, Aux Verb,
Verb, and Det + Object, respectively. The global-local fusion network, together
with the POS blocks, helps align the visual features with the language
description to generate grammatically and semantically correct captions.
Extensive qualitative and quantitative experiments on the benchmark MSVD and MSRVTT
datasets demonstrate that the proposed approach generates more grammatically
and semantically correct captions than existing methods, achieving
a new state of the art. Ablations on the POS blocks and the GLFB demonstrate
the impact of each contribution on the proposed method.