Abstract
Medical Image Computing and Computer-Assisted Intervention 2023 (MICCAI 2023)

The problem of missing modalities is both critical and non-trivial to handle in multi-modal models. It is common in multi-modal tasks for certain modalities to contribute more than others, and if those important modalities are missing, model performance drops significantly. This fact remains unexplored by current multi-modal approaches, which recover the representation of missing modalities by feature reconstruction or blind feature aggregation from other modalities, rather than by extracting useful information from the best-performing modalities. In this paper, we propose a
Learnable Cross-modal Knowledge Distillation (LCKD) model to adaptively
identify important modalities and distil knowledge from them to other
modalities, addressing the missing modality issue from a cross-modal
perspective. Our approach introduces a teacher election procedure to select the most
``qualified'' teachers based on their single modality performance on certain
tasks. Then, cross-modal knowledge distillation is performed between teacher
and student modalities for each task to push the model parameters to a point
that is beneficial for all tasks. Hence, even if the teacher modalities for
certain tasks are missing during testing, the available student modalities can
accomplish the task well enough based on the knowledge learned from their
automatically elected teacher modalities. Experiments on the Brain Tumour
Segmentation Dataset 2018 (BraTS2018) show that LCKD outperforms other methods
by a considerable margin, improving the state-of-the-art segmentation Dice
score by 3.61% for enhancing tumour, 5.99% for tumour core, and 3.76% for
whole tumour.
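The two stages described above — electing a teacher modality per task from single-modality performance, then distilling its soft outputs into the student modalities — can be illustrated with a minimal sketch. This is not the paper's implementation: the modality names, validation scores, and the temperature-scaled KL distillation loss are illustrative assumptions.

```python
import math

# Hypothetical per-modality validation Dice scores for each task
# (modalities and numbers are illustrative, not taken from the paper).
val_dice = {
    "enhancing_tumour": {"T1": 0.55, "T1ce": 0.78, "T2": 0.60, "FLAIR": 0.58},
    "tumour_core":      {"T1": 0.62, "T1ce": 0.81, "T2": 0.70, "FLAIR": 0.68},
    "whole_tumour":     {"T1": 0.70, "T1ce": 0.72, "T2": 0.80, "FLAIR": 0.85},
}

def elect_teachers(scores):
    """Teacher election: for each task, pick the modality with the best
    single-modality validation score."""
    return {task: max(per_mod, key=per_mod.get) for task, per_mod in scores.items()}

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """A generic soft-label distillation loss: KL(teacher || student)
    on temperature-scaled softmax distributions."""
    def softmax(xs, t):
        m = max(xs)
        exps = [math.exp((x - m) / t) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Per task, distil from the elected teacher into each remaining (student)
# modality; during testing, students can then cover for a missing teacher.
teachers = elect_teachers(val_dice)
```

During training, one would add `kd_loss` terms (teacher modality to each student modality, per task) to the task losses, so that student modalities absorb the teacher's knowledge and remain useful when the teacher modality is absent at test time.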