Abstract
The training of deep learning models generally requires a large amount of
annotated data for effective convergence and generalisation. However, obtaining
high-quality annotations is a laborious and expensive process because it requires
expert radiologists for the labelling task. The study of semi-supervised
learning in medical image analysis is therefore of crucial importance given that it
is much less expensive to obtain unlabelled images than to acquire images
labelled by expert radiologists. Essentially, semi-supervised methods leverage
large sets of unlabelled data to enable better training convergence and
generalisation than using only the small set of labelled images. In this paper,
we propose Self-supervised Mean Teacher for Semi-supervised (S$^2$MTS$^2$)
learning that combines self-supervised mean-teacher pre-training with
semi-supervised fine-tuning. The main innovation of S$^2$MTS$^2$ is the
self-supervised mean-teacher pre-training based on joint contrastive
learning, which uses an infinite number of pairs of positive query and key
features to improve the mean-teacher representation. The model is then
fine-tuned using the exponential moving average teacher framework trained with
semi-supervised learning. We validate S$^2$MTS$^2$ on the multi-label
classification problems on Chest X-ray14 and CheXpert, and the multi-class
classification problem on ISIC2018, where we show that it outperforms previous
state-of-the-art semi-supervised learning methods by a large margin.
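As context for the exponential moving average (EMA) teacher mentioned above, the sketch below illustrates the standard mean-teacher weight update, where the teacher parameters track a slowly moving average of the student parameters. This is a minimal, hedged illustration assuming PyTorch-style modules; the function name `ema_update` and the momentum value are illustrative and not taken from the paper.

\begin{verbatim}
import copy
import torch

def ema_update(teacher, student, alpha=0.999):
    """Standard mean-teacher EMA update:
    teacher <- alpha * teacher + (1 - alpha) * student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(),
                                    student.parameters()):
            t_param.data.mul_(alpha).add_(s_param.data, alpha=1.0 - alpha)

# Illustrative usage: the teacher starts as a copy of the student and is
# updated after every student optimisation step.
student = torch.nn.Linear(128, 14)   # e.g. 14 labels as in Chest X-ray14
teacher = copy.deepcopy(student)
ema_update(teacher, student, alpha=0.999)
\end{verbatim}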