Abstract
Semi-supervised 3D medical image segmentation aims to achieve accurate
segmentation using a small amount of labelled data and a large amount of
unlabelled data. The main challenge in designing semi-supervised learning
methods lies in the effective use of the unlabelled data for training. A
promising solution is to enforce consistent predictions across different views
of the data, but the efficacy of this consistency learning strategy depends on
the accuracy of the pseudo-labels generated by the model. In
this paper, we introduce a new methodology to produce high-quality
pseudo-labels for a consistency learning strategy to address semi-supervised 3D
medical image segmentation. The methodology makes three important contributions.
The first contribution is the Cooperative Rectification Learning Network (CRLN),
which learns multiple prototypes per class that serve as external knowledge
priors to adaptively rectify pseudo-labels at the voxel level. The second
contribution is the Dynamic Interaction Module (DIM), which facilitates
pairwise and cross-class interactions between prototypes and multi-resolution
image features to produce accurate voxel-level clues for
pseudo-label rectification. The third contribution is Cooperative Positive
Supervision (CPS), which optimises uncertain representations to align with
unassertive representations of their class distributions, improving the model's
accuracy in classifying uncertain regions. Extensive experiments on three
public 3D medical segmentation datasets demonstrate the effectiveness and
superiority of our semi-supervised learning method.