Abstract
Deep neural networks currently deliver promising results for microscopy image
cell segmentation, but they require large-scale labelled databases, whose
annotation is a costly and time-consuming process. In this work, we relax the labelling
requirement by combining self-supervised with semi-supervised learning. We
propose predicting edge-based maps as a self-supervised task for training on
the unlabelled images, and combine it with supervised training on a small
number of labelled images to learn the segmentation task. In our
experiments, we evaluate on a few-shot microscopy image cell segmentation
benchmark and show that only a small number of annotated images, e.g. 10% of
the original training set, is enough for our approach to reach performance
similar to that obtained with the fully annotated databases in the 1- to
10-shot settings. Our code and trained models are made publicly available.