Abstract
Automatic cell segmentation in microscopy images works well when deep neural
networks are trained with full supervision. Collecting and
annotating images, though, is not a sustainable solution for every new
microscopy database and cell type. Instead, we assume access to a plethora of
annotated image data sets from different domains (sources) and a limited
number of annotated image data sets from the domain of interest (target),
where each domain corresponds not only to a different image appearance but
also to a different type of cell segmentation problem. We pose this problem as
meta-learning where the goal is to learn a generic and adaptable few-shot
learning model from the available source domain data sets and cell segmentation
tasks. The model can afterwards be fine-tuned on the few annotated images of
the target domain, which has a different image appearance and a different cell
type. During meta-learning training, we propose combining three objective
functions: segmenting the cells, moving the segmentation results away from the
classification boundary using cross-domain tasks, and learning an invariant
representation across tasks of the source domains. Our experiments
on five public databases show promising results from 1- to 10-shot
meta-learning using standard segmentation neural network architectures.
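To make the combined objective concrete, below is a minimal sketch in
PyTorch-style Python, assuming a segmentation model that returns both logits
and intermediate features; all names and weighting parameters (seg loss,
entropy-based margin term, feature-alignment invariance term, lambda1,
lambda2) are illustrative assumptions, not the paper's actual implementation.

    # Minimal sketch of the three-term meta-training objective described above.
    # Assumption: model(imgs) returns (logits [N,C,H,W], features [N,F,H,W]).
    import torch
    import torch.nn.functional as F

    def meta_training_loss(model, task_batch, cross_domain_batch,
                           lambda1=0.1, lambda2=0.1):
        # 1) Supervised segmentation loss on the task's own images.
        imgs_a, masks_a = task_batch
        logits_a, feats_a = model(imgs_a)
        seg = F.cross_entropy(logits_a, masks_a)

        # 2) Margin term on a cross-domain task: push predictions away from
        #    the classification boundary, here via low prediction entropy
        #    (one plausible instantiation, stated as an assumption).
        imgs_b, _ = cross_domain_batch
        logits_b, feats_b = model(imgs_b)
        probs_b = logits_b.softmax(dim=1)
        margin = -(probs_b * probs_b.clamp_min(1e-8).log()).sum(dim=1).mean()

        # 3) Invariance term: align per-channel feature statistics between
        #    tasks drawn from different source domains.
        invariance = F.mse_loss(feats_a.mean(dim=(0, 2, 3)),
                                feats_b.mean(dim=(0, 2, 3)))

        return seg + lambda1 * margin + lambda2 * invariance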