Abstract
Highly imbalanced datasets are ubiquitous in medical image classification
problems. In such problems, rare classes associated with less prevalent
diseases are often severely under-represented in labeled databases,
typically resulting in poor performance of machine learning algorithms due to
overfitting in the learning process. In this paper, we propose a novel
mechanism for sampling training data based on the popular MixUp regularization
technique, which we refer to as Balanced-MixUp. In short, Balanced-MixUp
simultaneously performs regular (i.e., instance-based) and balanced (i.e.,
class-based) sampling of the training data. The resulting two sets of samples
are then mixed-up to create a more balanced training distribution from which a
neural network can effectively learn without heavily under-fitting
the minority classes. We experiment with a highly imbalanced dataset of retinal
images (55K samples, 5 classes) and a long-tail dataset of gastro-intestinal
video frames (10K images, 23 classes), using two CNNs of varying representation
capabilities. Experimental results demonstrate that Balanced-MixUp
outperforms conventional sampling schemes and loss functions specifically
designed to deal with imbalanced data. Code is released at
https://github.com/agaldran/balanced_mixup .
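To make the sampling mechanism concrete, here is a minimal PyTorch sketch of the idea described above. It is not the authors' released code: `dataset`, `labels`, `model`, and `alpha` are placeholder names, and the Beta(alpha, 1) mixing distribution, along with which batch receives the larger weight, is an assumed parameterization; see the linked repository for the reference implementation.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, WeightedRandomSampler

# Placeholders (assumptions): `dataset` yields (image, label) pairs,
# `labels` holds the training labels, `model` is a CNN classifier, and
# `alpha` parameterizes the (assumed) Beta(alpha, 1) mixing distribution.

labels_t = torch.as_tensor(labels)

# Regular (instance-based) sampling: every training example equally likely.
instance_loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Balanced (class-based) sampling: every class equally likely, obtained by
# weighting each example inversely to its class frequency.
class_counts = torch.bincount(labels_t).float()
weights = 1.0 / class_counts[labels_t]
balanced_loader = DataLoader(
    dataset, batch_size=32,
    sampler=WeightedRandomSampler(weights, num_samples=len(labels_t)),
)

for (x_i, y_i), (x_b, y_b) in zip(instance_loader, balanced_loader):
    # Mix the two batches. For small alpha, lam ~ Beta(alpha, 1) concentrates
    # near 0, so the instance-sampled batch dominates and the class-balanced
    # batch acts as a mild corrective (assumed arrangement).
    lam = torch.distributions.Beta(alpha, 1.0).sample()
    x = (1.0 - lam) * x_i + lam * x_b
    logits = model(x)
    # Standard MixUp loss: convex combination of the two cross-entropies
    # (optimizer step omitted for brevity).
    loss = (1.0 - lam) * F.cross_entropy(logits, y_i) \
           + lam * F.cross_entropy(logits, y_b)
    loss.backward()
```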