Abstract
Delivering meaningful uncertainty estimates is essential for the successful
deployment of machine learning models in clinical practice. A central
aspect of uncertainty quantification is the ability of a model to return
predictions that are well-aligned with the actual probability of the model
being correct, also known as model calibration. Although many methods have been
proposed to improve calibration, no technique can match the simple but
expensive approach of training an ensemble of deep neural networks. In this
paper, we introduce a form of simplified ensembling that bypasses the costly
training and inference of deep ensembles while retaining their calibration
capabilities. The idea is to replace the common linear classifier at the end of
a network with a set of heads that are supervised with different loss functions
so as to enforce diversity in their predictions. Specifically, each head is
trained to minimize a weighted Cross-Entropy loss, but with weights that differ
across branches. We show that the resulting averaged predictions can
achieve excellent calibration without sacrificing accuracy on two challenging
datasets for histopathological and endoscopic image classification. Our
experiments indicate that Multi-Head Multi-Loss classifiers are inherently
well-calibrated, outperforming other recent calibration techniques and even
challenging Deep Ensembles' performance. Code to reproduce our experiments can
be found at \url{https://github.com/agaldran/mhml_calibration}.
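To make the idea concrete, below is a minimal PyTorch-style sketch of a
Multi-Head Multi-Loss classifier. This is an illustrative reconstruction, not
the implementation from the repository above: the backbone, the feature
dimension, and the per-head class-weight vectors are placeholder assumptions.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadMultiLoss(nn.Module):
    """Several linear heads over a shared backbone; each head is trained
    with a differently class-weighted Cross-Entropy loss, and the averaged
    softmax outputs form the final prediction."""
    def __init__(self, backbone, feat_dim, n_classes, head_class_weights):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, n_classes) for _ in head_class_weights)
        # one (hypothetical) class-weight vector per head to diversify losses
        self.head_class_weights = [
            torch.as_tensor(w, dtype=torch.float32)
            for w in head_class_weights]

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]  # per-head logits

    def loss(self, logits_list, target):
        # sum of per-head weighted Cross-Entropy losses
        return sum(
            F.cross_entropy(logits, target, weight=w.to(logits.device))
            for logits, w in zip(logits_list, self.head_class_weights))

    @torch.no_grad()
    def predict_proba(self, x):
        # inference: average of the per-head softmax probabilities
        probs = torch.stack([F.softmax(l, dim=1) for l in self.forward(x)])
        return probs.mean(dim=0)
\end{verbatim}

For instance, with three heads and four classes, each weight vector could
down-weight a different subset of classes (e.g. [[1,2,2,2], [2,1,2,2],
[2,2,1,2]]), so that each branch is pushed toward slightly different errors
while the averaged output remains a valid probability distribution.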