Abstract
Wearable devices capable of acquiring photoplethysmography (PPG) signals are increasingly used to monitor patient health outside of typical clinical settings. PPG signals encode information about relative changes in blood volume and, in principle, can be used to assess various aspects of cardiac health non-invasively, e.g. to detect atrial fibrillation (AF). Machine learning-based techniques have clear potential to automate diagnostic protocols for AF, and deep networks have proven particularly effective. However, these models are prone to learning biases and lack interpretability, leaving considerable risk of poor generalisability and misdiagnosis. To make these models suitable for routine use in clinical workflows, the uncertainty of a model's output should be quantified to establish whether it can reliably inform diagnoses. Here, we describe the use of Monte Carlo Dropout to estimate the uncertainties of deep learning models trained to predict AF from PPG time series.
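The Monte Carlo Dropout idea referenced above can be sketched as follows: dropout is kept active at inference time, the model is run T times on the same input, and the spread of the stochastic predictions serves as an uncertainty estimate. This is a minimal NumPy illustration with a hypothetical one-hidden-layer classifier standing in for a trained deep PPG model; the weights, layer sizes, and input are placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in weights for a small classifier; a real AF model
# would be a trained deep network operating on PPG time-series windows.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept ACTIVE (the core of MC Dropout)."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return sigmoid(h @ W2)                   # predicted P(AF) for the window

def mc_dropout_predict(x, T=200):
    """Run T stochastic passes; return predictive mean and std (uncertainty)."""
    preds = np.stack([stochastic_forward(x) for _ in range(T)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 16))                 # stand-in for a PPG feature window
mean, std = mc_dropout_predict(x)
```

A large standard deviation across the T passes flags an input on which the model is unreliable, which is precisely the signal a clinical workflow could use to defer to a human reader rather than trust the automated AF prediction.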