Abstract
Recent progress in machine learning research is gradually shifting its focus towards human-AI cooperation, which exploits both the reliability of human experts and the efficiency of AI models. One promising approach to human-AI cooperation is learning to defer (L2D), where the system analyses the input and decides whether to make its own prediction or to defer to human experts. Although L2D has demonstrated state-of-the-art performance, its standard setting entails a severe limitation: every human expert must annotate the entire training dataset of interest, resulting in a slow and expensive annotation process that can in turn constrain the size and diversity of the training set. Moreover, current L2D methods offer no principled way to control the workload distribution between human experts and the AI classifier, which is important for optimising resource allocation. We therefore propose a new probabilistic modelling approach inspired by mixture-of-experts, in which the Expectation-Maximisation algorithm is leveraged to handle missing expert annotations. Furthermore, we introduce a constraint, solved efficiently during the E-step, that controls the workload distribution between human experts and the AI classifier. Empirical evaluations on synthetic and real-world datasets show that our probabilistic approach performs competitively with, or even surpasses, previously proposed methods on the same benchmarks.
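To make the abstract's idea of a capacity-constrained E-step concrete, the following is a minimal sketch, not the paper's actual method: it uses synthetic data, hypothetical names (clf_probs, expert_labels, expert_acc, max_load), treats missing annotations by falling back to an estimated expert accuracy, and replaces the paper's efficient constrained E-step solver with a simple greedy capacity assignment for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N samples, K classes, M human experts plus one AI classifier.
N, K, M = 200, 3, 2
y = rng.integers(0, K, size=N)                    # ground-truth labels
clf_probs = rng.dirichlet(np.ones(K), size=N)     # hypothetical classifier outputs

# Expert annotations with missingness: each expert labels only a subset of samples.
expert_labels = np.full((N, M), -1)               # -1 marks a missing annotation
for j in range(M):
    observed = rng.random(N) < 0.4                # each expert annotates ~40% of the data
    noisy = np.where(rng.random(N) < 0.8, y, rng.integers(0, K, size=N))
    expert_labels[observed, j] = noisy[observed]

def e_step(clf_probs, expert_labels, y, expert_acc, max_load):
    """Responsibilities over {classifier, expert_1..M} with a workload cap (illustrative)."""
    # Likelihood that each decision-maker handles a sample correctly.
    lik = np.empty((N, M + 1))
    lik[:, 0] = clf_probs[np.arange(N), y]        # classifier column
    for j in range(M):
        observed = expert_labels[:, j] >= 0
        correct = expert_labels[:, j] == y
        # Observed annotations: high/low likelihood depending on correctness;
        # missing annotations fall back to the expert's estimated accuracy.
        lik[:, j + 1] = np.where(observed, np.where(correct, 0.99, 0.01), expert_acc[j])
    resp = lik / lik.sum(axis=1, keepdims=True)

    # Workload constraint: cap how many samples each decision-maker may claim
    # (a greedy stand-in for a principled constrained assignment).
    assign = np.full(N, -1)
    capacity = (max_load * N).astype(int)
    order = np.argsort(-resp.max(axis=1))         # most confident samples first
    for i in order:
        for j in np.argsort(-resp[i]):
            if capacity[j] > 0:
                assign[i] = j
                capacity[j] -= 1
                break
    return resp, assign

expert_acc = np.full(M, 0.8)                      # assumed expert-accuracy estimates
max_load = np.array([0.6, 0.2, 0.2])              # classifier 60%, each expert 20%
resp, assign = e_step(clf_probs, expert_labels, y, expert_acc, max_load)
print(np.bincount(assign, minlength=M + 1) / N)   # realised workload per decision-maker
```

In a full EM scheme the M-step would re-fit the classifier and the expert-accuracy estimates from these responsibilities; the sketch above only illustrates how a workload budget can shape the E-step allocation.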