Abstract
In this golden age of artificial intelligence, transparency and responsible decision-making are paramount. While machine learning (ML) and operational research (OR) optimisation are fundamental components of AI, the benefits of explainable AI (XAI) for combinatorial optimisation remain underexplored. This study investigates the convergence of XAI and OR, emphasising the importance of transparency in combinatorial optimisation. Using the knapsack problem as a case study, we demonstrate that interpretable ML models can effectively solve combinatorial optimisation problems while enhancing transparency. We also illustrate how post-hoc XAI methods can be applied to OR problems solved with ML, yielding transparent, human-friendly explanations. The key contributions of this work are: proposing the application of the SAGE framework for transparent OR, demonstrating the integration of XAI with combinatorial optimisation, and offering practical guidelines for creating transparent explanations. These contributions can help decision-makers understand, communicate, and trust combinatorial optimisation solutions, paving the way for greater transparency in operational research across a range of sectors.