Abstract
Artificial neural networks, widely recognized for their role in machine learning, are also transforming the study of ordinary differential equations (ODEs), bridging data-driven modeling with classical dynamical systems and enabling the development of infinitely deep neural models. In this context, however, their practical applicability remains constrained by the opacity of the learned dynamics, which operate as black-box systems with limited explainability, hindering trust in their deployment. Existing approaches for analyzing neurally driven dynamical systems are scarce and, owing to computational constraints, restricted to first-order gradient information, limiting the depth of achievable insight. Here, we introduce event transition tensors as a new tool that encodes high-order differential information and provides a rigorous mathematical description of NeuralODE dynamics on event manifolds. We demonstrate its versatility across diverse applications: characterizing uncertainties in a data-driven prey-predator control model, analyzing neural optimal feedback dynamics, and mapping landing trajectories in a three-body neural Hamiltonian system. In all cases, our method makes NeuralODEs interpretable and analytically verifiable by expressing their behavior through explicit mathematical structures that were previously unavailable. The neural dynamics are thereby fully encapsulated within a set of compact, computationally efficient tensors that retain all the information necessary for rigorous system analysis and certification. Our findings contribute to a deeper theoretical foundation for event-triggered neural differential equations and provide a mathematical construct for explaining complex system dynamics.