Abstract
Deep learning is a crucial component of computer vision, enabling accurate predictions from raw data. However, unlike human cognition, deep learning models are vulnerable to adversarial attacks. This paper introduces a new method for traffic sign recognition that employs Inductive Logic Programming (ILP) to generate logical rules from a limited set of examples. These rules are used to assess the logical consistency of the model's predictions, and this consistency measure is incorporated into the neural network's training through the loss function. The study investigates how incorporating logical rules into deep learning models affects the robustness of vision tasks in autonomous vehicles (AVs). The experimental results show that the proposed method significantly improves the accuracy of traffic sign recognition in the presence of adversarial attacks.
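To make the abstract's mechanism concrete, the following is a minimal sketch of a loss that combines a standard classification objective with a penalty for predictions that violate ILP-derived rules. The exact formulation used in the paper is not specified here; the function names (`rule_consistency`, `logic_aware_loss`), the rule encoding as a per-example class mask, and the weight `lambda_logic` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rule_consistency(probs: torch.Tensor, rule_mask: torch.Tensor) -> torch.Tensor:
    """Fraction of predicted probability mass on classes that the ILP rules
    deem consistent with the observed attributes (hypothetical encoding:
    rule_mask[i, c] = 1 if class c is logically admissible for example i)."""
    return (probs * rule_mask).sum(dim=1)  # per-example score in [0, 1]

def logic_aware_loss(logits: torch.Tensor,
                     targets: torch.Tensor,
                     rule_mask: torch.Tensor,
                     lambda_logic: float = 0.5) -> torch.Tensor:
    ce = F.cross_entropy(logits, targets)           # standard task loss
    probs = F.softmax(logits, dim=1)
    consistency = rule_consistency(probs, rule_mask)
    logic_penalty = (1.0 - consistency).mean()      # penalize rule violations
    return ce + lambda_logic * logic_penalty        # combined training objective
```

In this sketch, predictions that place probability mass on classes ruled out by the learned logic incur an extra penalty, nudging the network toward logically consistent outputs during training.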