Abstract
Network Intrusion Detection Systems (NIDSes) increasingly incorporate Machine Learning (ML) and Deep Learning (DL) algorithms for detecting network intrusions. However, ML/DL algorithms are susceptible to adversarial examples: carefully perturbed inputs that cause misclassification. This vulnerability poses a significant threat to the reliability of NIDSes in security-sensitive domains. To address this concern, we propose a novel defence framework called Moving Target Defence as Adversarial Defence (MTD-AD), which protects anomaly-based NIDS models from adversarial attacks by stochastically altering the decision boundary of the NIDS. Our approach exploits the observation that adversarial examples lie close to the model's decision boundary and are therefore sensitive to slight perturbations of that boundary. We demonstrate the effectiveness of MTD-AD against practical adversarial attacks and evaluate its resilience against adaptive adversaries using an IoT intrusion detection dataset.
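To make the intuition behind the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of how randomizing a one-dimensional anomaly-score threshold affects a near-boundary adversarial input; all scores, the threshold, and the noise scale are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative anomaly scores (assumed values, not from the paper).
benign_score = 0.20          # far below the detection boundary
adversarial_score = 0.49     # crafted to sit just under the boundary
base_threshold = 0.50        # static decision boundary

def detect(score, threshold):
    """Flag a flow as an intrusion when its anomaly score crosses the boundary."""
    return score >= threshold

# Static boundary: the near-boundary adversarial example evades detection every time.
print(detect(adversarial_score, base_threshold))   # False

def detect_mtd(score, threshold, sigma=0.03):
    """Moving-target variant: perturb the boundary with fresh noise on every query."""
    return score >= threshold + rng.normal(0.0, sigma)

# Inputs near the boundary (typical of adversarial examples) get flagged in a
# noticeable fraction of queries, while inputs far from it are unaffected.
trials = 10_000
adv_hits = sum(detect_mtd(adversarial_score, base_threshold) for _ in range(trials))
benign_hits = sum(detect_mtd(benign_score, base_threshold) for _ in range(trials))
print(f"adversarial flagged in {adv_hits / trials:.1%} of queries")
print(f"benign flagged in {benign_hits / trials:.1%} of queries")
```

In this toy setting the benign input is essentially never affected by the randomized boundary, whereas the adversarial input, sitting just below the static threshold, is detected in a substantial fraction of queries; MTD-AD applies this idea to the full decision boundary of an anomaly-based NIDS model rather than a single scalar threshold.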