An Exploration of Adversarial Attack Models in Artificial Intelligence
DOI: https://doi.org/10.64252/yjyaxa50

Keywords: Adversarial Attacks, Artificial Intelligence, Machine Learning Security, Threat Modelling, Deep Neural Networks

Abstract
Adversarial attacks have emerged as a critical challenge in the deployment of Artificial Intelligence (AI) systems. In this paper, we present an in-depth exploration of adversarial attack models, discussing the fundamental techniques used to manipulate machine learning (ML) models and the vulnerabilities they expose in AI applications. We survey state-of-the-art attacks, including evasion, poisoning, and inference attacks, and present a taxonomy that categorizes these approaches by attacker objectives and constraints. Our discussion examines the shortcomings of existing defence mechanisms and potential directions for future research, while our experimental evaluation assesses how adversarial perturbations affect well-known neural network architectures. In light of evolving adversarial tactics, the insights provided here are intended to guide the development of more resilient AI systems.
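To make the evasion-attack setting concrete, the sketch below illustrates the fast gradient sign method (FGSM), a standard perturbation technique of the kind surveyed above. The toy model, weights, and epsilon budget are illustrative assumptions, not taken from the paper's experiments: a logistic-regression "network" is attacked by stepping the input in the direction sign(dL/dx) under an L-infinity budget.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Craft an evasion example x_adv = x + eps * sign(dL/dx).

    For binary cross-entropy on f(x) = sigmoid(w.x + b), the input
    gradient has the closed form dL/dx = (p - y) * w, where p = f(x).
    """
    p = sigmoid(w @ x + b)       # model's confidence in class 1
    grad_x = (p - y) * w         # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical toy point, correctly classified as class 1 before the attack.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
clean_conf = sigmoid(w @ x + b)
adv_conf = sigmoid(w @ x_adv + b)
# A small, bounded perturbation lowers the true-class confidence.
assert adv_conf < clean_conf
```

The same one-step structure underlies stronger iterative evasion attacks (e.g. PGD), which repeat this update with projection back onto the perturbation budget.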




