An Exploration of Adversarial Attack Models in Artificial Intelligence

Authors

  • V. Christy
  • Chandramouli H
  • I. Manimozhi
  • Heena Kousar

DOI:

https://doi.org/10.64252/yjyaxa50

Keywords:

Adversarial Attacks, Artificial Intelligence, Machine Learning Security, Threat Modelling, Deep Neural Networks.

Abstract

Adversarial attacks have emerged as a critical challenge in the deployment of Artificial Intelligence (AI) systems. In this paper, we present an in-depth exploration of adversarial attack models, discussing the fundamental techniques used to manipulate machine learning (ML) models and the vulnerabilities they expose in AI applications. We survey state-of-the-art attacks—including evasion, poisoning, and inference attacks—and present a taxonomy that categorizes these approaches by attacker objectives and constraints. While our discussion focuses on the shortcomings of existing defence mechanisms and potential directions for future research, our experimental evaluation examines how adversarial perturbations affect well-known neural network architectures. In light of evolving adversarial tactics, the insights provided here are intended to guide the design of more resilient AI systems.
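As a minimal illustration of the evasion attacks the abstract surveys, the sketch below applies a Fast Gradient Sign Method (FGSM) style perturbation to a toy logistic model. This is a hedged, illustrative example, not the paper's experimental setup: the model, weights, and `eps` value are all assumptions made for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic model p(y=1|x) = sigmoid(w.x + b).

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack steps eps in its sign direction,
    which is the steepest ascent direction of the loss under an L-inf budget.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

def loss(x, y, w, b):
    """Binary cross-entropy loss of the logistic model on a single example."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Illustrative weights and a clean input (all values arbitrary)
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0  # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
# The perturbed input incurs a strictly higher loss than the clean one,
# since FGSM ascends the loss surface for this linear-in-x model.
```

For a linear model the increase in loss is guaranteed; for deep networks FGSM is only a first-order approximation, which is why stronger iterative variants are used in practice.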

Downloads

Download data is not yet available.

Published

2025-08-20

Issue

Section

Articles

How to Cite

An Exploration of Adversarial Attack Models in Artificial Intelligence. (2025). International Journal of Environmental Sciences, 2857-2862. https://doi.org/10.64252/yjyaxa50