A Method To Study The Robustness Of ML Models Against Adversarial Attacks

Authors

  • Archana Yashwant Panpatil
  • Dr. Vishesh Pratap Gaikwad

DOI:

https://doi.org/10.64252/dpk19t90

Keywords:

Adversarial Robustness, Model Vulnerability, Defence Mechanisms, Perturbation Analysis.

Abstract

This invention presents a novel method for evaluating and enhancing the robustness of machine learning (ML) models against adversarial attacks. Adversarial attacks, which introduce small perturbations to input data that mislead models into making incorrect predictions, pose significant risks, particularly in safety-critical applications such as autonomous vehicles, healthcare systems, and security frameworks. Traditional evaluation metrics, such as accuracy and precision, fail to account for adversarial vulnerabilities, leaving ML systems susceptible to exploitation. The proposed method introduces a comprehensive evaluation framework that rigorously tests models under various adversarial attack scenarios, providing a more accurate and realistic assessment of their resilience.
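
To make the notion of a small, misleading perturbation concrete, the sketch below evaluates accuracy on inputs perturbed with the fast gradient sign method (FGSM), a standard attack. The abstract does not name the attacks used in its evaluation framework, so the attack choice, the fgsm_attack and robust_accuracy helpers, the epsilon budget, and the PyTorch setting are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """FGSM: x_adv = x + epsilon * sign(grad_x loss), clipped to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, epsilon, device="cpu"):
    """Accuracy on FGSM-perturbed inputs: a resilience measure that
    clean-data accuracy and precision do not capture."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

Comparing robust_accuracy against clean accuracy over a sweep of epsilon values yields the kind of attack-aware assessment that, as the abstract argues, standard metrics miss.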

This approach ensures that models are subjected to a diverse range of adversarial threats, helping identify weaknesses that may not be apparent under standard conditions. In addition, the invention incorporates refined adversarial training techniques that expose models to a broad spectrum of adversarial examples, enabling them to learn robust patterns while maintaining performance on non-adversarial inputs. Complementing adversarial training, advanced optimization techniques are employed to enhance the model's inherent resistance to adversarial perturbations, thereby improving overall security. A significant contribution of this invention is the real-time adversarial attack detection system, which allows models to identify and mitigate adversarial manipulations during deployment, adding an extra layer of protection. Moreover, the invention supports custom defense mechanisms tailored to specific machine learning architectures, ensuring that defense strategies are optimized for different model types. This method offers a scalable, adaptable, and practical solution for enhancing the security of machine learning models, thereby making them more reliable and resilient against adversarial threats in real-world applications.
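
As a minimal sketch of the adversarial training the abstract refers to, assuming the fgsm_attack helper from the earlier sketch: the model is trained on a mix of clean and perturbed batches so it learns robust patterns while retaining performance on non-adversarial inputs. The equal loss weighting and the Adam optimizer are illustrative choices, not the authors' exact procedure.

import torch
import torch.nn.functional as F

def adversarial_train(model, loader, epochs, epsilon, lr=1e-3, device="cpu"):
    """Train on clean and FGSM-perturbed batches in equal proportion."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = fgsm_attack(model, x, y, epsilon)  # attack the current model
            opt.zero_grad()  # discard gradients accumulated while crafting x_adv
            # Mixing clean and adversarial losses aims for robustness
            # without sacrificing accuracy on unperturbed data.
            loss = 0.5 * F.cross_entropy(model(x), y) + \
                   0.5 * F.cross_entropy(model(x_adv), y)
            loss.backward()
            opt.step()
    return model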
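
The abstract does not describe the mechanism behind its real-time detection system, so the following shows only one common heuristic for concreteness: flag an input as suspicious when the model's prediction is unstable under small random noise. The function name, noise scale, and agreement threshold are all assumptions.

import torch

@torch.no_grad()
def flag_adversarial(model, x, n_samples=8, sigma=0.05, agree_threshold=0.75):
    """Return a boolean mask; True marks inputs whose predicted class
    flips frequently when small Gaussian noise is added."""
    model.eval()
    base = model(x).argmax(dim=1)
    agree = torch.zeros(x.size(0), device=x.device)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
        agree += (model(noisy).argmax(dim=1) == base).float()
    return (agree / n_samples) < agree_threshold  # True: likely adversarial

At deployment time, such a check can run alongside inference and route flagged inputs to fallback handling, providing the extra layer of protection the abstract describes.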

Published

2025-07-02

Issue

Section

Articles

How to Cite

A Method To Study The Robustness Of ML Models Against Adversarial Attacks. (2025). International Journal of Environmental Sciences, 707-716. https://doi.org/10.64252/dpk19t90