AI-Powered Cyber Vigilance: Explainable Threat Detection for Next-Gen Security

Authors

  • Sandeep Singh
  • Tripti Rathee

Keywords:

Cybersecurity, Network Intrusion Detection Systems, Explainable Artificial Intelligence, SHAP (Shapley Additive Explanations), Trust in Cybersecurity Tools, LIME (Local Interpretable Model-agnostic Explanations)

Abstract

Network Intrusion Detection Systems (NIDS) are essential for combating cyber threats, but many rely on "black-box" machine learning models whose opaque decisions are hard to trust. This research introduces a framework that pairs Explainable Artificial Intelligence (XAI) tools, SHAP and LIME, with models such as Deep Neural Networks, Random Forest, and LightGBM to improve both accuracy and transparency. Evaluated on the CICIDS-2017 and NSL-KDD datasets, the framework achieved 96% accuracy and identified key features such as "src_bytes" and "duration"; perturbing these features during testing reduced accuracy by 4%, confirming their influence on the model's decisions. The system also generates explanations in 1.5 seconds, making near-real-time use practical. By balancing accuracy and interpretability, the framework helps security teams both detect threats and understand why they were flagged, offering a reliable and flexible solution to modern cybersecurity challenges.
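
As a concrete illustration of the workflow the abstract describes, the sketch below computes SHAP attributions for a LightGBM classifier and ranks features by mean absolute SHAP value. It is a minimal sketch, not the paper's implementation: the synthetic data, the four-feature subset (named after NSL-KDD fields such as src_bytes and duration), and the model settings are placeholder assumptions.

    # Minimal SHAP + LightGBM sketch; synthetic stand-in for NSL-KDD data.
    import numpy as np
    import pandas as pd
    import lightgbm as lgb
    import shap

    rng = np.random.default_rng(0)
    feature_names = ["src_bytes", "duration", "dst_bytes", "count"]
    X = pd.DataFrame(rng.random((1000, 4)), columns=feature_names)
    # Synthetic "attack" label driven mostly by src_bytes and duration.
    y = (X["src_bytes"] + 0.5 * X["duration"] > 1.0).astype(int)

    model = lgb.LGBMClassifier(n_estimators=100).fit(X, y)

    # TreeExplainer computes exact SHAP values for tree ensembles
    # efficiently, which is what makes sub-second explanations feasible.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    # Older SHAP versions return one array per class for binary models.
    vals = shap_values[1] if isinstance(shap_values, list) else shap_values

    # Global importance: mean absolute SHAP value per feature.
    importance = np.abs(vals).mean(axis=0)
    for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
        print(f"{name}: {score:.4f}")

For per-prediction, model-agnostic explanations of the kind LIME provides, the SHAP step could be swapped for lime.lime_tabular.LimeTabularExplainer along the same lines; TreeExplainer is shown here because its fast, exact attributions for tree models fit the real-time requirement discussed in the abstract.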

Published

2025-05-10

How to Cite

Singh, S., & Rathee, T. (2025). AI-Powered Cyber Vigilance: Explainable Threat Detection for Next-Gen Security. International Journal of Environmental Sciences, 11(4s), 811-833. https://theaspd.com/index.php/ijes/article/view/633