Artificial Intelligence-Based Malicious Accounts Detection Model Using Machine Learning
DOI:
https://doi.org/10.64252/btx4g436

Keywords:
Malicious website, Machine learning, Confusion matrix, Decision tree, Logistic regression, ROC curve, Neural network, Support vector machine, Quantum computing

Abstract
One of the most important lines of defence against phishing attacks is the detection of fake URLs, which appear to come from legitimate websites but actually lead to dangerous ones. This is especially significant today because Internet of Things devices are frequently connected to the Internet and are therefore exposed to phishing attacks. This article summarises the most essential approaches for accurately identifying counterfeit URLs, covering the most commonly used DL (Deep Learning) and ML (Machine Learning) techniques as well as a proof-of-concept application of quantum machine learning classification models. After the first and most important stage, data preparation, we focus on comparing several traditional machine learning models. We evaluate these models on a variety of datasets and obtain encouraging results, with true positive rates exceeding 90%. Following a brief introduction to the fundamentals of the classical approach, the study investigates recent advances in quantum machine learning and its promise for spotting potentially hazardous URLs. This study fills a gap in the research at the intersection of malicious URL detection and other cybersecurity concerns, and it offers fresh insights into combining the two via cybersecurity algorithms. Quantum machine learning has rarely been discussed in this context in the literature; this work addresses that gap. The findings obtained from evaluating a large number of algorithms are encouraging and pave the way for further research into possible applications of quantum computing in cybersecurity.
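To make the classical-model comparison described above concrete, the following is a minimal sketch of how such a comparison might be set up. It assumes simple lexical URL features and a tiny hand-made dataset, both of which are illustrative placeholders rather than the paper's actual features or data:

```python
# Illustrative sketch: comparing classical ML classifiers on lexical URL
# features, in the spirit of the model comparison described in the abstract.
# The feature set, sample URLs, and labels are assumptions for demonstration.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def url_features(url: str) -> list:
    """Simple lexical features: length, digit count, suspicious chars, dots."""
    return [
        len(url),
        sum(c.isdigit() for c in url),
        url.count("-") + url.count("@"),
        url.count("."),
    ]

# Tiny hand-made dataset (1 = malicious, 0 = benign) -- purely illustrative.
urls = [
    ("https://example.com/login", 0),
    ("https://docs.python.org/3/", 0),
    ("https://github.com/user/repo", 0),
    ("http://192.168.2.1-secure-login.update-account.xyz", 1),
    ("http://paypa1-verify.account-update.ru/confirm", 1),
    ("http://free-gift@claim-now.tk/win", 1),
]
X = [url_features(u) for u, _ in urls]
y = [label for _, label in urls]

# Fit each model and report its training accuracy.
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(random_state=0)):
    model.fit(X, y)
    print(type(model).__name__, model.score(X, y))
```

In a realistic pipeline the same pattern extends to the other models named in the keywords (SVM, neural network), with a held-out test split and ROC/confusion-matrix evaluation replacing the training-accuracy print above.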