Independent Publisher, USA.
World Journal of Advanced Engineering Technology and Sciences, 2025, 15(01), 1344-1356
Article DOI: 10.30574/wjaets.2025.15.1.0338
Received on 04 March 2025; revised on 13 April 2025; accepted on 15 April 2025
Artificial intelligence systems face significant challenges from adversarial machine learning, in which small but carefully crafted perturbations to input data cause models to misbehave, producing incorrect predictions or system failures. This paper investigates how adversarial attacks affect AI systems in three primary sectors: autonomous driving, security systems, and healthcare. It examines white-box and black-box adversarial attacks and analyzes the vulnerabilities of machine learning models. The paper evaluates existing defense methods, including adversarial training and robust optimization, and discusses the difficulty of achieving security without degrading model performance. Because existing defenses perform poorly against state-of-the-art adversarial techniques, stronger protection methods are needed. The paper concludes by proposing security solutions that combine explainable AI with advanced adversarial training methods so that AI models can detect and withstand evolving adversarial threats.
Adversarial Attacks; Machine Learning; Model Robustness; Defense Mechanisms; AI Security; Deep Learning
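The "carefully crafted perturbations" described in the abstract can be illustrated with the Fast Gradient Sign Method (FGSM), a standard white-box attack. The sketch below is not from the paper itself: it uses a toy logistic-regression model with illustrative weights (`w`, `b`) and perturbation budget (`epsilon`), and perturbs an input in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Fast Gradient Sign Method for the model p = sigmoid(w.x + b):
    step x by epsilon in the sign of the input-gradient of the
    cross-entropy loss, which pushes the model toward misclassifying x."""
    p = sigmoid(w @ x + b)       # model confidence for class 1
    grad_x = (p - y) * w         # d(cross-entropy)/dx for this model
    return x + epsilon * np.sign(grad_x)

# Toy weights and a correctly classified input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.9)
print(sigmoid(w @ x + b))    # clean confidence: above 0.5 (correct)
print(sigmoid(w @ x_adv + b))  # adversarial confidence: below 0.5 (flipped)
```

Even though every coordinate of the input moves by at most `epsilon`, the model's prediction flips, which is exactly the vulnerability that defenses such as adversarial training (retraining on perturbed inputs like `x_adv`) aim to reduce.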
Swapnil Chawande. Adversarial machine learning and securing AI systems. World Journal of Advanced Engineering Technology and Sciences, 2025, 15(01), 1344-1356. Article DOI: https://doi.org/10.30574/wjaets.2025.15.1.0338.