Independent Publisher, USA.
Received on 18 November 2024; revised on 24 December 2024; accepted on 26 December 2024
Artificial intelligence (AI) systems for cybersecurity face a growing challenge from human insiders who threaten highly automated networks. This research investigates how authorized users exploit weaknesses in AI-based cybersecurity systems, examining the processes by which insiders breach intelligent systems while evading conventional security controls. The investigation focuses on how insider threats manifest within AI-operated environments and on the limitations that prevent current detection models from recognizing such behavior. Case studies, incident analysis, and expert consultation were combined to build a comprehensive picture of the problem. Although AI systems are widely deployed for threat detection, their ability to recognize the human actors behind attacks has weakened as reliance on automation has grown. The results indicate that behavior-based monitoring and stronger AI-human supervision must become priorities for cybersecurity. The study contributes to cybersecurity and AI governance by characterizing insider risks and recommending defenses that strengthen system resilience as automation expands.
Insider Threats; AI Systems; Behavior Analysis; Data Poisoning; Threat Detection; Automation Vulnerabilities
Swapnil Chawande. Insider threats in highly automated cyber systems. World Journal of Advanced Engineering Technology and Sciences, 2024, 13(02), 807-820. Article DOI: https://doi.org/10.30574/wjaets.2024.13.2.0642