Large Language Models (LLMs) for Cybersecurity: A Systematic Review

Yazi Gholami *

University of North Florida.
 
Review
World Journal of Advanced Engineering Technology and Sciences, 2024, 13(01), 057–069.
Article DOI: 10.30574/wjaets.2024.13.1.0395
Publication history: 
Received on 30 July 2024; revised on 07 September 2024; accepted on 09 September 2024
 
Abstract: 
The rapid evolution of artificial intelligence (AI), particularly Large Language Models (LLMs) such as GPT-3 and BERT, has transformed numerous domains by enabling sophisticated natural language processing (NLP) tasks. In cybersecurity, the integration of LLMs offers promising new capabilities for addressing the growing complexity and scale of cyber threats. This paper provides a comprehensive review of current research on the application of LLMs in cybersecurity. Through a systematic literature review (SLR), it synthesizes key findings on how LLMs have been employed in tasks such as vulnerability detection, malware analysis, and phishing detection. The review highlights the advantages of LLMs, such as their ability to process unstructured data and automate complex tasks, while also addressing challenges related to scalability, false positives, and ethical concerns. By exploring domain-specific techniques and identifying limitations, this paper proposes future research directions aimed at enhancing the effectiveness of LLMs in cybersecurity. Key insights are offered to guide the continued development and application of LLMs in defending against evolving cyber threats.
 
Keywords: 
Large Language Models (LLMs); Cybersecurity; Vulnerability Detection; Malware Analysis; Phishing Detection; Deep Learning
 