This book provides a balanced perspective on the application of large language models (LLMs) to cybersecurity, arguing that while they are powerful tools for augmenting human expertise, they are not yet capable of functioning as independent threat hunters when given only a relatively small number of examples. We explore the inherent difficulties of cybersecurity tasks and detail the technical limitations of current LLMs, including their reliance on static training data, weaknesses in logical reasoning, and a propensity for hallucinations. We also highlight the significant challenges of applying LLMs to the domain, such as the difficulty of obtaining high-quality cyber-specific training data and the quadratic scaling of computational cost with context-window size. We present a framework for reimagining LLMs as threat-hunting assistants, capable of supporting humans with tasks like data analysis, alert prioritization, and knowledge sharing. By leveraging their strengths in language processing and pattern recognition while acknowledging their limitations, security teams can effectively integrate LLMs into a human-in-the-loop workflow to enhance efficiency and scalability in modern threat hunting.