Security Operations Centers (SOCs) process thousands of Security Information and Event Management (SIEM) alerts daily, many of which are false positives, placing a significant burden on Tier-1 analysts during alert classification. In practice, analysts rely heavily on organisational context, such as asset criticality, user roles, network environment, schedules, and security policies, to interpret and classify alerts. Attempts to encode this contextual information directly into SIEM rules or downstream classification logic often result in duplicated logic, frequent updates as environments evolve, and limited scalability.

This paper evaluates the ability of a Large Language Model (LLM), Meta LLaMA 3.3-70B-Instruct-Turbo, to support alert classification in a Tier-1 SOC setting by incorporating organisational context while decoupling context management from the alert classification logic encoded in the LLM prompt. Rather than estimating population-level performance, the objective is to isolate the effect of organisational context on alert classification under controlled conditions. The experimental environment is based on the open-source SIEM Wazuh monitoring Windows and Ubuntu hosts subjected to adversary-emulation scenarios generated with CALDERA, alongside legitimate system activity. A manually labelled corpus of alerts is used to evaluate two configurations: one relying solely on raw SIEM metadata and another in which alert classification is enriched with organisational context injected directly into the model prompt as a structured JSON file.

Without contextual information, the model achieves an accuracy of 85.45 percent, with a precision of 80 percent, a recall of 96.55 percent, and an F1 score of 87.5 percent, with the false positives attributable to missing organisational context. When contextual information is provided, the model produces no false positives on the evaluated corpus and yields consistent classification outcomes across repeated executions.
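The two configurations described above differ only in whether organisational context is appended to the prompt. A minimal sketch of how such prompt assembly might look is shown below; the context fields, alert structure, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import json

# Hypothetical organisational context (asset criticality, schedules,
# authorised users); the field names are illustrative only.
ORG_CONTEXT = {
    "asset_criticality": {"srv-dc01": "high", "ws-dev-07": "low"},
    "maintenance_windows": ["Sat 02:00-06:00 UTC"],
    "authorised_admins": ["jsmith"],
}

def build_prompt(alert, context=None):
    """Assemble a Tier-1 triage prompt from raw SIEM alert metadata,
    optionally enriched with organisational context as structured JSON."""
    parts = [
        "You are a Tier-1 SOC analyst. Classify the alert as "
        "TRUE_POSITIVE or FALSE_POSITIVE and briefly justify.",
        "Alert (Wazuh JSON):",
        json.dumps(alert, indent=2),
    ]
    if context is not None:
        # Context is kept in a separate JSON document, decoupled from
        # the classification instructions in the prompt itself.
        parts += ["Organisational context (JSON):",
                  json.dumps(context, indent=2)]
    return "\n\n".join(parts)

# Example alert with Wazuh-style metadata (illustrative values).
alert = {"rule": {"id": "60106", "description": "Windows logon failure"},
         "agent": {"name": "srv-dc01"},
         "data": {"win": {"user": "jsmith"}}}

baseline = build_prompt(alert)               # raw SIEM metadata only
enriched = build_prompt(alert, ORG_CONTEXT)  # context-enriched variant
```

Because the context lives in its own JSON document, it can be updated as the environment evolves without touching the classification logic in the prompt template.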