This article motivates the use of large language models (LLMs) for software code security controls, driven by the rapid growth in the volume of software code, the emergence of new security risks associated with AI-generated code, and the need to integrate individual code components into complex architectural solutions. Existing static application security testing (SAST) tools are prone to errors because their algorithms cannot fully account for code execution logic and its contextual relationships. Using an LLM as a verifier that confirms or refutes the results of static code analysis has the potential to address these shortcomings. This paper presents a comparative analysis of the effectiveness of detecting security vulnerabilities in C# code using the Roslyn Analyzers static analysis tool, the large language models DeepSeek and Grok, and an integrated approach that combines the advantages of static analyzers and LLMs. The research methodology is based on an experimental study of a test sample of C# code fragments containing various types of security vulnerabilities. In the first stage of the study, the code fragments were checked with Roslyn Analyzers. In the next stage, the same fragments were analyzed for vulnerabilities by the DeepSeek V3 and Grok 4.1 models. In the final stage, the proposed hybrid approach was evaluated: an initial code check by the static analyzer followed by passing its reports to the selected generative AI models. The results of the study show that the hybrid approach combining DeepSeek with Roslyn Analyzers improves performance metrics compared to using either tool independently. A comparison of the models used on their own also established that Grok performs worse than DeepSeek and is not the best option for tasks of this type.
The study demonstrates that integrating the analytical capabilities of large language models into classical static code analysis, with the LLM confirming or refuting the analyzer's results, is a potential step toward self-correcting software security analysis processes.
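As a rough illustration of the hybrid pipeline's verification step, the sketch below combines a C# fragment with static-analyzer findings into a prompt asking the LLM to confirm or refute each reported issue. The finding format, prompt wording, and example fragment are illustrative assumptions, not the authors' implementation; only the diagnostic ID CA2100 is a real Roslyn Analyzers rule.

```python
# Illustrative sketch of the hybrid SAST + LLM verification step.
# The finding structure and prompt text are assumptions for illustration;
# the paper's actual Roslyn Analyzers output and prompts may differ.

def build_verification_prompt(code: str, findings: list[dict]) -> str:
    """Combine a C# fragment with static-analyzer findings into a prompt
    that asks an LLM to confirm or refute each reported vulnerability."""
    lines = [
        "You are a security reviewer. For each static-analysis finding below,",
        "answer CONFIRMED or REFUTED with a one-sentence justification.",
        "",
        "C# code under review:",
        code,
        "",
        "Static analyzer (Roslyn Analyzers) findings:",
    ]
    for i, f in enumerate(findings, 1):
        lines.append(f"{i}. [{f['rule']}] line {f['line']}: {f['message']}")
    return "\n".join(lines)


# Hypothetical fragment with string-concatenated SQL (injection risk)
# and a Roslyn-style finding for it (CA2100 is a real analyzer rule).
example_code = (
    "var cmd = new SqlCommand("
    "\"SELECT * FROM Users WHERE Name = '\" + name + \"'\");"
)
example_findings = [
    {"rule": "CA2100", "line": 1,
     "message": "Review the SQL query for security vulnerabilities."},
]

prompt = build_verification_prompt(example_code, example_findings)
# The resulting prompt would then be sent to DeepSeek V3 or Grok;
# the model's CONFIRMED/REFUTED verdicts filter the analyzer's report.
```

In this scheme the LLM never scans raw code unaided: it only adjudicates the analyzer's findings, which is what lets the hybrid setup reduce the analyzer's false positives while keeping its systematic coverage.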