The article provides an overview of the use of large language models (LLMs) in cyberattacks. Artificial intelligence models, both machine learning and deep learning, are applied across many fields, and cybersecurity is no exception. One facet of this is offensive artificial intelligence, particularly as it relates to LLMs. Generative models, including LLMs, have been used in cybersecurity for some time, primarily to generate adversarial attacks on machine learning models. The analysis focuses on how LLMs such as ChatGPT can be exploited by malicious actors to automate the creation of phishing emails and malware, significantly simplifying and accelerating cyberattacks. Key aspects of LLM misuse are examined, including text generation for social engineering attacks and the creation of malicious code. The article is intended for cybersecurity professionals, researchers, and LLM developers, offering insight into the risks posed by the malicious use of these technologies and recommendations for preventing their exploitation as cyber weapons. The research emphasizes the importance of recognizing potential threats and the need for active countermeasures against automated cyberattacks.