Kaspersky DFI experts found almost 3,000 posts on the darknet about the use of chatbots in cyberattacks.
Kaspersky Digital Footprint Intelligence (DFI) specialists have published a study of how the use of chatbots for cyberattacks is discussed on the darknet. In 2023, 2,890 such posts were detected, peaking at more than 500 posts in April.
Criminals propose various ways to use AI chatbots, in particular for generating polymorphic malicious code: malware that can change its own code while preserving its core functionality.
“Detecting and analyzing such programs is considerably harder than with conventional malware. The author of the post suggests using the OpenAI API to generate code with specified functionality. By querying a legitimate domain (openai.com) from an infected host, an attacker can generate and run malicious code while bypassing a number of standard security checks. So far we have not detected malware that operates this way, but it may appear in the future,” writes Kaspersky Digital Footprint Intelligence.
To force a model to generate answers related to illegal activity, attackers devise special sets of prompts (jailbreaks), which are actively shared and refined by participants of darknet forums. In 2023, a total of 249 offers to distribute or sell such crafted prompt sets were observed.
Kaspersky Digital Footprint Intelligence also notes that cybercriminals actively adopt open-source tools created for information security specialists.
One open-source utility hosted on GitHub is designed to obfuscate code written in PowerShell. Such tools are used both by cybersecurity professionals and by attackers trying to penetrate a system and gain a foothold in it: obfuscation increases the chances of evading monitoring systems and antivirus solutions. Kaspersky Digital Footprint Intelligence found a post on a cybercrime forum in which criminals share this utility, demonstrate how it can be used for malicious purposes, and show what results it yields.
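The post does not name the utility, so as an illustration of the general idea, here is a minimal sketch of one of the simplest and best-documented forms of PowerShell command obfuscation: PowerShell's `-EncodedCommand` parameter accepts a command encoded as Base64 over UTF-16LE bytes. The Python function names below are purely illustrative; the decoding step is what an analyst applies when such strings turn up in process-creation logs.

```python
import base64

def encode_powershell(command: str) -> str:
    """Encode a command the way PowerShell's -EncodedCommand expects:
    UTF-16LE bytes wrapped in Base64."""
    return base64.b64encode(command.encode("utf-16-le")).decode("ascii")

def decode_powershell(blob: str) -> str:
    """Reverse the encoding -- recover the plain-text command from a
    Base64 string found in logs or telemetry."""
    return base64.b64decode(blob).decode("utf-16-le")

cmd = "Write-Output 'hello'"
blob = encode_powershell(cmd)
print(blob)                     # opaque Base64 string
print(decode_powershell(blob))  # original command recovered
```

Real obfuscators layer far more tricks on top (string splitting, aliasing, randomized casing), but even this trivial encoding hides the command from naive keyword matching, which is why defenders routinely decode such strings during triage.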
Projects such as WormGPT, XXXGPT and FraudGPT also attract the attention of cybercriminals. These are language models advertised as replacements for ChatGPT, without the original's restrictions and with additional functionality.
The experts stress that these examples do not mean chatbots and other tools are dangerous in themselves; rather, they help to understand how attackers can use them for illegitimate purposes.
“To sum up, the growing popularity of AI among criminals is alarming. Information is becoming more accessible, and many tasks can be solved with a single query. All this makes life easier for people, but it also lowers the entry barrier for fraudsters of all kinds. Although many of the solutions reviewed pose no real threat due to their simplicity, technology is developing rapidly, and it is quite likely that the capabilities of language models will soon enable complex attacks,” the experts conclude.