ChatGPT lowers the barrier to entry for cybercrime and improves phishing capabilities, but it remains unlikely to facilitate sophisticated malware attacks.

At the end of January, OpenAI released an update for its natural language tool ChatGPT, a variant of the GPT (Generative Pre-trained Transformer) language model. ChatGPT is designed to hold human-like conversations on a range of topics, enabling users to ask simple questions or set tasks such as writing articles, emails, essays, poems, or code. ChatGPT is one of several natural language and other publicly available Artificial Intelligence (AI) tools being developed both in the US and globally. Many of these tools can automatically generate code and collate information, making it easier to carry out computer network attacks.

Since ChatGPT’s launch in November, cybercriminals on dark web hacking forums have shown considerable interest in how they can use it to create exploits and malware, including by repurposing techniques described by researchers in cyber security whitepapers. This means threat actors with limited technical knowledge and no coding skills will increasingly be able to develop malicious tools using real-time examples and guidance.

Figure 1: Mentions of ChatGPT on underground cybercriminal forums (19 December 2022 to 06 March 2023).

Test case

ChatGPT has access to a separate OpenAI model that is trained to generate code in a range of programming languages; this ability to translate natural language into code is aimed at helping software developers. OpenAI continues to put controls in place to prevent ChatGPT from producing malicious code, and this will likely remain a key priority as its coding abilities improve.
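To illustrate this natural-language-to-code capability from a developer's perspective, the sketch below shows a benign request made through OpenAI's public Python client (the pre-1.0 "openai" package). The model name, prompt, and environment-variable key handling are illustrative assumptions, not details drawn from this report.

```python
# Illustrative sketch only: translating a natural-language request into code via
# OpenAI's public API (assumes the pre-1.0 "openai" Python package is installed).
import os

import openai

# Assumes the API key is supplied via the environment rather than hard-coded.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a valid IPv4 address.",
        }
    ],
)

# The generated code is returned as ordinary message text for the developer to review.
print(response["choices"][0]["message"]["content"])
```

The same request pattern, phrased with malicious intent, is what OpenAI's content controls are designed to refuse.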

However, it remains possible to phrase requests so that they appear to be for research purposes. In Figure 2 and Figure 3, we tested ChatGPT’s ability to craft a phishing email targeting an oil and gas company and to generate code that could target vulnerabilities in devices running legacy systems.

Figure 2: Fake email written by ChatGPT that could be misused for a phishing campaign

Figure 3: ChatGPT showing potentially malicious code that could be used to target vulnerable devices (redacted)

Although criminals on dark web forums have demonstrated how they used ChatGPT to recreate strains of information-stealing malware, we have not observed any cyber attacks being carried out with ChatGPT’s help. That is partly because attributing code to ChatGPT is difficult, similar to the challenge that educational institutions face with plagiarism and cheating in assessments.

Threat trajectory

Although the development of ChatGPT and its rivals is significant on several fronts, it is unlikely to lead to a large-scale increase in cyber threats to companies globally. ChatGPT is likely to help threat actors craft convincing phishing emails (for example, by improving the writing skills of threat actors whose first language is not English). However, crafting sophisticated exploits or developing malware will still require specialist knowledge of how to apply such tools in an attack. Further, ChatGPT does not enable a threat actor to understand the specific vulnerabilities of, and security measures applied by, a target company or network.

ChatGPT’s limited ability to produce sophisticated malicious code is underscored by the number of errors it currently produces in software development testing environments. Stack Overflow, a question-and-answer website for developers, has banned users from sharing answers generated by ChatGPT over concerns about responses that appear convincing but are ultimately wrong.

However, as the costs of large language models (LLMs) decrease and their use proliferates over the long term, highly capable threat actors will increasingly find it cost-efficient to train models on data curated to generate custom malware and exploits. Some cybercriminals and state actors could also explore LLM technology in other ways: for example, by using access to a CEO’s mailbox to tailor an LLM to the executive’s writing style and conduct business email compromise more effectively.

For the time being, highly capable attackers likely find it easier to write their own malicious code, and much of the interest in ChatGPT on underground cybercriminal forums originates from less capable threat actors. As such, tools like ChatGPT will likely increase the number of moderately capable threat actors, but they are unlikely, in isolation, to significantly affect state actors and advanced ransomware groups that already possess high capabilities.

Planning ahead

The advent of LLMs, ChatGPT, and rival AI solutions is not currently a gateway to more sophisticated tools and attack techniques for threat actors globally. However, it should act as a warning to chief information security officers (CISOs) and security teams that these tools are drawing the interest of threat actors and will begin to be used in attacks.

It is key that security teams monitor developments surrounding ChatGPT and the use of LLMs and AI more broadly. This will help them understand whether the threat landscape is changing, whether these tools are generating useful capabilities for network penetration, and whether threat incidents are being enabled by them. Further, monitoring the third-party provider landscape for developments designed to counter malicious use of AI tools should also be a growing focus from a security perspective.
