Artificial intelligence (AI) is rapidly transforming the cyber threat landscape by lowering the entry barriers for cybercriminals. AI's use in facilitating highly sophisticated malware attacks is still in its infancy, but its role in enhancing social engineering tactics, such as phishing and deepfake technology, is becoming increasingly evident.
These tools expand the pool of threat actors capable of executing complex and convincing attacks: less technically skilled criminals will increasingly be able to develop malicious tooling with real-time examples and guidance.
One significant way AI lowers these barriers is through large language models (LLMs) such as ChatGPT. LLMs can generate human-like text that can be used to socially engineer victims, produce computer code, and supply information that makes cyber attacks easier to carry out. Many AI platforms include safeguards to prevent misuse. However, it is possible to “trick” an LLM into producing malicious output by framing a request as being for research purposes.
Cybercriminals on dark web forums have shown strong interest in using AI tools to create exploits and malware, and these forums have seen a surge in threat actors seeking AI assistance to build harmful software or launch phishing campaigns.
AI-driven tools have also given rise to harder-to-detect attack techniques. AI can make phishing campaigns more effective by producing well-written emails that are more convincing and more likely to bypass phishing and spam detection rules. This enables a threat actor to craft sophisticated phishing lures in languages they do not speak natively, tailored to a specific geographic market and/or sector.
Looking ahead
Despite the growing capabilities of AI, creating sophisticated malware still requires specialist knowledge. Highly capable threat actors, such as state-sponsored groups and advanced ransomware gangs, have the expertise to develop their own malicious code without relying on AI.
However, as the cost of LLMs decreases and their availability increases, even advanced actors may find it more efficient to use AI to enhance their operations. For example, they could use AI to tailor phishing emails based on a target’s communication style, or to automate reconnaissance processes.
Chief information security officers (CISOs) and security teams must be aware of the growing accessibility of AI tools and their use by threat actors. Security teams should monitor developments in this space to understand whether these tools are generating genuinely useful capabilities for network penetration and whether real-world incidents are being enabled by them.
Understanding how AI is lowering the entry barriers to cybercrime is crucial for protecting organisations today. Download our whitepaper to learn more about digital threats in the age of artificial intelligence.
Digital Threats in the Age of Artificial Intelligence
Understand the threats and risks posed by the rapid evolution of AI in today's world