Threat trajectory over the next 12 months
We expect a proliferation of AI tools that will lower the barrier for cybercriminals to launch more sophisticated attacks. Deepfake-enabled social engineering tactics, particularly financially motivated, are becoming an increasingly serious threat with some organisations already defrauded for large sums. Nation-state actors are also leveraging AI to develop more sophisticated malware in addition to enhancing the credibility and effectiveness of disinformation and influence operations.
Further, threat actors have begun to target AI tools themselves. Some attacks focus on extracting sensitive information through deliberate prompt engineering, manipulating the AI to reveal confidential data. Others aim to deceive AI systems into generating inaccurate or inappropriate content. AI training data is also being targeted in an attempt to degrade or manipulate its integrity, leading to significant operational disruption.
Cybercriminals typically run a lean operation, prioritising profitability and reducing costs. Reduced costs mean smaller operations and a lower risk of law enforcement disruption. We expect cybercriminal groups to wield AI tools to boost their operations, as well as to understand how to improve their targeting, their operations, increase profits and reinvest those gains into improving their capabilities.
Threat development over the next 12-24 months
Over the following 12 months, we anticipate a significant increase in the use of AI tools by nation-states and cybercriminals with medium-level capabilities. Deepfake-enabled social engineering will become normalised among lower-capability cybercriminals, particularly for business email compromise (BEC) and financial fraud. Threat actors will also move closer to near real-time data exfiltration. This will drastically reduce the time available for internal security teams to identify and react to attacks.
In the longer term, we expect to see a significant increase in both the volume and sophistication of attacks as the baseline capabilities of cybercriminals improve. Increasing use of automated reconnaissance and target selection will optimise attacks based on strategic goals and tactical factors such as vulnerability likelihood.
Preparation considerations
As they learn and evolve, the underlying AI models will become more accurate and effective. They will potentially be able to breach even the most mature security defences, compromising critical information with precision. To defend themselves, organisations should implement comprehensive AI security measures, including ongoing monitoring, robust data validation and regular security audits. Regular training sessions should be conducted to keep security teams informed about the latest developments in AI threats.
Human factors are still the primary weaknesses leveraged by cybercriminal groups. This will remain the case for the foreseeable future, despite defensive technological advancements. We expect a general increase in phishing, fraudulent phone calls and deepfake technologies to be used in scams and cybercriminal attacks, enabling successful ransomware infections and payment diversion fraud. Organisations should conduct awareness campaigns to educate all employees about the potential risks and threats associated with AI and how they can contribute to mitigating these.
To understand more about how AI is changing the future landscape of threats for organisations, download our whitepaper ‘Digital threats in the age of artificial intelligence’.
Digital Threats in the Age of Artificial Intelligence
Understand the threats and risks posed by the rapid evolution of AI in today's world