The implications of AI for the compliance industry have fuelled animated discussions for years.

Technological advancements, particularly large language models (LLMs) and the ever-increasing availability of data, have led technology providers and compliance teams to envision an AI-enabled future for risk assessments and data analytics. Despite the initial excitement, the pace of AI adoption has been substantially slower than anticipated.

Compliance teams are inundated with regulatory pressures and expanding compliance burdens, often on shrinking budgets. Risk aversion is another key factor: compliance professionals are, understandably, sceptical of technology-enabled solutions due to concerns about effectiveness and the legal uncertainty surrounding AI governance. Fortunately, regulatory guidance is beginning to catch up with the technology, providing much-needed clarity.

Here we unpack the prominent legal developments, the reasons compliance teams may find AI adoption challenging and some practical steps for moving forward.

Regulatory developments

Despite the recent drive for regulatory guidance, the technology continues to evolve, raising concerns about both direct and third-party integration of AI into compliance programmes.

Three regulatory developments exemplify the trends across frameworks, rulings and lawmaking:

  • DOJ Guidance on the evaluation of compliance programmes

    In September 2024, the US Department of Justice (DOJ) released updated guidance on evaluating corporate compliance programmes. The guidance asks key questions regarding technology-enabled solutions. It does not discourage AI; rather, it highlights the necessity of embedding clear governance rules, evaluating the risks and limitations of the technology, and developing effective risk management procedures.
  • The EU AI Act

    Across the Atlantic, the EU has taken significant steps with the EU AI Act. The act guides the adoption of AI and is directly applicable across member states. It takes a risk-based approach to regulating AI, ensuring that higher-risk systems are subject to more rigorous scrutiny. Proportionality is a central tenet of the regulation: AI systems that have wider-reaching impacts will face stricter governance.
  • SCHUFA ruling

    Another critical development was the Court of Justice of the European Union (CJEU) preliminary ruling in the SCHUFA case, which involved automated credit scoring. The CJEU held that fully automated decisions — such as those based on credit scores — are subject to Article 22 of the GDPR, which governs automated decision-making and profiling. The court emphasised the importance of explainability (also known as “interpretability”) and of ensuring that the statistical procedures used by AI are transparent, accurate and free from discriminatory effects.

These legal frameworks are shaping how AI can be responsibly integrated into compliance processes. They serve as blueprints for evaluating potential AI technologies in corporate governance.

Adoption challenges

AI adoption in compliance is fraught with technical challenges. Many of these challenges stem from the complexity of AI models and the disconnect between compliance and technology professionals.

Black box dilemma and explainability

One of the biggest challenges is the black box nature of many AI models. Complex algorithms, especially in machine learning, can produce decisions without a clear explanation of how a conclusion was reached. This is an accepted fact among those who build these models, but it can be hard to quantify and explain the functioning of advanced algorithms within the language constraints and acceptable benchmarks of existing regulatory frameworks. This lack of transparency is particularly challenging in a regulatory environment where compliance teams must be able to justify their decisions and processes to regulatory authorities.

The EU’s focus on explainability, as seen in the SCHUFA ruling, stresses the need for AI systems to provide clear, understandable reasoning behind their outputs. This gap between legal expectations and technical reality will continue to be a central obstacle.
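What "clear, understandable reasoning behind an output" can look like in practice is easiest to see with a simple scoring model. The sketch below is purely illustrative (the feature names and weights are hypothetical, not drawn from any real scoring system): because the model is a transparent weighted sum, every decision can be reported together with each feature's contribution to it — exactly the kind of per-decision explanation that opaque models struggle to produce.

```python
# Hypothetical linear risk-scoring model. Because the model is a simple
# weighted sum, each feature's contribution to the final score can be
# reported alongside the decision, giving a per-decision explanation.
# Feature names and weights are illustrative only.
WEIGHTS = {
    "late_payments": 0.5,
    "account_age_years": -0.2,
    "open_credit_lines": 0.1,
}

def score_with_explanation(features: dict) -> tuple[float, dict]:
    """Return the risk score and each feature's contribution to it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"late_payments": 3, "account_age_years": 10, "open_credit_lines": 4}
)
print(f"score={score:.1f}")
# List contributions from largest to smallest absolute effect.
for name, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contrib:+.1f}")
```

A deep learning model offers no such direct decomposition of its output, which is why surrogate explanations and similar techniques exist — and why regulators keep pressing on this point.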

Data management and bias

AI models rely on vast amounts of data to function effectively, and managing and securing this data presents its own challenges. Assessing how data is processed and secured within a system whose internal workings are not fully understood can be difficult, and for some teams, impossible. The processing affects not only how personal data may be used but also where intellectual property rights sit once data has been processed. There is also the potential for bias in AI systems, which could inadvertently result in discriminatory practices. The SCHUFA ruling, for instance, emphasised that systems must be free from discriminatory effects, a challenge for the many compliance teams that must mitigate such risks when evaluating AI solutions.
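One simple, widely used screen for discriminatory effects is a disparate-impact check: compare outcome rates across groups and flag the model for review when the lowest rate falls too far below the highest (a common benchmark is the "four-fifths rule", a ratio below 0.8). The sketch below is a minimal illustration with made-up group names and counts, not real data; it is a first screen, not a full fairness audit.

```python
# Minimal disparate-impact check: compare approval rates across groups.
# If the ratio of the lowest to the highest rate falls below 0.8 (the
# "four-fifths rule"), the model's outcomes warrant closer review.
# Group names and counts below are illustrative, not real data.
def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """outcomes maps group -> (approved, total); returns min/max rate ratio."""
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

ratio = disparate_impact_ratio({
    "group_a": (80, 100),  # 80% approval rate
    "group_b": (56, 100),  # 56% approval rate
})
print(f"ratio={ratio:.2f}, flagged for review={ratio < 0.8}")
```

Checks like this are deliberately crude — they say nothing about why a disparity exists — but they give compliance teams a quantifiable trigger for deeper investigation.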

Implications and solutions

AI adoption in compliance will require more than just technological upgrades. As highlighted by both the DOJ and the EU, governance structures must first be established to oversee AI systems. Ironically, the burden of setting up these frameworks often falls on compliance teams themselves, increasing the very workload that AI adoption was meant to relieve.

One thing is clear: governance is key to AI adoption. Compliance teams must implement robust governance frameworks that ensure transparency, accountability and proportionality in AI decision-making. Decisions must be easily explained and defended, particularly in areas involving automated decision-making that may significantly affect individuals.
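A concrete building block of such a framework is an audit record for AI-assisted decisions: capturing the inputs, model version, output and rationale together so that any automated decision can be reviewed and defended later. The sketch below is one possible shape for such a record — the field names and values are hypothetical, not a prescribed standard.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# A sketch of an audit record for AI-assisted decisions: inputs, model
# version, output and rationale are captured together so each decision
# can be reviewed later. Field names and values are illustrative.
@dataclass
class DecisionRecord:
    model_version: str
    inputs: dict
    output: str
    rationale: str
    human_reviewed: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    model_version="screening-model-1.4",  # hypothetical model identifier
    inputs={"counterparty": "Acme Ltd", "jurisdiction": "DE"},
    output="escalate",
    rationale="partial watchlist match above configured threshold",
)
print(json.dumps(asdict(record), indent=2))  # serialise for the audit log
```

The `human_reviewed` flag reflects the principle, echoed in Article 22 of the GDPR, that significant automated decisions should remain subject to meaningful human oversight.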

Much of the traditional approach to compliance and AI governance is rooted in an older understanding of AI systems. Historically, compliance teams have dealt with deterministic, easier-to-explain systems that followed a clear set of rules. This made it easier to create policies, audit performance and ensure regulatory adherence. Emerging AI technologies, such as deep learning, do not operate in the same way and are quickly rendering existing compliance frameworks obsolete. These newer models learn from vast datasets and make probabilistic decisions, often in ways that are not easily explained in the accepted vernacular.

Compliance frameworks need to evolve with the technology by developing a new vocabulary that can encapsulate the complexity and capabilities of modern AI systems. The only way we can ensure emerging technologies are deployed in ways that are secure, transparent and compliant is if compliance professionals, data scientists and regulatory bodies invest in and commit to listening and learning from each other. Companies must create cross-functional teams and invest in upskilling.

Action points for compliance teams

Currently, regulatory developments lag behind the reality of the technology, creating uncertainty. Compliance teams need an understanding of how AI can uplift and accelerate their work. They also need a common vocabulary and framework for approaching the associated risks. Only then can they develop programmes in lockstep with the implementation of the technology.

But what can compliance teams do right now?

Evaluating current compliance programmes is a good place to start — assessing where AI would fit, which roles it might take and what added controls might be needed. Another smart move would be to start developing teams with both technical and compliance expertise. This will help facilitate AI adoption and ensure systems are both effective and compliant. Ongoing education on AI can help close the gap between tech and compliance functions. And building governance frameworks that prioritise explainability, transparency and proportionality is crucial.

But remember, not every problem can be solved with AI. Compliance teams should pilot AI solutions, looking at where the need is greatest and then assessing and reframing. AI is most effective when it enhances, not when it replaces, human decision-making.
