A boutique UK-based law firm specializing in complex litigation and advisory services partnered with Control Risks, turning to AI to identify misclassified documents in a misconduct probe.

AI was an indispensable quality control measure:

  • 40% of documents classified “Not Relevant” during manual review were found to be material
  • Review hours cut significantly, accelerating disclosure and saving costs
  • Review accuracy strengthened the client’s case

Evidence misclassification during manual review

A boutique UK law firm was acting for administrators pursuing several former directors of a UK company that entered administration in 2018. The administrators alleged that the directors had engaged in misconduct, including transferring company assets at undervalue, receiving preferential treatment, breaching governance protocols and failing to meet their fiduciary responsibilities.

The goal of the legal action was to recover financial losses or secure compensation for the company. As part of the disclosure process, the law firm was tasked with manually reviewing approximately 250,000 keyword-responsive documents. While the initial manual review was underway, senior subject matter experts raised concerns about the accuracy of coding decisions made by junior reviewers, which risked the misclassification of key documentary evidence.

Enhancing quality control measures with generative AI

To address the challenge, Control Risks designed a targeted quality control workflow using AI. A sample of 2,000 documents marked as “Not Relevant” by junior reviewers was selected for reassessment.

Control Risks collaborated closely with the client to engineer tailored prompts that defined what constituted relevance, key issues or important documents. These prompts mirrored traditional review protocols or case briefs. The AI framework, powered by generative AI and large language models (LLMs), simulated a first-level review process.
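A prompt of this kind typically embeds the case brief, the relevance criteria from the review protocol, and the document text, and constrains the model to a structured answer. A minimal sketch in Python (the template wording, function name and label set are illustrative assumptions, not the actual prompts used in the matter):

```python
# Hypothetical first-level review prompt template. The label set mirrors the
# categories described in the case study; all other wording is illustrative.
REVIEW_PROMPT = """You are assisting a first-level document review.

Case background: {case_brief}

Relevance criteria (from the review protocol):
{criteria}

Document text:
{document}

Answer with a JSON object containing "label" (one of: Very Relevant,
Relevant, Borderline Relevant, Not Relevant), "rationale", and
"citations" (verbatim passages from the document supporting the label).
"""

def build_prompt(case_brief: str, criteria: list[str], document: str) -> str:
    """Assemble a per-document review prompt from the protocol elements."""
    return REVIEW_PROMPT.format(
        case_brief=case_brief,
        criteria="\n".join(f"- {c}" for c in criteria),
        document=document,
    )
```

Keeping the criteria as a reusable list mirrors how a traditional review protocol enumerates issues, so the same prompt scaffold can be re-run unchanged across the whole document set.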

For each document, the AI provided a relevance prediction along with a rationale explaining its decision, supported by citations from the document itself. The AI flagged approximately 40% of the sample as potentially misclassified, predicting them to be “Relevant” to the matter. These flagged documents were then reviewed by senior experts, who confirmed the misclassifications and validated the AI’s assessments.
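The per-document output described above (a label, a rationale and supporting citations) lends itself to a structured record, with escalation logic for documents the human review coded “Not Relevant” but the model disagrees with. A minimal sketch, assuming the model is constrained to reply in JSON (class and function names are hypothetical):

```python
import json
from dataclasses import dataclass, field

@dataclass
class RelevancePrediction:
    doc_id: str
    label: str                       # e.g. "Relevant" or "Not Relevant"
    rationale: str                   # model's explanation of its decision
    citations: list = field(default_factory=list)  # passages quoted from the document

def parse_ai_response(doc_id: str, raw: str) -> RelevancePrediction:
    """Parse a JSON-constrained model response into a structured record."""
    data = json.loads(raw)
    return RelevancePrediction(
        doc_id=doc_id,
        label=data["label"],
        rationale=data["rationale"],
        citations=data.get("citations", []),
    )

def flag_for_escalation(predictions: list) -> list:
    """Return documents the model predicts as relevant. In a QC workflow over
    human-coded 'Not Relevant' documents, these are the potential
    misclassifications escalated to senior reviewers."""
    return [p for p in predictions if p.label != "Not Relevant"]
```

The escalation list is what senior experts would then confirm or reject, keeping humans as the final arbiters of relevance.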

From framework to second-level implementation

With the framework validated, the client instructed Control Risks to apply the AI model across the entire set of 250,000 documents. The law firm focused its second-level review efforts on documents categorized by the AI as “Very Relevant”, “Relevant”, or “Borderline Relevant”.

To ensure continued quality assurance, a 10% sample of the AI-classified “Not Relevant” documents was also reviewed. A manual first-level review of 250,000 documents would have taken an estimated 3,500 hours to complete. By leveraging AI, the review was completed significantly faster, enabling the team to meet tight disclosure deadlines while reducing overall costs.
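The 10% validation sample can be drawn reproducibly so the same QA set is auditable later. A minimal sketch (function name, fraction default and seed are illustrative):

```python
import random

def qa_sample(doc_ids: list, fraction: float = 0.10, seed: int = 42) -> list:
    """Draw a reproducible random sample of AI-classified 'Not Relevant'
    documents for human quality-assurance review."""
    rng = random.Random(seed)          # fixed seed makes the sample auditable
    k = max(1, round(len(doc_ids) * fraction))
    return rng.sample(doc_ids, k)
```

Sampling the “Not Relevant” pile guards against the same failure mode the project started with: relevant evidence silently dropping out of the review.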

AI-driven precision for evidence integrity

The AI framework proved to be a trusted partner in the client’s document review process. The approach:

  • Significantly reduced the number of documents requiring manual review
  • Lowered costs for the end client
  • Enabled the client to meet its fast-approaching disclosure deadline without compromising accuracy or thoroughness

In this matter, aiR for Review's generative AI capabilities enabled the client to uncover evidence that was nearly overlooked, ultimately strengthening the client’s position.
