Donald Trump’s freewheeling approach
Based on President-elect Donald Trump’s first presidency and campaign rhetoric, we can anticipate three general themes shaping his administration’s approach to AI: deregulation, market-driven innovation and a focus on national security.
Deregulation
Trump has made it clear that he will take a deregulatory approach. He has pledged to repeal Biden-era executive orders and curtail federal oversight of AI. For instance, his promise to "ban the use of AI to censor the speech of American citizens" signals a shift toward minimizing regulatory constraints.
Market-driven innovation
Trump’s 2019 executive order on “Maintaining American Leadership in Artificial Intelligence” underscores a commitment to fostering innovation through a hands-off, market-driven approach. His focus on keeping the US competitive on the global stage suggests continued support for the development of cutting-edge AI technologies.
National security
Even if broad deregulation ultimately characterises Trump’s second term, national security applications of AI will likely remain tightly controlled. Export controls and restrictions on AI use in sensitive industries could impact how compliance teams operate, especially in cross-border contexts.
The EU: accountability, transparency and risk mitigation
Anticipated deregulation in the US contrasts sharply with the direction of the EU, as shown by the EU AI Act, which emphasises accountability, transparency and a risk-based approach. Uncertainties remain, but the Act paves the way for regulated and responsible AI adoption. Compliance teams must balance adherence to internal governance with alignment to the EU’s evolving regulatory expectations.
Teams should focus on implementing robust internal governance and, where possible, aligning with more stringent frameworks such as the EU AI Act. Although the Trump administration’s approach may not demand immediate changes, adherence to more rigorous standards will minimise long-term risks and ensure preparedness for global operations.
Practical implications: innovation without compromising compliance
In anticipation of the incoming Trump administration’s less cautious attitude to AI, companies may take a similarly freewheeling approach to AI adoption. This more permissive atmosphere makes adhering to best practices even more important.
Compliance teams should proactively implement internal governance structures and closely monitor state-level regulations in the US, which can vary widely. Staying aligned with more mature frameworks such as the EU AI Act will also help companies build resilience and prepare for future regulatory changes.
When selecting third-party providers, compliance teams should ensure that providers can demonstrate how their AI tools adhere to the varying regulatory regimes in which they operate. Governance and due diligence in vendor selection are critical to ensuring that tools meet operational needs without compromising transparency, accountability or best practice.
With compliance teams under increasing pressure to manage complex risks and meet regulatory expectations with limited resources, AI has the potential to transform how they operate. AI tools can streamline workflows, sharpen risk detection and automate routine tasks. However, AI adoption is a delicate balancing act between innovation and accountability, particularly when teams must also juggle local, state, national and supranational regulatory requirements.