As of early 2026, the United States and the European Union are consolidating two structurally different approaches to AI regulation, setting the stage for confrontation between public and private sector players on both sides of the Atlantic. The divergence is not merely procedural. It reflects competing industrial strategies, national security priorities and political ideologies, and it is reshaping market access, investment flows and corporate strategy in real time.

The US: centralised but contested

The Trump administration has moved aggressively to establish a single federal framework for AI. A December 2025 executive order – “Ensuring a National Policy Framework for Artificial Intelligence” – directed the Department of Justice to create an AI Litigation Task Force charged with challenging state laws deemed inconsistent with national objectives. [1] The order also tasked the FCC and FTC with developing federal standards that would pre-empt conflicting state requirements, and conditioned certain federal funding on states’ willingness to avoid enacting restrictive AI legislation. [2]

The intent is clear: centralise authority, reduce compliance friction, and preserve US competitiveness in frontier AI. Yet the centralisation drive masks deeper tensions. Congress has not enacted a federal AI law, and the executive order itself cannot overturn existing state legislation – only Congress or the courts can do that. [3] At least two major state regimes – California and Colorado – are either already in force or take effect in 2026. Previous legislative attempts at a ten-year moratorium on state AI laws failed in the Senate, and constitutional challenges to federal pre-emption are widely anticipated. [4]

At the same time, the relationship between Washington and leading AI labs has grown more fraught. Public tensions between federal agencies and frontier AI companies – including debates over model access, safety commitments and potential national security designations – illustrate that AI policy is no longer purely an innovation question. It is becoming a sovereign capability debate with direct implications for supply chain risk, government contract terms, and the political positioning of AI companies.

The standoff between the Pentagon and Anthropic over the use of Claude models for military applications underscores this dynamic: the US government has signalled a willingness to invoke Cold War-era authorities such as the Defense Production Act, and to designate AI developers as supply chain risks, when companies resist government demands. [5] Even if these measures are never taken, companies working with federal agencies and critical technologies should prepare for abrupt shifts in policy and technology access driven by national security or political concerns rather than by market logic.

The EU: risk-based enforcement

The EU’s AI Act, the world’s most comprehensive AI regulatory framework, is moving through phased implementation in 2026. High-risk AI systems face mandatory documentation, human oversight and conformity assessment obligations, while the regime’s extraterritorial reach means non-EU providers serving the single market must also comply. [6]

The November 2025 Digital Omnibus proposal introduced targeted simplifications. Compliance deadlines for high-risk AI systems may be extended by up to 16 months – with a long-stop date of December 2027 – pending the readiness of harmonised technical standards. [7] Smaller enterprises would benefit from streamlined quality-management obligations, and a new “legitimate interest” basis for processing personal data during AI development aims to ease friction around data use. [8] An EU-level regulatory sandbox, and expanded real-world testing provisions, signal a willingness to encourage innovation under supervision. [9]

Critically, however, the core risk-based architecture remains intact. High-risk AI is still tightly governed, and Brussels continues to enforce its broader digital framework assertively. In recent months, EU enforcement actions under the Digital Services Act and Digital Markets Act have targeted major US platforms, drawing sharp criticism from Washington and prompting threats of retaliatory tariffs and visa sanctions against former EU officials. [10] The EU’s competition commissioner Teresa Ribera has characterised US pressure tactics as “blackmail” and stated that the European regulatory framework is not subject to external negotiation. [11] 

Tech regulations as levers of power

These divergent approaches have turned AI regulation into a strategic instrument. Brussels’ insistence that companies meet EU standards to access its market gives it disproportionate influence over global AI practices. Washington’s willingness to challenge – and, in some cases, sanction – regulatory actions it views as discriminatory signals that compliance with US policy is also a market imperative. The US Trade Representative has warned that it will respond to EU measures it considers unreasonable and apply similar treatment to other countries pursuing comparable regulatory strategies. [12]

A broader pattern of bloc formation is emerging. Countries such as the UK, Singapore, Canada and the Gulf states are increasingly aligning with either the EU’s governance-led model, the US innovation-forward approach, or blending elements of both. [13] In Southeast Asia, Vietnam became the first ASEAN member to enact a formal AI law in December 2025, while Singapore and India are among the most aggressive adopters of agentic AI deployment globally. [14] The result is a fragmenting landscape in which regulatory alignment increasingly maps onto geopolitical allegiance and technology supply chain bottlenecks.

What this means for companies

Be ready to deal with multiple frameworks. The EU regime is binding and largely settled. US consolidation is advancing, but state laws remain active, litigation looms, and executive orders do not carry the force of legislation. Convergence is unlikely in the near term. Companies operating globally should assume continued cross-border tension, and design governance systems and technology architectures that can flex across jurisdictions.

Technology strategy and investment must reflect political realities. Governance tightening, enforcement posture and geopolitical friction should inform where capital is deployed, compute is located and frontier models are scaled. The direction of regulatory travel and the politicisation of technology now shape returns.

Governance must be designed, not retrofitted. Boards need clear visibility over where models are trained, what data is used, which jurisdiction’s rules apply, and how accountability is structured across supply chains. The EU demands structured documentation and demonstrable controls; the US demands agility amid shifting federal-state dynamics. Both require proactive governance architecture.

Plan for regulatory friction as a feature, not an anomaly. Companies should anticipate sustained transatlantic tension – particularly around frontier models, data flows and critical compute infrastructure – and build operating models that can absorb political and regulatory volatility.

Strategic questions for leadership

In this environment, boards and executive teams should be asking:

  • Where is our AI exposure concentrated geographically, and how does that align with regulatory trajectory?
  • Are we building to one global governance standard, or designing modular systems that flex by jurisdiction?
  • Do we understand which of our models could fall into “high-risk” categories under the EU AI Act?
  • How exposed are we to federal-state regulatory divergence in the US?
  • Are national security, export controls or compute governance debates likely to affect our roadmap?
  • Are our legal and product teams talking? Is regulatory readiness embedded in product design, or sitting in legal as a reactive function?

The organisations that outperform will not be those with the most advanced models alone, but those that align innovation, governance and market strategy deliberately.

Right now, Control Risks is helping organisations turn AI governance into a competitive differentiator. As the US and EU diverge, we support leadership teams to build practical oversight and assurance that can flex across jurisdictions, align teams around accountability, and reduce exposure without slowing delivery.

If the regulatory split is forcing hard choices on your roadmap, data strategy, or operating model, our Digital Risks team can help you design governance that keeps you compliant and competitive.

Sources

[1] White House, “Ensuring a National Policy Framework for Artificial Intelligence,” Executive Order, December 11, 2025. whitehouse.gov
[2] White House Fact Sheet, “President Donald J. Trump Ensures a National Policy Framework for Artificial Intelligence,” December 12, 2025. whitehouse.gov
[3] Paul Hastings, “President Trump Signs Executive Order Challenging State AI Laws,” December 2025. paulhastings.com
[4] NPR, “Trump is trying to preempt state AI laws via an executive order. It may not be legal,” December 12, 2025. npr.org
[5] See reporting on Pentagon–Anthropic tensions over military AI applications and Defense Production Act threats, 2025–2026.
[6] European Commission, “Digital Omnibus on AI Regulation Proposal,” November 2025. digital-strategy.ec.europa.eu
[7] IAPP, “EU Digital Omnibus: Analysis of key changes.” iapp.org
[8] PwC, “EU’s Digital Omnibus offers AI regulatory relief, but questions remain.” pwc.com
[9] Cooley, “EU AI Act: Proposed Digital Omnibus on AI,” November 2025. cooley.com
[10] CNN, “Trump administration’s vision of US tech dominance is colliding with Europe,” January 2026. cnn.com
[11] European Business Magazine, “EU-US Tech Regulation Clash: Enforcement and Retaliation Loom,” January 2026. europeanbusinessmagazine.com
[12] The Register, “EU vows to stand firm as US steps up attacks on tech regs,” January 2026. theregister.com
[13] Morgan Lewis, “The New Rules of AI: A Global Legal Overview,” December 2025. morganlewis.com
[14] ISEAS, “What is Shaping AI Governance Policies in Southeast Asia?” February 2026. iseas.edu.sg
