Competition for compute will be one of the defining drivers of geopolitics and business in 2026. Countries and companies without sufficient compute to develop and deploy leading-edge AI risk being left behind. But access to compute is increasingly constrained by technology, infrastructure, and talent bottlenecks. And in 2026, the biggest compute challenge for companies may be permission: legal, political, and geopolitical.

Compute refers to the technologies – hardware, software, and infrastructure – used to process data. It encompasses everything from the calculations performed by advanced microchips to the software that runs on them, and the networking, storage, power, and water infrastructure that keeps modern economies functioning.

It's a numbers game: more compute can run more complex applications, produce more sophisticated products, and generate more valuable insights. Over time, the gap between the compute "haves" and "have-nots" will widen as investment and talent flock to the technology frontier.



Unsurprisingly, governments have their thumb on the scale: access to the leading-edge hardware and software that powers compute is geopolitically gated. Export controls and other regulatory and national security barriers on design software, fabrication tools, critical minerals, microchips, and quantum computing are entrenching an already unequal landscape of capability.

In 2026, getting access to compute will require diplomacy as much as money. For firms deploying AI in a decentralised way, this will mean uneven performance, rising costs, and growing uncertainty over which markets can access advanced compute.


The physical behind the digital 

Underpinning the competition for compute are physical constraints: power capacity and water availability. Data centre power consumption is growing by over 10% annually – 30% annually for AI-focused data centres – and is forecast to double over the next five years, according to the International Energy Agency (IEA). Data centre buildouts worldwide are colliding with infrastructure limits, environmental impacts, and local community opposition.

As a result, the compute contest in 2026 carries more social and political risks. Competition for power and water supplies is mobilising communities against construction projects worldwide. Concerns about AI's impact on labour markets will provoke additional backlash. As they race for first-mover advantages in compute, companies will have to ensure their social licences to operate are not undermined by moving fast and breaking things.



Having compute is only half the battle 

The compute contest is deeply entangled with cyber risks. Model weights, training data, and knowledge of infrastructure vulnerabilities are prized intelligence assets. Meanwhile, companies are outsourcing compute tasks to managed service providers, often in emerging markets, making these third-party ecosystems high-value targets for espionage and cybercrime. In anticipation of quantum computing, "harvest-now, decrypt-later" operations are rising.

As a result, cyber resilience underpins compute competitiveness: a company's ability to protect its own environment and vendor network will determine how much it benefits from innovation. That includes preparing for a post-quantum future: organisations holding sensitive, long-lived data must begin re-keying now or risk future exposure.


And then there's regulation

AI regulation is now a reality. China introduced several AI regulations in late 2025, covering data security, labelling, and model training, complementing its existing regulation of the digital space. The EU AI Act, in force since 2024, will cover all AI systems, including high-risk and general-purpose models, by August 2026. While the US federal government's AI Action Plan focuses on promoting AI innovation, several US state laws regulating AI products will take effect in 2026.

For many corporates, the real pressure is contractual and reputational. Customers are demanding evidence of appropriate data use, transparent training, and secure deployment of AI systems. Industry groups and standards bodies are advancing AI guidelines and standards. The next two years will see due diligence, contracting, and assurance practices modernise fast.

Investment and innovation are moving faster than oversight can adapt. As capital floods into AI applications and platforms, valuations are inflating beyond proven performance. When new regulations start to bite and compliance costs mount, the market's winners will be those that build governance into their growth before investors and customers start asking harder questions.



What this means for business

  • Compute will depend heavily on permission. For global organisations, the major bottleneck to accessing compute in 2026 will be permission: legal and geopolitical. Managing this risk means aligning AI ambition with resilience: treating AI, quantum, and compute supply as one dependency; mapping exposure to export controls; securing supplier ecosystems; and planning for physical and policy constraints.
  • Power and water are not a given. The compute buildout has significant implications for economic growth and infrastructure reliability. As countries race to establish themselves as innovation hubs and capture a slice of the AI boom, they are running headlong into power and water constraints. Data centres are already competing with residential, industrial, and commercial consumers for limited power availability, leading to higher prices and grid disruption. Companies will need to understand their supply chain's exposure to these disruptions to ensure resilience for their operations.
  • With new possibilities, AI brings new vulnerabilities. There are risks in relying uncritically on AI for decision-making. Generative AI is improving rapidly but remains error-prone and unreliable for some tasks. It may also be introducing hidden vulnerabilities directly into workflows and systems through hallucinated data or insecure coding practices.
