For much of the 20th century, artificial intelligence (AI) struggled not because researchers lacked ambition, but because the hardware available to run it simply wasn’t powerful enough. Early AI systems hit hard limits on processing speed and memory, contributing to repeated “AI winters” as progress stalled and funding dried up.
That problem is mostly gone now. Today, AI models are trained on specialized chips in huge data centers and can be scaled up in weeks instead of years. Compute, once the main bottleneck, is now something that can be bought with enough money. Companies like Nvidia and AMD are also mass-producing ever more powerful graphics processing units (GPUs) – components conventionally used for gaming and visualization, but also well-suited to AI calculations – with each passing year.
So, beyond the basic architectures at the heart of these models, what is preventing AI from becoming even more advanced? The new frontier is far more physical – and far more difficult to circumvent. It’s electricity.
Why AI’s energy appetite is exploding
Modern AI models don’t just train once and then stop. They run all the time, powering chatbots, search tools, image generators and autonomous agents. This shift has made AI a constant, heavy consumer of electricity.
According to Sampsa Samila, academic director of the AI and the Future of Management Initiative at Barcelona’s IESE Business School, the problem is not a lack of energy in absolute terms. “It’s not the total energy supply, but having reliable, fixed capacity at the right place and the right time that’s in short supply,” he told Live Science.
Predictions for AI energy consumption clearly show this strain. The International Energy Agency (IEA) expects data centers to consume more than twice as much electricity by the end of the decade, reaching levels similar to those of major industrial economies. In some parts of the US, data centers already use as much electricity as heavy industry.
How AI is actually used matters as much as how it is trained. Training large language models (LLMs) still consumes a lot of power, but it tends to occur in large, infrequent runs. What is growing faster is the day-to-day work – models responding to users, over and over again. Samila notes that newer “reasoning” systems, which take more time to produce an answer, push energy use into routine operations rather than occasional training bursts.
A grid built for a slower world
The power grid was designed for gradual growth, not for city-sized loads that appear almost overnight.
Juan Arismendi-Zambrano, an assistant professor at the Michael Smurfit Graduate Business School at Ireland’s University College Dublin (UCD), said the main issue is timing. Large AI campuses are growing faster than network upgrades or government approvals can keep up. This creates a real bottleneck: getting enough power, when and where it is needed.

“The ‘short supply’ of AI electricity, in my view, is less about an absolute global shortage of electricity and more about local bottlenecks created by the rapid deployment of large data centers,” Arismendi-Zambrano told Live Science.
“These campuses scale faster than grid upgrades or bureaucracy can respond, especially when they land in rural areas chosen for cheap land and political incentives from states, but not engineered for sudden, concentrated load. The result is a very physical constraint: access to a lot of electrical power, on time, at the right node,” he said.
Clustering data centers in one area makes the problem worse. Jens Förderer, a professor at the University of Mannheim Business School in Germany, pointed to Northern Virginia’s “Data Center Alley,” where many facilities draw huge amounts of power from the same grid. Power plants, transmission lines and substations take years to build, but AI companies often need the computing capacity much earlier, sometimes even before their buildings are finished.
“When many city-sized loads draw from the same local grid, it becomes far more difficult to scale power supply,” said Förderer.
How the industry is trying to respond
There is no simple solution to AI’s energy problem. Instead, companies follow several strategies at the same time.
One is to build power generation closer to the data centers themselves. Big tech firms have signed long-term contracts to underwrite new power generation, including nuclear plants, and are exploring on-site power where grid upgrades are too slow.
Google, for example, has done this in Texas through a partnership with energy developer Intersect Power, which builds large-scale solar and storage projects alongside data center demand rather than waiting for grid upgrades. Microsoft, meanwhile, has signed a long-term agreement with Constellation Energy tied to the planned restart of a nuclear reactor at Pennsylvania’s Three Mile Island site to supply power to its data centers.
Another is to choose locations based on power availability rather than proximity to users. As Förderer noted, data centers are increasingly sited where power is easiest to scale, even if that means moving further from major population centers.
Then there’s reuse, including one surprising source. Former cryptocurrency mining facilities are emerging as candidates for AI workloads. Once criticized for their energy use, these sites already have what AI needs most: large grid connections, cooling systems and experience running power-hungry hardware around the clock. The intersection of Bitcoin and AI may look strange, but the underlying physics is the same.
“These facilities already have large grid connections, and some former miners may turn to AI workloads,” Förderer said.
Canadian miner Bitfarms recently announced plans to shift its facilities away from Bitcoin mining towards high-performance data centers and AI, while Hut 8 – originally a Bitcoin mining company – signed a roughly $7 billion lease agreement in late 2025 to provide data center capacity for AI computing.
Some ideas look even further afield. Space-based data centers are sometimes pitched as a way to bypass Earth’s grid entirely, using constant solar energy and the cold of space for cooling. Samila said the idea works on paper, but the numbers quickly become daunting.
A single 5-gigawatt facility would require about 2.5 by 2.5 miles (4 by 4 kilometers) of solar arrays in orbit. It’s “doable in principle,” he added, but only with some serious engineering. Latency, maintenance and launch logistics remain open questions.
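That array-size estimate can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a solar constant of roughly 1,361 W per square meter in Earth orbit and a panel efficiency of about 22% (illustrative figures, not values from the article):

```python
# Back-of-the-envelope check of the orbital solar array estimate.
# Assumed figures (not from the article): solar irradiance in orbit
# and typical modern photovoltaic panel efficiency.
SOLAR_CONSTANT_W_PER_M2 = 1361   # approximate solar flux above the atmosphere
PANEL_EFFICIENCY = 0.22          # assumed fraction converted to electricity

target_watts = 5e9               # a 5-gigawatt facility

# Usable electrical output per square meter of panel (~300 W/m^2)
power_per_m2 = SOLAR_CONSTANT_W_PER_M2 * PANEL_EFFICIENCY

# Total panel area needed, and the side of an equivalent square
area_m2 = target_watts / power_per_m2
side_km = area_m2 ** 0.5 / 1000

print(f"Required array area: {area_m2 / 1e6:.1f} km^2")
print(f"Square side length:  {side_km:.1f} km")
```

Under these assumptions the required area comes out near 17 square kilometers, a square roughly 4 km on a side, which is consistent with the figure Samila cites.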
Efficiency may be the fastest lever of all. Förderer pointed out that advances in chips, model design and system architecture have already reduced the energy required per unit of intelligence. Recent efforts include an MIT breakthrough that aims to cut energy use by stacking components vertically, as well as a “rainbow-on-a-chip” that uses lasers to transfer data between components.
Such gains will not eliminate the need for more power, but they may slow the rate at which demand grows.
Does more energy unlock smarter AI?
The increasing demand AI places on the electricity grid also raises environmental concerns. Aoife Foley, an engineer and professor and chair of Net Zero Infrastructure at the University of Manchester in the UK, pointed out that the wider IT sector already accounts for around 1.4% of global carbon emissions.
AI workloads use much more energy than regular cloud computing, and while major technology companies are investing in renewable energy and better cooling, Foley said these efforts alone are not enough. “These impacts can be reduced through smarter model optimization and a closer alignment between data center strategy and regional renewable generation,” she told Live Science.
Despite the scale of the challenge, none of the experts see electricity as a shortcut to artificial general intelligence (AGI) – a hypothetical form of AI that could match or exceed human intelligence. More energy makes it easier to build and operate larger systems, but it does not solve the harder problems. Instead, Förderer argued that the real limits lie elsewhere: in access to data, in new model architectures, and in genuine advances in reasoning.
“Energy is necessary but not sufficient,” Samila agreed, adding that today’s dominant approach to improving AI relies on massive amounts of power, but more electricity alone won’t magically produce AGI.
More energy does not guarantee smarter machines, but it does change who gets to participate. Access to power will shape where AI is built, who can afford to run it, and how widely it is deployed. The bottleneck has moved away from silicon and towards the physical world, where grids, permits and power plants move at a completely different pace than code.






