Legal experts Eliza Martin and Ari Peskoe explain how data centers' massive electricity demands could shift billions in infrastructure costs onto regular utility customers.
How much of the data center demand is associated with cryptocurrency mining?
Not nearly as much as the data centers. Crypto can be located anywhere, while the data centers are focused on being 'as close as possible' to the bulk of the internet. Milliseconds matter to them. That means highly localized grid strain requiring lots of new distribution build-outs.
Not exactly. The latency to the datacenter doesn't matter much within reason (tens of ms are almost never a problem). There are already massive datacenter facilities scattered in very remote areas of the US. And not every workload is hosted in every region.
The concentration in the NoVa area is largely due to historical reasons - that is where a lot of it got started - and, once a datacenter region is established, most of the growth for those established customers tends to land there. So places like NoVa simply grew that large over time because they were first - not because of some latency requirement.
https://www.datacate.net/latency-and-data-centers-why-it-matters
Of course that doesn't mean there aren't scenarios that are less latency-sensitive or have other priorities, but latency is a factor that overrides cost/salary/taxes etc. in a large number of cases.
And the historical factor also makes latency here lower than in Montana. Physical proximity to the backbones reduces latency simply because this is where they were built first.
I have been a network architect for decades, and led network architecture at a major hyperscaler and multiple large carriers. The *vast* majority of apps do not care about what is mentioned in the article - some things do care, but they are an incredibly small minority. Some of the examples given are just flat-out wrong:
1. High-frequency trading latency - here they are discussing latency between servers within a site. This has nothing to do with an end-user talking to a cloud datacenter app. Similarly, the GPUs in an AI training cluster are interconnected with a very low-latency (100s of ns) "back end" network so that the GPUs can all talk to each other. But this has nothing to do with how end-users access anything.
2. Video conferencing (Zoom, Teams, etc.) - we regularly ran calls across the ocean and back - i.e. with servers on one continent and all the clients on another continent. In fact, most videoconferencing is hosted out of a few regions in the world - and all sessions go through them.
3. Streaming services - these are not latency sensitive. These are located close to you to avoid building out long-haul capacity given the intensity of the data streams.
In fact, we can traverse the US and back in under 50ms for most things - and most applications do not care about this level of latency. The vast majority of people don't even know where their apps "live". Another way to look at it is that if latency actually mattered, down to say 1ms, for an end-user to talk to a server in the cloud datacenter, then that user would need to be within about 30 or 40 miles of said datacenters. If the application failed or ran poorly outside of that radius, then there would be no way for that user to move his end-client out of that radius. And, of course, we don't have datacenters every 30 to 40 miles spread across the country, and the data for all those apps is certainly not replicated like that. How many apps do you know that suddenly stop working when you get on a plane and go to another country? Those wouldn't be very popular use cases.
Certainly, you can create situations where latency really matters - such as trying to split compute-storage or compute-compute traffic across distant sites. That is, "server-to-server" traffic can be very sensitive to latency. In general, these apps are located in a single DC, or maybe spread across AZs in a region, to avoid more than a ms or two of latency.
Finally, there is clearly massive datacenter infrastructure in remote areas around the country (eastern Washington, southern Virginia, northern Oregon, western Nevada, southern Wyoming, just to name a few). These facilities represent gigawatts of DC capacity and serve people all over the world. Also, there are many entire countries that have no datacenters at all and rely on services hosted in other countries.
Latency is something that people *think* drives a lot of datacenter siting decisions, but in fact it is almost last on the list.
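For what it's worth, the 30-40 mile figure a few comments up can be sanity-checked with the common rule of thumb that light travels through fiber at roughly 200 km per millisecond (about 2/3 of its vacuum speed). A quick sketch, with that assumption made explicit:

```python
# Rough check of the latency/distance arithmetic in the comment above,
# assuming the standard ~200 km/ms rule of thumb for light in fiber.

KM_PER_MS_FIBER = 200.0
KM_PER_MILE = 1.609

def max_radius_miles(round_trip_budget_ms: float) -> float:
    """Farthest a user could sit from a datacenter (straight-line fiber,
    ignoring routing detours and processing delays) within a round-trip budget."""
    one_way_km = (round_trip_budget_ms / 2) * KM_PER_MS_FIBER
    return one_way_km / KM_PER_MILE

print(f"1 ms budget:  ~{max_radius_miles(1):.0f} miles")   # ~62 miles of fiber path
print(f"50 ms budget: ~{max_radius_miles(50):.0f} miles")  # spans the continental US
```

Real routes are longer than straight lines and add switching delay, which is why the practical radius for a 1 ms budget shrinks toward the 30-40 miles mentioned above, while a 50 ms budget covers the whole country.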
Please take it up with the actual published articles that refute you, an internet poster.
Gish Gallop. Good Day.
It was interesting to hear about the special contracts and the shenanigans those can entail. Is the Fervo, NV Energy, Google clean transition tariff an example of that, but one that is actually helpful? https://www.utilitydive.com/news/google-fervo-nv-energy-nevada-puc-clean-energy-tariff/719472/
Loved this episode. I wanted to comment on two issues that were discussed. For background, my firm is a green hydrogen developer based in the US. When we can, we develop our own physically connected/"behind the meter" power generation (typically solar or wind) to power our electrolyzers. We do this because it is (1) the most cost-effective clean electron I can get, (2) there is no concern about my hydrogen truly being green, and (3) it removes cost and time uncertainty from grid connection. We cannot do this for all projects, typically because of land constraints, so some projects are fully grid-connected and some are hybrid - electrolyzers partially powered by the grid and partially by the BTM generation. For the avoidance of doubt, when we are grid-connected we comply with 45V's three pillars.

First, we are exploring special tariffs with certain utilities because we are a truly flexible load (when grid-connected we typically run 50% of the time), as we are extremely sensitive to the cost of power. We are not like bitcoin miners who claim to be flexible but then just run 24/7. I am fully on board with not passing costs on to other utility customers, but as far as I have seen, most utilities we engage with have never seen a load as flexible as ours. So we support these tariffs - if done right.

Second, just double-clicking on the idea that if I am building new renewables that are BTM and physically connected to my electrolyzers, I should not have to pay T&D charges to the utility for power that never physically touches their system. This gets into the utility monopoly/franchise issue. Welcome any comments!
I suspect that in a shared system like the grid it is inherently impossible to even determine what is absolutely fair in terms of who is using or causing what. Electricity doesn't work that way. So you will find that each of the parties will try to shift costs to the others, claiming that anything else is "unfair".
In the grand scheme of things, it doesn't matter. The public will ultimately pay for it all either because their electrical rates go up, or because they will be charged more for the goods and services that are produced or supported by the datacenter players who will pass on any increased costs.
Finally, the US is in an AI "cold war". We are in a race with other countries to come out on top. Whoever wins this race will have a massive advantage economically and militarily over the rest of the world. If we somehow stymie AI growth in the US with this debate and dragging of heels, the outcome for "consumers" will be far worse than any electrical rate hike. You can bet China is not having these energy discussions - they are just building.
And these data centers surveil us
Thank you for this. Exceptional eye-opener. Regulation is iterative, as we all know... One hopes proper vision and political will come together to the benefit of consumers (how naive can I be?)
The futures market should be able to deal with huge new customers. Existing customers will be locked into current pricing. New customers will either pay spot market prices, buy futures, or build their own electricity generation.
Futures markets are generally about fuel and materials, not infrastructure build-out, no?
I live in Northern VA at the heart of this issue. Current ratepayers have to fund the infrastructure build out via higher rates.
Of course, NoVa also benefits massively in jobs and taxes from these datacenters and enjoys one of the richest lifestyles as a result.
I'll remember that when the 88 diesel backup generators at just one DC kick on at 3am. It's normally running at 50+ dBA and 60+ dBC continuous, 24/7/365. So much for enjoying the backyard when there are industrial fans humming.
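For scale, the textbook relation for N identical, incoherent noise sources is L_total = L_single + 10·log10(N). A sketch (the per-generator level here is a made-up illustrative figure, not a spec for any real facility):

```python
import math

def combined_level_db(single_source_db: float, n_sources: int) -> float:
    """Sound level of n identical, incoherent noise sources, in dB."""
    return single_source_db + 10 * math.log10(n_sources)

# 88 generators add ~19.4 dB over a single one - a big jump on a log scale,
# since every +10 dB is roughly a doubling of perceived loudness.
print(f"{combined_level_db(85.0, 88):.1f} dB")  # if one generator were 85 dB
```

Distance, barriers, and enclosures reduce what reaches a backyard, but the log-scale addition is why dozens of generators starting at once is dramatically louder than one.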
They are siting these DCs within a few hundred feet of homes. My subdivision will have 13 data centers within a 1/4 mile.
Warehouses don't employ a lot of people. After construction it's basically a ghost town of warehouses.
And I recall "they" fixed that once they realized what was going on. It was clearly an oversight, and other facilities have been built to suppress generator noise. And newer facilities are held to strict noise levels for all equipment. If this isn't being done in your area, it's absolutely time to raise some hell.
The datacenter business does employ a lot of people - it just isn't represented in the full-time hyperscaler staff that works at the DC. Almost all maintenance and ongoing trade work is outsourced to local professionals. A lot of the computer and network M&R is also outsourced to other companies. Finally, given the size of the new campuses, basic construction jobs will continue for a decade or more - and then there is all the retrofitting that will follow. Measuring jobs by the FTEs that belong to the hyperscaler misses the vast majority of the employment.
You argued DCs are an employment boost... and yet you're also arguing the bulk of their staff aren't local. Can't have it both ways.
The noise issue is far from 'fixed'. I live 500 ft from one. That one DC had two separate power failovers in the last 2 months requiring the generators to fire up. And they run them every month for maintenance.
Gish Gallop. Good Day.
How are they not "local"? Just because they are not FTE of the hyperscaler, doesn't mean they don't live locally. They clearly don't fly people in daily to do all the construction, repair, maintenance.
Totally agree they should fix the noise to meet any zoning ordinances. Now, if you are saying you don't want to hear anything at all, even something well below local ordinances, then I suggest you move to a very remote place. If you live in a city, there is going to be noise - but, levels should be within the rules.
I meant that the futures market would be for electricity delivered in the future. Companies that generate electricity will sell future supply. Companies and people who use electricity would buy future supply based on their expected future use. Speculators will buy and sell. If a company wants to build a server farm, it will buy electricity at a higher price than those who hold pre-existing futures contracts. If the price rises to a point where a profit could be made by building a source of electricity and pre-selling the electricity in the futures market, someone will build the plant. The electricity will already have been sold, so the company building the plant will not have to worry about stranded assets.
And again, that assumes the infrastructure exists to supply it. I can say with certainty that we, the current ratepayers, are the ones being asked to fund the infrastructure expansion to accommodate that 'future'. It's not the new players on the block like the data centers. That's literally the point of the pod.
Transmission of electricity and generation of electricity should be run by different companies. The customer should pay for the electricity cost at the point of delivery, with dynamic pricing based on supply and demand; the cost of supply would be the cost of generation plus the cost of transmission. If my neighbor generates the electricity, there shouldn't be a transmission cost - distribution, yes, but not transmission. If no one else needs electricity from the new plant, transmission lines don't need to be built from the plant to the town, and the server farm company will have to pay for all of the transmission costs. This will encourage them to build the server farm right next to the generation plant. When it is cheaper to generate electricity locally, lots of electricity will be generated locally. Batteries will be installed and EVs with bidirectional charging will be used, so we will need much less transmission capacity.
Full blown gish gallop. Good Day
Listening to this episode back to back with this podcast https://podcasts.apple.com/us/podcast/microsoft-cuts-the-power-to-ai/id1730587238?i=1000698829485
Another problem with costs is billing customers a per-kWh charge for the *infrastructure*. Distribution infrastructure does not degrade based on kWh usage alone. Solar/battery peeps get called 'leechers' for not paying enough for distribution maintenance, when the real problem is the legacy fee model.
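A toy illustration of that fee-model point, with entirely made-up numbers: under purely volumetric billing, a household that cuts its grid purchases also cuts its contribution to fixed wire costs, even though the wires serving it cost the utility the same either way.

```python
# Hypothetical $/kWh infrastructure charge baked into the energy rate.
DISTRIBUTION_RATE = 0.05

def infra_contribution(annual_kwh_from_grid: float) -> float:
    """Infrastructure dollars recovered from one customer per year
    under purely per-kWh (volumetric) billing."""
    return annual_kwh_from_grid * DISTRIBUTION_RATE

typical_home = infra_contribution(10_000)  # $500/yr toward the wires
solar_home = infra_contribution(1_500)     # $75/yr - same wires, ~85% less paid
print(typical_home, solar_home)
```

This is why some argue fixed costs belong in a fixed charge rather than in the per-kWh rate - though where to draw that line is exactly the fight the comment describes.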
Rhodium Group just published a study on dedicated geothermal feasibility for data centers:
https://rhg.com/research/geothermal-data-center-electricity-demand/
David, I invite you to listen to the first 10 minutes of this episode. It is a monologue. Your guests are the experts; frankly and directly, I want to hear them, not you.
I'm a regular podcast listener, but not this time; I gave up after 10 minutes.
Be nice! I've got David's "monologue" timed to a brisk 5 min 7 seconds
Which is 4 minutes too long, plus every question requires a 90-second preamble when 10 seconds would do.
I have discussed this in my weekly column recently. https://www.newsmax.com/paulfdelespinasse/led-solar-wind/2025/02/21/id/1200030/