In this episode, Priya Donti, executive director of nonprofit Climate Change AI, speaks to how artificial intelligence and machine learning are affecting the fight against climate change.
Text transcript:
David Roberts
As you might have noticed, the world is in the midst of a massive wave of hype about artificial intelligence (AI) and machine learning (ML) — hype tinged with no small amount of terror.
Here at Volts, though, we’re less worried about theoretical machines that gain sentience and decide to wipe out humanity than we are about the actually existing apocalypse of climate change.
Are AI and ML helping in the climate fight, or hurting? Are they generating substantial greenhouse gas emissions on their own? Are they helping to discover and exploit more fossil fuels? Are they unlocking fantastic capabilities that might one day revolutionize climate models or the electricity grid?
Yes! They are doing all those things. To try to wrap my head around the extent of their current carbon emissions, the ways they are hurting and helping the climate fight, and how policy might channel them in a positive direction, I contacted Priya Donti, an assistant professor at MIT and executive director of Climate Change AI, a nonprofit that investigates these very questions.
All right, then, with no further ado, Priya Donti, welcome to Volts. Thank you so much for coming.
Priya Donti
Thanks for having me on.
David Roberts
We are going to discuss the effects of artificial intelligence and machine learning on the climate fight. And I think, for reasons that will become clear as we talk, we're kind of taking on an impossible task here. As we'll see, it's going to be very difficult to wrap our heads around the whole thing. But I think we can make a lot of progress and maybe get clear about some of the directions and some of the applications, and get a better sense of how things are going, because this is something I've been meaning to think about and talk about for a while.
I'm excited. But to start, can we just get some definitions out of the way? Because I think people hear a lot of these terms flying around. There's artificial intelligence, AI. There's machine learning, ML, in the business, and then there's just sort of the digitization of everything, and then there's just sort of more powerful computers. Like, if I'm running a climate model and I want to put more variables in there, but I'm constrained by the amount of computing power it would take, computers that have more power and more processing cores or whatever, then I can do that.
So help us understand the distinction between these things, between just sort of more and better and faster computing and something called machine learning and something called artificial intelligence. What do all these things mean?
Priya Donti
Yeah, so I'm going to start with AI: artificial intelligence. AI refers to any computational algorithm that can perform a task that we think of as complex: things like speech or reasoning or forecasting. And AI has two main branches. One of them is based on rule-based approaches, where you basically write down a set of rules and ask an algorithm to reason over them. So when, for example, Deep Blue beat Garry Kasparov in the game of chess, this was a rule-based scenario: you were able to write down the rules of chess and get an algorithm to understand and reason over what to do given that set of rules. Of course, there are lots of scenarios in the world where it's really difficult to write down a set of rules to capture a task, even though we kind of know how the task goes.
David Roberts
Most, you could say, are difficult.
Priya Donti
Exactly. And so, one of these things is, like, if I have an image, what does it mean for that image to contain a picture of a cat? I can probably tell you, okay, there's got to be a thing with ears, a head, a tail, but that doesn't always capture it, because you can't always see the tail. Like, how does this work? And so, machine learning is a type of AI that basically tries to automatically learn an underlying set of rules based on examples. So, for example, it takes large amounts of data that it can analyze and use to figure out what the patterns are in that underlying data, and then applies those patterns to other similar scenarios, like classifying other images that the algorithm hasn't yet seen but that are similar to what it saw when it was being trained.
And yeah, I would say that in terms of what's the distinction between these things and computing, I would say computing is a workhorse behind many of these algorithms. So in order for these algorithms to work, you need fast computers that are able to kind of execute the computations behind the creation of these algorithms. Behind the learning. You also need good data. And with those things together, you can basically create a lot of these more powerful AI and machine learning algorithms that you've seen today.
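To make Donti's distinction concrete, here is a minimal sketch, in Python, of "learning from examples" rather than writing rules by hand. The features and labels (ear pointiness, tail length, "cat" vs. "not-cat") are invented for illustration; real image classifiers learn from raw pixels at vastly larger scale.

```python
# A toy sketch of learning from labeled examples instead of hand-written rules.
# The features (ear_pointiness, tail_length) are invented for illustration.

def train_nearest_neighbor(examples):
    """'Training' here just memorizes the labeled examples."""
    return list(examples)

def predict(model, features):
    """Classify a new point with the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(model, key=lambda ex: distance(ex[0], features))
    return closest[1]

# Labeled examples: (features, label)
training_data = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not-cat"),
    ((0.2, 0.1), "not-cat"),
]

model = train_nearest_neighbor(training_data)
print(predict(model, (0.85, 0.75)))  # close to the "cat" examples -> "cat"
```

No rule for "cat" was ever written down; the implicit rule emerges from the data, which is the shift from rule-based AI to machine learning that Donti describes.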
David Roberts
I see. So in rule-based AI, you're kind of telling the computer the rules and then hoping that the computer can use the rules to respond effectively to new data. With machine learning, you're just feeding it an enormous amount of data and it is deriving the rules or patterns from the data.
Priya Donti
Right. And those rules might be derived in a way that either is or is not interpretable. So I may or may not be able to go into the model and actually pull out what the set of rules are. But implicitly, at least in there, there's some set of rules that's being learned based on the data.
David Roberts
So there are so many side paths that I'm going to try not to go down all of them as we go. But I'm sort of curious because one of the fears people are always bringing up is you feed it this enormous amount of data, it derives some rules from it and applies that to new data, but you don't really know what it's doing. And this is something we hear about AI a lot, is sort of relatively quickly the sort of complexity of what's going on and the kind of foreignness of what's going on to our way of thinking, to sort of human reasoning just puts these things out of touch. And pretty quickly we're in a kind of like, well, it seems to be working, so let's keep using it even though we don't know what it's doing.
So I guess my question is, is that a limitation of our knowledge? In other words, is it theoretically possible if we had sort of the time and willpower to dig in and figure out what it's doing? Or is there some reason that in principle it's sort of impossible to know what it's doing? Does that make sense?
Priya Donti
It does, yeah. And I'd say that one thing to step back and note is that there's a diversity of machine learning methods, some of which are inherently a bit more interpretable than others. So, linear regression, even though people don't think of it as a form of machine learning, it actually is, right? Because you're taking in some data and you're learning parameters that allow you to make some kind of prediction. And linear regression is abundantly interpretable. And similarly, you have things like decision trees. There are more complicated methods, like physics-informed machine learning, that try to constrain the model such that you can pull out certain kinds of rules.
So, there is that axis of methods, but then there are these other methods, like some of the more complicated deep learning methods you see today, where, agreed, we basically view it as a bit of a black box. You don't know exactly why a prediction is being made, and there is some work going on to try to get at this issue and see if there are ways we can understand what the model is doing post hoc. But it's an area of research, I think, one that undergoes a lot of debate, in terms of: can you post hoc explain what a deep learning model did?
For example, if I, as a person, make a decision and take some kind of action and you, David, ask me, "Hey, why did you do that?" I could probably come up with any number of explanations for you, all of which seem plausible, but those may or may not actually describe how I actually made the decision. So there's a bit of a debate about kind of even if you can try to somehow understand what the deep learning model did, what are the limits of that analysis and interpreting what it actually did and why?
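As a concrete illustration of Donti's point that linear regression is interpretable machine learning: after fitting, the learned "rule" is just two numbers you can read off directly. The data below (outdoor temperature versus cooling load) is invented for illustration.

```python
# Linear regression as interpretable machine learning: the learned model
# is fully described by a slope and intercept you can inspect.
# The data points are made up for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# e.g. outdoor temperature (C) vs. building cooling load (kW)
xs = [20, 25, 30, 35]
ys = [10, 20, 30, 40]

slope, intercept = fit_line(xs, ys)
# The entire learned "rule" is readable: load = 2.0 * temp - 30.0
print(f"load = {slope:.1f} * temp + ({intercept:.1f})")
```

Contrast this with a deep network, where the learned parameters number in the millions and no single one carries a readable meaning, which is the black-box end of the axis Donti describes.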
David Roberts
Yeah, there's a lot about this whole subject matter that sort of unnerves people. But this is what I think is at the root of it: as these things get more complex, you pretty quickly get into an area of kind of trust or faith, almost, in our machine masters. They seem to be doing well by us, even though we don't know exactly why. There's just something a little weird about that.
Priya Donti
Yeah. And maybe just one thing I'll add: there are levers here, though, right? In any kind of machine learning pipeline, you have the data, the model, and then the outputs and how you evaluate those outputs. And you do have the ability to quality control or constrain any of those things. So you should know exactly what data is going into a model, in order to understand whether your model is actually learning from quality examples. You can, as I mentioned, constrain your model to be an interpretable model.
And then, what some of my work looks at is, you can actually often constrain the output in certain settings. So, if I create a controller for a power grid based on machine learning and it outputs some kind of action, but I know something about the control theoretic constraints that that action should satisfy, there are ways I can actually constrain the output so that it still satisfies various performance criteria that we recognize. So, it isn't sort of a foregone conclusion that AI and machine learning must be this sort of black box, scary thing. But I would say that there is work to be done and kind of intention that goes into making sure that we really understand and are constraining and quality controlling how the whole pipeline goes forward.
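A minimal sketch of the output-constraining idea Donti describes: whatever action a learned controller proposes, project it onto known operating limits before acting. The bounds and setpoints here are invented, not from any real grid, and real methods (including Donti's research) enforce much richer physical and control-theoretic constraints than simple box limits.

```python
# Hedged sketch: constrain a learned controller's output so it always
# satisfies known operating limits, regardless of what the model proposes.
# All numbers are invented for illustration.

def project_to_limits(action, lower, upper):
    """Clip each control variable into its feasible range."""
    return [max(lo, min(hi, a)) for a, lo, hi in zip(action, lower, upper)]

raw_action = [1.3, -0.2, 0.5]        # e.g. per-unit setpoints from an ML model
lower_bounds = [0.0, 0.0, 0.0]       # known feasible limits
upper_bounds = [1.0, 1.0, 1.0]

safe_action = project_to_limits(raw_action, lower_bounds, upper_bounds)
print(safe_action)  # the out-of-range proposals are pulled back to the limits
```

The point is that the black-box model's output never reaches the physical system unmediated; a transparent, verifiable layer guarantees the constraints, which is why "black box" is not a foregone conclusion.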
David Roberts
Right. And one other general question. So when people talk about AI these days, I think mostly in the popular imagination, what they're talking about is what's called general intelligence. This idea that you could create a program that could find its own data and apply rules and figure things out, basically that has some autonomy. Would that be the rule-based AI? Like, you give it the rules and then it goes and applies them to the world. Or is there a dispute about how you get to general intelligence? Which of these routes leads you to general intelligence?
Priya Donti
Yeah, so I would say that the distinction between generally intelligent AI versus task-specific AI is not quite the same as this AI/machine learning distinction of rules versus data. It's something different. And it kind of comes down to: when you create an algorithm, there is some objective that you're creating it with in mind. And so, for example, if I am creating a forecasting model of solar power, that's a very specific task. I'm giving it very specific data. I'm making a very specific ask when I look at the output of the model. But others are asking, can we somehow imbue a model with a lot of data or a lot of rules and learn some kind of foundational representation that really captures a ton of general knowledge, and that can then be tuned or specified in various ways?
These are kind of the kinds of works that really are trying to lead towards something more general. And so, yeah, I would say that there's kind of these different threads of work within the machine learning community at the moment.
David Roberts
Right. And just to be clear, we have not reached general intelligence and no one knows how to do that. And there's a lot of theoretical work going on in that area. But practically speaking, almost all of the AI or machine learning that is happening today is task-based. Right? I mean, to a first approximation, when we talk about AI and machine learning, that's what we're talking about today.
Priya Donti
Yes, that's right. So, I think that there is some kind of research going on in specific labs that is trying to work on artificial general intelligence. But when we think about the implementation of AI and machine learning across society and what it's really used for in practice, I think it is safe to say that a lot of it is task-based. And even some of the stuff that looks very clever and artificial general intelligence-like, there is genuine debate as to whether that is actually the case. For example, large language models and models like GPT have been called stochastic parrots, which is to say, they're not actually thinking; they are mirroring, parroting in a kind of stochastic way, what they're seeing in their data.
And we potentially as people who then read text outputs that seem realistic, we maybe ascribe intelligence to that. But that doesn't necessarily mean there's any thinking actually going on under the hood.
David Roberts
Yes. And then, of course, there's this whole... back in the "dark ages", I was in grad school in philosophy and I used to study cognitive science and consciousness and all these sorts of theoretical debates around this stuff. There is a sort of debate, this idea that all we're doing is what the language models are doing, just on a vast scale. So there is no sharp line. Eventually you do that well enough that you are, de facto, deploying intelligence, and eventually there will be no point in drawing a distinction between what the models are doing and true intelligence.
But that is well far afield of our subject here today anyway. So we're going to try to wrap our heads around how this all applies to the climate change fight, the clean energy fight. But just as a caveat up front, in one of your papers you write "those impacts that are easiest to measure are likely not those with the largest effects." So just by way of framing the discussion. What do you mean by that?
Priya Donti
Yeah. So when we think about the impacts of AI and machine learning on climate, we need to think about a combination of things: AI and machine learning's direct carbon footprint through its hardware and computational impacts; the ways in which AI is being used for applications that have quote-unquote immediate impacts on climate change, be those good or bad; but then we also have to think about the broader systemic shifts that AI and machine learning create across society that then may have implications for our ability to move forward on climate goals. And I'm sure we'll get into the specifics of all of those things.
But I guess, briefly speaking, these broader systemic shifts that AI and machine learning are going to potentially bring about are extremely hard to quantify, but they'll be large. And so it's important to make sure that as we think holistically about the impact of AI on climate, we do the quantifications in order to guide ourselves, but we also make sure to look at this holistic picture, even for things that we're not able to put so concretely into numbers.
David Roberts
Yeah, I think about going back to, whatever, the beginning of the 19th century and just saying, like, well, what are the systemic impacts of automation going to be? Who knows? But they were in fact enormous, right? And they did, in fact, swamp the tangible, measurable, immediate impacts. So this is just to keep in mind that we are, to a large extent, I think, stumbling around in the dark here, kind of guessing. Like, we know something big is going to happen. Big things are coming, but good big things? Bad big things? What kind of big things?
To some extent, we're guessing from behind a veil of very little information. So let's start then with the immediate impacts. And this is something, when I threw this out on Twitter, that I got a lot of questions about. I think it's in some ways the easiest question to ask, which is, just as you say, all these algorithms require a bunch of computing, a bunch of calculations, which requires a bunch of chips and a bunch of data centers and a bunch of hardware, basically. And so the first thing to ask is just: do we have a good sense of how much this shift into AI and machine learning is increasing the world's computing load, and exactly how big the greenhouse gas impacts of that computing load are? This is a conversation I think people are very familiar with vis-à-vis Bitcoin, right? Like, lots of people are asking about Bitcoin: is whatever we're getting out of Bitcoin worth the immense resources we're putting into it, computing-wise? Same question with machine learning and AI. So do we know how to wrap our head around that? Do we know how to measure the total amount of computing devoted to this?
Priya Donti
Yeah, and there are some macro-level estimates here, but they are evolving quite a bit over time. The latest numbers that I'm on top of, at a macro level, are that in 2020 the total information and communication technology sector accounted for something like 2% of global greenhouse gas emissions, and machine learning is an unknown fraction of that. And one thing that was happening is that we were starting to see an increase, I think an exponential increase, in the amount of computational cycles being demanded by the various types of compute that we do across society. But hardware was also getting more efficient at a similar rate, which kept these greenhouse gas emissions and energy impacts relatively constant over a decade or so.
But we're seeing a couple of these trends change. For example, we're starting to see larger and more energy intensive AI and machine learning models being developed and we're also potentially reaching the end of, quote, unquote Moore's Law improvements that were leading to these hardware efficiencies. And so it's really important that we get honestly better and more transparent data on machine learning workloads and sort of the dynamics and trends of that in order to really understand what we're dealing with. And this is one of those things where it's, from a technical perspective, not the hardest in the world to measure the computational impacts of AI and machine learning. You sort of know where they're happening or you know what entities are doing them. And it's a matter of instrumenting some computational hardware. But for political and organizational reasons we don't tend to have transparency on that data. It's also worth noting that hardware is an important part of this conversation because, of course, data storage and machine learning algorithms, they all kind of rely on having computational and storage hardware. And the kind of creation and disposal and transportation of that hardware has not only kind of energy impacts but materials impacts and water impacts and all other sorts of impacts that we really need to be thinking about.
David Roberts
So is it true that Moore's Law is slowing down? I don't know that I had tuned into this issue, but is it measurably slowing down, or is it a fear that it's going to? Can we see it? I imagine it's not super clear.
Priya Donti
Yeah. So I'm not a computer systems researcher myself, but I will say that there has at least been discussion within the community about are we reaching the end of Moore's Law as we've potentially run against just physical limits on how small you can make something.
David Roberts
Right, interesting. Yeah, we're getting down to nano-whatevers. Now, is it fair to say that the majority of these direct impacts are about the electricity that is running these things? Or are the embedded emissions in the hardware itself, which you were just referring to, comparably sized? Do we know how those two compare to one another?
Priya Donti
Yeah. And I will say again, it's a bit of a shifting landscape. But as of now, I would say that the computational emissions are higher than the embodied emissions. But this is also shaped by organizational choices in certain ways. For example, what we see is that data centers are often replacing their computational infrastructure very quickly in order to make their computations more efficient, so they reduce their computational emissions footprint.
David Roberts
Right.
Priya Donti
But by doing that, by replacing your hardware so quickly, especially when your hardware is not actually spent, you're increasing your embodied emissions. And so I think, if we believe that hardware is getting more efficient while being replaced more quickly, the proportion of embodied emissions is increasing relative to the computational emissions.
David Roberts
And just in terms of how much to worry about this, about these impacts in particular, I mean, I guess I'm inclined to just say most of that comes down to the power sources. A) the power sources that are running the data centers, or b), the power sources that are running the factories that are producing the things. Those power sources are getting cleaner over time. Right. They're being replaced by renewables over time. And so you can imagine a not too distant future where this particular family of impacts, the direct impacts, are fairly low to negligible. So I guess I'm just inclined to just not worry about that piece of it much. Is that off? Do you worry more than that about this piece of it?
Priya Donti
I do worry about it. And this is because if we think about decarbonization strategies across any energy-related sector, the first order of business is to reduce waste and improve efficiency. And if every sector feels entitled to its unbounded growth in energy use, we start to run into various constraints on the actual "can the grid handle this?" on the decarbonization-of-the-grid side. So I would say that here, "reduce waste" translates to: if it's not worth running a particular machine learning algorithm, if the benefit on the other side isn't worth it, then we shouldn't be doing it.
And then "improve efficiency" means: for use cases where we've decided it is worth it, let's make sure to do that in a way that reduces energy use as much as possible. And I think this sector, like every other energy-based sector, needs to be thinking about those primarily, in addition to, of course, decarbonizing the grid.
David Roberts
Right. And there's a lot of runway left to make these things more efficient, like the computations themselves. Is that mostly a software thing, a programming thing to make them more efficient? Or do you mean physical improvements in chips and data centers and whatnot?
Priya Donti
So there's stuff that can be done in both software and hardware. There are physical improvements that are doable and are being worked on to make hardware more efficient. But also in terms of the software, there's work looking at: if you have a big model, can you do something called pruning, or architecture search? Things that allow you to figure out whether there are smaller versions of the model that would make sense. There's also the training process itself, getting your model to a state where it's making good predictions. There are various procedures like hyperparameter tuning that go on there, where you're trying to figure out meta design choices around how the model is designed.
And there are more and less wasteful ways to do hyperparameter tuning. We can again choose not to always use the most complex model if it's not worth the value. So if a much less energy-intensive model gives you 99.9% accuracy and it takes you 1,000 times more energy to get to 99.99%, that may not be worth it in every use case. And so really, I think there's a lot that can be done in there as well.
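The "is the bigger model worth it?" judgment Donti describes can be sketched as a simple selection rule: among candidate models, take the least energy-hungry one that is within a small accuracy tolerance of the best. The accuracy and energy numbers below are invented for illustration.

```python
# Toy sketch of trading off accuracy against energy cost when choosing a model.
# The candidate names, accuracies, and energy figures are invented.

def pick_model(candidates, tolerance=0.001):
    """candidates: list of (name, accuracy, energy_kwh).

    Keep every model within `tolerance` of the best accuracy,
    then return the name of the cheapest one in energy terms."""
    best_accuracy = max(acc for _, acc, _ in candidates)
    acceptable = [c for c in candidates if c[1] >= best_accuracy - tolerance]
    return min(acceptable, key=lambda c: c[2])[0]

candidates = [
    ("small", 0.999, 1.0),       # 99.9% accurate, 1 kWh
    ("large", 0.9999, 1000.0),   # 99.99% accurate, 1000 kWh
]
print(pick_model(candidates))  # "small": within tolerance at 1/1000 the energy
```

Setting `tolerance=0.0` would instead demand the most accurate model regardless of energy, which is exactly the default many deployments implicitly choose today.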
David Roberts
And it seems like we could also although I don't think we will, it seems like we could also say as a society that some things are not worth putting all this effort into. Like maybe if you're creating a bunch of greenhouse gases and burning a bunch of data center cycles to sort of improve the performance of a button position on a particular Amazon page or whatever, maybe we should just say deal with the current button position. There are frivolous things that we're throwing enormous resources at already.
Priya Donti
It's totally true. And I think all of this is driven by the fact that money talks; it's unquestionably about where money flows in society.
David Roberts
Okay, well, so those are the computing-related, direct physical impacts. The next tier up is what you call immediate application impacts, which is just: what are the things that are running on machine learning doing now for climate? And, I guess you might say, against climate. Oil companies have access to this stuff too, and I imagine are throwing tons of resources at it. One of the papers you sent me was sort of a catalog of things that are using machine learning, and it's already so vast that you can't really wrap your head around it.
It's spread so fast that it's hard to say anything general about how they're being used. But is there some way of sort of wrapping our heads around or categorizing what machine learning is being used for now in this world? In this sort of clean energy climate world?
Priya Donti
Yeah. So I can give a couple of themes that I think cut across a lot of the applications that I've seen, and these aren't exhaustive, but hopefully are at least illustrative. So one of them is that machine learning is, maybe unsurprisingly, being used to improve predictions by analyzing past data in order to provide some kind of foresight. An example there is that the nonprofit Open Climate Fix in the UK is working with National Grid ESO to basically create demand and solar power forecasts by ingesting a combination of historical data, the outputs of numerical weather prediction models, and, in the case of solar, things like videos or images of cloud cover overhead.
And by basically cleverly combining different data sources and then using machine learning models to learn correlations between them, they were able to cut the error of the electricity demand forecasts, I believe, in half —
David Roberts
Oh, wow!

Priya Donti
by doing that. And there are also applications in the climate change adaptation space. So for example, there's a Kenya-based company called Selina Wamucii which is using AI to predict locust outbreaks, which are exacerbated by climate change, by basically combining agricultural data, weather data, and satellite data. So the idea is basically, if you have a bunch of different data sources that are each telling you something a bit different about the problem, machine learning is really good at combining and learning correlations among these heterogeneous data sources and then using that to make some kind of forecast about the future. So that's one theme.
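A toy sketch of the combining-heterogeneous-sources theme: weight each forecast by how well its source predicted in the past (the inverse of its historical error). All the numbers and source types below are invented; systems like Open Climate Fix's learn far richer combinations with neural networks rather than a fixed weighting rule.

```python
# Toy sketch: blend forecasts from heterogeneous sources, trusting each
# source in proportion to its past accuracy. All numbers are invented.

def combine_forecasts(forecasts, past_errors):
    """Weighted average, with weights proportional to 1 / past error."""
    weights = [1.0 / e for e in past_errors]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# e.g. solar output (MW) predicted from a weather model, satellite
# imagery, and historical patterns, each with its own track record
forecasts = [100.0, 120.0, 110.0]
past_errors = [10.0, 20.0, 10.0]   # mean absolute error of each source

print(combine_forecasts(forecasts, past_errors))  # pulled toward the reliable sources
```

The satellite-based forecast, with twice the historical error, gets half the weight, so the blended estimate leans toward the two better-calibrated sources.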
David Roberts
And does that theme also apply to the climate models themselves? Like, I'm assuming climate modeling in general is going to benefit from all this stuff.
Priya Donti
Yes. And so there is a lot of work looking at this, though not using machine learning as a direct predictor of climate, because ultimately climate change involves a shift in what's going to happen. And what machine learning is good at is: you have a data set, you identify existing patterns, and then, to the algorithm, those patterns are the world. So it's going to continue trying to apply the same patterns. But where machine learning has been used in climate forecasting is to do things like take these existing physical models that are really complicated to run and try to approximate portions of them so that the overall model runs more quickly.
Or take the outputs, which are often coarse-grained, and try to downscale them or fine-grain them based on on-the-ground data, kind of post hoc.
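A toy sketch of the downscaling idea Donti mentions: refine a coarse model output using a correction learned from past on-the-ground observations. Here the "learned" correction is just an average offset per measurement site; the temperatures are invented, and real statistical downscaling methods are far more sophisticated.

```python
# Toy sketch of statistical downscaling: learn, from history, how each
# local site differs from the coarse model, then apply that correction
# to new coarse outputs. All numbers are invented.

def learn_offsets(coarse_history, observed_history):
    """Average difference between each site's observations and the coarse model."""
    n_sites = len(observed_history[0])
    offsets = []
    for site in range(n_sites):
        diffs = [obs[site] - coarse
                 for coarse, obs in zip(coarse_history, observed_history)]
        offsets.append(sum(diffs) / len(diffs))
    return offsets

def downscale(coarse_value, offsets):
    """Turn one coarse-grid value into per-site estimates."""
    return [coarse_value + o for o in offsets]

# One coarse grid cell, two measurement sites inside it (temperatures, C)
coarse_history = [20.0, 22.0, 21.0]
observed_history = [(22.0, 18.0), (24.0, 20.0), (23.0, 19.0)]

offsets = learn_offsets(coarse_history, observed_history)
print(downscale(21.5, offsets))  # site-level estimates from one coarse value
```

One site consistently runs warmer than the coarse model and the other cooler; the learned offsets capture that, post hoc, without rerunning the physical model at finer resolution.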
David Roberts
Interesting.
Priya Donti
Yeah.
David Roberts
So just prediction.
Priya Donti
Yes so prediction is one.
David Roberts
Seems like an obvious enough one.
Priya Donti
Yes. The second one I'll talk about is taking large and unstructured data sources and distilling them into actionable insights. This often comes up when thinking about the large amount of satellite and aerial imagery that's becoming available, as well as the large number of text documents we have available on public policies or patents or things like that. So for example, there's a project called the MAAP Project, which is using satellite imagery to try to give a real-time picture of deforestation in the Amazon, in order to then enable interventions to actually stop it. And in the public sector, the UN Satellite Center, UNOSAT, uses AI and machine learning to analyze satellite imagery to get high-frequency flood reports. Because you can have a human looking at satellite imagery and analyzing the extent of flooding, but it's a task that's hard for a human to do at scale.
And so they use machine learning to actually try to analyze how is flooding changing and get real time reports that have helped them improve disaster response actions.
David Roberts
Yeah, in a sense it's just pattern recognition, even for data collections that are so vast and heterogeneous that maybe the human mind is stymied. Human minds are just pattern-recognition machines too, but we have our wetware limitations, so machine learning can find patterns in much larger and more heterogeneous data sets.
Priya Donti
Yeah, I mean, in some cases it's that the patterns are just really hard for people to grasp. Now, I have to emphasize, the pattern needs to exist. You're not going to find patterns where they don't exist. But that's one case. Another case is one where we as humans can grasp them and readily apply them; it's just that scale is really hard. Labeling a couple of satellite images to understand flood extent is fine. Labeling thousands and thousands, you're just going to run out of human time.
David Roberts
All right, that's two.
Priya Donti
Number three. So the third is that machine learning can be used to optimize complex real-world systems in order to improve their efficiency. The last two themes I talked about, forecasting and distilling data into actionable insights, are fundamentally about providing information that will ultimately go on to inform a decision. But there are places where machine learning is itself, in some sense, making a decision: it is automatically optimizing some kind of system. This comes up, for example, in building automation. There are companies that are using AI and machine learning to automatically control heating and cooling systems, for example in commercial buildings, based on sensor data about weather, temperature, and occupancy, trying to leverage that to basically find efficiencies in how the heating and cooling infrastructure is managed —
David Roberts
You can throw power prices in there.
Priya Donti
You can throw power prices in there. Yes. And I think this is actually a really kind of underrated and underexplored area of work where there's work using machine learning for demand response and market trading and there's work using machine learning for building energy efficiency. But I think actually there's a lot to be done in kind of bridging those two views. And so I'm really glad you brought that up, actually.
David Roberts
Well, I can also think of another large complex system that desperately needs some optimization, which I think you also know something about, one of our shared obsessions, namely the electricity grid. I'm very curious what is currently being done with machine learning on the grid?
Priya Donti
Yeah, it's a great question. So, machine learning is pretty widely deployed across power grids for forecasting and situational-awareness kinds of tasks. When it comes to optimization and control, I would say a lot of those applications sit more in the research realm than in the deployment realm right now. And part of the reason for that is that there's just a big lack of appropriately realistic data, simulation environments, and metrics that actually allow us to test out and validate research methods in an environment that is realistic, and actually advance their readiness that way.
Because by testing out a research method in an environment that looks realistic, you then understand how you need to adjust your method to make it responsive to the realities of the grid. You have that feedback loop and that progression of readiness, which I think we're lacking a lot of infrastructure for. But concretely, where machine learning can play a role is when we think about centralized optimization problems. Things like optimal power flow problems, and the stochastic and robust variants of them, are computationally intensive to solve. And so, similarly to the theme of improving the runtime of climate models, we can ask: are there parts of the problem we can approximate, or can we learn quote-unquote warm start points?
Or can we even make direct and full approximations to these centralized optimization models, but in ways that preserve the physics and hard constraints that we care about? And that's actually what some of my work looks at. And then also on the kind of distributed and decentralized control side, we want to construct controllers that can make decisions based on local data, maybe plus a limited amount of communication to get some more centralized data. And this is a place where control theory is playing a role and AI and machine learning can potentially also play a role by basically learning complex patterns in the underlying data and using that to make nuanced control decisions.
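To make the warm-start idea concrete, here is a minimal sketch on a toy economic-dispatch problem, a stand-in for optimal power flow. The cost numbers, the projected-gradient solver, and the "learned" linear map are illustrative assumptions, not Donti's actual methods. The projection step keeps the power-balance constraint satisfied exactly, which is the "preserve the hard constraints" point:

```python
import numpy as np

def dispatch_cost(g, c):
    """Total quadratic generation cost: sum of c_i * g_i^2."""
    return float(np.sum(c * g ** 2))

def solve_dispatch(c, demand, g0, steps, lr=0.05):
    """Projected gradient descent on min sum(c*g^2) s.t. sum(g) = demand.
    (Generator limits are omitted to keep the toy problem small.)
    Re-projecting after every step keeps the balance constraint exact."""
    g = g0 - (np.sum(g0) - demand) / len(g0)      # project onto sum(g) = demand
    for _ in range(steps):
        g = g - lr * (2 * c * g)                  # gradient step
        g = g - (np.sum(g) - demand) / len(g)     # re-project
    return g

c = np.array([1.0, 2.0, 4.0])                     # per-generator cost weights

# "Training data": fully solved dispatches for a few historical demand levels.
demands = np.array([5.0, 8.0, 11.0])
optima = np.stack([solve_dispatch(c, d, np.zeros(3), steps=2000) for d in demands])
warm_map = (optima / demands[:, None]).mean(axis=0)  # dispatch is linear in demand here

# At decision time, the learned map gives a near-optimal warm start,
# so only a handful of solver iterations are needed.
d_new = 9.0
warm = solve_dispatch(c, d_new, warm_map * d_new, steps=5)
cold = solve_dispatch(c, d_new, np.zeros(3), steps=5)
```

In a real grid setting the learned map would be something like a neural network trained on many solved OPF instances, with the projection replaced by layers that enforce the physical constraints.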
David Roberts
When I first thought about AI, machine learning, and climate, this was the very first place my brain went, no surprise to any listeners. The rise of DERs, distributed energy resources, is just, among other things, an enormous increase in complexity. You're going from, whatever, a dozen power plants in your region to potentially thousands, tens of thousands, hundreds of thousands. And I think I mentioned this when we talked earlier, but I'm not sure ordinary non-grid nerds really understand how much of grid operation today is still, like, people turning knobs and making phone calls to one another.
It's bizarrely low tech, a lot of it. And so that just seems to me like an absolutely ripe area for this kind of thing.
Priya Donti
Yeah, I definitely agree. I mean, there's the scale problem you talked about, there's the speed problem as we deal with increased variability, and there's actually the physical fidelity problem. Right now on power grids, we find that true physical representations are really hard to solve computationally. So you often use something like DC optimal power flow as an approximation to the grid physics, rather than something more realistic like AC optimal power flow. What we rely on is making a decision a bit ahead of time based on these approximate physics.
We let that play out, and then real-time adjustments on the grid, things like automatic generation control, take place to compensate for mispredictions or mischaracterizations of the physics. And as we have fewer spinning devices on the grid, and we're starting to see things like faster frequency swings because we don't have that buffer provided by spinning devices attached to the grid in the same way, we also lose some of our buffer in terms of being allowed to be slightly physically off in our characterization.
David Roberts
So we need to be more precise.
Priya Donti
We need to be more precise.
David Roberts
Yeah, this is the thing about solar power in particular: it's just so digital. It seems like it lends itself to digital control, and not to this old-fashioned kind of inertia and spinning and all these very physical things.
Priya Donti
And I think one way to think of it is: I know there are a lot of folks who are very scared. I mean, we're fundamentally talking about a safety-critical system where, if it goes down, it's a real big issue. So for those physical constraints that we can write down and really be certain of, there are ways to construct AI and machine learning methods that fundamentally respect them. And also, it's not unreasonable to think that at certain timescales we would have some amount of human-in-the-loop control.
Sort of in the same way, when you're driving a car, you as a human are steering it, but you're not dictating every lower level process that takes place to make the car go.
David Roberts
Yeah, the car analogy, getting slightly off course again. But the car analogy raises something that I've been thinking about, which is some of the dangers of automation coming from machine learning and AI. And I think the car example works really well. It's generally pretty safe for a human being to be 100% in charge of the car. And I can imagine a level of AI and sensing and sympathetic infrastructure that makes it such that 100% automated control is safe. But what doesn't seem safe to me is the quasi semi-automation, where the car can drive itself most of the time, but then it needs a human out of nowhere, possibly quite suddenly. We humans are not really made for that: to sit there not doing anything for hours on end and then be ready at any second to jump in. And I wonder if there's an analogy to other systems. Is there a gap between no automation and full automation where weird automation-human interactions get kind of sketchy? Is that analogy broadly applicable, or is it just a car thing?
Priya Donti
No, I mean, I think it is broadly applicable. And it's a combination of what the correct level of human-automation interaction is, both at the level of an individual component and across multiple components interacting with each other that may have different trade-offs. In cars, that's if you have a mixture of autonomous, semi-autonomous, and fully human-controlled cars on the road. In grids, I mean, it's one physical system, but there are different governance- and jurisdiction-related arrangements such that we're doing different things on different parts of the system. And so how those interact with each other becomes a super important question.
David Roberts
Yeah, and it's one thing in a car, it's another thing if you're driving a grid. As you say, the cost of mistakes is much higher. But I interrupted your list, I think. Was there a fourth?
Priya Donti
Yeah, I had a last theme that I wanted to talk about. So the last one is machine learning for accelerating the discovery of next-generation clean technologies. We've talked so far about machine learning for operational systems. But of course, as we're trying to transition systems: how do we come up with that better battery for frequency regulation on the grid or for your electric vehicles? How do you come up with a better carbon dioxide sorbent for sequestration-related applications, or electrofuels, things like that? What machine learning has been used to do is analyze the outcomes of past experiments in order to suggest which experiments to try next, with the goal of cutting down the number of design and experimental cycles needed to get to that next better material or clean technology.
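The suggest-the-next-experiment loop can be sketched with a toy surrogate model. Everything here is hypothetical: the hidden "lab measurement" function, the polynomial surrogate, and the distance-based exploration bonus are simple stand-ins for the Gaussian-process-style models typically used in practice.

```python
import numpy as np

def lab_measurement(x):
    """Hidden ground truth, e.g. battery performance vs. one design knob.
    The algorithm only sees results of experiments it asks for."""
    return 1.0 - (x - 0.63) ** 2

candidates = np.linspace(0.0, 1.0, 101)   # designs we could synthesize
tried_x = [0.0, 1.0]                       # two initial experiments
tried_y = [lab_measurement(x) for x in tried_x]

for _ in range(6):                         # budget: six suggested experiments
    # Surrogate: fit a low-degree polynomial to the experiments so far.
    deg = min(2, len(tried_x) - 1)
    pred = np.polyval(np.polyfit(tried_x, tried_y, deg), candidates)
    # Exploration bonus: prefer designs far from anything already tried.
    dist = np.min(np.abs(candidates[:, None] - np.array(tried_x)), axis=1)
    x_next = candidates[int(np.argmax(pred + 0.5 * dist))]
    tried_x.append(float(x_next))
    tried_y.append(lab_measurement(x_next))   # "run" the suggested experiment

best_design = tried_x[int(np.argmax(tried_y))]
```

The scientist stays in the loop: each suggestion is just that, a suggestion, and the expensive synthesis step is only spent where the model, plus human judgment, says it is worthwhile.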
David Roberts
Right. Yeah, I hear a lot about this, and this always seems enormously positive to me. And I thought, isn't it also in addition to just suggesting experiments, isn't it also a thing that they can sort of run the experiments virtually? Sort of do the materials science experiments virtually, so you don't have to do the physical experiment at all?
Priya Donti
Yeah. So you can do some amount of virtual simulation, rather than physical experimentation, in order to understand what the performance characteristics of a particular material are. But virtual simulations are not perfect. And so ultimately you do need to synthesize at some point, right? You need to synthesize or create the thing and test it out in the physical world.
David Roberts
At least you could narrow down the number of physical experiments you need.
Priya Donti
That's exactly right. That's exactly right. And in this case, again, it's not that you're letting a machine learning algorithm itself dictate exactly what experiments you do at all times. There's human scientific knowledge coming into play on the other side, to look at the output and say, "that seems reasonable, that's something I'm going to try," versus "this might not be worth the millions of dollars it takes me to synthesize this thing." So it's an interaction between the computational insight and the human judgment on the other side.
David Roberts
This is a big thing in pharmaceuticals too, right? Like drug development. Is there a clear success story in that particular application? Like, is there a materials advance where the company that did it was like, look what we did with AI? Can we point to something yet?
Priya Donti
Yeah. So a group of us wrote this report for the Global Partnership on AI, which provides recommendations to policymakers on how they can align the use of AI with climate action. And as a part of that, we actually highlighted a couple of real-world use cases where we are seeing on-the-ground successes; some of the examples I've talked through today are from there. In this category, one of the successful ones that we highlighted in that report is a startup called Aionics, a Stanford spin-out. They work with battery manufacturers across different sectors, so across energy and transport, to help them speed up their process of battery design, where, of course, the properties of your ideal battery vary based on your use case.
David Roberts
Right.
Priya Donti
And they use a combination of machine learning and some physical knowledge to do this analysis. And per their reporting, they've been able to cut down design times by a factor of ten for some of their customers.
David Roberts
Super interesting.
Priya Donti
I think there's a lot of potentially very impressive gains in that area.
David Roberts
Yeah. I mean, to return to my theme, how do you even begin to predict where that's going to go? The mind boggles, on some level. So in terms of these immediate application impacts, you listed four broad themes, all positive examples. I'm assuming carbon-intensive industries are also —
Priya Donti
Very much seeing the power of AI.
David Roberts
Yeah. Are there prominent sort of examples where AI is being used to find or burn more fossil fuels?
Priya Donti
Definitely. So AI is being used in large amounts by the oil and gas industry to facilitate their operations: things like advanced subsurface modeling to facilitate exploration, the optimization of drilling and pipelines in ways that try to improve extraction and transportation, and also marketing, right, to increase sales. So there are a lot of applications here. And there was a report called Oil in the Cloud by Greenpeace that came out a few years ago.
David Roberts
Yes, I recall.
Priya Donti
Yeah. And that one estimated that AI was going to generate hundreds of billions of dollars in value for the oil and gas sector by the middle of this decade. And that is substantial.
David Roberts
Yeah. And I believe their point was like Google is out there claiming to be a champion of clean energy and decarbonization and et cetera, et cetera, and it is providing these technologies that are turbocharging the fossil fuel industry. Seems odd.
Priya Donti
Yeah. And there's genuine debate, which I do happen to fall on a particular side of, about whose responsibility the resultant emissions are. But I guess what I will say is that every entity — the tech sector, the oil and gas sector — is very eager to claim that every set of emissions is scope three emissions that are not within their direct control. And given the urgency of hitting climate change-related goals, if anything, we shouldn't be so worried about making sure that this sector is responsible and this one isn't. By all means, double count. Make multiple entities responsible for any packet of emissions, and just make sure something happens.
David Roberts
Yeah. What's the danger of double counting? That we might accidentally reduce emissions too much?
Priya Donti
Yeah.
David Roberts
So here's an unanswerable question for you, then. When you look out over the landscape of these immediate application impacts, the way AI and machine learning is being used today, is there any way to net things out and say, oh, it's good for climate, or bad for climate? Or is it just making everybody who does everything slightly more powerful? You know what I mean?
Priya Donti
Yeah. AI is an accelerator of the systems in which it's used. And this is not an original quote; it's a quote from many other people much smarter than I am. But what that means is that we need to look at the societal incentives around who gets to leverage technologies like this, and what kinds of processes it's likely accelerating as a result. For example, is there more money in oil and gas than in renewables? That picture is shifting. But as long as that's the macro-level case, you're going to see AI deployed where there is more money to spend on the use of AI.
And so, yeah, I would say that in some sense the obvious answer would be that the net impact is not good for climate. And this is aligned with the fact that we as a society need to work pretty hard to hit our climate change-related goals —
David Roberts
Just because society isn't good for climate right now.
Priya Donti
Exactly. But I think importantly, as we think about both the broader climate fight and the role of AI within it, these are shapable. Right. So I think that, in some sense, the macro-level question of "is AI good or bad for climate?" often leads to maybe the wrong implied downstream action of "should we do or not do AI?", which I think, unfortunately or fortunately, is at this point a bit of a foregone conclusion. Instead, we need to really be thinking about how we shape these developments on a macro level to be aligned with climate action.
And that's not to say that every application should go forward. I think it's a very valid thing to say a particular application is one where we should not be applying AI. But on a macro level, it's really about steering: thinking about where we should and shouldn't use it, and then how we should use it where we should.
David Roberts
Which is the same set of questions that face us on everything else too, right? On any technology, or doing anything, really. In a sense, these immediate effects are downstream of larger forces, and will change as those larger forces change.
Priya Donti
Yeah, and the reason to think about them in an AI-specific context is the same reason we think about sector-specific policies when we look at climate action. There are, in principle, macro-level policies that should just address everything, right? If you deal with the emissions and the pricing, sure, technically all of the underlying incentives should follow. But in practice, we find that sector-specific policies that are really cognizant of the bottlenecks and trends in a given sector are helpful. And it's the same thing with AI: understanding who the players are, what the levers are, and how we can come up with more targeted policy and organizational strategies to actually address those is ideally additive to thinking about it on just a macro level.
David Roberts
Right, well, I want to talk about policy, but just real quick before we get there, this third level of impacts is system level impacts, which are just going to — I barely even know how to talk about them. There's going to be sort of emergent large systemic shifts that arise out of the changes that these things bring. Are there examples of systemic impacts that could help us wrap our mind around what we mean by them and is there anything general to say about them other than they're probably going to happen?
Priya Donti
Yeah, I mean, I would say that there are some that are a little more in that "uh, they're probably going to happen" category, and others that are more shapeable. So, for example, machine learning is a key driver behind advertising and increased consumption, not just because of advertising, but because of on-demand delivery and all of these things that AI and machine learning create, which often increase emissions, but not always in ways that make us happier. There's this thing about, well, but there's a benefit on the other side. But there isn't always. And largely, it's obviously a big question across society: is increased consumption making us happier?
And AI is certainly driving that. In addition, AI is changing not just how we consume goods, but also information. Different people, when googling something, will get different answers. And on social media, there's the targeting of posts, the generation of misinformation, but also the detection of misinformation. So I think there are some complex ways in which AI interacts with this, having the capability to serve better information, but likewise to serve worse information. And then there are things like the use of AI for autonomous vehicles, where it's unclear what the impacts will look like, but they are potentially very shapeable.
If AI and autonomous vehicles are developed in a way that facilitates private and fossil-fueled transportation, that has very different implications for the transport sector than if you're facilitating multimodal and public transportation, making it easier for people to connect between different modes of transit. And the direction we go in is not a foregone conclusion. So I think there's actually a lot we can do to shape the directions these technologies take in these settings.
David Roberts
Before we move on, there's just one other thought that occurred to me, which is the use of these algorithms in trading, in people day-trading stocks, where they're down to, like, one-millisecond trades now. I've read a lot of people a lot smarter than me write about this, and their conclusion is just: no one needs this. No one is benefiting. The market is not benefiting from this. This does nothing but allow people skimming off the middle to skim more off the middle. So there's an application of algorithms and machine learning where we could just say, no, just don't. Just stop doing that.
Priya Donti
Yeah, I think my tagline for people working on financial markets is: energy markets are way more interesting, because you have both your financial system and your underlying physical system. I know there's a lot to be done there to facilitate renewables integration. Come join us.
David Roberts
Yeah, there's a reality on the other side of all our numbers instead of just this weird sandbox that you're all just playing pretend in. Okay, by way of wrapping up, then let's talk about policies. So in your paper where you are making policy recommendations, some of the policies are just sort of obvious. You price carbon emissions, right? And then that produces a more or less universal force, pushing down carbon emissions and things like that. You offer tax incentives for greenhouse gas reductions. Just general good climate policy, you recommend a lot of that and all that stuff would be great, of course.
But are there more sort of AI specific policy directions we should be thinking about?
Priya Donti
Definitely. So when it comes to facilitating the use of AI for climate action, what we want to think about is creating the right enabling data and digital infrastructure, targeting research funding in particular ways, enabling deployment pipelines (I talked about this research-to-deployment infrastructure that's needed in power grids), and also capacity building. By that I mean both people who have the skills to actually implement all or parts of AI and machine learning workflows, and people who have the ability to run organizations or govern systems where AI and machine learning will play a role.
I think having just that base level of literacy in terms of what you're dealing with becomes super important in allowing there to be a lot of ground-up innovation, where people are equipped with knowledge of their particular context and these tools and can make things happen as a result. So I think there's a lot that can be done. Those all sound like very general levers, but of course there are specifics in there, like: how should research funding look? It should not be that climate funding is diverted to becoming AI-plus-climate funding only. It shouldn't be a narrowing of scope.
It should be things like making sure you have AI-expert evaluators on climate funding calls, so they can understand whether something being submitted makes sense. And it's about shaping AI calls to have climate focuses. So there are some subtleties there, but basically a lot of things are needed to enable the use of AI for climate action.
David Roberts
And it also occurs to me that there's tons of things you could think of where AI and machine learning would improve outcomes that won't necessarily make anybody money or might even by increasing public provision or reducing demand for some services, cost people money, like might reduce the net amount of money to be made. And that seems like a place where government policy could help nudge research funding and activity into those areas.
Priya Donti
Absolutely: trying to identify those quote-unquote public interest technologies and channeling funding toward them. Exactly. And of course, we talked about the negative impacts of AI on climate, and these should absolutely be accounted for as well. When it comes to the computational and hardware footprint, we talked earlier about how it's just really hard to understand what's going on, because you don't have transparency on what the computational energy impacts look like, even though you know in principle how to measure them, because there aren't reporting incentives or requirements or things like that. And when it comes to hardware impacts, we can get a sense of embodied emissions.
But measurements on water and materials are really hard. So it's about putting in place, at minimum, reporting frameworks and standards, so that those who want to report voluntarily know what that means. But I think more importantly, putting in place more mandatory reporting frameworks for some of these things, so we can figure out what the dynamics and trends are and what it makes sense to do next.
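As a sense of what even minimal reporting would enable: operational compute emissions are simple arithmetic once the inputs are disclosed. The figures below (GPU count, power draw, PUE, grid intensity) are hypothetical placeholders, not data from the conversation.

```python
# Back-of-envelope operational emissions for a training run. With mandatory
# disclosure of these inputs, anyone could audit the result.

def training_emissions_kg(gpu_count, hours, gpu_watts, pue, grid_kg_per_kwh):
    """Energy (kWh) = GPUs x hours x watts / 1000, scaled by datacenter
    overhead (PUE); emissions = energy x grid carbon intensity (kgCO2/kWh)."""
    energy_kwh = gpu_count * hours * gpu_watts / 1000.0 * pue
    return energy_kwh * grid_kg_per_kwh

# Hypothetical job: 64 GPUs for two weeks at 300 W, PUE 1.2, 0.4 kgCO2/kWh grid.
kg_co2 = training_emissions_kg(64, 24 * 14, 300, 1.2, 0.4)   # about 3.1 tonnes
```

Embodied hardware, water, and materials impacts are exactly the pieces this kind of formula misses, which is the point about needing broader reporting frameworks.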
David Roberts
Right, final issue. This is something that several people flagged to me that they wanted to hear about. We've recently, I think, seen some articles about the enormous amount of human labor that is behind these AI things. And of course, the world being the way it is, it's often poor people, often exploited people, a lot of people who aren't treated well, aren't paid well. So once again we find ourselves with this shiny new thing in the West, and you scratch down a few levels and you find the blood and tears of poor people behind it.
Is there any sort of like climate or energy specific way of thinking about that or is that just a general concern and do you have any thoughts about sort of like what to do about that?
Priya Donti
Yeah, I mean, it is a general concern. And I would say that some of this also comes from machine learning being, right now, predominantly developed in contexts that have certain assumptions associated with them, like large-scale internet data that is able to be scraped and maintained by entities in the West. Whereas in many settings in the climate realm, for example, you don't have data that's that large, nor do you have the capability to maintain it. But when you make the assumption that larger data and larger models are the way to progress AI and machine learning (an implicit assumption, created by virtue of who it is that's doing the work), then you also create all these human costs, all these hidden costs, that are really important to take into account.
And so I think what really has to happen, and this is along the lines of what we can do at a policy level to align the use of AI with broader climate goals, is that we really need to think about what it means to develop AI in a way that actually serves the needs of people around the world, which doesn't always mean biggest-data AI. There are other ways to do AI. And we should pick applications in ways that drive the development of AI in those directions. If you think about the development of AI for power grids, you're going to think about robustness and safety-critical aspects differently than if you're looking at other areas.
And that's going to shape how AI itself moves forward, and what other domains it immediately has benefits for. So this deeper integration of climate and equity considerations into AI strategies, in a way that then informs funding programs and incentive schemes and the creation of infrastructure and all of that, is going to be really important.
David Roberts
Thank you so much for this. This is really helpful for me in wrapping my head around all this. And it just highlights again the fact that I emphasize over and over on this pod, which is that it really seems like we are on the cusp of a wild, wild time to be alive, to put it as bluntly as possible. Like, we're going to see some crazy stuff in our lifetime. Thank you for helping us get our heads around, at least, how that's shaping up so far. Priya Donti, thank you so much for coming and sharing.
Priya Donti
Thanks so much.
David Roberts
Thank you for listening to the Volts podcast. It is ad-free, powered entirely by listeners like you. If you value conversations like this, please consider becoming a paid Volts subscriber at volts.wtf. Yes, that's Volts.wtf so that I can continue doing this work. Thank you so much and I'll see you next time.