Will Investing Big in Distributed Solar Save Us Billions?
A new study says yes. But distributed solar benefits are an elusive prize.
Policy makers, power companies, and a majority of Americans are coming around to the idea that the U.S. needs to accelerate its efforts to green the grid. Getting on the renewable energy train is one thing. Agreeing on how and where to ride this train is more complicated.
Discussions about the right mix of grid-scale and distribution-scale resources are getting polarized. At the crux are tricky questions about how distributed energy resources – such as rooftop solar and storage – will impact future grid operations and costs. We’ve blogged (and blogged) about what we know and don’t know about distributed generation benefits. But we’ve yet to dig into this high-profile work by Christopher Clack and co-authors at Vibrant Clean Energy (VCE).
Clack et al. have built a giant model of the entire US electricity sector which captures distributed energy resource potential in some detail. Their national study found that building a lot more distributed solar and storage (enough to power more than 25% of US homes) would save $473 billion in system-wide costs. Last week, VCE released their California-focused study which estimates that distributed solar + storage could save California ratepayers $120 billion over the next 30 years.
Energy pundits have been swooning over the high-powered modeling and the provocative punchlines. Distributed solar proponents want to take these findings and run with them. I can see that this modeling exercise is exciting–who doesn’t get excited about trillions of data points? But I also think some of the model’s key assumptions could significantly overstate the real-world cost savings potential.
Distributed energy benefits redux
Grid-scale solar technology costs significantly less per megawatt than rooftop or community-scale solar technology. That’s a fact. But it’s also true that siting distributed energy resources (DERs) close to electricity demand could help us avoid expensive investments in local transmission and distribution. If we want to measure the full value of distributed solar, we need to assess this potential for reduced grid costs.
Before diving into a big and complicated model, let’s break down some fundamentals.
The first step of a forward-looking analysis involves forecasting how local demand patterns will change as we start to electrify more stuff.
Next we need to identify where and how grid system constraints will start to bind as electrification drives up electricity consumption. Not easy, because there's lots of variation across the system in terms of where there's grid capacity to absorb new loads (as this cool map of my neighborhood shows).
Finally, across locations of the grid where capacity is projected to get tight, we need to figure out where it could make economic sense to invest in distributed energy resources (e.g. solar plus storage) versus centralized generation and grid upgrades. This requires estimating deferrable distribution costs (which are notoriously hard to pin down).
How do you carry out this kind of exercise for many thousands of locations across the US grid? This is the challenge that the VCE team set out to tackle…
Digging into distribution cost modeling
This VCE model is impressive along many dimensions. Some of these are too technical for this non-engineer to really understand. But the part that I’ve been most interested in involves as much economics as engineering: How to estimate the grid costs that could be avoided with DER deployment?
[Wonk-alert]: This section digs into the details of the deferrable distribution cost modeling. These are important weeds to wade into! But if you want to just cut to the chase, ignore the text between the weeds below.
Ideally, the VCE model would incorporate location-specific information about distribution system constraints and costs to understand how DER benefits could vary across the US electricity system. But this detailed information is not readily available. So instead, the model opts for a much coarser approach.
To calibrate the model of distribution costs, the VCE team uses cost parameter estimates from this UT Austin study which analyzes annual distribution system spending reported by US utilities between 1994 and 2014. It’s important to note that these UT Austin researchers were not trying to disentangle the causal effect of one cost driver (e.g. peak load) from another (e.g. annual electricity consumption). Their report summarizes average univariate relationships between utility reported distribution costs and each cost driver. So, for example, when they summarize how utility distribution costs vary with kW peak demand, their estimate captures not just the impacts of increasing peak demand, but also the effects of factors that are positively correlated with peak load (such as higher annual kWh consumption).
Getting back to the VCE model (Section 1.9.2 of the technical appendix for you dive deepers), the distribution cost implications of different load profiles are estimated as the sum of two parts. The first component multiplies peak load on the system in a given location by the cost parameter from the UT Austin study that captures the average relationship between distribution costs and peak demand. The second part multiplies annual grid electricity demand on the system by the cost parameter that captures average relationships between distribution costs and utility distribution electricity consumption.
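As I read it, this calculation boils down to a two-part linear formula. Here's a minimal sketch in Python, using the UT Austin average-cost parameters that come up later in the post ($52/kW-yr and 1.1 cents/kWh); the feeder-level load numbers are invented for illustration and are not VCE's actual inputs:

```python
# Minimal sketch of the two-part distribution cost calculation described above.
# Cost parameters are the UT Austin national averages; the load figures for the
# hypothetical feeder are invented for illustration, not VCE's actual inputs.

PEAK_COST_PER_KW_YR = 52.0    # $/kW-yr: average relationship with peak demand
ENERGY_COST_PER_KWH = 0.011   # $/kWh: average relationship with annual consumption

def annual_distribution_cost(peak_kw: float, annual_kwh: float) -> float:
    """Sum of the peak-demand component and the annual-energy component."""
    return PEAK_COST_PER_KW_YR * peak_kw + ENERGY_COST_PER_KWH * annual_kwh

# Hypothetical feeder: 1,000 homes with a 2 kW coincident peak and
# 7,000 kWh of annual grid consumption each.
baseline = annual_distribution_cost(peak_kw=2_000, annual_kwh=7_000_000)

# Suppose rooftop solar + storage shaves 20% off peak and 30% off grid energy.
with_der = annual_distribution_cost(peak_kw=1_600, annual_kwh=4_900_000)

print(f"implied annual distribution savings: ${baseline - with_der:,.0f}")
```

Note that both terms scale linearly, so the implied savings are the same whether the feeder is congested or has plenty of headroom, which is exactly the first concern in the bullets below.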
The VCE approach seems like a reasonable way to get the distribution cost model up and running. But it’s far from ideal:
- It assumes that all load reductions deliver the same cost savings, regardless of how constrained – or not – the system is likely to be in a particular location. This abstracts away from significant variation in cost deferral potential across locations.
- It uses average cost parameters to estimate marginal distribution system cost changes. These can be very different (my guess is that average cost exceeds marginal cost).
- Each of the two cost parameters from the UT Austin study captures the combined effects of correlated distribution cost drivers. It seems to me that adding the two components together – one that implicates peak load and one that implicates annual load – will over-estimate the cost implications of demand increases (and exaggerate the benefits of using DERs to offset an increase).
If we’re serious about assessing the potential for DER benefits, we need a better understanding of where DERs can offer real grid cost savings. Fortunately, one of Berkeley’s super-star graduate students is hard at work on this question…but I’ll leave that for a future blog.
Optimal versus actual DER deployment
It’s one thing to figure out how investments in distributed solar and storage could be optimally deployed to minimize costs. It’s another thing to make these distributed investments happen when and where we need them.
The VCE model is projecting savings under optimal deployment. So far, our track record with distributed generation deployment has been anything but optimal. Net metering incentives, for example, are available everywhere in California, regardless of whether there’s any potential for grid system benefits.
California has been trying for years to direct DER investments towards high value locations. This blog series provides a historical overview. An important lesson learned so far? The short planning horizon for distribution investments, the need for contingency planning, and the transaction costs associated with identifying and incentivizing the most promising investments “leave a very narrow Goldilocks zone for procurement” of cost-effective DER projects.
Keeping it real
Wildfires, heat waves, a mega-drought, yikes. We’re getting daily reminders that we need to get our climate change mitigation act together. There’s climate urgency behind efforts to accelerate investment in renewable energy. There’s also a social obligation to keep costs contained and electricity rates affordable.
The VCE model is impressive along several dimensions. But there are some critical assumptions and blind spots that complicate the translation of optimistic findings to real-world policy priorities or prescriptions. I think we need a reality check on the deferred grid investment estimates. And we need to reckon with the fact that real DER deployment will likely be very different from the optimal DER deployment that the modeling assumes. That’s my take. Interested to hear what our blog-reader-brain-trust has to say…
Keep up with Energy Institute blogs, research, and events on Twitter @energyathaas.
Suggested citation: Fowlie, Meredith. “Will Investing Big in Distributed Solar Save Us Billions?” Energy Institute Blog, UC Berkeley, July 26, 2021, https://energyathaas.wordpress.com/2021/07/26/will-investing-big-in-distributed-solar-save-us-billions/
I agree with everything in Meredith’s blog, but she covers so much ground that I’m concerned that two major flaws in the VCE study may not get noticed as much as they should. They are the second and third bullet points Meredith makes. To elaborate a bit:
First, the UT Austin study is very clear that their regression results are studies of the *average* cost of distribution per unit of peak demand or energy sales, NOT the marginal cost associated with changes (see page 17). The VCE study appears to take those parameters and use them to analyze the cost *change* that would occur if one changed peak demand or energy sales. This will greatly overstate the cost savings associated with reducing peak demand or sales. That’s because it ignores the fact that distribution has significant economies of scale.
The reason we have regulated monopoly distribution companies is they are natural monopolies, with such significant economies of scale that competition would just result in one firm taking over the market. Put differently, one company serving all the houses on a street is more cost-effective than two companies, each serving some of the houses.
The fundamental economic analysis of natural monopolies implies that average cost is declining in sales or peak demand, which implies that marginal cost is below average cost. That means that when VCE uses an estimate of average cost per unit peak demand or energy sales to claim a savings from reducing either of those measures, they are systematically overestimating the level of savings, by an amount equal to the difference between average cost and marginal cost. If, as is almost certainly the case, most costs associated with distribution are fixed costs, then VCE is greatly overestimating the savings. Even if you think that only half of the costs of a distribution system are fixed (with respect to peak demand quantity or annual energy sales) then VCE is still doubling the inferred cost savings compared to reality.
BTW, comparing predictor variables “Total Customers”, “Annual Peak Demand”, and “Annual Energy Sales”, the UT Austin study that VCE cites also says “The number of customers in a utility’s territory is the most accurate predictor for annual electricity distribution costs.” (page 17) That sure sounds like most of the costs are fixed costs.
Second, possibly even more important, the UT Austin study runs *separate* *univariate* regressions of Total Distribution System Costs on each of the predictor variables. By doing so, the authors of that study are estimating the average relationship between total distribution system costs and each of the variables separately. They explain that they do this (page 6), because the variables are extremely highly correlated (correlation between Annual Peak Demand and Annual Energy Sales is 0.986).
It appears that the VCE study then erroneously adds those estimates together to claim the savings from installing rooftop solar and reducing annual peak demand and annual energy sales. This is double counting, plain and simple.
This is analogous to running one regression of house price on number of rooms — which yields the average relationship between house price and number of rooms — and then running another regression of house price on square footage — which yields the average relationship between house price and square footage — and then estimating the impact of adding another room to a house by summing the implied effect of one more room and the implied effect of the additional square footage.
Given that Annual Peak Demand and Annual Energy Sales are almost perfectly correlated, adding the estimated coefficients together (see page 31 of the VCE technical manual) is effectively counting the same change twice. It appears that this is what VCE has done.
I keep using the term “it appears”, because the documentation in the technical appendix to the VCE study is, to be charitable, incomplete. The entire description of this critical component of VCE’s calculation is “For CLdp and CLde, we take values from the report `Trends in Transmission, Distribution and Administration Costs for US Investor Owned Electric Utilities’ by the University of Texas at Austin. These values are national averages, and VCE apply a regionalization by State using internal datasets for locational cost multipliers.” (page 32 of the technical manual). An attempt to get more detail from VCE has yielded no reply.
Thanks Meredith for this important blog and thank you Severin for underscoring these two very important points.
The principal component of the “important” point around double counting is incorrect (https://www.eia.gov/todayinenergy/detail.php?id=36675). The other “important” point around average versus marginal is also somewhat misleading, because the study is comparing changes over time between two scenarios, and there is embedded load growth as well as renewal of the aging system (most of the distribution system is over 25 years old: https://www.eia.gov/todayinenergy/detail.php?id=36675). Therefore, more attention to how the model logic works and drives co-optimization is warranted, rather than assuming that it is erroneous.
The contact behind this comment reads Chris Clack, so I presume this is Dr. Chris Clack from Vibrant Clean Energy. I appreciate your weighing in on this discussion. But I don’t see how this response addresses the issues raised in the blog and subsequent commentary.
To paraphrase Robert Fares(https://twitter.com/RobertFares/status/1420135845663846402), I’m not looking to start an antagonistic flame war. I’m genuinely interested in understanding how the VCE distribution cost modeling works. If we are going to use this tool to steer investments and drive policy recommendations, I think it’s critical and constructive to understand and discuss what’s going on under the hood.
On the first point, you link to an EIA post showing increased utility spending on distribution systems over time. We’ve discussed this increase before (see https://energyathaas.wordpress.com/2021/02/08/distribution-costs-and-distributed-generation), noting that these cost increases are happening even as peak demand (net of behind-the-meter solar) has been declining. I don’t see how this aggregate cost trend implies that the VCE approach to modeling the relationship between peak load, total load, and distribution costs is correct.
Severin’s intuitive example helps elucidate the methodological concern. Suppose you want to estimate how adding another bedroom to your home would increase your home value. If you run a univariate regression of home values on square footage, your estimate of the effect of square footage on home values is going to partly pick up the effect of more bedrooms and other things that are correlated with bedrooms. Similarly, if you run a univariate regression of home value on number of bedrooms, that estimate will partly pick up the effect of more square footage. If you take these two estimates and add the implied effect of an extra bedroom to the implied effect of the additional square footage, you are going to over-estimate the market value of that home improvement! What you could do instead is regress home values on number of bedrooms, square footage, and other important factors (e.g. lot size). This model would do a better job of separating the effect of one correlated variable from another. And it would give you a more accurate (smaller!) estimate of how your home price would increase with a new bedroom.
If we want to use regression analysis of utility distribution cost data to calibrate a model of how distribution costs vary with load profiles, the same argument applies. Using the publicly available FERC data, when I regress utility distribution costs on peak load alone, I get the UT Austin estimate of $52/kW yr (using 1994-2012 data as they do). When I regress utility distribution costs on annual electricity consumption alone, I get the UT Austin estimate of 1.1 cents/kWh. But if I regress these utility reported distribution costs on peak load *and* electricity consumption in the same equation, I get a peak load estimate of $32/kW year and a total consumption estimate of 0.5 cents/kWh. If I extend the data to include years after 2014 and use within-utility variation to estimate these parameters, the estimated cost impacts look even smaller (~$20/kW year). The upshot: it seems that logical refinements to the VCE approach would give much lower estimates of how distribution costs increase as peak load/total load increases (and much smaller estimates of DER benefits).
On the average versus marginal cost issue, you point out that you are comparing changes over time between two scenarios. I understand the model logic. But if different marginal changes to load profiles drive the cost differences across scenarios, I think the model will generate the wrong answer if it uses the wrong cost parameters.
Finally, you note that VCE has put all the documentation out there for evaluation. I really appreciate this – it inspired the blog:) The issue I see is that the documentation is very terse and raises some critical questions.
For what it’s worth, I did contact VCE over a week before posting the blog to ask for more detail on the distribution cost model. My email went unanswered. I totally get it that the VCE team is very busy and does not have the time to respond to out-of-nowhere emails. But I also hope we can– at some point soon –get more clarity on the inner workings of this important modeling exercise.
It’s interesting that the VCE model finds DERs cost effective at marginal distribution costs of only $52/kW-yr. In contrast, SCE in its latest 2021 GRC workpapers is reporting marginal distribution costs in excess of $80/kW-yr. And in looking more closely I found that the implied new investment for distribution from these marginal cost estimates is much smaller than the new incremental investment for those accounts listed in SCE’s FERC Form 1. There’s an inconsistency among estimates that needs a much deeper examination than what we see in these disparate presentations.
There is no double-counting of the distr. grid CapEx in the VCE model. I cannot add screenshots of VCE’s model here, so I will refer to pages+lines in the model documentation found in this link: https://vibrantcleanenergy.com/wp-content/uploads/2020/08/WISdomP-Model_Description(August2020).pdf I have also offered a similar reply to Prof. Borenstein’s tweet about this subject, who has kindly quoted my analysis of what seems to be a misunderstanding at this point. My Twitter thread is here: https://twitter.com/PMoutis/status/1420517334729515008 I will not discuss the average/marginal concern, as the VCE model documentation is unclear of how it uses the respective average costs from the UTAustin study or how it – maybe – (pre)processes them…
VCE’s model’s distr. grid cost term may be found at p. 31, fifth line after Fig. 1.6 of the documentation. I here denote all variables, parameters & prices found in that cost term from left to right: C_L^dp=distr. grid CapEx, E_L^p=peak consumption/absorption, E_L^b=peak injection, E_L^m=minimum load (correction factor), h=normalize over time-step, C_L^de=distr. grid OpEx, E_Lt=load demand, J_Lt=DER power, lambdas=’activation’ & weights parameters. You may confirm the validity of what I denoted in the nomenclature of the model found in the same documentation p. 5-7. The overall objective/cost function of the VCE model is found on p. 5 of the documentation. In that objective function the distr. grid cost term is the last one there. I can, honestly, NOT see how distr. grid CapEx C_L^dp cost is double-counted. It only appears once in the objective function in the distr. grid cost term (as a function of peak power – more in the next paragraph), followed by C_L^de which is the distr. grid OpEx cost (as function of energy – more in the next paragraph) as denoted earlier…
The VCE model documentation reads that the model uses values of the UTAustin study https://energy.utexas.edu/sites/default/files/UTAustin_FCe_TDA_2016.pdf for distr. grid CapEx C_L^dp (presumably regressed over peak power – judging by the parenthesis to which C_L^dp is multiplied by in the distr. grid cost term) & distr. grid OpEx C_L^de (presumably regressed over energy – judging by the parenthesis to which C_L^de is multiplied by in the distr. grid cost term). Indeed there are univariate regressions of distr. grid CapEx & OpEx over various parameters in the UTAustin study. Which brings us to what I believe to be the misunderstanding about the double-counting of distr. grid CapEx: Dr. Fowlie & Prof. Borenstein correctly identify that the distr. grid CapEx cost C_L^dp comes from the univariate regression of distr. grid CapEx over peak power of the UTAustin study, but misinterpret the distr. grid OpEx C_L^de cost for another (additional) distr. grid CapEx cost from the univariate regression of distr. grid CapEx over DG energy cost in the UTAustin study. Which is not the case, as C_L^de is not a CapEx, but an OpEx cost as denoted in the VCE model documentation.
I believe that this will clarify how the VCE model does not double-count distr. grid CapEx and, thus, does not overly exaggerate (at least from this aspect) the value of distributed generation effects to distr. grid savings. I agree with what’s been said that the nomenclature could be clearer, in order to save us all from such minor misunderstandings (that, unfortunately, impact heavily such discussions), in order to focus on the more important parts of a model analysis.
“Grid-scale solar technology costs significantly less per megawatt than rooftop or community-scale solar technology.”
This statement is not entirely true once we include the cost of transmission. As Jim Lazar points out, community solar is directly cost competitive when we add the $40+/MWh for transmission: https://mcubedecon.com/2021/07/13/transmission-the-hidden-cost-of-generation/ Rooftop solar is not inherently more expensive than grid-scale. Most of the cost difference, as we know from Germany and Australia, is due to “soft” and permitting costs, which we have control over but haven’t addressed.
We also can see that the solar PV installed to date has had a nearly 1:1 deferral of peak load in CAISO since 2006: https://pgera.azurewebsites.net/Regulation/ValidateDocAccess?docID=658445 The 2020 peak load is 11,000 MW less than the CEC’s forecast in 2005.
Chris Marnay makes the other important point. Local reliability and resilience is increasingly important and the “build bigger, grander” model actually reduces that resilience as we saw in Texas in February. In California, a customer is 15 times more likely to experience a local distribution outage than a system level transmission outage (and note that California has never had a generation outage caused by a physical shortage of generation–both 2001 and 2017 were caused by market behavior.)
The true marginal costs of the utility system, at least in California, appear to be above average costs. Otherwise, why have our utility rates been rising almost continually for four decades? If the marginal costs were as low as the utilities claim in their various filings, our rates should be plummeting – the reported marginal costs are often only 50% of the average costs. There is an empirical disconnect between the marginal costs being presented and the incremental rise in average utility costs.
Which leads to a more fundamental question: if “new” electricity is just so darn cheap (and please don’t try to tell me that there’s some great embedded “economy of scale” in the grid in a radial system – “marginal” costs were just as low 30 years ago), why can’t we buy it for that price, or something close? [I suspect the problem is that the marginal cost calculations have ignored the true costs of future replacement.] With the rate increases that PG&E and SCE are asking for, these “marginal costs” will be as low as a third of the current rates (and even less for non-CARE residential customers).
Clinging to the fiction that grid scale costs are so much less while failing to address the true cause of high rates (the unwillingness to allocate risk to utility shareholders) is setting up mass defections by affluent ratepayers. The availability of pick ups that hold a week’s worth of electricity now makes energy independence a feasible option. Microgrids become cost effective for businesses. I doubt the $20 bill will be left on the sidewalk.
Richard… You’re filling the screen with distractions. The marginal cost of electricity is separate from the total cost of electricity. If you look at the monthly costs of electricity for households in OECD countries, it’s roughly $100 per month, plus or minus $20. This simple description covers 90% of worldwide monthly electricity bills, whether the region uses 3500 kWh per year in Germany, 6000 kWh per year in California, or 12000 kWh per year in South Carolina. How can there be such a narrow bandwidth in costs despite the high variability in usage?
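A quick back-of-envelope check of that bandwidth claim, using the rough bill and usage figures above: if bills cluster around $100/month regardless of usage, the implied per-kWh rate must fall as consumption rises, which is what you'd expect if most costs are fixed.

```python
# Implied per-kWh rates if monthly bills cluster near $100 despite very
# different annual usage (approximate figures from the comment above).

monthly_bill = 100.0  # $
annual_kwh = {"Germany": 3_500, "California": 6_000, "South Carolina": 12_000}

rates = {region: monthly_bill / (kwh / 12) for region, kwh in annual_kwh.items()}
for region, rate in rates.items():
    print(f"{region}: ~${rate:.2f}/kWh implied")
```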
In nearly all cases around 66% of the costs of supplying electricity are fixed costs. I work in a control room. I’m a mechanic in a sea of electricians. My job is managing power plants. Over 90% of the jobs in the room involve managing wires – low, medium and high voltage. The people who manage and maintain our wires are the heart and soul of all utilities in absolutely all circumstances worldwide. In my control room they are referred to as Load Monkeys and/or Circuit Clerks. These disparaging terms camouflage how important their work is. Our linemen in the field are very much like firefighters in that they run into the storm zone while everyone else is evacuating and/or hiding.
All this wire work represents fixed costs. These people put the R in reliability. Microgrids put the M in Meh. It’s childish micro thinking.
I’m an expert in power plant operations. I’ve seen problems caused by fire, flooding, leaves, animals, car crashes and many other things over the course of 25 years. Despite all my experience I’m regularly surprised by the new problems I see. Micro-grids remove experts like me from the equation and attempt to replace us with an algorithm. Good luck with that… I’ve worked as hard as I possibly can to automate my job, and this has shown me that the idea that a Player Piano algorithm can replace what I do is laughable. The idea of replacing our linemen in the field is deranged.
And the land line companies said that we couldn’t survive with only cell phone networks. The electric grid looked dangerously threatening at the turn of the previous century. Technologies come and those who are embedded in the current one see it as impossibly unworkable–and then it works. Reliability can be provided by more than just hard engineering fixes. Diversity also is important. That’s one of the key reasons why we recommended that SMUD close Rancho Seco. Only laws, not technology, keep us from establishing local networks that will be more reliable than the focus on high voltage interconnections because we’re 15 times more likely to experience a local outage than a system level one.
I don’t see the comparison to land lines applying much. You can’t share telecommunication resources regionally in the same sort of way you can share generation resources regionally. Telecom doesn’t have the seasonality aspect or the geographic diversity aspect. It’s a different type of infrastructure. We can easily send data through the air and this has allowed cell phone towers to replace wires. The same isn’t true of electricity.
I completely agree with the idea that incumbents are poor judges of disruptive competition but I don’t see the comparison applying much here because solar (both distributed and centralized) doesn’t remove the need for the grid. This type of energy-only framing misses the bigger picture. So far as I can tell the modeling community is converging on a solution that is roughly 60% solar, 30% wind, 5% hydro+ and the last 5% composed of a basket of fuels. The grid makes it so all of these resources can work together.
Again, I’m not advocating for a grid with 100% centralized solar and scads of new transmission. I’ve done a fair bit of grid modeling but I’ve rarely introduced transmission constraints. The idealized type of modeling I’ve done is helpful if you’re looking to understand the limits of solar/wind. I’ve found the limits of PV/Wind appear to be around 90 to 95%. I’ve also discovered (as have many others) these things called missing power events which drive the need for some sort of fuel. Far more sophisticated models (Marc Perez et al.) show similar results in terms of max practical RE penetrations and the need for fuels. Please note these practical limits apply to a grid scale analysis. Smaller grids (Hawaii, Alaska, South Korea, Japan, Australia and micro-grids) will necessarily have a harder time reaching ultra-high penetrations of RE because they don’t have the benefit of regional diversity or load diversity.
One thing I do advocate for is more regional coordination so we can make better use of the transmission assets we already have. This gives us access to: 1. More load diversity 2. More resource diversity and 3. As I’m sure you know the interties also allow us to share reserve capacity.
I’m convinced the Overbuild and Spill (OBAS) strategy is going to be the keystone feature of the grids in the future. I don’t necessarily think centralized gigawatt sized solar and wind farms are the only way to go but even Clack’s modeling here suggests this type of plant will provide 90% or more of the energy in the system. So far as distributing these resources goes I tend to think we’ll over-build regionally – by region I mean a county or a collection of counties. You could even imagine things happening state by state or somewhat larger blocks.
I agree that I’m probably overselling the demise of the grid, but it’s also not an either/or situation. Distributed local resources are clearly more economic for serving dispersed rural loads, and the question is where is the tipping point for relative cost effectiveness. I also agree on the likely final resource mix. That last 5% can be met with renewable gas in existing CTs if we can agree that it shouldn’t be squandered on boiling tea water for residential customers. BTM storage embedded in EVs will support load following–the existing fleet implies an expected capacity that is 30 times our current peak load.
30X is a large size advantage and it’s a big part of the reason I think our grid battery investments in California are a huge waste of money. Berkeley’s grid model from last year suggests that in a 90% RE grid there is roughly 60 GW of back up CT and/or CC that is only used for ~1% of the time. That’s 60 billion dollars of infrastructure that’s barely being used. You’d think we could come up with a way to incentivize vehicles to displace this rarely used standby generation. The same modeling team also released an additional report that modeled EV load. Unfortunately, they didn’t make the EV load anywhere near as dynamic as they could have.
My utility has done some recent modeling of what EV load is going to look like in 2030. One version assumes business as usual and the other assumes the EVs will charge dynamically. In the BAU case our peak load increases considerably but in the dynamic charging case there is no increase at all. Here again there’s a missed opportunity because a 3rd case could have been developed to show how V2G could actually help reduce peak load. I’m afraid we’re just not there yet with our thinking. Perhaps V2G suffers from a boy who cried wolf problem?
I agree with you about not using RNG for making tea. It appears the technical potential of our waste streams is roughly enough to provide the last 5% of our electricity. While expensive, these fuels would actually be carbon negative. Some folks claim RNG is some sort of greenwashing scam. This is unfortunate because tackling methane leakage appears to be one of the easiest ways to reduce emissions between now and 2030.
Absolutely agree with these points. I posted this on how I envision EVs could become the next “smartphone” transformation: https://mcubedecon.com/2021/07/27/electric-vehicles-as-the-next-smartphone/
Thank you, Lee, for your voice of experience. It’s desperately needed in California energy policy, where academics with little or no practical experience attempt to justify policy driven as much by ideology as conflicts of interest.
Stratospheric California electricity rates have nothing to do with “unwillingness to allocate risk to utility shareholders” (does anyone really believe California would allow PG&E to file Chapter 7?). They have everything to do with the cost of integrating intermittent, unreliable solar and wind into the grid powering the world’s fifth-largest economy. California is only meeting its climate goals by importing reliable, fossil-fuel electricity, then hiding its provenance under the label “unspecified sources of power.” Unfortunately, climate doesn’t care what we call it – the world gets that much warmer, whether our CO2 is emitted here or in Wyoming.
“Stratospheric California electricity rates have nothing to do with “unwillingness to allocate risk to utility shareholders” (does anyone really believe California would allow PG&E to file Chapter 7?). They have everything to do with the cost of integrating intermittent, unreliable solar and wind into the grid powering the world’s fifth-largest economy. ”
For PG&E the total portfolio cost is about $2 billion above the “market value” of that same portfolio. For SCE, it’s north of $1 billion. You can see those numbers in the public versions of their ERRA testimony. This is 30-40%+ of their total generation cost. In other industries, at least some portion of stranded asset costs are usually borne by shareholders. Even the 1996 restructuring decision passed some costs to shareholders through a reduced rate of return.
On the other hand, we see no evidence of “integration cost.” Gas generation has fallen more than 40% in the last decade, and generation imports are largely unchanged (although quite a few WECC coal plants have been closed in the last half dozen years). And the only “new” fossil plants in state over the last decade have been built to replace the old shoreline plants in Los Angeles.
In PG&E’s 2020 GRC, the PG&E witness found no evidence of significant additional flexible capacity costs, which are the supposed integration costs.
If you have empirical evidence from utility, EIA or CARB filings of these “integration” costs showing data over the last decade, then you should present it. Otherwise, I’m not seeing anything of note.
I appreciate the kudos but I’m afraid I’m going to disappoint you Carl.
I grew into adulthood working in the nuclear industry. My first stop was the Abraham Lincoln in Main Machinery #1. Nuclear carriers and subs have ramp-able reactors but this is an impractical approach on the grid because nuclear plants have high capacity costs. There is only one civilian… Ahem… commercial plant that operates in load-following mode in the US. At nuclear plants we pride ourselves on high capacity factors – we paint our performance metrics on our equipment. High penetration renewable grids kill the economics of nuclear plants. RE grids still need backup, but this backup is best provided by a resource with low capacity costs. Combined cycle gas plants are the only thing that makes sense to me. Simple cycle plants have slightly lower capacity costs but I don’t think they’ll cut the mustard with backup fuel costs expected to trend up towards $10/MMBtu or higher due to progressive restrictions on fossil fuel usage.
Solar and wind are becoming very cheap sources of power. The data is very clear. On some level wind and solar are comparable to corn for carbs and soy for protein. The US is very good at monomaniacally industrializing things. Solar is day power and wind is night power. There’s also a seasonal yin-yang relationship between these resources. They are intermittent, as you’ve mentioned, but I believe they are predictably intermittent. Consider this simple comparison. We predict load based on weather – mostly temperature but also wind speed, humidity, etc. If we can predict load to a high degree of accuracy (generally about +/- 2% on a day-ahead basis) we should be able to predict the availability of wind and solar to a similar accuracy. Maybe not +/- 2% but something close. If we can predict these resources to this sort of accuracy we can manage them.
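To make the comparison concrete, forecast accuracy is typically scored as mean absolute percentage error (MAPE). A minimal sketch, with made-up hourly numbers chosen only to illustrate the calculation (wind is usually a few times harder to forecast than load, consistent with “maybe not +/- 2% but something close”):

```python
# Illustrative day-ahead forecast error (MAPE) for load vs. wind.
# All values below are invented for illustration, not real forecasts.
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual) * 100

load_actual   = [30.0, 32.0, 35.0, 33.0]   # GW
load_forecast = [30.5, 31.4, 35.6, 32.5]
wind_actual   = [8.0, 6.0, 9.0, 7.0]       # GW
wind_forecast = [8.4, 5.6, 9.5, 7.3]

print(f"Load MAPE: {mape(load_actual, load_forecast):.1f}%")   # ~1.7%
print(f"Wind MAPE: {mape(wind_actual, wind_forecast):.1f}%")   # ~5.4%
```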
My job doesn’t involve these sorts of predictions but I work directly with the people who do make these predictions. I’ve been interested in load prediction and solar/wind modeling for over a decade. It’s come a long way and it’s got a ways to go. My overall point here is that I believe we can achieve a 95% RE grid and back it up with some rarely used dispatchable resources. The costs of maintaining the backup capacity appear reasonable to me and the backup fuel costs, while stiff, also appear to be reasonable. RE looks unstoppable to me.
Lee, on this subject I’m sorry but I’m going to have to disappoint you.
“Nuclear carriers and subs have ramp-able reactors but this is an impractical approach on the grid…”
Please inform Electricité de France (EDF) that their modern grid reactor fleet isn’t ramp-able. They’re ramping their reactors all day long.
“…because nuclear plants have high capacity costs.”
(Assuming you meant capital costs) – most analyses use a 40-year licensing period as the lifetime of a nuclear plant, a number based on tradition more than anything else.
In the early days of the Atomic Energy Commission no one knew what intense neutron bombardment would do to steel over the course of decades. Engineers decided that after 40 years it would be a good idea to take a look inside reactors and see how they were holding up, so that was the initial period set for re-licensing. Though neutron bombardment does tend to embrittle steel, we now know that a constant operating temperature tends to be far less destructive than the constant temperature fluctuation in a coal or gas boiler, for example. We know that nuclear reactors are capable of lasting 2-3 times longer than originally anticipated.
That most levelized-cost analyses (LCOEs) from investment banks still use 40 years as the lifetime of a nuclear reactor is more a product of their own contemporary investments in renewables. Natural gas and renewables investors, it seems, want desperately to grab a piece of nuclear’s market share.
“High penetration renewable grids kill the economics of nuclear plants.” Um, no:
• There are eleven countries in the world with more than 50% renewable energy powering their electrical grids.
• The U.S. isn’t one of them.
• All but one are completely dependent on abundant natural resources (hydropower and/or geothermal power). The exception is Denmark, which is dependent on wind.
• Until June, Denmark had the most expensive electricity of any non-island country in the world.
• That dubious honor was recently taken by renewables-obsessed Germany.
• Germany’s year-to-date carbon emissions skyrocketed 25% in the first half of 2021, when the wind didn’t blow as hard as it was supposed to blow.
• Those eleven countries are home to less than 5% of the world’s population.
In summary, the economics of renewables are only good for exploiting consumers in small countries. Fortunately, those countries are outliers – there aren’t any others with high-penetration renewable grids.
For the sake of argument we could assume 11 countries around the world get 90% of their artificial lighting from LEDs or drive 90% of their miles in electric vehicles. As Wayne Gretzky used to say, don’t skate to the puck – skate to where the puck is going to be.
I’ve watched photovoltaics grow by a factor of 100 over the last 15 years. In 2004 the largest PV installation in the world wasn’t much larger than 10 MW. As of 2021 we have multi-GW sized plants with much larger facilities on the drawing board. I expect growth rates will average 20% for a few more years and annual additions will reach 500 to 1000 TWh by the end of the decade. With generation costs headed towards 1 to 2 cents/kWh I don’t see anything stopping this technology.
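The 500-1000 TWh range follows from simple compounding. The ~200 TWh/yr starting point below is an assumed figure for today’s annual generation additions, chosen only to show the arithmetic:

```python
# Compound-growth check on the "500 to 1000 TWh by the end of the decade" claim.
# start_twh is an assumed value for current annual solar generation additions.
start_twh = 200   # TWh/yr of new solar generation added today (assumption)
growth = 0.20     # 20% average annual growth

low  = start_twh * (1 + growth) ** 5   # 20%/yr sustained for 5 more years
high = start_twh * (1 + growth) ** 9   # ... sustained for 9 more years
print(f"~{low:.0f} to ~{high:.0f} TWh/yr of annual additions")
```

Five to nine years of 20% growth brackets the 500-1000 TWh/yr range, which is why the claim is sensitive mostly to how long the growth rate holds up.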
The idea of getting 5% of our electricity from rooftop solar seems highly plausible but this type of grid would have higher total costs, not lower. It’s impressive how these studies manage to zhuzh the numbers to make specific types of DERs so valuable. It’s a tightrope walk between utility scale RE on one side that have production costs headed towards 1 to 2 cent/kWh and an assortment of DERs on the other side which are far cheaper, orders of magnitude larger and far more distributable. By all means, install solar on your home and put some backup batteries in your garage but don’t pretend this solution is going to lower system costs by 500 billion dollars.
HERD IN THE ROOM
Thermal storage and EVs are going to flood the market with DER capabilities unlike anything we’ve ever seen in terms of both magnitude and two-way capabilities. I believe it’s clear thermal storage and EVs plus utility scale RE are a far more cost-effective partnership than rooftop solar and distributed batteries.
NON LINEAR BENEFITS
We know low penetrations of DERs produce non-linear benefits but you can’t extrapolate the marginal benefits of low penetrations up to medium penetrations let alone high penetrations. This suggests to me the competitiveness and value of distributed solar would shrink over time as we add DERs to the system. If this were to happen you’d expect distributed solar to play a smaller role over time. This isn’t at all what’s happening in this paper. Distributed PV appears to maintain a relatively steady market share all the way out to 2050.
Another problem here is that we’re obviously dealing with a moving target when it comes to forecasting load decades into the future. We’re not very good at modeling current EV and electric thermal load let alone deep into the future. One thing we do know is that as we electrify heating and transportation we’re likely going to shift the system peaks from the summer to the winter in most of the US outside of the southwest. This will naturally lower the system value of DERs like PV and raise the value of wind, transmission and DERs like EVs and thermal storage.
I tend to think we can easily capture all the non-linear benefits of DERs with flexible EV charging/discharging, smart water heaters and smart water pumps. These load side technologies are far cheaper, far more distributed and far more accessible than PV.
Remember who funded this research: Vote Solar, Local Solar for All and Coalition for Community Solar Access. These organizations do a lot of good but they also fight for retrograde electricity tariffs which 1. greatly overvalue solar backfeed and 2. prevent us from moving to dynamic rate designs which would help integrate high penetrations of RE. These organizations also continue to fight for tax incentives which subsidize home batteries, which in most cases are luxury resources that shouldn’t be subsidized by taxpayers. This is an unforgivable position considering there are so many lower-cost technologies which enable higher self-consumption rates.
The “sub-optimality” of likely dispersed generation adoption is not only about economics, e.g. off-base tariff incentives; in fact, economics may not even be the primary driver. In my experience, three main motivators always come up; the other two are sustainability (wanting to be green) and reliability/resilience (wanting to be safe). In fact, I believe the key word missing from the post is resilience. Many will want secure power, even though it may be costly and only a small fraction of existing PV systems have storage. Sadly, as you say, the need for resilience is growing faster than any of us would like, and I believe the hazards electricity distribution creates will drive it out of some neighborhoods. This isn’t to say the megagrid cannot be made safer, but look at the cost forecast coming from the already ferociously unpopular PG&E. Is that investment really likely in relatively sparsely populated rural areas, and on what time scale? Finally, this is a rapidly evolving (deteriorating?) situation, so I have my doubts about any analysis deeply rooted in historic data, even if there are oodles of it. Looking into our imminent bleak future, frantic panicked hand waving might just be the best analytic technique we have.
I don’t disagree with your point about many customers valuing reliability/resilience. You’re absolutely right but how many is many and do these folks deserve special treatment? I wouldn’t go so far as to say they don’t deserve some special treatment but I’d argue they deserve less, not more, special treatment. I personally have the discretionary income and the proclivity to install solar but I only have about 40 square feet of appropriate roof space because I have a rooftop deck – that’s 400 to 600 kWh per year. Most of my neighbors are in the exact same boat. Most of the folks living in the city across the water have zero solar production potential.
In many cases the folks who lobby for distributed solar argue utility scale solar has unnecessary environment impacts. Hold on a minute. The folks who can actually take advantage of rooftop solar to cover a significant portion of their consumption live in sprawly suburbs and rural areas. These folks have orders of magnitude more environmental impact than people living in downtown San Francisco, Portland, Seattle or Vancouver even if you factor in the footprint of utility scale solar and wind. The environmental argument around UPV vs DPV is upside down and wrong.
If PG&E had their druthers they wouldn’t serve rural customers who live in low-density areas, let alone low-density fire-prone areas. The same was true of telecom companies prior to the cell phone age. Many utilities are ferociously unpopular and have been since the very beginning of the age of utilities for reasons entirely out of their control. Disclaimer… PG&E is my former employer and I happened to work at the most unpopular facility in their generation fleet. We shouldn’t subsidize flood insurance such that people are encouraged to live in flood plains. Similarly, we shouldn’t subsidize electricity in a way that encourages folks to live in fire-prone areas. Unfortunately we do both. That’s not PG&E’s fault in my admittedly biased opinion.
I’ve worked on the reliability side of the electricity delivery problem for over a decade. What additionality does resilience provide compared to reliability? To me it’s a vague hand-wavy term that doesn’t have any real meaning due to the fact it has so many meanings. The terms micro-grid and long duration storage are similar buzz words. There is no there there.
First, I know folks in SF row houses who produce their annual load from rooftop solar. They don’t have AC, so their load is lower and matches the panel capacity. Seattle and Portland houses have more rooftop capacity (Oakland is more dense than either of those cities) so they can meet their requirements too. But the bigger opportunities are in neighborhood solar systems that only use the local circuit.
Second, two wrongs do not make a right. The current land-use configuration will take decades to change, but that doesn’t justify destroying another ecosystem. Why not focus our solar resources where the ecosystem has already been largely obliterated? It’s a sunk cost.
As for resilience and reliability, I read Chris’ point not as saying that we should give them special treatment, but rather that we need to acknowledge that these customers will take their desires into their own hands as the technology and cost balance presents itself. We’re staging a reenactment of the long-distance wars while customers are about to launch into smartphones.
Richard, how many urban homes, much less apartment buildings, have enough roof space to cover their December loads with PV once they electrify their residences?
Apparently more than you would think. I’m relaying the experience of a couple of homeowners of row houses in SF. I was surprised when the first one told me about this a decade ago. He was an early adopter (he has since moved to Hawaii).
Be that as it may, millions of people in the cities I mentioned live in multi-story apartment complexes and residential towers. These folks can’t power their homes with rooftop solar. These cities certainly don’t have an abundance of land available for neighborhood solar projects so I don’t see this as a serious option either. Once you get “out of town” you get into spread-out suburban living where you have bigger houses with cheaper land for these community solar projects. The community solar solution works a lot better in these locations but there’s a downside. As I’ve already mentioned, I believe you’ll find the biodiversity and environmental value of neighborhood solar sites significantly exceeds that of the mega-solar projects sited in places like the desolate western tip of the Mojave. Try a Google image search for neighborhood or community solar and see for yourself.
I don’t believe distributed solar is anywhere near as cost-effective as building solar farms in high solarity regions. I think Clack is fudging his numbers and/or ignoring alternative technologies which can and will provide many of the same benefits of local solar at far lower costs. Your previous comment about 200 kWh truck batteries is a very good example of one of these technologies.
I don’t see adding solar as a leap. You’re going from an electricity supply that’s 99.9996% reliable (US average) and bumping that up to 99.9999-ish percent. If that makes you sleep better at night, that’s great. You’re still going to be relying on the grid for well over 50% of the hours in a year.
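Those reliability percentages are easier to compare as expected outage minutes per year. A quick conversion, taking the two availability figures above at face value:

```python
# Convert "nines" of availability into expected outage minutes per year.
minutes_per_year = 365 * 24 * 60  # 525,600

for availability in (0.999996, 0.999999):
    downtime = minutes_per_year * (1 - availability)
    print(f"{availability:.6%} available -> {downtime:.1f} min/yr of outage")
```

The step from 99.9996% to 99.9999% is a move from roughly two minutes to roughly half a minute of outage per year, which is the marginal reliability being purchased.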
Most people aren’t installing solar for extra reliability. Most people are installing solar to save money on electricity bills. I don’t have a problem with people installing solar but I do have a problem with people not paying their fair share of grid fees. I think we need to shift over to higher fixed fees and ratchet down unit costs. This would make sure the grid gets paid for but it would have the added benefit of improving the economics of electrification. I know there’s a lot of back and forth between economists on this but I don’t see any other alternatives.
Meredith, not a word about carbon emissions in your article. Why? There seems to be an inherent assumption that distributed solar can “green the grid” on its own. Without any empirical evidence to that effect, your assumption is not only unwarranted, but irresponsible.
From a physics standpoint, there is no question electricity is generated most efficiently and reliably at centralized power plants, whether nuclear plants, gas plants, or solar/wind farms, and that it’s distributed most efficiently and equitably on a system of high-voltage AC wiring of radial topology. Both factors take advantage of economies of scale – not financial ones, but physical ones. This ensures a system that, overall, is less wasteful of resources (and that’s setting social justice considerations aside, when they should be front and center).
By relocating the backup required for renewables from centralized sources to millions of unregulated fossil-fueled power generators (or making the untenable assumption batteries can serve in that function), we have a potential environmental disaster in the making, one from which there is no turning back. We’re no longer able to reduce emissions en masse, setting a hard limit on the potential for meaningful carbon reductions.
It would be wonderful if Chris Clack & Company had discovered it was possible to make progress against climate change by everyone generating their own electricity and taking responsibility for their own emissions. But all they found out, apparently, was that distributed generation might be the cheapest way to generate electricity. And if we’re trusting individuals to put the welfare of society above their own, to put public good above private, we’re accepting a social construct that has always been doomed to failure.
Can you provide empirical evidence that the adoption of renewable resources including solar rooftops since 2002 has led to an increase in fossil fuel use in the California electricity grid? (Please present data sources for at least the last decade to account for variations in hydropower availability driven by the recent droughts.)
Thank you for this thoughtful blog. The degree to which, as you said, the smart energy punditry class accepted the headline results of this modeling felt rushed and a bit self-serving to me. I’d note 3 things and hope you can confirm – and maybe do another blog? 🙂 – (1) Many looked at the results of the modeling and said it indicates NEM subsidies should be, if anything, increased. But the modeling DOES NOT seem to price in NEM tariffs. For instance, in CA, if NEM costs ratepayers 30 cents/kWh, I’m guessing the model assumes costs equal to the price to deploy and so ignores those subsidies entirely. Pricing them in would likely change the results dramatically, don’t you think? (2) In addition to assuming perfectly optimized placement of DERs (which is ridiculous), the model also appears to assume perfect coordination of DERs with the grid operator – e.g. all generation equally visible to the ISO. Definitely not the case in CA, and I doubt anywhere in the US. (3) I can’t tell if all the benefits really come from storage or from solar, and didn’t see that disaggregated in any way.
The other major problem with the VCE study is the absence of any attempt to quantify the costs of DERs to all customers under the retail tariff provisions used to compensate customers for behind the meter resources. The study appears to assume that the “cost” of DERs to all customers is equal to the installation and operating costs of these resources. For generation and storage resources procured by load serving entities under wholesale Power Purchase Agreements (PPA), this is a reasonable approach because the PPA prices are directly tied to system costs.
But DERs located behind a customer meter are typically compensated using tariffs like Net Energy Metering where customers receive a retail rate credit for all production (both self-consumption and exports). These credits are not linked to system costs, or avoided costs, and usually compensate customers at prices that significantly exceed the prices that would be paid if the same resource was procured under a wholesale PPA. For example, NEM tariffs provide residential customers of the California IOUs with credits that range from $0.25-0.30/kWh while the levelized costs of such resources are only a fraction of that price. Because the VCE model fails to consider the prices being charged to all customers for NEM resources, it does not provide an accurate basis for measuring the rate impacts on all customers.
Ignoring the tariff treatment for DERs constitutes a fatal flaw in the VCE analysis with respect to behind the meter resources. If the VCE study is being used to support the deployment of additional resources located in front of the customer meter, which would be procured at wholesale prices under PPAs, then the other issues referenced in this blog post deserve further consideration (granularity of cost deferral, methods of identifying optimal deployments).
Besides the lack of true granularity in the modeling of distribution system topology or congestion and the simplified cost avoidance assumptions in the Vibrant model, there are other problems with the way the findings of the nationwide study have been characterized by the rooftop solar industry.
The cost savings attributed to DERs in the Clean Energy – Distributed Energy Resources (CE-DER) scenario are not particularly large. The study finds that the average retail rate in 2050 for the “clean energy (CE)” scenario is about 7.1 c/kWh while the rate for the CE-DER scenario is about 6.8 c/kWh, about 4% less. As Meredith notes, to achieve these savings, the DERs must be optimally sited, which isn’t remotely the case today with NEM. (For a detailed discussion of how difficult it is in practice to defer specific distribution investments with DERs, see my article on LinkedIn: https://www.linkedin.com/pulse/non-wires-alternatives-ever-yield-significant-savings-scott-murtishaw.) In addition to NEM’s lack of optimal siting, I assume that when the model selects DERs, it uses the installed costs of the systems rather than paying them at or near the full retail rate, as ratepayers do under NEM. Because NEM fails to optimally site resources and results in payments far above the capital cost of the systems, any cost savings attributable to DERs in the model do not justify the use of NEM as the primary mechanism for incentivizing the installation of DERs.
If you look at the installed capacity charts in the study, it doesn’t look like much of the cost savings in the CE-DER scenario is attributable to PV. The CE scenario (with no T&D cost savings attributed to DERs) and CE-DER scenario have very similar amounts of distributed PV (DPV), about 200 GW vs 250 GW (Figure ES-3). What differs much more markedly is distributed storage — 0 GW vs 200 GW. The CE scenario doesn’t seem to allow for any distributed storage, but it makes sense to me that T&D cost savings in the CE-DER scenario depend on the dispatchable and reliable capacity that storage can provide.
When energy pundits conclude, based on this study, that the cheapest energy scenario is “clean and distributed,” it’s misleading because even in the CE-DER scenario, the vast majority of energy comes from utility-scale sources. DPV and distributed storage account for about 400 GW of the 2,500 GW of installed capacity, and in both scenarios the model favors utility-scale PV — by a greater than 3:1 ratio in the CE-DER scenario and nearly 4:1 in the CE scenario. Because solar, especially rooftop solar, has a much lower capacity factor than other resources, you have to look at the generation stack to get a more accurate representation of how the energy is produced in each scenario (Figure ES-6). Even if you ignore the excess renewable energy curtailed, DPV appears to provide roughly 400 TWh of the 5,000 TWh total in the CE-DER scenario, or about 8%.
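A quick consistency check on those figures, using only the round numbers read off the study’s charts above (so the result is approximate by construction):

```python
# Implied DPV capacity factor and energy share in the CE-DER scenario,
# using the rough values cited from Figures ES-3 and ES-6 above.
dpv_gw = 250       # installed DPV capacity, CE-DER (~Figure ES-3)
dpv_twh = 400      # DPV generation, CE-DER (~Figure ES-6)
total_twh = 5000   # total generation, CE-DER
hours = 8760

cf = dpv_twh * 1e3 / (dpv_gw * hours)   # GWh produced / GWh at full output
share = dpv_twh / total_twh
print(f"Implied DPV capacity factor: {cf:.1%}, energy share: {share:.1%}")
```

An implied capacity factor in the high teens is plausible for distributed PV, which supports reading the charts as ~8% of energy from DPV despite its ~16% share of capacity.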
Finally, the study lumps together all solar that occurs below the 69 kV transmission-distribution interface as DPV. The model documentation describes this as a mix of ground-mounted community solar and industrial, commercial, and residential rooftop solar, but the study provides no information on the mix of DPV resources. It’s possible that a large share of it, especially in less densely populated areas, is actually ground-mounted distributed solar, or that most of it consists of more cost-effective commercial and industrial installations.
Yes, it appears nearly all benefits stem from shaving load peaks using storage at or below 69 kV. You can still install utility-scale storage at 69 kV! This should not be considered proof-positive justification for NEM-compensated BTM solar (and storage), although many will try to spin it that way.
Solar is great and I encourage people to get solar if they have appropriate space. The larger scale, like roofs of big box stores, is, as pointed out in the first reply, even better. What would help even more is to get the US government to start investing in basic research on LENR. That would get the academics in this country to stop refusing to even look at research results from companies like Brillouin Energy, which would allow us to complete development of our technology. Our technology provides nuclear energy densities using hydrogen as a nuclear fuel and nickel as a catalyst. It would allow megawatt-class dispatchable generation in the footprint of 4 or 5 parking spaces, 200 kW at a large house, and just a few kW at a tiny house. Utilities need to start focusing on providing and charging for interconnection to DG if they want to maintain relevance.
One element that is often overlooked is how MARGINAL line losses are affected by distributed energy resources. This is an area where DERs provide big savings.
Line losses rise roughly with the square of load (I²R), which means marginal losses run about twice average losses, so as a distribution system becomes congested the losses go up very sharply. For a congested circuit, losses can be as high as 15% on-peak, meaning that the marginal losses are on the order of 30%. Add to that the avoided reserves that are achieved when load is reduced at the meter, and the avoided generation capacity can be more than 40% greater than the capacity of the DER.
I wrote about this in detail with respect to energy efficiency — 1 kW of load avoidance at the meter can save up to 1.47 kW of generation needed at the transmission level. But the same principles apply to other DERs, including rooftop PV and distributed energy storage (of all kinds, including ice-storage air conditioning, not just batteries). That paper is available at: https://www.raponline.org/knowledge-center/valuing-the-contribution-of-energy-efficiency-to-avoided-marginal-line-losses-and-reserve-requirements/ I would love to see that work reviewed with respect to distributed solar and distributed batteries. I’m too retired (as is my co-author) to take on that technical a task.
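The 1 kW → 1.47 kW gross-up can be sketched from the loss figures in the comment above. The reserve margin below is an assumed value chosen to illustrate the arithmetic; the paper’s exact assumptions may differ:

```python
# Sketch of the marginal line-loss gross-up described above.
# With quadratic (I^2*R) losses, marginal losses ~= 2x average losses.
avg_loss = 0.15                 # average on-peak losses, congested circuit
marginal_loss = 2 * avg_loss    # ~30% at the margin
reserve_margin = 0.03           # assumed planning reserves avoided with load

# 1 kW avoided at the meter avoids upstream generation of:
upstream_kw = 1 / (1 - marginal_loss) * (1 + reserve_margin)
print(f"1 kW at the meter ~= {upstream_kw:.2f} kW at the transmission level")
```

With a ~3% reserve assumption this reproduces the roughly 1.47 kW of avoided generation per kW of load reduction at the meter.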
John Farrell, at the Institute for Local Self-Reliance (ILSR.org), has done some interesting work on the “optimal” size for PV installations, taking into consideration multiple factors, including transmission and distribution costs and losses, the cost of land for utility-scale systems, and the cost of roofing maintenance for rooftop systems. While this may have changed since he last published, his conclusion was that systems in the 500 kW to 1 MW range seemed to be the sweet spot. They can be built inside the distribution system, they cover the roof of a big box store (so no land is needed), are typically not subject to shading, enjoy economies of scale in construction and operation, and avoid T&D costs. I’ll note that these types of installations also shade the roof of an air-conditioned retail store, saving more electricity that way. See his report at: https://ilsr.org/report-is-bigger-best/
A few years ago, the Arizona Corporation Commission essentially accepted Farrell’s research in determining that the “fair” compensation for rooftop solar is the cost the utility would incur to install 1 MW PV systems within the distribution system. They said that would capture all of the locational benefits of distributed resources, rather than basing the calculation on the cost of central-station solar. I think they got it pretty much right.