
Will Investing Big in Distributed Solar Save Us Billions?

A new study says yes. But distributed solar benefits are an elusive prize.

Policy makers, power companies, and a majority of Americans are coming around to the idea that the U.S. needs to accelerate its efforts to green the grid. Getting on the renewable energy train is one thing. Agreeing on how and where to ride this train is more complicated.


Discussions about the right mix of grid-scale and distribution-scale resources are getting polarized. At the crux are tricky questions about how distributed energy resources – such as rooftop solar and storage – will impact future grid operations and costs. We’ve blogged (and blogged) about what we know and don’t know about distributed generation benefits. But we’ve yet to dig into this high-profile work by Christopher Clack and co-authors at Vibrant Clean Energy (VCE).

Source: https://www.vibrantcleanenergy.com/products/wisdom-p/

Clack et al. have built a giant model of the entire US electricity sector that captures distributed energy resource potential in some detail. Their national study found that building a lot more distributed solar and storage (enough to power more than 25% of US homes) would save $473 billion in system-wide costs. Last week, VCE released their California-focused study, which estimates that distributed solar plus storage could save California ratepayers $120 billion over the next 30 years.

Energy pundits have been swooning over the high-powered modeling and the provocative punchlines. Distributed solar proponents want to take these findings and run with them. I can see why this modeling exercise is exciting (who doesn’t get excited about trillions of data points?). But I also think some of the model’s key assumptions could significantly overstate the real-world cost savings potential.

Distributed energy benefits redux

Grid-scale solar technology costs significantly less per megawatt than rooftop or community-scale solar technology. That’s a fact. But it’s also true that siting distributed energy resources (DERs) close to electricity demand could help us avoid expensive investments in local transmission and distribution. If we want to measure the full value of distributed solar, we need to assess this potential for reduced grid costs. 

Before diving into a big and complicated model, let’s break down some fundamentals.

The first step of a forward-looking analysis involves forecasting how local demand patterns will change as we start to electrify more stuff: 

Source: http://electrifyeverything.online/buildings
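To make step one concrete, here’s a minimal sketch of what this forecasting step might look like in code. Everything here is a hypothetical placeholder (the baseline load shape, the EV and heat pump add-ons), not numbers from the VCE study or the source above.

```python
import numpy as np

hours = np.arange(24)

# Stylized baseline feeder load (MW): flat daytime load plus an evening peak.
baseline_mw = 2.0 + 1.2 * np.exp(-((hours - 19) ** 2) / 8)

# Hypothetical electrification add-ons.
ev_mw = np.where((hours >= 18) | (hours <= 1), 0.6, 0.1)     # evening/overnight EV charging
heat_pump_mw = 0.4 + 0.3 * np.cos((hours - 7) * np.pi / 12)  # morning-heavy heating load

future_mw = baseline_mw + ev_mw + heat_pump_mw

print(f"Baseline peak: {baseline_mw.max():.2f} MW at hour {baseline_mw.argmax()}")
print(f"Electrified peak: {future_mw.max():.2f} MW at hour {future_mw.argmax()}")
```

The point of even a toy version: electrification doesn’t just add kWh, it can raise and shift the local peak, and the peak is what stresses distribution equipment.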

Next we need to identify where and how grid system constraints will start to bind as electrification drives up electricity consumption. That’s not easy, because there’s lots of variation across the system in terms of where there’s grid capacity to absorb new loads (as this cool map of my neighborhood shows):

A snapshot of my neighborhood taken from the PG&E Integration Capacity Analysis Map, which summarizes, among other things, the amount of load that can be installed at a given location without any thermal or voltage violations.

Finally, for locations on the grid where capacity is projected to get tight, we need to figure out where it could make economic sense to invest in distributed energy resources (e.g. solar plus storage) versus centralized generation and grid upgrades. This requires estimating deferrable distribution costs (which are notoriously hard to pin down).
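Here’s a stylized version of that screening exercise. The feeder names, hosting capacities, and cost figures below are all invented for illustration; a real screen would use location-specific data like the PG&E hosting capacity map above, plus utility cost filings.

```python
# Toy screen: where projected peak load exceeds hosting capacity, compare
# a conventional grid upgrade against DERs sized to shave the overload.
# All inputs are hypothetical placeholders.

locations = [
    # (name, projected_peak_mw, hosting_capacity_mw, upgrade_cost_$, der_cost_per_mw_$)
    ("Feeder A", 5.2, 4.0, 3_000_000, 1_800_000),
    ("Feeder B", 3.1, 4.5, 2_500_000, 1_800_000),
    ("Feeder C", 6.0, 4.0, 1_200_000, 1_800_000),
]

for name, peak, capacity, upgrade_cost, der_cost_per_mw in locations:
    overload = peak - capacity
    if overload <= 0:
        print(f"{name}: no constraint binds; DERs defer nothing here")
        continue
    der_cost = overload * der_cost_per_mw  # DER sized to shave the overload
    choice = "DER" if der_cost < upgrade_cost else "grid upgrade"
    print(f"{name}: overload {overload:.1f} MW -> {choice} "
          f"(DER ${der_cost:,.0f} vs upgrade ${upgrade_cost:,.0f})")
```

Notice that the answer flips feeder by feeder: DERs pencil out only where a constraint actually binds and the deferrable upgrade is expensive.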

How do you carry out this kind of exercise for many thousands of locations across the US grid? This is the challenge that the VCE team set out to tackle…

Digging into distribution cost modeling

This VCE model is impressive along many dimensions. Some of these are too technical for this non-engineer to really understand. But the part that I’ve been most interested in involves as much economics as engineering: how do we estimate the grid costs that could be avoided with DER deployment?

[Wonk-alert]: This section digs into the details of the deferrable distribution cost modeling. These are important weeds to wade into! But if you want to just cut to the chase, skip ahead to the next section.

Ideally, the VCE model would incorporate location-specific information about distribution system constraints and costs to understand how DER benefits could vary across the US electricity system. But this detailed information is not readily available. So instead, the model opts for a much coarser approach.

To calibrate the model of distribution costs, the VCE team uses cost parameter estimates from this UT Austin study, which analyzes annual distribution system spending reported by US utilities between 1994 and 2014. It’s important to note that these UT Austin researchers were not trying to disentangle the causal effect of one cost driver (e.g. peak load) from another (e.g. annual electricity consumption). Their report summarizes average univariate relationships between utility-reported distribution costs and each cost driver. So, for example, when they summarize how utility distribution costs vary with kW peak demand, their estimate captures not just the impacts of increasing peak demand, but also the effects of factors that are positively correlated with peak load (such as higher annual kWh consumption).

Getting back to the VCE model (Section 1.9.2 of the technical appendix for you deep divers), the distribution cost implications of different load profiles are estimated as the sum of two parts. The first component multiplies peak load on the system in a given location by the cost parameter from the UT Austin study that captures the average relationship between distribution costs and peak demand. The second component multiplies annual grid electricity demand on the system by the cost parameter that captures the average relationship between distribution costs and annual electricity consumption.
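As I read it, that boils down to a simple two-part formula. A minimal sketch, with made-up placeholder parameters rather than the actual UT Austin estimates:

```python
# Two-part distribution cost calculation, as I understand Section 1.9.2:
#   cost = peak_kW * (cost-per-kW parameter) + annual_kWh * (cost-per-kWh parameter)
# Parameter values below are hypothetical placeholders.

COST_PER_PEAK_KW = 60.0     # $/kW-year, hypothetical
COST_PER_ANNUAL_KWH = 0.01  # $/kWh, hypothetical

def distribution_cost(peak_kw: float, annual_kwh: float) -> float:
    """Annual distribution cost implied by a location's load profile."""
    return peak_kw * COST_PER_PEAK_KW + annual_kwh * COST_PER_ANNUAL_KWH

# DER savings are then the cost difference between load profiles with and
# without distributed solar + storage:
savings = distribution_cost(1000, 4_000_000) - distribution_cost(800, 3_200_000)
print(f"Implied annual distribution cost savings: ${savings:,.0f}")
```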

The VCE approach seems like a reasonable way to get the distribution cost model up and running. But it’s far from ideal:

  • It assumes that all load reductions deliver the same cost savings, regardless of how constrained (or not) the system is likely to be in a particular location. This abstracts away from significant variation in cost deferral potential across locations.
  • It uses average cost parameters to estimate marginal distribution system cost changes. These can be very different (my guess is that average cost exceeds marginal cost).
  • Each of the two cost parameters from the UT Austin study captures the combined effects of correlated distribution cost drivers. It seems to me that adding the two components together (one that implicates peak load and one that implicates annual load) will over-estimate the cost implications of demand increases, and exaggerate the benefits of using DERs to offset an increase. The sketch after this list illustrates the concern.
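To illustrate that last concern, here’s a synthetic example with invented numbers. When peak kW and annual kWh are strongly correlated, each univariate slope absorbs part of the other driver’s effect, so summing the two components double-counts:

```python
# Synthetic demonstration: univariate slopes on correlated cost drivers
# each soak up part of the other driver's effect. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500

peak_kw = rng.uniform(500, 5000, n)
annual_kwh = peak_kw * 4000 * rng.uniform(0.8, 1.2, n)  # strongly correlated with peak

# "True" cost model: $50 per peak kW plus $0.005 per kWh, plus noise.
cost = 50 * peak_kw + 0.005 * annual_kwh + rng.normal(0, 20_000, n)

# Univariate slopes, in the spirit of the UT Austin summary (as I read it).
slope_kw = np.polyfit(peak_kw, cost, 1)[0]
slope_kwh = np.polyfit(annual_kwh, cost, 1)[0]

# Effect of shaving 100 kW of peak and the associated 400,000 kWh:
true_saving = 50 * 100 + 0.005 * 400_000
summed_saving = slope_kw * 100 + slope_kwh * 400_000
print(f"True saving:           ${true_saving:,.0f}")
print(f"Summed-slope estimate: ${summed_saving:,.0f}  (overstated)")
```

In this toy example the summed-slope estimate comes out at roughly double the true saving. The real-world magnitude could be quite different, but the direction of the bias is what worries me.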

If we’re serious about assessing the potential for DER benefits, we need a better understanding of where DERs can offer real grid cost savings. Fortunately, one of Berkeley’s super-star graduate students is hard at work on this question…but I’ll leave that for a future blog.

Optimal versus actual DER deployment

It’s one thing to figure out how investments in distributed solar and storage could be optimally deployed to minimize costs. It’s another thing to make these distributed investments happen when and where we need them. 

The VCE model projects savings under optimal deployment. So far, our track record with distributed generation deployment has been anything but optimal. Net metering incentives, for example, are available everywhere in California, regardless of whether there’s any potential for grid system benefits.

California has been trying for years to direct DER investments towards high-value locations. This blog series provides a historical overview. An important lesson learned so far? The short planning horizon for distribution investments, the need for contingency planning, and the transaction costs associated with identifying and incentivizing the most promising investments “leave a very narrow Goldilocks zone for procurement” of cost-effective DER projects.

Keeping it real

Wildfires, heat waves, a mega-drought, yikes. We’re getting daily reminders that we need to get our climate change mitigation act together. There’s climate urgency behind efforts to accelerate investment in renewable energy. There’s also a social obligation to keep costs contained and electricity rates affordable.

The VCE model is impressive along several dimensions. But there are some critical assumptions and blind spots that complicate the translation of optimistic findings to real-world policy priorities or prescriptions. I think we need a reality check on the deferred grid investment estimates. And we need to reckon with the fact that real DER deployment will likely be very different from the optimal DER deployment that the modeling assumes. That’s my take. Interested to hear what our blog-reader-brain-trust has to say…

Keep up with Energy Institute blogs, research, and events on Twitter @energyathaas.

Suggested citation: Fowlie, Meredith. “Will Investing Big in Distributed Solar Save Us Billions?” Energy Institute Blog, UC Berkeley, July 26, 2021, https://energyathaas.wordpress.com/2021/07/26/will-investing-big-in-distributed-solar-save-us-billions/

42 thoughts on “Will Investing Big in Distributed Solar Save Us Billions?”

  1. Local Solar for All is a coalition of businesses and advocates that bring real-world experience deploying solar in communities across the country. We have long understood the tangible societal benefits that result from the deployment of local, distributed energy resources (DERs). We launched this campaign to use the power of newer, better models to explore the potential of deploying DERs at scale to help us achieve our policy goals, and the impact it would have on grid costs. We sponsored research that analyzes the grid at a national level and within individual states using the WIS:dom®-P model because it is demonstrably superior to any other utility resource planning model we’ve seen. Our results show that DERs at scale help flatten demand to accommodate the lowest-cost and fastest transition to a clean energy grid, and that by continuing to scale DERs and embracing their benefits nationally, and in California, we can save money and meet our policy goals. We look forward to working with regulators and policymakers in California and elsewhere on how we can fully realize these cost savings and continue to scale distribution-level resources to create a grid that works for everyone. We welcome you to learn more about our research at http://www.localsolarforall.org.

  2. Mathematicians utilize the tool of limiting cases to assist in the understanding of complexity. The limiting case to investigate the concept of “distributed energy resources” (DER) is a single-family “off grid” housing unit in the wilderness. This housing unit has a refrigerator and a central heating and cooling unit for environmental control. To power this housing unit with solar and/or wind would require a very large area (likely larger than the available roof area) dedicated to energy collection infrastructure, because the energy density per square foot of both solar and wind is low, and the likely capacity factor, or percentage on-time, is only about 20%. Thus, some practical form of energy storage sufficient to serve the residence is also necessary.

    This infrastructure will be short-lived. Batteries will last only 7 to 10 years. The useful life for solar PV panels or wind generation is only about two decades. Clearly, this system is not cost-effective when compared to a large liquefied propane (LP) storage tank and a LP-powered generator, which is how such an “off grid” residence is commonly implemented. Supplemental energy could also be supplied by a wood-burning stove or fireplace.

    Applying such a limiting case to a residence in an urbanized area would require a complicated “black box” DER model with many researcher-adjustable parameters to achieve the alleged billions in savings. However, the real-world DER implementation would be far more costly than supplying an urbanized residence from a vertically-integrated electric utility in a non-regulated market. That is the case that deserves our attention, not advocacy dressed up within an opaque model.

  3. Bravo to Severin for alerting us to “double-counting” the benefits in the model under examination.

    Skimming through this discussion, I see no attempt to quantify the increased fire risk associated with a large quantity of battery electric storage units installed in residences, in the hands of consumers who typically lack professional energy training and discipline. Earlier this week, Chevy Bolt EV owners were advised to park their Bolts *outside* after charging to prevent possible fire damage, and to charge them to only 90% capacity. They were advised against charging them overnight. They were also advised to not let the range indicator drop below around 100 miles – i.e. to leave a substantial reserve in the battery at all times instead of draining it to almost zero.

    In South Korea, where many of these batteries are manufactured, investigators found that aggressive and deep charge and discharge cycles are associated with increased fire risks. Lithium-ion batteries cause difficult-to-extinguish fires, with “thermal runaway” being a significant problem. Arizona Public Service learned about this the hard way at their 2 MW McMicken Battery Electric Storage facility near Surprise, Arizona. There was a fire and explosion that injured 9 first responders. “‘Reasons that are still unknown’: 30 experts investigate Surprise battery explosion that injured 9,” by Ryan Randazzo, April 23, 2019, The Arizona Republic. https://www.azcentral.com/story/money/business/energy/2019/04/23/arizona-public-service-provides-update-investigation-battery-fire-aps-surprise/3540437002/ APS issued an informative report advising caution about a year later.

    Then there is the battery waste problem. Means to recycle these batteries at the end of their roughly 7 to 10 year lifetimes are still under development. There is also the problem of child labor used to mine cobalt in the Congo for these batteries. The current Clack et al. model is suspect as policy advocacy, similar to the 2015 100% WWS advocacy by Jacobson et al. that was debunked by Clack et al. in a PNAS paper.

  4. Thanks for this write up.

    While there are some definite limitations to the modeling (as with all models), the core issues pointed out are erroneous. Perhaps a better read of the model documentation and understanding of the model logic (mathematics) would enlighten the author(s) of this piece as to what the VCE WIS:dom®-P model is doing and how the study works in terms of distribution co-optimization.

    Note that VCE have put all the documentation out there for criticism and evaluation. As a company, they will have proprietary datasets. Try using them for your modeling needs!

  5. Nice summary!

    But I think a key driver of which systems get deployed is financing. There is an active ecosystem for financing large-scale solar. This does what classic finance is supposed to do: price risks, break them up, and sell them off to various parties. The result is efficient, low-cost capital for large-scale solar. And because small-scale solar is not yet as efficient in its financing, the entire market tips towards large-scale deployments.

    Solar on big box retailers seems like a good place for this type of financing efficiency to go next.

    Only when residential solar is financed as efficiently as credit card payments will the market tip back towards smaller systems.

    Yeah, I’m saying ‘follow the money.’ Any forecast of DERs on the grid needs to take into account how much capital frictions and efficiencies drive deployments.

  6. Meredith Fowlie makes a significant contribution to the debate over centralized vs. distributed renewables and the role of the VCE Study in informing that debate. My takeaway from this article is that the VCE Study should not be considered set in stone. It should be viewed for what it is, namely an evolving structure for modeling grid behavior in an environment of widespread, decentralized renewables. Indeed, the CPUC is implementing a broad Proceeding intended to address this very topic.

    Even in its current form, the VCE Study makes a major contribution towards understanding how distributed renewables, along with demand response, can favorably shape load at the grid periphery and increase the efficient use of centralized resources.

    Nobody really knows for sure how users are going to respond to an environment that includes a significant amount of distributed energy generation AND storage. As value pricing becomes more robust, electricity generated at one location and time can have a very different value when stored and deployed at another place and time. How is that value distributed between the parties? How will behavior change in such an environment? What are the proper incentives? What is the role of EV’s as they become the dominant form of transportation? How will increased building electrification impact the local infrastructure?

    My major concern is that we design an environment that is optimized for the way we think people will behave (or want them to behave) rather than as they choose to behave. The one thing I learned over a career in the technology industry is that you are only as successful as your user community permits. When you cease to be responsive to user needs, you quickly become yesterday’s news.

    The Municipal Utility community could provide an ideal environment for testing advanced concepts. They represent ideal “stand-alone” electrical infrastructures. They should be encouraged to promote as much distributed energy as they can implement without destroying the value proposition they’ve already created within their communities. They should also be encouraged to become as locally resilient as possible in order to better determine just how much transmission resource is actually required in the state.

    The current climate emergency requires no less of us.

  7. You may be thinking solar at homes will feed back to the grid, but I’m working on a scheme where my small solar always matches some load in my home. The result is net metering of energy even if the utility does not provide net metering. This is done electronically, as a reduction of the load the utility serves. So the question is: does a peak load reduction help the grid or not? Electrically it helps both the customer and the grid, but the REP loses money by not selling as much energy, and if they use an inverted rate, then the reduction of load is especially beneficial to the customer. Right now I’m powering my Tesla entirely off my home solar. The energy is never purchased from the utility. All those fears that EVs are going to overload the grid are unfounded if customers use their own solar power to charge their cars.