
Energy Efficiency in Schools – How Are We Doing?

A new paper uses machine learning and finds real savings, but much less than projected.

Almost five years ago, California voters passed Proposition 39, which closed a corporate tax loophole and devoted a good chunk of the increased revenues to reducing energy use at schools. In the most recent reports, the California Energy Commission says that $1.4 billion has been committed to K-12 schools so far through the program.

The arguments in favor of Proposition 39 are compelling. Nationwide, schools spend $8 billion a year on energy – second only to personnel in K-12 budgets. If schools can trim their energy budgets through efficiency improvements like new air conditioning systems or LED lighting, they’ll have more to spend on the important things, like salaries and textbooks. But, as with any investment, energy efficiency involves putting money down in the hope of recouping more over time – in this case, in the form of lower energy bills.

How are these energy efficiency investments in schools paying off? In a new study, my colleagues Fiona Burlig, Chris Knittel, Dave Rapson, Mar Reguant and I developed a new machine learning approach to measure the energy savings from efficiency upgrades in California schools. We analyzed upgrades at over 1,000 schools, using machine learning algorithms to predict counterfactual energy consumption – what each school would have consumed had it not upgraded. Most of these upgrades preceded Proposition 39, but covered the same types of measures, such as lighting and heating, ventilation and air conditioning (HVAC) retrofits. Armed with a wealth of real-world data – electricity consumption every 15 minutes at K-12 schools in the Pacific Gas and Electric service territory in California – we measured the impacts of the improvements and compared our findings to the savings projected before the investments were made.
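To make the idea concrete, here is a minimal sketch of a machine-learning counterfactual of this general flavor. It is not the paper’s exact specification: the model choice (scikit-learn’s LassoCV), the features and all the column names are illustrative assumptions.

```python
# A minimal sketch of the counterfactual idea: train a model on a school's
# PRE-upgrade consumption, predict what consumption would have been after the
# upgrade, and call the gap "savings". Model choice and column names are
# illustrative assumptions, not the paper's exact specification.
import pandas as pd
from sklearn.linear_model import LassoCV

def estimate_savings(df: pd.DataFrame) -> float:
    """df: one school's interval data with hypothetical columns
    ['kwh', 'temp_f', 'hour', 'day_of_week', 'month', 'post_upgrade']."""
    X = pd.get_dummies(df[["temp_f", "hour", "day_of_week", "month"]],
                       columns=["hour", "day_of_week", "month"])
    pre = df["post_upgrade"] == 0
    post = df["post_upgrade"] == 1

    # Fit on pre-upgrade intervals only; cross-validation picks the penalty.
    model = LassoCV(cv=5).fit(X[pre], df.loc[pre, "kwh"])

    # Counterfactual: predicted post-period consumption had nothing changed.
    counterfactual = model.predict(X[post])
    actual = df.loc[post, "kwh"]

    # Average savings per interval (positive = upgrade reduced consumption).
    return float((counterfactual - actual.to_numpy()).mean())
```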

The good news: the upgrades did lower energy consumption at the average school by 3%, freeing up real money to pay for salaries, textbooks and supplies. The bad news: that was only about one quarter of the projected savings. Put another way, if a school expected to recoup its investment in the form of lower energy bills over four years, our estimates imply it might never see the investment pay off.
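To see how a shortfall like that changes the investment math, here is the back-of-the-envelope arithmetic, with a hypothetical upgrade cost chosen to match the projected four-year payback:

```python
# Hypothetical numbers chosen so the projected payback is four years.
upfront_cost = 100_000             # dollars spent on the upgrade
projected_annual_savings = 25_000  # dollars per year, per the projection model
realization_rate = 0.25            # roughly what we measure on average

actual_annual_savings = projected_annual_savings * realization_rate
print(upfront_cost / projected_annual_savings)  # 4.0 years, as projected
print(upfront_cost / actual_annual_savings)     # 16.0 years, likely beyond equipment life
```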

This does not tell us that energy efficiency investments shouldn’t be made, but instead that researchers, professional evaluators and policymakers need to improve projection models and continue doing real-world evaluations like this one. We are super excited that Proposition 39 comes with extensive data-reporting requirements, so researchers can hopefully use new approaches like ours to continue measuring savings and improve the estimates used in future programs. (Here’s a cool visualization of some of the data.)

Additionally, our work shows just how important it is for policymakers to build retrospective studies into governmental programs. Doing so can help building managers determine which investments deliver the greatest savings and optimize their investment dollars. For example, we discovered that lighting upgrades and improvements related to HVAC appear to do the best, achieving 49% and 42% of expected savings, respectively. Ideally, projections made to justify future investments should be calibrated to real-world results like ours.

So, why aren’t the projection models providing more accurate predictions of the amount of energy that will be saved after an upgrade? A simple answer is that they are based on engineering models of the ideal energy user and aren’t sufficiently benchmarked to the real world, where, for example, people leave equipment on overnight by accident or open windows when a room is hot, even if it’s the middle of winter. That’s where our approach differs.

Using our machine learning method, we developed a rich understanding of the drivers of real-world electricity consumption at all the 2,000-plus schools in our data set. Simply comparing a school that invested in a new air conditioner to one that didn’t may mask important differences that impact their energy use. For example, one may be in San Francisco and the other may be in a hot Central Valley town. Instead, we essentially compare each school to itself, both before and after upgrades were made. Using this approach, we can be confident that what we’re measuring is just the effect of the upgrade.

Here’s how the savings map out over time:

Source: Burlig, Knittel, Rapson, Reguant and Wolfram (2017), E2e working paper.

The chart depicts measured savings over time, lining schools up so that period zero corresponds to the quarter when the energy efficiency upgrades were installed. Our estimates before the upgrades sit right around zero – reassuringly, we’re not measuring savings before they happen. After period zero, the effects of the upgrades appear, with energy reductions that persist well after the investments are made.
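The alignment behind a chart like this is simple bookkeeping: re-index each school’s quarters relative to its own installation date, then average. A sketch, with hypothetical column names:

```python
# Sketch of the event-time alignment: quarter 0 is each school's own
# installation quarter. Column names are hypothetical; quarters are assumed
# to be encoded as consecutive integers.
import pandas as pd

def event_study(df: pd.DataFrame) -> pd.Series:
    """df columns: ['school_id', 'quarter', 'install_quarter', 'savings_kwh']."""
    df = df.assign(event_quarter=df["quarter"] - df["install_quarter"])
    # Pre-period averages should sit near zero; post-period averages show savings.
    return df.groupby("event_quarter")["savings_kwh"].mean()
```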

One big benefit of our new approach to measuring savings is that we can look under the hood to see how well it works. Because only about half the schools in our sample had energy efficiency upgrades, we validated our approach by measuring savings at schools without upgrades. For those schools, our machine learning method estimated zero savings, which is comforting – it showed that the approach performed as we expected. At the same time, at schools that did install upgrades, we see a clear reduction in energy use on average. The figure below shows the distribution of estimated savings for treated and untreated schools.

Source: Burlig, Knittel, Rapson, Reguant and Wolfram (2017), E2e working paper.
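In code terms, the placebo check just runs the same estimator on never-upgraded schools, after assigning each one a fake installation date so the pre/post split is defined. A sketch, reusing the hypothetical estimate_savings() from above:

```python
# Placebo check: the estimator should find ~zero "savings" at schools that
# never upgraded (each assigned a fake upgrade date so post_upgrade is defined).
import numpy as np

def placebo_check(schools):
    """schools: hypothetical list of (df, was_treated) pairs, where each df
    is interval data in the format estimate_savings() expects."""
    treated = np.array([estimate_savings(df) for df, t in schools if t])
    placebo = np.array([estimate_savings(df) for df, t in schools if not t])
    print(f"placebo mean savings: {placebo.mean():.2f}")  # expect ~0
    print(f"treated mean savings: {treated.mean():.2f}")  # expect > 0
```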

Along the way, we discovered something else interesting: when we compared real changes in energy use to the projection model’s expected savings on a school-by-school basis, we found the low actual-to-projected savings ratios reported above. But when we took the average actual savings across all upgrades and compared it to the average expected savings across all upgrades, we found that the average prediction was more in line with reality. If this is confusing at first glance, suppose you handed out a bunch of jars of jellybeans and had one person per jar guess how many beans their jar contained. Any individual guess would likely be wildly off. But if you averaged all the guesses and compared that to the average number of beans across all the jars, you’d be much closer to the mark.
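The difference is just averaging ratios versus taking a ratio of averages. A tiny demonstration with made-up numbers:

```python
# Made-up numbers: projected and actual savings (kWh) for three upgrades.
projected = [100, 400, 1000]
actual = [20, 100, 600]

# "Individual" realization rate: average each upgrade's actual/projected ratio.
individual = sum(a / p for a, p in zip(actual, projected)) / len(actual)

# "Aggregate" realization rate: total actual savings over total projected.
aggregate = sum(actual) / sum(projected)

print(round(individual, 2))  # 0.35 -- small, badly projected upgrades drag it down
print(round(aggregate, 2))   # 0.48 -- totals weight the larger upgrades more heavily
```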

Using this aggregate approach, all improvements, HVAC upgrades alone, and lighting upgrades alone achieved 55%, 103% and 67% of expected savings, respectively. This tells us that if you ask the projection models how well any individual upgrade will do, the answer won’t be very helpful. But if you were to install hundreds of upgrades, you could get a better sense of how well the set would perform on average. Part of the reason is that some schools ended up saving more energy than expected, while others saved substantially less. Why? That’s hard to say. Bigger schools are just as likely to see disappointing realization rates as small schools, for example.

Source: Burlig, Knittel, Rapson, Reguant and Wolfram (2017), E2e working paper.

At the end of the day, we’ve learned important lessons about how effective energy efficiency upgrades are in schools. We hope these results translate into some homework for regulators who want to help schools make educated energy efficiency upgrade decisions. And, even though schools aren’t saving as much energy as they expected, these upgrades did translate into real energy and cost savings in the classroom. Maybe some of the money schools are saving on energy can be put towards data science classes, where students can learn about all the ways machine learning can help us tackle real-world problems.

Note: Fiona Burlig (UChicago), Christopher Knittel (MIT), David Rapson (UC Davis), and Mar Reguant (Northwestern) contributed to this post. They are associated with The E2e Project. A previous version of this post appeared on Forbes.com.

Catherine Wolfram

Catherine Wolfram is the William F. Pounds Professor of Energy Economics at the MIT Sloan School of Management. She previously served as the Cora Jane Flood Professor of Business Administration at the Haas School of Business at UC Berkeley. From March 2021 to October 2022, she served as the Deputy Assistant Secretary for Climate and Energy Economics at the U.S. Treasury, while on leave from UC Berkeley. Before leaving for government service, she was the Program Director of the National Bureau of Economic Research’s Environment and Energy Economics Program, Faculty Affiliate of the Energy Institute at Haas from 2000 to 2023, as well as Faculty Director of the Energy Institute from 2009 to 2018. Before joining the faculty at UC Berkeley, she was an Assistant Professor of Economics at Harvard. Wolfram has published extensively on the economics of energy markets. Her work has analyzed rural electrification programs in the developing world, energy efficiency programs in the US, the effects of environmental regulation on energy markets and the impact of privatization and restructuring in the US and UK. She is currently working on several projects at the intersection of climate and trade. She received a PhD in Economics from MIT in 1996 and an AB from Harvard in 1989.

6 thoughts on “Energy Efficiency in Schools – How Are We Doing?”

  1. Using machine learning to predict future energy consumption is wise. But a practical way of saving energy and money is using products/appliances with an Energy Star rating. They help increase energy efficiency.

  2. OK so what lesson can we easily draw from this? The charts indicate that the realization rate labeled “Savings (individual)” is less than the rate labeled “Savings (average)”. The realization rate is actual/predicted, and it looks to me like the text is saying that the average of (actual/predicted) is less than (average actual)/(average predicted). Am I correct (I found the text a bit confusing)? If so, that means the weighted average, weighted by predicted savings, is greater than the simple average. At first I didn’t understand that in the light of your statement, “Bigger schools are just as likely to see disappointing realization rates as small schools,” but now I get it: the quality of the a priori savings estimate is correlated with its size. In other words, you can judge the size of fruit, and estimate how hard it is to pick, better when it is low-hanging. Makes sense to me!