Last year, Governor Jerry Brown signed a law, Senate Bill 350, that sets out to double energy efficiency savings by 2030. Last week at the Democratic National Convention, Governor Brown focused his remarks on the importance of policies such as this to tackle climate change.
The precise energy efficiency targets haven’t been finalized, but they will be ambitious.
Meeting these targets will require an expansion of energy efficiency policymaking. Policymakers will need to understand which energy efficiency programs work and which don't.
This is a daunting task. The California Public Utilities Commission’s (CPUC’s) energy efficiency efforts fund roughly 200 programs. The California Energy Commission (CEC) is regularly introducing new appliance and building standards. The evaluations of these activities are made public, but they can be hard to find and difficult to interpret. Additionally, policymakers may not have the time or training to critically assess the methodologies being used.
As a result, individual programs may not be getting enough scrutiny.
Many people working on energy efficiency may think the last thing we need is MORE evaluation. Energy efficiency is heavily evaluated.
I disagree. Today we have an opportunity to step up our game. We have access to more data and more rigorous evaluation techniques than ever before. It’s time for more evaluation, not less. In particular, it’s time to evaluate the evaluations.
To illustrate what I’m talking about, let’s look at an example from another heavily evaluated sector, criminal justice. The context is quite different, but the basic lessons are instructive.
In the 1980s many US states enacted stricter laws to reduce domestic violence. Rather than putting every offender in jail, courts began to mandate that offenders go through batterer intervention programs (BIPs). The initial evaluations of these programs found they were highly effective. These evaluations contributed to the justice system’s growing reliance on BIPs. In a 2009 report, the Family Violence Prevention Fund and US government’s National Institute of Justice estimated that between 1,500 and 2,500 such programs were operating.
As the cumulative number of evaluations grew, researchers began to undertake reviews that evaluated the evaluations, referred to as meta-analyses or systematic reviews. What they found was disappointing.
Many of the past evaluations that showed positive effects had methodological shortcomings. While some men completed a BIP and did not reoffend, others failed to complete court-mandated programs, and many became difficult to track down for follow-up surveys. The positive evaluations left out these populations, precisely the people most likely to re-offend. More recently, careful studies that accounted for the systematic differences between men who stuck with the programs and those who didn't found that mandating the programs had little or no effect.
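A toy calculation shows how large this bias can be. All of the numbers below are hypothetical, chosen only to illustrate the mechanism: if the men who drop out of a program re-offend at a much higher rate than those who complete it, an evaluation that counts only completers will look far better than one that counts everyone who was mandated.

```python
# Hypothetical numbers illustrating attrition bias in program evaluation.
mandated = 1000              # men mandated to a program
completed = 600              # men who finished it
reoffend_completers = 0.10   # assumed re-offense rate among completers
reoffend_dropouts = 0.40     # assumed re-offense rate among dropouts

# Naive evaluation: look only at the men who completed the program.
naive_rate = reoffend_completers

# Intent-to-treat view: count everyone who was mandated to the program.
dropouts = mandated - completed
itt_rate = (completed * reoffend_completers
            + dropouts * reoffend_dropouts) / mandated

print(f"completers only: {naive_rate:.0%}")   # 10%
print(f"all mandated:    {itt_rate:.0%}")     # 22%
```

Under these made-up rates, restricting attention to completers makes the program look more than twice as effective as it is for the full mandated population.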
There is disagreement on what to do next. Some researchers and practitioners have argued that BIPs could still be effective for some people. What is needed is better targeting and tailoring of the BIPs, coupled with evaluation. Others have taken the position that policymakers should stop relying on these programs because they waste valuable resources and create a false sense of security for women who think their batterer will be reformed through the programs. This is a really important evidence-based debate that should result in more effective policy.
This example is not unique. Evaluations of evaluations, known as systematic reviews, are becoming prevalent in many sectors including medicine, international development, education and crime and justice.
The way a systematic review works is that a team of reviewers focuses on a specific policy intervention. The reviewers do an exhaustive search for all the evaluations of the intervention, including academic and consultant evaluations and studies from other geographies. Then the reviewers carefully consider each study, focusing in particular on how carefully it considered what would have happened in the absence of the intervention (the counterfactual) and whether there is a risk that the results may be skewed one way or another.
The systematic review report discusses each study’s risk of bias and then reaches a conclusion about the intervention based on the studies with the lowest risk of bias. In some cases a systematic review may conclude that a program is effective, or that it is not. In other cases a review finds that there is insufficient evidence to reach a conclusion. In these cases the review recommends how evaluations should be performed in the future to reach a firmer conclusion.
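One common quantitative tool in this process is a fixed-effect meta-analysis, which pools the effect estimates from the retained studies, weighting each by its precision (the inverse of its variance). A minimal sketch, with entirely made-up study numbers:

```python
# Minimal fixed-effect meta-analysis: pool study effect estimates by
# inverse-variance weighting. All study numbers are hypothetical.

studies = [
    # (effect estimate, standard error)
    (0.30, 0.10),
    (0.05, 0.05),
    (0.12, 0.08),
]

# Each study's weight is 1 / SE^2, so precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} +/- {pooled_se:.3f}")
# -> pooled effect: 0.105 +/- 0.039
```

Note how the pooled estimate sits closest to the most precise study. Real systematic reviews layer risk-of-bias judgments on top of this kind of pooling, often excluding or down-weighting the weakest studies before combining the rest.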
There are several reasons why now is the time to begin doing systematic reviews of energy efficiency evaluations. First, a very large number of evaluations have been completed across the country and world. There is value in reviewing and synthesizing these evaluations so that policymakers everywhere have access to the best evidence. Second, new statistical approaches are taking hold in energy, fueled in part by smart meter data. Systematic reviews can help policymakers make sense of the diversity of approaches. Third, energy efficiency is taking on increasing importance, as reflected in ambitious goals and growing spending. The evidence base needs to be strong to ensure the resources are being used effectively.
Research conducted at The E2e Project points to questions that systematic reviews could help answer. When are ground-up engineering estimates most appropriate to use? How important is the rebound effect? What considerations are most important when embedding evaluations into program design? What can interval smart meter data tell us about the effectiveness of programs that other approaches cannot?
Several of these were highlighted by agency staff at an energy efficiency workshop held by the CEC last month.
California produces only 1% of global greenhouse gas emissions. Given that, as Severin emphasized in a prior blog, the state’s policies can’t possibly have a meaningful direct impact on climate change. Instead, the way California can best address the climate change challenge is through invention and learning, then exporting the knowledge to the world.
In the case of energy efficiency, California should focus on finding which policy interventions are most effective and sharing the findings. Policymakers should take a look at systematic reviews as a tool to accomplish this.
Andrew Campbell is the Executive Director of the Energy Institute at Haas. Andy has worked in the energy industry for his entire professional career. Prior to coming to the University of California, Andy worked for the energy efficiency and demand response company Tendril and the grid management technology provider Sentient Energy. He helped both companies navigate the complex energy regulatory environment and tailor their sales and marketing approaches to meet the utility industry's needs. Previously, he was Senior Energy Advisor to Commissioner Rachelle Chong and Commissioner Nancy Ryan at the California Public Utilities Commission (CPUC). While at the CPUC Andy was the lead advisor in areas including demand response, rate design, grid modernization, and electric vehicles. Andy led successful efforts to develop and adopt policies on Smart Grid investment and data access, regulatory authority over electric vehicle charging, demand response, dynamic pricing for utilities, and natural gas quality standards for liquefied natural gas. Andy has also worked in Citigroup's Global Energy Group and as a reservoir engineer with ExxonMobil. Andy earned a Master in Public Policy from the Kennedy School of Government at Harvard University and bachelor's degrees in chemical engineering and economics from Rice University.