Deconstructing the Rosenfeld Curve

The Rosenfeld curve does not prove that California’s energy-efficiency policies work.

The Wall Street Journal, Forbes, and, most recently, the Sacramento Bee have pieces on Arik Levinson’s new NBER working paper, “California Energy Efficiency: Lessons for the Rest of the World, or Not?”   The paper makes a nice point, but I worry that it is being misinterpreted.

Levinson starts with the well-known “Rosenfeld Curve” (below), named after energy-efficiency pioneer Arthur Rosenfeld.  For the last four decades, residential electricity use per capita in California has been nearly flat, while growing 75 percent in the rest of the United States.

Figure 1: Residential Electricity Use per Capita, 1963-2009

While some have attributed the difference to California’s energy-efficiency policies, Levinson argues that California’s temperate climate, changing demographics, and other factors can explain almost 90% of the gap. For example, Levinson shows that part of the explanation for the increase in other U.S. states is that more and more people are living in the Southwest, where air-conditioning is used more intensively.

Levinson’s paper is thoughtfully done and deserves to be widely read, but the results are not terribly surprising. Even energy-efficiency proponents have long understood that at least half of the gap is likely due to non-policy factors (Sudarshan and Sweeney, 2008; Rosenfeld and Poskanzer, 2009).

But the truth is that it is hard to learn much from this type of aggregate data.  The challenge with any empirical analysis is how to construct a counterfactual. What would have happened to California’s electricity use without energy-efficiency policies?  This is a deceptively difficult question because of all the ways, large and small, that California is different from the rest of the United States. These differences accumulate over the 40+ year time horizon, obscuring the causal impact of energy-efficiency policies.

And none of these comparisons capture some of the broader impacts. California has consistently pushed the national agenda on energy-efficiency.  For example, in 1976 California was the first state to introduce appliance energy-efficiency standards. Other states quickly followed, leading eventually to national appliance standards in 1988. These national “spillovers” don’t show up in these analyses because they result in decreased electricity use both in and out of California.

So the “Rosenfeld Curve” does not prove that California’s energy-efficiency policies have worked. But nor does Levinson’s analysis prove that the policies haven’t worked.  With aggregate data it is impossible to answer this question definitively, let alone to say anything about which particular types of policies are most effective, or about how differences in program design impact effectiveness.

To be fair, Levinson understands all this. But people tend to have strong views on energy efficiency. So when Levinson pokes holes in some of the best-known “evidence”, it is tempting to run to the other extreme. Let’s not.  It doesn’t make sense to hold such extreme views when the evidence is so incomplete. As is often the case, the truth is probably somewhere in the middle.

For more see “California Energy Efficiency: Lessons for the Rest of the World, or Not?” (by Arik Levinson), Journal of Economic Behavior and Organization, 107, 2014.

Keep up with Energy Institute blogs, research, and events on Twitter @energyathaas.

Suggested citation: Davis, Lucas. “Deconstructing the Rosenfeld Curve” Energy Institute Blog, UC Berkeley, August 5, 2013,
https://energyathaas.wordpress.com/2013/08/05/deconstructing-the-rosenfeld-curve/


Lucas Davis

Lucas Davis is the Jeffrey A. Jacobs Distinguished Professor in Business and Technology at the Haas School of Business at the University of California, Berkeley. He is Faculty Director of the Energy Institute at Haas, a coeditor at the American Economic Journal: Economic Policy, and a Faculty Research Fellow at the National Bureau of Economic Research. He received a BA from Amherst College and a PhD in Economics from the University of Wisconsin. Prior to joining Haas in 2009, he was an assistant professor of Economics at the University of Michigan. His research focuses on energy and environmental markets, and in particular, on electricity and natural gas regulation, pricing in competitive and non-competitive markets, and the economic and business impacts of environmental policy.

20 thoughts on “Deconstructing the Rosenfeld Curve”

  1. Arik Levinson, “California Energy Efficiency: Lessons for the Rest of the World, or Not?” Draft of July 29, 2013.

    Review by Robert Clear, September 2013

    California’s low use of energy relative to the rest of the United States is commonly thought to be primarily due to state policies, such as appliance and building standards, and restructured utility rate schedules. Levinson’s paper attempts to prove that the common wisdom is incorrect, and that instead the difference between California and the rest of the United States is primarily due to simple demographic and economic factors. This is a very non-intuitive claim, and deserves a strong degree of proof. Levinson’s paper fails to provide this degree of proof, and is in fact beset with numerous severe errors that make its claims baseless.

    The crux of Levinson’s argument is that simple regressions show that these simple demographic and economic factors can “explain” the bulk of the perceived California effect. This is a dangerous approach, as regressions by themselves only show correlation. When statistics are used as an aid to modeling and understanding a problem, the investigator has a chance to check whether the results make sense in terms of causality. If statistics are instead used simply as a tool to try to disprove a claim, the investigator risks losing the check that the regressions actually show causality.

    The causality problem is easy to comprehend in the abstract, but is not always as easy to spot in the particular. Nor is it necessarily immediately obvious just how serious an error it can be. As a lead-in to the problems with the Levinson paper, it is perhaps worthwhile to illustrate how correlation can distort the relationship between residential electrical per capita energy use and household size, before tackling the more subtle errors actually committed by Levinson.

    We expect that per capita household energy use will be negatively related to the number of people in a household, because heating and cooling demands are largely independent of the number of people in a house. Household size is clearly a causal factor, but over the past 40-50 years it has also been strongly correlated with time, which in turn has been strongly correlated with increased household energy use, due to the increasing saturation of existing appliances (especially air conditioning) and the introduction of new appliances, such as computers. Figure 1, below, shows annual per capita household electrical use plotted against estimated household size for the period from 1960 to 2003 (the period for which I found data in an internet search). The linear correlation between the two variables is very high, and is clearly statistically significant. It is equally clearly bogus in terms of causality. If the curve were causal, we could eliminate residential electric energy use by increasing household size to an average of 3.75 persons per household. Figure 1 shows that although household size is likely to affect per capita energy use, it is important to be careful in how you estimate the size of the effect.

    Figure 1: Per capita annual household residential electric use for the period 1960 to 2003, plotted against estimated average household size over the same period. The household size data were taken from the web site: http://www.census.gov/statab/hist/HS-12.pdf
    The energy use data came from the EIA: http://www.eia.gov/totalenergy/data/annual/index.cfm#consumption
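
    The mechanism behind this kind of bogus fit is easy to reproduce. Here is a minimal simulation (all numbers are invented for illustration, not taken from the actual series): household size and appliance saturation both trend with time, so regressing per capita use on household size alone loads the time trend onto the household-size coefficient.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(44)  # stand-in for the years 1960-2003

        # Stylized trends (illustrative only): household size falls over time,
        # while appliance saturation pushes per capita use up over time.
        hh_size = 3.3 - 0.02 * t + rng.normal(0, 0.10, t.size)
        true_effect = -0.5  # assumed "true" causal slope for the simulation
        use_pc = 2.0 + true_effect * hh_size + 0.08 * t + rng.normal(0, 0.05, t.size)

        # Naive regression of per capita use on household size alone:
        naive = np.polyfit(hh_size, use_pc, 1)[0]

        # Regression that also controls for the time trend:
        X = np.column_stack([hh_size, t, np.ones(t.size)])
        controlled = np.linalg.lstsq(X, use_pc, rcond=None)[0][0]

        print(f"naive slope: {naive:.2f}")           # much more negative than -0.5
        print(f"with time control: {controlled:.2f}")  # near the assumed -0.5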

    Levinson analyzes residential electrical use versus a number of variables, but household size appears to be the one with the biggest impact in his analysis. Levinson bases his assertion that relative changes in household size in California versus the rest of the U.S. account for 40 to 50 percent of California’s relative energy savings on a fit of per capita energy use versus household size for houses in the RECS (Residential Energy Consumption Survey) database. Levinson claims that a fit to the RECS data indicates that a relative change in household size of 0.6 persons is equivalent to an electrical energy savings of 1.9 MBTU per person, or 40% of the difference between what California’s energy use would have been had it grown at the national rate and what it actually is. A little thought indicates that there are a number of potential problems with this analysis.

    Consider first that the RECS database includes California homes. California has over 10% of the overall U.S. population, so the California data will have a significant effect on the national data. California homes will, on average, use less energy than other U.S. homes, and will also have larger household sizes. Fitting the RECS data against household size alone means that the California effect shows up in the coefficient for household size. This is the same type of problem that was shown in figure 1 above, although to a lesser degree. Levinson does not describe his analysis in detail, so one cannot tell whether his analysis is biased in this fashion.

    Let’s continue: Levinson uses the calculated savings from the RECS database as a direct estimate of the national and California household size effects. Unfortunately, he doesn’t tell us how the average energy use in the RECS database compares to the national or California energy use. It should be obvious that if a house uses a lot of energy, especially for heating and cooling, its per capita energy use will change more with changes in household size than if the house uses very little energy. The natural way to calculate the effect of household size would be via ratios. Calculated in this manner, the household size effect (ignoring other errors as noted above and below) drops to 30%.
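
    A sketch of the ratio adjustment the reviewer has in mind. The RECS slope is implied by the numbers quoted above (0.6 persons, 1.9 MBTU/person); the two sample averages are invented placeholders, since Levinson does not report them:

        # Slope implied by the text: 1.9 MBTU/person per 0.6-person change.
        recs_slope = 1.9 / 0.6   # about 3.2 MBTU per person per household member

        # Hypothetical averages (invented for illustration):
        recs_avg_use  = 40.0     # MBTU per person, RECS sample average
        calif_avg_use = 30.0     # MBTU per person, California average

        # Ratio adjustment: treat the slope as proportional to average use,
        # rather than applying the RECS coefficient in absolute terms.
        calif_slope = recs_slope * (calif_avg_use / recs_avg_use)
        savings = calif_slope * 0.6  # re-estimated savings for the 0.6-person change
        print(f"{savings:.2f} MBTU/person vs the unadjusted 1.9")  # smaller effect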

    Onward again: Levinson computes the energy savings by using the mean values of the household sizes. Why the means? Based on Levinson’s table 6, the estimated median household size in California declined from 2.53 to 1.49, versus 2.93 to 1.80 for the rest of the U.S., over the period Levinson analyzed. When you use the medians and a fit to Levinson’s figure 5, you get an estimate that is only 40% of the value calculated with means. The point of this discussion is that unless the function is linear in the independent variable, the value of the function evaluated at the mean of the independent variable is not equal to the mean of the function over the distribution of the independent variable. I actually made an effort to estimate the real distribution effect. Household size data are unfortunately not easy to get, and are likely to be of limited accuracy, as they are estimated through survey data. I made an estimate of the distribution effect based on U.S. data for 2004 and 1960 (http://www.infoplease.com/ipa/A0884238.html#ixzz2bEYKmS00), and found a household effect that was even lower than the one estimated with the medians.
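
    The mean-versus-distribution point is just Jensen’s inequality. A small numerical check, where both the fitted curve and the household-size distribution are hypothetical:

        import numpy as np

        # Hypothetical convex fit of per capita use to household size n,
        # e.g. a fixed household load shared among members:
        f = lambda n: 10.0 / n

        sizes  = np.array([1, 2, 3, 4, 5, 6])                    # persons per household
        shares = np.array([0.27, 0.33, 0.16, 0.14, 0.06, 0.04])  # distribution, sums to 1

        mean_size = (sizes * shares).sum()                  # 2.51
        print(f"f(mean size) = {f(mean_size):.2f}")         # 3.98
        print(f"mean of f    = {(f(sizes) * shares).sum():.2f}")  # 5.42, not equal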

    One final household size issue: Levinson computes the household size effect by comparing per capita energy use versus household size for 1963 and 2009. Why these two dates? 1963 is well before California implemented appliance standards or inverted rate schedules, and 2009 appears to be merely the last convenient date for which there was data. If household size affects energy use intensity then it should affect the entire time series, and not just two essentially arbitrary points. Time series data are subject to a lack of independence in the data points and time lags between the input series (household size) and the output series (per capita energy use). These effects can make estimates based on two arbitrary points of the time series extremely inaccurate. It is particularly disturbing in this context that household size estimates for California are relatively flat from 1975 to 2000, while the estimate for 2009 is significantly higher. The 2009 estimate for the U.S. does not show this trend.

    Household size is only one of several factors that Levinson claims jointly explain most of the California difference in per capita residential energy use. Household income and climate are posited as major players in the difference, and again these are factors that should affect energy use. The question is of course whether it makes a difference in California versus other U.S. state energy use.

    The analysis over income appears to share many of the problems that were identified above as problems in the household size analysis. Levinson does appear to have included a time trend term in his initial analysis of these factors, but he still uses means, absolute rather than relative energy use coefficients, and point estimates, instead of fitting the time series trend.

    A review of the climate factor analysis raises questions about other possible errors. Levinson calculates an estimated electrical energy use increase in the U.S. minus California due to a shift over time to climates with higher cooling loads. For the U.S. without California, the weighted average heating degree-days is claimed to decrease by 10%, while the weighted average cooling degree-days increases by 19%. A footnote recognizes that California has its own internal migration, but then simply discounts it because it is to areas with less electricity use per capita. The analysis then proceeds under the assumption that there is no net change in weighted degree-days for Californians. It is not legitimate to discount data arbitrarily like this. I did a quick estimate of the migration effect by allocating each of the major cities to one or more PG&E climate zones, and got a 1% decrease in heating degree-days and an 8% increase in cooling degree-days. This is almost half the value for the U.S., which suggests that this calculation should not be ignored. An even more disturbing issue here is that the PG&E data use an 80°F base for cooling, while Levinson uses a 65°F base, and yet he gets a lower estimate for California degree-days than I estimated using the more stringent degree-day base. My calculation was somewhat crude, but the differences were large.
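
    The quick estimate described here amounts to recomputing population-weighted degree-days under shifting within-state population shares. A sketch of the calculation, with zone degree-days and shares that are invented placeholders rather than the PG&E figures:

        # (zone, cooling degree-days, population share at start, share at end)
        zones = [
            ("coastal", 800,  0.55, 0.45),
            ("valley",  1900, 0.30, 0.35),
            ("inland",  2600, 0.15, 0.20),
        ]

        cdd_start = sum(cdd * s0 for _, cdd, s0, _ in zones)
        cdd_end   = sum(cdd * s1 for _, cdd, _, s1 in zones)
        print(f"weighted CDD: {cdd_start:.0f} -> {cdd_end:.0f}, "
              f"{100 * (cdd_end / cdd_start - 1):+.1f}% change")
        # Migration toward hotter zones raises weighted CDDs even if no
        # individual zone's climate changes at all.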

    The mismatch between Levinson’s degree-day estimate and my estimate raises a very troubling concern regarding the paper as a whole. Levinson does not document his source data very well, and it does not appear that he has paid much attention to the validity, consistency, or meaningfulness of the data. For example, in table 2, the growth rates in column 5 for other energy uses in the residential, commercial, industrial, and transport sectors are all higher than the growth rate for all four together. This is not possible.
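
    The impossibility follows from arithmetic: over a single period, the growth rate of a total is a share-weighted average of the component growth rates, so it must lie between the smallest and largest component rates. A minimal check with invented numbers:

        # Invented starting levels and one-period growth rates:
        start  = {"residential": 10.0, "commercial": 8.0, "industrial": 12.0, "transport": 9.0}
        growth = {"residential": 0.030, "commercial": 0.025, "industrial": 0.020, "transport": 0.028}

        # Growth of the total = share-weighted average of component growth rates.
        total_growth = sum(start[s] * growth[s] for s in start) / sum(start.values())
        assert min(growth.values()) <= total_growth <= max(growth.values())
        print(f"total growth: {total_growth:.4f}")  # can never lie below every component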

    Another example is that in the conclusion he discounts the importance of the residential electricity sector as being only 4 percent of California energy use. A quick check of the data indicates that Levinson has counted electricity production losses as part of other energy, instead of properly allocating them to the electricity sector. Using source energy instead of site energy increases the residential electricity fraction to 12 percent. The use of site energy in this context is not the kind of mistake that should show up in a serious analysis.

    A more subtle error is the use of degree-days to estimate heating and cooling energy use for different locations. Degree-days can work tolerably well to analyze differences in energy use over time, when all other factors are held constant, and when the correct degree-day base temperature is used. It is not at all clear that they can be used directly to analyze a situation where differences in humidity and construction practices make the appropriate degree-day base different over the different areas.

    The point here is that the degree-day base temperature is the estimated average (or midpoint) outside temperature at which a house just begins to need heating or cooling. Heating degree-days have traditionally been calculated against a base temperature of 65°F, as this was historically the “daily average” outside temperature at which people began to heat. The average inside temperature will be several degrees above this base outside temperature due to internal loads in the house, as well as solar gain through windows. Increasing the thermal integrity of a building, or adding south windows to capture more winter solar gain, can significantly reduce the outside temperature at which heating is required, and thus will significantly reduce the appropriate base temperature for calculating degree-days. For cooling, the big problem is the difference in humidity between areas. Levinson uses a 65°F cooling degree base in his calculations. This may be appropriate for a climate with very high summer humidity, but it is not appropriate for hot-dry climates. In a harsh climate a change in thermal integrity can result in an almost proportional change in energy use, but in a mild climate the change in degree-day base can make the relationship very non-linear. This, plus the differences in the appropriate degree-day bases for different areas, makes estimation of the effect of migration patterns complex. Levinson makes no mention of any of these complexities, so there is no way to judge whether his treatment is appropriate without essentially doing the entire analysis properly.
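
    To see how much the base temperature matters, here is a bare-bones cooling degree-day calculation over a stylized hot-dry summer (the temperature series is synthetic; the function is the standard exceedance definition):

        import numpy as np

        def cooling_degree_days(daily_mean_temps_f, base_f):
            """Sum of daily amounts by which the mean temperature exceeds the base."""
            t = np.asarray(daily_mean_temps_f, dtype=float)
            return np.clip(t - base_f, 0.0, None).sum()

        # Stylized 90-day season ranging from 75F up to 85F and back down:
        temps = 75 + 10 * np.sin(np.linspace(0, np.pi, 90))

        print(cooling_degree_days(temps, base_f=65))  # large: every day counts
        print(cooling_degree_days(temps, base_f=80))  # much smaller: only days above 80F count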

    Levinson’s analysis is not limited to the simple regressions and fits that I have discussed so far. In his final section on residential electricity use he turns to an “Oaxaca-Blinder decomposition” of residential energy use and develops a multi-variable regression. There is almost no detail describing the data or fits, and it is not possible for the reader to replicate any of the results. We already know that there are questions about the validity of some of the regressions, and the validity of some of the data. However, even if we didn’t have this background, there is ample material in the fit given in Levinson’s table 9 to raise concerns.
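
    For readers unfamiliar with the method: the standard two-fold Oaxaca-Blinder decomposition splits a mean gap into an “explained” endowments term and an “unexplained” coefficients term. In textbook notation (this is the generic form, not necessarily Levinson’s exact specification):

        \bar{y}_{\mathrm{US}} - \bar{y}_{\mathrm{CA}}
          = (\bar{X}_{\mathrm{US}} - \bar{X}_{\mathrm{CA}})' \hat{\beta}_{\mathrm{US}}   % explained by covariate differences
          + \bar{X}_{\mathrm{CA}}' (\hat{\beta}_{\mathrm{US}} - \hat{\beta}_{\mathrm{CA}})  % unexplained (coefficients)

    The split is only as meaningful as the individual coefficients, which is why the instability discussed next matters.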

    There are three fits listed in table 9: one for California, and two for the remainder of the U.S. The California fit lists 36 parameters plus a constant. The first other-state fit uses the same terms as the California fit, while the second one includes 26 “region fixed effects”. This is a lot of variables. Many of them are correlated with each other (for example: total square feet, number of rooms, and number of bedrooms), which can make the fitted coefficients very unstable or meaningless. In the California fit a full two-thirds of the listed variables are evidently not statistically significant, which again impacts the values of all the parameters. The situation is better for the other U.S. fits, where 25 variables are listed as statistically significant, but this still leaves a lot of room for slop. This is a potential disaster. The whole point of the analysis is to sum up the energy use for those terms that are not related to California’s energy standards (which, incidentally, are not explicitly identified) so as to compare them to the sum of the terms that presumably are. There can be little confidence in this comparison if the terms aren’t independent and some of them aren’t even statistically significant. Levinson does not provide an error estimate for the decomposition, but even if he had, it is not clear that it would be meaningful given these problems.
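
    A quick simulated illustration of why correlated regressors make individual coefficients unstable even when the overall fit barely changes (the data are entirely synthetic; the variable names are only suggestive of Levinson’s):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        sqft  = rng.normal(1800, 400, n)
        rooms = sqft / 300 + rng.normal(0, 0.3, n)   # nearly collinear with sqft
        y = 0.005 * sqft + rng.normal(0, 2, n)       # rooms has no true effect

        for trial in range(3):
            idx = rng.integers(0, n, n)              # bootstrap resample
            X = np.column_stack([sqft[idx], rooms[idx], np.ones(n)])
            b, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
            print(f"sqft: {b[0]:+.4f}  rooms: {b[1]:+.3f}")
        # The rooms coefficient swings (and can flip sign) across resamples,
        # while the fitted values and R^2 barely move.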

    There is evidence in the table itself that the above problems are real. Consider the following: adding the regional effects to the basic fit barely improves the degree of fit, from an R2 of 49% to an R2 of 50%, but it significantly increases the fraction of the difference in energy use between California and the remaining states that is explained by the non-policy variables, from 60 percent to almost 90 percent. In the “other state” basic fit the coefficient for heating degree-days is statistically significant, and strongly negative. This is an extremely counter-intuitive result, but not to worry: in the fixed-effects model it shifts to positive. In fact, the difference between the two coefficients represents a shift in energy use equal to 48 percent of the difference in energy use between California and the other states. This is large.

    Let’s go on. The coefficients for household income for all three fits are listed as negative. More income, less energy use? Table 9 lists energy use by when a house (structure?) was built, in decades from 1950-1960 to 2000+. No entry is listed for houses built before 1950. In the statistical package I often use, the missing entry in a categorical variable is assigned a value equal to minus the sum of the remaining entries. This would result in very large positive entries, which seems non-physical, so I hope we can assume that the missing entries are zero. It would have been helpful if Levinson were more explicit here, and frankly, in a lot of other places in his analysis. Nonetheless, even if the pre-1950 coefficient is zero, this still leaves a problem with the California fit. California introduced building standards in 1978, and has continued to tighten them over the years, yet table 9 would have us believe that California homes built in 2000 or after use more energy than homes built in any of the previous decades. For clothes washers it is the coefficients in the other states’ fits that are a cause for concern. For both other-state fits the coefficient is statistically significant, and negative.

    Let us consider the non-physical coefficients from another perspective. Clothes washers don’t produce electricity, so a negative coefficient means that this variable is acting as a proxy for some other behavioral or physical attributes. If variables are acting as proxies, we cannot decompose the analysis into terms which should, or should not, be affected by California’s regulations. This is fundamental. In order for the Oaxaca-Blinder decomposition to have any validity, the parameters have to be physically meaningful so that they can be assigned to the proper category. Variables that act as proxies to unidentified other attributes cannot be part of the analysis.

    Ordinarily, I would make concluding remarks at this point, but Levinson did not confine his analysis to the residential electricity sector. The last section of his paper has a very short and abbreviated analysis of the manufacturing and transportation sectors. This review is already long enough, and I am not going to comment on these sections in detail. I do have two short comments. The first is that the prices of electricity and natural gas in California are not structured to be particularly favorable to large users. This is due to California policies, and is, I believe, different from what is found in most other states. Levinson did not include electricity prices in his residential analysis, but they could well be the major factor that has continued to differentiate California residential energy use from that of the other states since 1987, when federal appliance energy standards were established. Although Levinson makes no explicit mention of this factor when discussing manufacturing use, it should be obvious that it could also be a major part of the “something other” that he says explains the differences in manufacturing energy in California versus other states.

    Levinson does not list any California policies that would explicitly affect transportation energy use; however, California air quality standards have the effect of making California gasoline more expensive than in most other states, and this is likely to have an effect on transportation energy use. Levinson claims that there isn’t really any California effect, and that instead, for some unknown reason, there has been much lower growth in vehicle miles traveled in California relative to the rest of the country. An obviously critical input to this calculation is vehicle miles traveled, but Levinson does not provide a reference for or discuss this number at all. Levinson did note that the energy use data are from the DOT, but again provides no actual reference. I tried googling the DOT web site for this information, but finally gave up and went to the EIA web site (http://www.eia.gov/state/seds/seds-data-complete.cfm) for energy data. Unfortunately, the EIA data very definitely do not match the values plotted in Levinson’s figure 9, and at this point I gave up on further attempts to review this section.

    Determining how important the various energy policies that California has pioneered have been in saving energy would clearly be useful in informing future policy decisions. I cannot recommend Levinson’s paper as a source for this type of information. There is insufficient documentation of the data sources and methodology to allow the reader to replicate the results. Where the reader can follow the analysis, it becomes clear that there are errors in the data and its interpretation, and fundamental errors in the choice and use of the procedures used to analyze the data. The errors are sufficiently severe that it is not even clear that demographic and econometric factors reduce the apparent size of the California effect, as Levinson claims, rather than adding to it. The paper should be withdrawn, and the analysis completely redone with the aid of competent statistical help.

  2. To mcubedecon
    I disagree with your claim that “Most efficiency changes to products have changed the characteristics of the appliances.” A typical 18 cubic foot (interior) frost-free refrigerator from the 1970s used 155 kWh per month. A current Energy Star 18 cubic foot frost-free refrigerator uses only 30 kWh per month. The difference is due to foam insulation, better seals, a more efficient motor, and possibly other changes that I am unaware of. My modern 21 cubic foot refrigerator is only slightly larger than the 1977 14 cubic foot partial-defrost refrigerator that it replaced. This is pure efficiency.

    My old water heater used fiberglass insulation. My new one uses foam, and is the same size. My old stove had pilot lights, no insulation, and lousy seals. My new one has electric igniters (but both the top burners and the oven can be lit with a match), fiberglass insulation, and very good seals. Pure efficiency. My desktop computer uses 12 watts, and is smaller and faster than my old computer. My LED monitor uses 16 watts, and it’s huge (for me, 23″ is huge). For appliances in general, most efficiency changes have either not changed the user characteristics of the appliance or have improved upon them. It is not 100%. My new clothes washer is slower. It is also a lot quieter. I am definitely not complaining.

    I don’t normally think of cars as appliances, but yes, they are a mixed case. They are much more aerodynamic, and have better tires and transmissions, and in some cases they have electric motors (much better in stop-and-go traffic). They also have to meet a lot more air pollution standards, with the result that CAFE standards force manufacturers to sell smaller cars – just like they used to before SUVs and vans became popular (but modern VW punch-buggies are bigger).

    Bob – yet again using my wife’s account
