What we don’t know about economic climate change impacts

A relatively recent econometric literature examines the impact of weather/climate on a variety of outcomes of economic interest. In order to provide an estimate of a climate impact you need two things: an estimate of how a sector responds to a change in weather/climate, and projections of future climate. The latter come out of Global Climate Models with global coverage. The response functions, however, are increasingly provided by econometric studies. These studies are based on statistical models explaining variation in observed outcomes (e.g. crime rates, mortality, electricity consumption) as a function of weather, while carefully accounting for the impact of other confounding factors.
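As a purely illustrative sketch of how these two ingredients combine (the response function and all numbers below are made up for exposition, not taken from any study):

```python
# Stylized climate-impact projection: apply an econometrically estimated
# response function to climate-model temperature projections.

def mortality_response(temp_f):
    """Hypothetical U-shaped dose-response: excess deaths per 100,000
    as a function of daily mean temperature (deg F), minimized at 65F."""
    return 0.002 * (temp_f - 65) ** 2

# Hypothetical daily temperatures: historical vs. a uniform +5F warming
historical = [55, 62, 68, 75, 80, 88]
projected = [t + 5 for t in historical]

baseline = sum(mortality_response(t) for t in historical)
future = sum(mortality_response(t) for t in projected)

print(f"Projected change in excess mortality: {future - baseline:+.2f} per 100,000")
# -> Projected change in excess mortality: +1.06 per 100,000
```

In practice the climate projections come as full distributions from many Global Climate Models, and the response functions carry statistical uncertainty, but the basic recipe is the same.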

I have written on the impact of weather on electricity consumption and rice yields. EI@Haas family member Wolfram Schlenker has written extensively on the impacts of weather on agricultural yields. Michael Greenstone has some fascinating work on heat mortality in the US and India. There has been a recent explosion in the literature on the impacts of weather on crime, conflict and labor productivity. These studies are important because they tell us how sensitive these sectors are to higher temperatures. There is a lively discussion about what we can and cannot learn from these models.

While I get excited about these studies, I was not fully aware of how limited their coverage is in terms of sectors and geographical areas. Enter Solomon Hsiang. As many readers of this blog know, Tom Steyer (founder of Farallon Capital), Hank Paulson (former treasury secretary and CEO of Goldman) and Michael Bloomberg (founder of Bloomberg and former mayor of NYC) recently released their "Risky Business" report, which studies the projected impacts of climate change on the US economy. They hired Solomon and coauthors to quantify the impacts of climate change on the US economy using econometrically estimated response functions for different sectors. Their background report is truly illuminating (and full of frightening, yet beautiful graphs). They scoured the literature for available response functions and incorporated them in their analysis. If they did not find high quality estimates for a sector, they did not include it in their study. Below is a schematic of what was and was not included in the analysis. The blue symbols are areas that were included:

[Figure: Schematic of sectors included (blue) and excluded (grey) in the Risky Business analysis]

The grey symbols in the box of impacts are areas that were not included in the analysis because there aren't any well-estimated response functions available. This is really worrisome. We are missing good estimates for some very important sectors: water supply and demand, morbidity, extreme weather impacts, livestock, crops other than the big four, and so on. And this is just for the United States!

Are these important issues we should be working on? Well, greetings from the state of California. If the current drought is any indication of what is to come by the end of the century, it would be useful to have a well-estimated weather elasticity of water demand – maybe even by sector. Morbidity anyone? Does damage only count if one drops dead? If you look at the projected impacts in this most excellent report, we see that it is high time we get to work. We have spent the vast majority of our time worrying about agriculture, writing hundreds of papers estimating impacts for a very limited number of sectors, while largely ignoring energy, productivity and storm damages. And more importantly, there are few to no studies for a significant number of other climate-sensitive sectors.

To make that point a little bit more bluntly, look at the figure of total impacts by sector for the end of the century:

[Figure: Projected total impacts by sector at the end of the century]

What this indicates is that we expect relatively small and well-estimated impacts for agriculture (green) and crime (red) (which is what the literature has focused on) and big yet uncertain impacts for labor (yellow), energy (orange) and coastal damages (blue) (which have been largely ignored by the econometric literature so far). What is even more worrisome are the sectors that do not appear on these graphs.

On the spectrum of econometric difficulty, these studies are not that hard to do. Even I work on them. Funders and researchers should make these a priority. The required weather data are easily accessible. All we need to do is identify the relevant outcomes and collect high quality data on them. Let’s get to work. If you’d like to collaborate, drop me a line. I’m ready.


Cap-and-Trade’s Moment of Truth

With the looming expansion of its cap-and-trade program to transportation fuels like gasoline, California is fast approaching a significant moment of truth for its climate policy. This has some people nervous, and there are growing rumblings of proposals to delay, perhaps permanently, the expansion of the cap to transportation fuels. While many of the backers of these proposals profess continued support for AB 32 and its goals, such a stance (pro-AB 32, anti fuels under the cap) is not logically consistent.

California's Global Warming Solutions Act (AB 32) was passed in 2006. As is so common with ambitious policy initiatives, AB 32 immediately generated a lot of excitement, while initiating a slow process that deferred much of the costs till later. It was a feel-good moment, but now the time to pay part of the bill is nearly upon us.

Amongst the pro-AB 32 community, there have always been two competing narratives. The first, the "it's all good!" narrative, has been that AB 32 would be nothing but positive for California's environment and its economy. Since CO2 regulations are inevitable, according to this narrative, by moving first California can get a jump on other states and countries and gain a competitive advantage in a new carbon-controlled world. All of the new policies directed at reducing CO2 emissions would stimulate innovation and growth in new industries.

A second narrative, which is more persuasive to me and many other environmental economists, is that there are indeed some costs to imposing limits on carbon emissions, but that those costs are far outweighed by the risks of climate change and the pressing need to do something. There would obviously be growth in some industries (biofuels, solar panels) but there would also be (likely modest) costs imposed on many others. Those of us in this second camp have always been nervous about emphasizing economic benefits, since real change would require a continued commitment even when it’s not all good. An overemphasis on the economic benefits risks a collapse of support when the public is confronted with any bad news.

The first narrative was dominant for many years in California. Each step in this process has contained an element of trying to shield customers, either psychologically or literally, from the costs of the regulations, and therefore from the costs of their contribution to global greenhouse gases. Part of the appeal of AB 32 programs like the renewable portfolio standard, and the low-carbon fuel standard, is that those programs emphasize subsidizing clean energy, rather than the costs they create for dirtier energy. As readers of this blog will know, cap-and-trade in its pure form puts a price on carbon. Indeed, that's sort of the point. So far, however, the impact on consumers and business has been greatly muted by the extensive free allocation of allowances to almost every business currently under the cap.[1]

At each major decision point, there have been signs of second-thoughts, but the public mood was dominated by a sense it was necessary to continue along the path set in 2006. We are now approaching the most significant milestone to date for the program. Gasoline and other transportation fuels (and much natural gas usage) are set to go under the cap starting in January 2015. This will more than double the amount of CO2 emissions covered under the program.

This is a major milestone in many ways. Transportation fuels are one of the very few sources of CO2 emissions that will not be offset with the free allocation of allowances. Just about every serious person who has thought about this expects that the costs of the carbon that is contained in petroleum-based fuels will be reflected in retail prices.[2] While the suite of other policies rolled out under AB 32 will certainly have an impact on electricity, auto, and (yes) even gasoline prices, expanding the cap to transportation fuels will constitute the most explicit link of carbon-costs to consumer prices yet seen in California.


May 2, 2008 – Source: David McNew/Getty Images North America

This looming impact on transportation fuel prices has had a lot of people worried for quite some time, and the intensity is growing. In February Darrell Steinberg proposed taking fuels out from under the cap and instead taxing them. More recently a bill sponsored by Henry Perea would delay expanding the cap to fuels until 2018. The fuels industry itself has periodically argued for either more allocations of allowances or some kind of alternative to the cap.

There are different motivations at play here. Steinberg cited concerns over uncertainty in environmental costs, but by offering a tax he recognized a role for pricing the carbon in fuels. Perea appears mainly focused on the costs of gasoline, considering the 10-15 cents per gallon that the cap could add to fuel costs to be an unacceptable burden for drivers.

However, if you were a believer in AB 32 in 2006, nothing has really changed. If we want to make even a tiny dent in CO2 emissions we need to begin addressing transportation fuels, the largest single category of such emissions in California. We knew this in 2006 and it is still true today. There is also a logical disconnect between continued support for AB 32 and opposition to capping transportation fuels. If the sole concern is the costs imposed on drivers, policies like the low-carbon fuel standard are likely to have a greater impact than the cap.


One fact that is not widely appreciated is the critical role that transportation fuels have been given in the design of the cap-and-trade program. One of the best aspects of California's market design has been the relatively tight price-collars (price floors and ceilings) it has placed on CO2 prices. But these price-collars depend upon the ability of the ARB to inject allowances into, or withdraw them from, the market to maintain a relatively stable price. In essence, fuels provide the slack that allows the cap-and-trade market to balance.

You can't just pull fuels out of the cap and expect the rest of the system to work just fine without them. It wasn't designed that way. As we wrote last week, the most likely outcome is that allowance prices would settle in close to the floor price. But if fuels are taken out, that floor price would be very vulnerable. The ARB maintains the integrity of the floor price through its ability to reduce the number of permits sold in its quarterly auctions. The problem is that almost all of the permits outside of transport are already spoken for, having been freely allocated to various industries. It is very plausible that the remaining market could be left with excess allowances that would continue to be injected into the market, since they have already been allocated to the various remaining industries.

Taking fuels out from under the cap, and taxing them instead would reduce the uncertainty of carbon costs on fuels, but it would increase the uncertainty for everything else at the same time. That makes no sense. Fuels are important, but given their high visibility on street corners and in the news-media, policy discussions too often focus solely on fuel prices at the expense of other aspects of the economy.

If it's uncertainty we are really worried about, then let's take steps to reduce that uncertainty by reinforcing, or even tightening, the allowance price-collar mechanisms that are already in place. Adopting the recommendations in the Market Simulation Group report would be a good start. If we are really worried about any increase in gasoline prices, well, that is effectively saying that we are unwilling to take the steps California committed to in 2006.[3]

California’s experiment with climate policy has always been about trying to set a model for the rest of the country and world. It is also a learning experience.  Maybe one of the most important lessons we need to learn is whether the public, and our political leadership, can tolerate policies that are transparent in their impact on energy costs.  If not, then we will be left with a suite of less efficient, more feel-good policies that impose only hidden costs.

[1] Really close followers of the blog will note that the exogenous free allocation of permits benefits the firms that receive them but probably not their customers. However, the bulk of allowances allocated in California have either been allocated through output-based updating, which can keep the CO2 price from filtering down, or given to regulated utilities whose regulators are requiring that the allowance value is passed on to consumers.

[2] Because of the way ethanol is treated under the cap, the 10% of our fuels that are biofuels will not carry a carbon cost. Neither will most refinery emissions, which will be offset by the allocation of allowances. Also, because it is one of the few sources of emissions for which allowances have not been freely allocated, the auctioning of allowances associated with fuels will provide the bulk of the revenues generated by the cap-and-trade program.


What’s the Worst That Could Happen?

[This post is co-authored with my three collaborators in cap-and-trade work for the California Air Resources Board: Frank Wolak, Jim Bushnell and Matt Zaragoza-Watkins.]


California’s year-and-a-half old cap-and-trade market for reducing greenhouse gases (GHGs) has drawn renewed interest over the last few weeks, since the EPA announced its initiative for limiting greenhouse gases from existing power plants.  California’s program is seen by many as a model of how state-level policies can lower GHGs without imposing undue costs on the economy.  This is heartening to supporters of the program who argued that by going first California could develop a market-based system for emissions reduction that could be a model for other regions and countries.

Of course, being on the “bleeding edge” of any movement means learning from risks and mistakes as well as from successes.  So far, California’s program has operated smoothly and is mostly viewed as a success, though the price has stuck pretty close to the effective price floor, $10-$11 for an “allowance” that covers one ton of CO2 emissions.  Some stakeholders, however, have expressed concerns with how the market will perform in the future, particularly after January 1, 2015 when emissions from transportation fuels, residential natural gas combustion and some other sources are added to the market.

The oil refining industry and some lawmakers have argued that the market could see volatile allowance prices, which would feed through to the price of gasoline and to other GHG-intensive products.  Another set of stakeholders have worried that certain market participants might be able to manipulate the price of allowances to their advantage.

The final report of the Market Simulation Group to the California Air Resources Board – released today by the Energy Institute and the Air Resources Board – examines these concerns.  The members of the MSG are Severin Borenstein, Jim Bushnell, and Frank Wolak working with Energy Institute graduate student Matt Zaragoza-Watkins.  In 2012, we were enlisted by the ARB to “stress test” the market design.

The report focuses on what might still go wrong in the market, not on the many thoughtful design features that have reduced or eliminated myriad other problems that could have otherwise occurred in such a complex market.   Our group recognizes the years of careful planning that have gone into creating the foundation for an efficient and robust market.  While we find that some risks remain, we conclude by recommending a couple of straightforward adjustments that could address those risks.  We believe that with these changes, California’s course for addressing climate change with a market mechanism will indeed be a model for other regions and countries.


[Figure: The price of allowances in California's cap-and-trade market since trading began. Graphic by Climate Policy Initiative with interactivity and updates at CalCarbonDash.org.]


A Sidebar on Including Transportation Fuels in Cap and Trade

Before we get too far into the details of market rules and trading strategies, we want to note that our report has direct implications for the recent debate over whether or not to include gasoline in the cap-and-trade program starting on January 1, 2015, as is now prescribed by state law.  Our estimates imply that by far the most likely effect of including transport fuels in the program next January will be to raise California gasoline prices about 10 cents per gallon.[1]  That barely registers in the context of gasoline price gyrations.

To put that in context, the average Californian uses about 40 gallons of gasoline and diesel per month (directly in their own cars and indirectly through transportation of products they purchase), so this will raise their cost of living in California by about $4 per month. That's not nothing, but in the long list of costs and benefits of living in California, it isn't one of the major factors. And unlike when crude oil price spikes drive up gasoline prices, the cap-and-trade revenue is going back to Californians through state expenditures of the funds.
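The arithmetic behind those figures, using only the numbers cited in the post, is easy to verify:

```python
# Back-of-the-envelope check of the gasoline cost figures in the text.
tons_per_gallon = 0.008   # GHG content of a gallon of CA gasoline (see footnote 1)
allowance_price = 12.0    # $/ton, roughly the current allowance price
gallons_per_month = 40    # average Californian's direct + indirect use

cost_per_gallon = tons_per_gallon * allowance_price
cost_per_month = cost_per_gallon * gallons_per_month

print(f"~${cost_per_gallon:.2f} per gallon, ~${cost_per_month:.0f} per month")
# -> ~$0.10 per gallon, ~$4 per month
```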

Finally, the program has been designed around the 2015 expansion to include transport fuels.  Changing the program at this point to exclude fuels from cap-and-trade would be extremely disruptive to the market and could exacerbate the risk of volatile and high allowance prices.


[Figure: California gasoline prices since 1995 (constant 2014 dollars). Source: U.S. Energy Information Administration]


Now Back to Our Study

Our report evaluates the possible outcomes of a competitive allowance market and the potential for non-competitive outcomes, such as market manipulation.  Importantly, we incorporate the uncertainty in economic growth, GHG intensity and other factors that would determine the “business as usual” path of emissions, which is the starting point for analyzing a GHG market.  Previous estimates have not accounted for this uncertainty.

We find that the most likely outcome is a competitive allowance market that yields a low allowance price. But we also find real risks that allowance prices could rise to much higher levels due to combinations of low CO2 abatement, strong economic growth, and possible market manipulation. Such disruptive price spikes could create a backlash against cap-and-trade markets, reminiscent of how the California electricity crisis virtually stopped electric industry restructuring in the U.S. Along with nearly all the people we've spoken with about the market, we would hate to see a disruption in California's cap-and-trade program slow the pace of adopting market mechanisms to address climate change.

In response to these risks, the report recommends two changes to the market that would greatly reduce the probability of disruptive price spikes, which we explain below.  Any regulatory change in California requires a lengthy legal process, but we think the changes we propose are relatively straightforward.


Forecasting Long Run Supply/Demand Balance in the Cap-and-Trade Market

We first examine the likely supply/demand balance through 2020, the period for which the rules of the program have been set.  The most likely outcome in the overall market, we conclude, will be allowance prices at or just slightly above the price floor.  We also find, however, that in less likely, but plausible, scenarios in which the market tightens and the price starts to rise, there would probably be relatively little additional GHG abatement available.  Thus, if the California economy were to grow strongly and boost GHG emissions – not the most likely outcome, but certainly not unimaginable — we could see allowance prices jump to much higher levels, most likely in the later years of the program.

Our median estimates suggest that without changes to the program there is an 18% chance that prices would eventually rise high enough to trigger the Allowance Price Containment Reserve (APCR). The APCR adds some additional allowances to the supply if the price exceeds a trigger level that increases from $40 in 2013 to $56 in 2020 (all adjusted for inflation to 2013 dollars). The same estimates suggest a 6% chance of depleting all the allowances in the APCR, which would then allow prices to go much higher.[2]

Severin’s previous blog posts, in May and September of last year, addressed this risk of price volatility in the cap and trade market, and the need for a credible price ceiling.  The first recommendation in the report is for ARB to adopt a firm and credible price ceiling by standing ready to make additional allowances available at the ceiling price – which they could do by borrowing from post-2020 emissions, purchasing allowances from other GHG markets, or otherwise expanding supply at the ceiling price.


Scarcity and Market Manipulation Risks in the Short Run

The final report also expands the analysis to examine the competitive supply/demand balance in the market during the earlier “compliance periods” – 2013-2014 and 2015-2017 – within which participants will accumulate allowance obligations by emitting GHGs and then have to acquire enough emissions allowances to cover them. We find that for each of the earlier compliance periods, prices are very likely – over 80% probability in nearly all scenarios — to remain at or near the price floor.

But, as with the overall eight-year program analysis, we find that there are scenarios of strong economic growth — especially when paired with only modest reductions in emissions intensity — that could push up emissions.  In those cases, the market would be unlikely to demonstrate much price-responsive abatement over the short time available for such response.  The outcome could be substantial price spikes and potential disruption in the market.

Finally, we turn to the potential for anti-competitive behavior in the market.  We find a small but significant risk that some market participants could manipulate the price upwards by buying up more allowances than they need for a given compliance period and then “banking” some of them for use in future compliance periods, thereby creating an artificial shortage of allowances for compliance in the current period.  The report walks through in detail how this might be done and the difficulties a firm would face in carrying out such manipulation.  Banking is just saving and there are legitimate and important reasons for banking; the program needs a policy that undermines use of banking as part of a manipulation strategy.

In response to these risks of short-run shortages and market manipulation, we propose a change in market protocols.  Our second recommendation is to allow “vintage conversion”:  by paying a “conversion fee,” a firm would be permitted to use a later-year “vintage” allowance to meet a compliance obligation it faces in earlier years.[3]  We demonstrate that vintage conversion would greatly reduce the size of price spikes that could occur due to real or artificial short-run allowance shortages and it would undermine the incentive for participants to manipulate the market in order to create an artificial shortage.
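A stylized way to see why vintage conversion bounds price spikes (the function and numbers below are mine, purely for illustration): no compliant firm would pay more on the spot market than the price of a later-vintage allowance plus the conversion fee, so that sum acts as a soft ceiling on the current-period price.

```python
# Illustrative sketch: vintage conversion caps a firm's compliance cost at
# (later-vintage allowance price + conversion fee), whether the current
# scarcity is real or engineered by a would-be manipulator.

def effective_compliance_cost(spot, future_vintage_price, fee, conversion_allowed=True):
    if conversion_allowed:
        return min(spot, future_vintage_price + fee)
    return spot

# A squeezed spot market at $60/ton, future vintages at $15, a $10 fee:
print(effective_compliance_cost(60, 15, 10))         # -> 25
print(effective_compliance_cost(60, 15, 10, False))  # -> 60
```

In normal times the spot price sits below the conversion cost and the option is simply never exercised; the mechanism only binds when a shortage, real or artificial, emerges.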

California’s cap and trade market is working well so far and our analysis suggests there is a good chance that would continue without changes to the market.  But we also find real risks that the market could tighten and result in large increases in allowance prices.  Such increases would likely make the market politically unsustainable and, in any case, would damage the credibility of this cap and trade market, and probably others.  We urge the Air Resources Board to make the changes proposed in our report in order to greatly reduce or eliminate these risks.  Such changes would be very much in the spirit of pioneering a new path in environmental policy not just by celebrating the successes, but also by learning from the risks and possible failures.

A press release from the Energy Institute on the report can be found here

The final report can be found here as working paper #251


[1] Each gallon of California gasoline is counted as about 0.008 tons of GHG, so the current price around $12/ton translates to about $0.10 per gallon.

[2] To be clear, the 6% chance of exhausting the APCR is a subset of the 18% probability of triggering the APCR.

[3] The concept is very close to what is known as “borrowing” in the cap and trade literature.


Open Sourcing Risky Business

A lot of the policy discussion, and many of our blog posts, focus on the difficult task of trying to slow climate change. It’s useful to remind ourselves of the difficulties associated with NOT slowing climate change.

Last week, Michael Bloomberg, Hank Paulson and Tom Steyer released their Risky Business report quantifying the costs of climate change for the US economy. The report relies heavily on modeling work by one of EI@Haas’ own, Solomon Hsiang. Sol is a professor at UC Berkeley’s Goldman School of Public Policy and was a visitor at the Energy Institute this past semester. He was the lead economic author on the technical report that generated all of the results for the Bloomberg et al. glossy.


Photo from Risky Business Report

The technical report is notable on several dimensions, both the results it presents and the methodologies the authors use. The press has picked up on the bi-partisan leaders of the project, their call to apply risk management thinking to climate change, and the impactful charts and figures documenting the regional effects of climate change.

I was intrigued by a component of the report that hasn't gotten as much attention – its commitment to "open science." There's no hard and fast definition of that term, but the Open Science organization defines it as, "the idea that scientific knowledge of all kinds should be openly shared as early as is practical in the discovery process."

I have been involved in lengthy discussions about open science with my colleagues. Here are some of the tradeoffs in the context of the Risky Business report.

Underlying the report's fabulous pictures is an elaborate computer simulation model. Sol and his co-authors spent many, many hours developing their model, which they are calling SEAGLAS (Spatial Empirical Global-to-Local Assessment System). Developing a model like SEAGLAS involves everything from digging into the literature to come up with the best inputs to the model (e.g., the projections for temperature changes, impact of weather on electricity consumption) to writing lines and lines of code to combine all the inputs to generate useful outputs.

The benefits to embracing open science in this case are pretty clear. If Sol and his co-authors make the code underlying SEAGLAS accessible, other researchers can check the code for errors, stress test it by using it to generate results different from the ones in the original report, write modules to expand it, and generally improve its usefulness.

But, there is a potential cost. Academics are rewarded (e.g., promoted to tenure or given raises) almost purely on the number of original scientific publications they produce. So, the inclination of some people who develop models like SEAGLAS is to keep them under lock and key while they publish papers. After all, what’s the purpose of investing a lot of time in a well-constructed model if other researchers can use it to do cool stuff before you can? It’s like a company shielding its intellectual property, just that the currency is academic publications rather than cash. The risk is that we will get fewer valuable models created in the first place if researchers are expected to immediately make them open.

Much like Elon Musk at Tesla, Sol and his co-authors seem to be opening up the model vault. The report explicitly says, “We will be making our data and tools available online at climateprospectus.rhg.com. We hope others build on and improve upon our work in the months and years ahead.”

Sol and his co-authors are certainly not the first climate modelers to make their code public, and, in fact, how truly public it is remains to be seen. Posting undocumented code is almost useless to other researchers – it can take as much or more time to dissect someone else’s code as to write your own. On the other hand, some modelers have made an industry out of teaching people about their models and go to great lengths to make it accessible.

I am optimistic that Sol and his co-authors will provide useful information at the link above. After all, they’ve already made an early, public commitment to do that in the Risky Business technical report.

And, in terms of the inputs to the model, Sol sent us an email explaining that, “We have developed a new Distributed Meta-Analysis System (DMAS) that continuously and dynamically integrates new empirical findings (that are crowd-sourced from researchers around the world).” This means the inputs will be kept current and should represent consensus estimates from other researchers.
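The post doesn't describe how DMAS combines estimates internally; the standard building block for this kind of system is inverse-variance-weighted pooling, sketched here with hypothetical numbers (illustrative of the general technique, not necessarily DMAS's actual method):

```python
# Standard meta-analysis pooling: weight each study's estimate by the
# inverse of its squared standard error, so precise studies count more.

def pooled_estimate(estimates, std_errors):
    weights = [1.0 / se ** 2 for se in std_errors]
    est = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return est, pooled_se

# Three hypothetical estimates of a temperature-mortality slope:
est, se = pooled_estimate([0.40, 0.55, 0.35], [0.10, 0.20, 0.05])
print(f"pooled: {est:.3f} (SE {se:.3f})")
# -> pooled: 0.369 (SE 0.044)
```

As new crowd-sourced estimates arrive, re-running the pooling keeps the consensus input current, which appears to be the spirit of what Sol describes.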

We have a lot of work to do quantifying the impacts of climate change and identifying the best mitigation and adaptation strategies. With public-minded – not to mention superb – researchers like Sol, I’m optimistic about the progress we can make.


Swapping negawatts for megawatts under the EPA’s proposed Clean Power Plan

If you are embarking on a new weight loss plan, it typically makes sense to pursue a mixed strategy of diet and exercise. If you are working to get your finances under control, you should look for ways to increase your household earnings and reduce spending. And so it goes, as we work to reduce carbon emissions from the nation’s power sector, pursuing a mix of supply and demand-side strategies makes good sense.

By Auffhammer’s Yoga Theorem (or, for you purists, Le Chatelier’s principle), the outcome of an emissions regulation that supports compliance flexibility will be less costly than a rule that limits states’ compliance options.  Under the auspices of the Clean Air Act, the EPA has rolled out its metaphorical yoga mat and demonstrated some serious regulatory flexibility with its “outside the fence” approach to compliance.


“Outside the fence” basically means that states can look beyond their existing power plants for ways to reduce emissions cost-effectively. There is evidence to suggest significant untapped potential for cost-effective energy efficiency gains on the demand-side.  Allowing states to use demand-side efficiency improvements to meet their compliance obligations has the potential to significantly reduce compliance costs.  But with these potential cost savings come implementation challenges.

Factoring in the demand side

The EPA has defined a set of state-specific emissions standards based on a detailed assessment of state-specific “best system of emissions reductions” (more details can be found in the  proposed rule).   The EPA chose to define these standards in terms of emissions rates (i.e. tons CO2/MWh), although states can choose to convert their standard to a mass-based (tons of CO2) target.

A state that sticks with the rate-based standard would, for compliance purposes, calculate its emissions rate as follows:

Emissions rate = tons CO2 emitted / (MWh generated + negawatts)

The numerator is simply the tons of CO2 emitted from electricity generation. The denominator is the sum of electricity generation plus "negawatts". In principle, negawatts represent the electricity consumption that did not happen thanks to a demand-side efficiency improvement.

Looking at this equation, a state could bring its emissions rate into compliance using supply-side strategies that reduce emissions per MWh of electricity generated. Examples include supply-side operating efficiency improvements and increased reliance on less carbon-intensive generation sources. On the demand side of the fence, the state can pursue efficiency improvements that reduce the total quantity of electricity generated while increasing its negawatts.

How many megawatts in a negawatt?

Measuring carbon emissions and electricity generated is relatively straightforward because state-level emissions and electricity production are directly observable.  In order to measure negawatts you need to construct a credible estimate of something you cannot observe directly: the “counterfactual” electricity consumption that would have been observed absent the efficiency intervention. Recent studies have looked into how projected savings from energy efficiency programs compare with realized energy savings in a variety of settings.  Whereas some find that savings materialize as expected, others find that realized energy savings fall short of predictions (some recent studies can be found here and here). This can happen for a variety of reasons including mis-calibration of simulation models, sub-standard installation of efficiency measures, rebound, or a failure to fully anticipate free riding behavior.

If energy efficiency savings are over-estimated, the denominator in the above equation will be inflated. If a state's emissions are divided by an artificially large number, the stringency of the rate-based standard is effectively reduced. In other words, too many emissions will be permitted per unit of electricity generated. Importantly, damages from these increased emissions will work to offset, or even eliminate, the relative cost advantage of demand-side compliance options.
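To see how the arithmetic works, here is a small sketch with made-up numbers (the quantities are purely illustrative, not taken from the proposed rule): a state claims twice the negawatts it actually delivered, and its compliance emissions rate comes out lower than its true emissions intensity.

```python
# Hypothetical illustration of how over-estimated negawatts relax a
# rate-based standard. All numbers are assumptions for the example.

def emissions_rate(tons_co2, mwh_generated, negawatts_mwh):
    """Compliance emissions rate: tons CO2 per (MWh generated + negawatts)."""
    return tons_co2 / (mwh_generated + negawatts_mwh)

tons_co2 = 50_000_000            # tons CO2 emitted in the state
generation = 80_000_000          # MWh actually generated
claimed_negawatts = 10_000_000   # MWh of savings the state claims
realized_negawatts = 5_000_000   # MWh of savings actually delivered

claimed_rate = emissions_rate(tons_co2, generation, claimed_negawatts)
true_rate = emissions_rate(tons_co2, generation, realized_negawatts)

print(f"rate with claimed savings:  {claimed_rate:.3f} tons/MWh")
print(f"rate with realized savings: {true_rate:.3f} tons/MWh")
# The claimed rate is lower, so the state can appear compliant with a
# standard that its true emissions intensity actually exceeds.
```

The gap between the two rates is exactly the slack that phantom negawatts buy: a standard set anywhere between them is met on paper and missed in reality.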

Measurement matters

The moral of this story is *not* that we should throw outside-the-fence compliance flexibility out with the bathwater. The real punchline is that measurement matters. Over-estimates of energy savings make energy efficiency look artificially cheap relative to supply-side strategies. This mis-measurement undermines cost-effectiveness. Under a rate-based standard, it can also reduce stringency and increase emissions.

The EPA has indicated that it intends to develop guidance for evaluation, measurement, and verification (EM&V) of demand-side energy efficiency programs for the purpose of this rule. This is a daunting but important task. The good news is that there is lots of work being done by academic researchers, government agencies, and other stakeholders to inform this process and advance the state of the art. Providing clear guidance and resources to help states tackle these EM&V challenges will be key to realizing the full potential of outside-the-fence efficiency improvements.



EPA and climate regulation: Mind the gaps

Last week, following years of anticipation, the shoe finally dropped on EPA carbon regulations. Two things are notable in this. First, we're on the road to a national climate policy! Second, EPA is going out of its way to highlight how much flexibility it will give to states in implementing this policy. Traditionally, most regulations under the Clean Air Act have taken the form of "command and control" technology mandates for industries and equipment located inside dirty, "non-attainment" counties and regions. Economists have long complained about this and spoken of the virtues of more flexible, market-based approaches.


That’s why there has been so much excitement amongst the econ-wonk community about the EPA’s embrace of flexibility. Instead of rigid technology requirements, which really aren’t a great tool for dealing with CO2 in the electricity sector, states can find other ways to come up with equivalent reductions – like cap-and-trade!

So flexibility can be great, but there are some reasons to worry about too much flexibility when it comes to CO2 reduction strategies. While the announcement has had economists manning their whiteboards to extol cap-and-trade, it is not at all certain that “flexibility” will translate to “cap-and-trade.” The EPA is clear that it considers its regulation to be fundamentally based upon emissions rates (CO2/MWh) rather than limits on total CO2. This means an emissions rate (or emissions intensity) standard will likely be one of the flexible options. Emissions intensity standards, such as the low carbon fuel standard (LCFS) for fuels, are sort of the ugly stepsister to cap-and-trade. They attack environmental problems through rates, and have “trading” elements where one can comply with the rate standard, but they do not explicitly limit the total amount of pollution.

The complaint about intensity standards, as articulated by Stephen Holland, Jon Hughes, and Chris Knittel in their 438-part series “why we hate intensity standards,” is that while they raise costs on dirty sources that exceed the standard, they implicitly subsidize sources that are still dirty, but cleaner than the standard. Thus gas plants can receive implicit subsidies to replace the output of coal plants, even though gas plants still produce CO2.

Within a single state or region, this is dismissed by some as a lesser-of-evils, but there is a real potential for mischief if some states go one way (cap) and others go another way (rate standards). This is because an intensity standard subsidizes the output of plants cleaner than the standard.  Gas plants in Arizona could end up having their output subsidized by an Arizona standard to import power into California, displacing sources that would have otherwise been capped.  If the Arizona plants are dirtier than the California plants they displace, total emissions increase.
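The implicit-subsidy logic can be made concrete with a stylized calculation (the standard and credit price below are assumptions, not values from any actual program): under an intensity standard with credit trading at price p, a plant with emissions rate e effectively pays p × (e − standard) per MWh, which is negative (a subsidy) for any plant cleaner than the standard, no matter how much it still emits.

```python
# Stylized illustration of Holland, Hughes, and Knittel's point: an
# intensity standard acts like an emissions tax paired with an output
# subsidy. All numbers are hypothetical.

def implicit_charge_per_mwh(emissions_rate, standard, credit_price):
    """Effective per-MWh charge under a rate standard with credit trading.

    Positive for plants dirtier than the standard, negative (an implicit
    subsidy) for plants cleaner than the standard but still emitting."""
    return credit_price * (emissions_rate - standard)

standard = 0.7       # tons CO2/MWh (assumed standard)
credit_price = 30.0  # $/ton CO2 (assumed credit price)

coal = implicit_charge_per_mwh(1.0, standard, credit_price)  # dirty coal plant
gas = implicit_charge_per_mwh(0.4, standard, credit_price)   # cleaner gas plant

print(f"coal plant: {coal:+.2f} $/MWh")  # taxed
print(f"gas plant:  {gas:+.2f} $/MWh")   # subsidized, despite emitting CO2
```

This is the mechanism behind the Arizona story: a gas plant cleaner than its home-state standard is paid to generate more, even when every extra MWh it exports adds CO2 somewhere else.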

More generally, California is going to have to come to grips with how its internal climate policy will interact with those adopted by other states. While the expansion of CO2 regulation to other states would appear to reduce concerns about negative spillovers, new compatibility issues will be created. California will face decisions about which states to partner with, and how to design its policies to better adjust to those adopted by states it chooses not to partner with. If it joins other states under a common cap, California will have to confront the fact that its aggressive renewable electricity goals could simply create more headroom under the cap for other states to use fossil fuels (something I wish the city of Davis would appreciate). Some issues, such as leakage, could be mitigated if neighboring states adopt programs similar to California’s. However, neighboring states could also adopt compliance pathways that are inconsistent with California’s, and even exacerbate concerns such as leakage and reshuffling.

I don’t want to sound too pessimistic. The EPA’s actions are a tremendous development in the climate policy arena. Just a year ago it was common at conferences to hear statements like “there will be no national climate policy in the US in the foreseeable future.” The Obama administration is using the tools at its disposal to push the ball forward and I am glad they are doing it. However, those who thought California’s worries about leakage will be solved by these developments need to think again. We still need to mind the gaps.



Unlocking Cost Savings with Cap-and-Trade

Much was made last week about the flexibility of the EPA’s proposed new power plant regulations. According to EPA administrator Gina McCarthy, it “gives states the flexibility to chart their own customized path. There is no one-size-fits-all solution. Each state is different so each state’s path can be different.”

Perhaps most importantly, states would be able to meet their required carbon dioxide reductions by starting a cap-and-trade program, or by joining an existing cap-and-trade program like the one that is part of California’s AB32. This approach raises legal and administrative challenges, but it also could dramatically reduce costs by incorporating emissions reductions from other sectors.

For decades, economists have emphasized the efficiency gains associated with cap-and-trade and other market-based environmental policies. The economic argument for cap-and-trade is right out of Econ 101. Every textbook teaches it the same way, and I have always thought it is the best way to understand how cap-and-trade works, so I put together a short video walking through it.

In the video, I draw marginal abatement cost curves for two sectors. If you’d like, you can think of “Sector A” as fossil-fuel power plants and “Sector B” as all other sectors. I then show how incorporating this additional sector with a cap-and-trade program could reduce costs substantially. Cap-and-trade moves abatement from high-cost sectors to low-cost sectors, thus reducing the total cost of achieving a given level of reductions.

By the way, none of this hinges on exactly what these abatement cost curves look like. In fact, with cap-and-trade we don’t even have to know, ex ante, where to find the cheap abatement. By putting a price on carbon dioxide emissions you create an incentive for abatement in all sectors of the economy.  The price efficiently allocates abatement across and within sectors, and spurs innovation in new and sometimes unexpected technologies.
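For readers who want the textbook argument in numbers, here is a minimal sketch with linear marginal abatement cost curves (the slopes and the abatement target are invented for illustration; they are not estimates for any real sector). Trading shifts abatement from the high-cost sector to the low-cost one until marginal costs are equal.

```python
# Stylized two-sector version of the Econ 101 cap-and-trade argument,
# with assumed linear marginal abatement costs MAC_i(q) = a_i * q.

def total_cost(a, q):
    # Total cost of abating q units is the area under MAC(q) = a*q.
    return 0.5 * a * q ** 2

a_A, a_B = 4.0, 1.0   # Sector A (power plants) is the high-cost abater
Q = 100.0             # total required abatement (assumed)

# No trading: each sector is simply assigned half the total abatement.
cost_no_trade = total_cost(a_A, Q / 2) + total_cost(a_B, Q / 2)

# Trading: abatement reallocates until marginal costs are equal,
# i.e. a_A * q_A = a_B * q_B with q_A + q_B = Q.
q_A = Q * a_B / (a_A + a_B)
q_B = Q - q_A
cost_trade = total_cost(a_A, q_A) + total_cost(a_B, q_B)

print(f"cost without trading: {cost_no_trade:.0f}")
print(f"cost with trading:    {cost_trade:.0f}")
```

With these made-up slopes, the low-cost sector ends up doing four times the abatement of the high-cost sector, and total cost falls by more than a third for the same overall reduction. The shape of the savings depends on the curves, but the direction never does: equating marginal costs can only lower total cost.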

And while the theory is compelling, it is also important to emphasize that cap-and-trade is not some unattainable ideal that only exists in the classroom. Think back to 1990, when President George H.W. Bush signed into law the EPA’s cap-and-trade program for sulfur dioxide.  Between 1990 and 2004, sulfur dioxide emissions decreased by 36% — yielding $50 billion annually in benefits at a cost of less than $2 billion per year (here).  Economists have estimated that, relative to technology standards, the program reduced costs by as much as 90% (here).

We have practical experience with nitrogen oxides too. EI@Haas faculty Meredith Fowlie and Catherine Wolfram, along with EI@Haas alumnus Chris Knittel (MIT), have a paper measuring the potential cost savings from a cap-and-trade program for NOx (here). It turns out that NOx abatement from power plants is relatively expensive, like “Sector A” in the video, while abatement from cars and other mobile sources is relatively cheap, like “Sector B” in the video. Consequently, they find that there would be enormous cost savings from equating marginal costs across the two sectors.

As Max explained with his “Yoga Theorem” last week, the less flexible you are, the more you will suffer.  This is exactly the case with the EPA’s proposed regulations. Nobody can predict with certainty exactly how much these regulations would cost, but economic theory is extremely clear on the benefits of making these policies as multi-sector, and as price-based as possible. In practice, every time we have implemented market-based environmental policies, they have ended up costing less than expected and there is every reason to believe this would work again for carbon dioxide. 
