Proposals to Eliminate Natural Gas from the Fuel Mix Are Premature

Natural gas is commonly called a “bridge” to a low carbon future. Why this metaphor?

A bridge crosses over an obstacle, like a river or canyon. The metaphor suggests that transitioning from coal to natural gas for electric generation is one of the most cost-effective and scalable opportunities for cutting greenhouse gases on our way to the promised shore of even lower emissions. This is especially true in the US, which has plentiful low-cost natural gas resources. The idea is that natural gas can carry the US economy in the short term while higher impact carbon mitigation solutions are still too expensive on a large scale. At some point, though, we need to reach the other side of the natural gas bridge so we can continue our journey with even lower carbon solutions like renewable energy, next-generation nuclear, or carbon capture and sequestration.

Politico Morning Energy recently reported that a major environmental group, the Sierra Club, doesn’t want to cross the natural gas bridge at all. They are organizing an aggressive campaign to stop the construction of natural gas power plants and pipelines. In the Sierra Club’s view there’s no river or canyon in our way. The US just needs to make the leap directly to a low carbon future, abandoning fossil fuels as quickly as possible. The Sierra Club believes that if the US and other countries cross the natural gas bridge, the world is headed toward a climate catastrophe.

With the impending turnover in the US executive branch, and possible changes in the legislative and judicial branches, policymakers need to critically evaluate these two visions of the future. Is natural gas a bridge to a low carbon future that should be supported? Or will natural gas take us somewhere we don’t want to go – a greenhouse gas point of no return?

Argument 1: We’re Already on the Natural Gas Bridge

MIT’s 2010 Future of Natural Gas report illustrates how the natural gas bridge could work. The authors, led by now-Secretary of Energy Ernest Moniz, developed several scenarios of future energy supply and demand out to 2050. One scenario assumed that price-based policies are used to achieve a 50% reduction in US greenhouse gas emissions by 2050 relative to 2005 levels. This scenario found that natural gas demand would increase through 2040, then begin to slowly decline.

The shift from coal to natural gas has already pushed down US energy-related carbon dioxide emissions by 12% between 2005 and 2015, as Lucas Davis discussed in a recent blog. We’re already on the natural gas bridge.

Changes in the relative market prices of coal and natural gas, driven by the shale gas revolution, provide much of the explanation. However, policy is also influencing the competitive standing of natural gas generation. As evidence, the EIA reports that 30% of the nation’s coal capacity that closed in 2015 shut down in April. That was the month the EPA’s Mercury and Air Toxics Standards went into effect.

Would the two major presidential candidates continue over the natural gas bridge?

For Donald Trump the concept is moot, since he has no interest in moving to a low carbon future.

Hillary Clinton, on the other hand, supports policies that would take the US further across the natural gas bridge. In particular she wants to implement the Clean Power Plan (CPP). Trump wants to kill it.

Modeling by the EIA estimates that the CPP’s greenhouse gas reduction requirements would boost natural gas generation by 10% by 2040 relative to a scenario with no CPP.

Source: Duke Energy, H.F. Lee Energy Complex combined-cycle plant, Goldsboro, NC.

The Sierra Club, however, wants to take a different path altogether. They enthuse about Clinton’s aggressive renewable goals, such as her pledge that half a billion solar panels will be installed by the end of her first term.

Their preferred path is more consistent with the Deep Decarbonization Pathways laid out in a study conducted by Energy and Environmental Economics (E3), Lawrence Berkeley National Laboratory and Pacific Northwest National Laboratory. This study models reducing US emissions to 80% below 1990 levels by 2050. The study includes four scenarios. In two, natural gas is all but gone from the electricity mix in 2050. In another scenario the market share of gas is cut in half. In the final scenario, natural gas remains important, but only with carbon capture and sequestration.

Proponents point to California as evidence that a rapid transition away from natural gas is realistic. Natural gas consumption for electric generation in California decreased by 3% between 2014 and 2015. This drop occurred despite the state’s drought, which led to a 16% drop in hydroelectric generation. The growth in renewable generation provides much of the explanation.

If the US is headed down one of these paths, then the Sierra Club’s strategy to stop the construction of new natural gas power plants and pipelines could save society money. It’s worth considering, because it would mean we are potentially wasting billions of dollars building a natural gas bridge headed to the wrong place.

Reality: Stopping Natural Gas Could Benefit Coal

I find the Sierra Club strategy troubling.

The displacement of coal generation by natural gas generation is a highly cost effective way to reduce greenhouse gas emissions. Even without a nationwide carbon policy, the US is seeing widespread replacement of coal with natural gas.

Recent research looking at the period from June 2008 to the end of 2012 found that the degree to which natural gas replaced coal varied by region. In areas where more natural gas power plants had been built during the prior five years, greenhouse gases from power generation dropped more since there was more natural gas capacity available to come on-line and compete with coal. The Sierra Club’s “Beyond Natural Gas” strategy would retard the continued displacement of coal by natural gas.

I am also skeptical that the California example is relevant to the US as a whole. The nation is much more reliant on coal than California. California also has unusually attractive solar, wind, and geothermal resources. I expect replicating California’s move away from natural gas would be much less cost-effective elsewhere. Also, electricity intensive industry in other states would strongly oppose policies that pushed electric rates up toward California levels.

Rather than categorically declaring natural gas a loser, the US should stick to market-based policies that prioritize the most cost-effective climate solutions. In the near term, that likely means the US needs to continue its way across the natural gas bridge. Anyone who suggests otherwise is trying to sell us a … well, you know.


Is Cap and Trade Failing Low Income and Minority Communities?

Pollution – like income – is unequally distributed. In fact, pollution exposure is more unequally distributed than income in the U.S. for some pollutants.

Refinery in Wilmington, CA. Credit: Luis Sinco/LA Times

Exposure to pollution-related health risks, accumulated over a lifetime, can have real impacts on outcomes that matter (such as health, education, productivity, and income). So neighborhoods that are more exposed to these risks are disadvantaged in more ways than one.

California has made it a priority to ensure that new environmental regulations improve conditions in these communities.  But a new report from the USC Program for Environmental and Regional Equity (PERE) suggests that these efforts might not be working as far as the state’s greenhouse gas (GHG) emissions trading program is concerned. The report emphasizes the preliminary nature of the findings and stops short of definitive conclusions. But in media coverage, op-eds, blogs, and press releases, some provocative implications are being drawn. For example, the California Environmental Justice Alliance concludes:

“(this report) demonstrates that polluters using the cap and trade system are adversely impacting environmental justice (EJ) communities. The system is not delivering public health or air quality benefits, not achieving local emissions reductions, and it is exporting our climate benefits out of state.”

When the stakes are so high, and when preliminary evidence appears incriminating, it’s tempting to conclude we should change course. But it’s important to keep in mind that these are preliminary findings, and that the GHG policy under fire is not intended to regulate the kinds of pollutants that cause local damages.

How are EJ communities faring under cap and trade?

Economists like cap and trade programs because they harness market forces to seek out the lowest-cost emissions reductions. Environmental justice advocates are quick to point out that the cost-minimizing outcome need not be the equity-maximizing outcome. Who wins and who loses will really depend on how the program is implemented and where the lowest cost pollution reductions can be found.

Against this backdrop, a careful assessment of how low income and minority communities are being impacted by California’s emissions regulations is important. But it’s also complicated.  Here are three issues I think we need to get a handle on before we can address this question:

(1) Cap and trade compared to what?  To figure out whether low income communities are faring better or worse under cap and trade, we need a clear sense of what we are comparing against. Researchers looking at the very same data can reach very different conclusions depending on their benchmark.

Research assessing the equity of impacts under cap and trade programs often uses more traditional, prescriptive regulation as a basis for comparison. For example, some co-authors and I looked at emissions under Southern California’s RECLAIM trading program, a regional cap and trade program for criteria pollutants. We compared emissions across all RECLAIM facilities in the first five years of the program against a matched group of facilities that remained under the prescriptive regulations that RECLAIM replaced. We find that RECLAIM delivered significant emissions reductions, which appear to be equitably distributed over this time period. A recent working paper extends our analysis to more carefully account for pollution dispersion patterns. These authors find that RECLAIM may have disproportionately benefited some minority households. Analyses of other cap and trade programs have found similar results, such as this study, which documents an equitable distribution of net benefits under the SO2 emissions trading program.

Getting back to California’s GHG emissions trading, the PERE study compares in-state GHG emissions at regulated facilities during the first two years of the program (2013-2014) against emissions at those same facilities in the preceding years (2011-2012). The figure below shows the average change in emissions by sector. Positive values indicate higher emissions, on average, during the post-implementation period.


     Change in Emitter Covered GHG Emissions by Industry Sector (N=314 facilities)

Large covered facilities in these sectors are disproportionately located in disadvantaged neighborhoods.  So apparent increases in emissions in some sectors, together with the purchase of offsets, are  being used to support the claim that California’s cap and trade program is making things worse in disadvantaged communities.

Before reaching this conclusion, it’s important to remember that these kinds of pre-post comparisons can confuse the effects of a policy change with the effects of other factors that are also changing over time. For example, the graph below shows annual growth in GSP (gross state product) for California (gold) and all US states (blue). The graph shows how the rate of growth in California’s economic production increased as the GHG CAT program took effect in 2013 (in absolute terms and relative to the rest of the country).  With this increased industrial production comes increased emissions.


The PERE study highlights an in-state emissions increase at regulated sources as the state economy continued to recover. But before we can draw meaningful conclusions about the impacts of GHG emissions trading versus other factors, we need credible estimates of what emissions would have looked like under some plausible policy alternative.

(2) Cap and trade as a means to what end?  So far we’ve been focusing on GHG emissions from regulated sources. But what we ultimately care about is the damages caused by this pollution.

In the case of GHGs, the link between regulated cause and local health effect is indirect. In contrast to criteria pollutants, GHG emissions have no direct, local health impacts. Climate change damages depend on global concentrations.  Importantly, it’s the damages from local “co-pollutants” that EJ communities are concerned about. In other words, the current debate about the injustice of GHG emissions trading is fundamentally concerned with the adequacy of other policies that regulate other (local) pollutants.

The PERE study does not estimate how changes in GHG emissions at covered sources have translated into local exposure to co-pollutants and associated health impacts. It also sidesteps a key question: why are we using GHG regulations to tackle local pollution problems?

(3) Cap and trade… and transfer:  Unlike prescriptive emissions regulations, cap and trade programs can generate revenues through the sale of tradable emissions permits. These revenues can be redistributed in a way that addresses equity concerns. Under California’s GHG emissions trading system, some revenues are used to improve health and economic opportunity in disadvantaged communities. As of December 2015, 51 percent ($469 million) of the California Climate Investments had been allocated to projects that provide benefits to disadvantaged communities. And it is anticipated that the energy bills paid by low income consumers should fall, on average, thanks to climate credits and low-income energy assistance.

These transfers are not accounted for in the PERE analysis, but they should factor into an assessment of whether disadvantaged communities would be better off or worse off under cap and trade as compared to more prescriptive policy alternatives.

Barking up the wrong tree?

A defining advantage of market-based regulations is the cost savings they can deliver over more prescriptive regulations. A defining EJ concern is that the market – versus the regulator –  determines where emissions reductions will happen. California’s cap and trade (and transfer) approach is trying to strike a balance between sending price signals that reflect GHG emissions costs and improving conditions in disadvantaged communities.

The PERE report highlights trends in in-state emissions and the use of offsets which warrant further investigation. But it does not provide a basis for abandoning cap and trade in favor of direct regulation, as some have suggested. We need more studies using better data and clearly defined benchmarks to understand how climate change policies are impacting outcomes we care about. Perhaps more importantly, we need to confront the question of whether inequalities in exposure to local emissions should be addressed by distorting climate change policy or by strengthening regulation of local pollutants.






Trash those incandescent bulbs today!

When it comes to lighting, I’m no early adopter.  For the last 20 years, I’ve annoyed my energy efficiency friends by arguing that those curlicue compact fluorescent bulbs (CFLs) were overhyped. The light quality is still inferior; they still warm to full brightness too slowly; and the claims of 10-year life are vastly overstated.

And then when they burn out (after a year or two) we are supposed to wrap them in a cloth and drive them to the local hazardous waste disposal site, because they contain mercury.  Sure they use less electricity, but they don’t offer the value to get most people to switch.

So I hope I have the cred to convince you that now is the time to trash (almost) all of the incandescent bulbs that are still lighting your house and replace them with light-emitting diode (LED) bulbs. If you are like me, you probably hesitate to throw away a perfectly good working bulb, but you should. Really. Don’t fall victim to the sunk cost fallacy. Both your wallet and the environment will thank you.

A standard LED bulb now costs only about $3, less if you buy in bulk or live in an area where they are subsidized by the local utility.  And the LED uses 8.5 watts to produce the same amount of light as a 60-watt incandescent.  The Department of Energy generally calculates costs based on assuming a light bulb is used 3 hours per day, but let’s be super conservative and assume it’s only used one hour a day. And let’s assume you pay the average residential retail rate for electricity in the U.S., 12.73 cents per kilowatt-hour. If that’s the case, then in the first year you would save $2.39, 80% of the purchase cost.

That’s in the first year. These bulbs are touted to last for more than 20 years (at which point it is just fine to throw them in the regular trash). The spreadsheet I keep of every light bulb in my house (yes, I really do, which is how I know that my old CFLs lasted 1-2 years on average) shows that none of the LED bulbs I’ve installed, going back to 2009, has yet failed. As long as the LED lasts even a bit over a year, screwing it in today and throwing away the working incandescent bulb will still save you money. (If you want to do your own calculation with different assumptions, here is a spreadsheet to figure out the savings and payback period.)
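For readers who prefer code to a spreadsheet, here is a minimal sketch of the same savings-and-payback arithmetic, using the figures above (a 60-watt incandescent, an 8.5-watt LED, a $3 bulb, one hour of use per day, and 12.73 cents per kWh). It is a stand-in illustration, not the linked spreadsheet itself; plug in your own numbers.

```python
# Minimal sketch of the bulb-replacement math, assuming the post's figures.

def annual_savings(watts_old=60.0, watts_new=8.5, hours_per_day=1.0,
                   price_per_kwh=0.1273):
    """Dollars saved per year from swapping one bulb."""
    kwh_saved = (watts_old - watts_new) / 1000.0 * hours_per_day * 365
    return kwh_saved * price_per_kwh

def payback_years(bulb_cost=3.00, **kwargs):
    """Years until the LED pays for itself."""
    return bulb_cost / annual_savings(**kwargs)

print(round(annual_savings(), 2))                   # ~$2.39 saved per year
print(round(payback_years(), 1))                    # ~1.3 years to pay back the $3 bulb
print(round(annual_savings(hours_per_day=3.0), 2))  # ~$7.18/year at DOE's 3-hours/day assumption
```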

And the numbers will be better than that if you have the light bulb on more than one hour per day.  But if you know you use a particular bulb very infrequently — the one in the cellar that you only turn on for a few minutes every week or two — you might skip that one and focus on the bulbs that get regular use.  In fact, you could save all the incandescents that you removed from the other fixtures in your house to replace that one in the cellar every few years for the next century or two.  (Though you may have a better use for storage space than that.)


The basement incandescent that is hardly ever used can stay

On the other hand, if you live in a high cost area such as California, where the electricity you save could very well cost 30 cents per kilowatt-hour or more, you either throw away nearly all the incandescent bulbs today or you are throwing away real money every day.

But you aren’t just saving money with LEDs, you’re also saving energy and the planet.  Sure, some energy does go into making an LED, but that is a small fraction of the $3 cost. Compare that to the $2.39 (or more) savings each year, which is all energy.  The math for saving energy is even more compelling than it is for saving money.[1]

So if this is such a no-brainer, why are you still reading this article instead of replacing your incandescent bulbs?

“LED bulbs have gotten less expensive, but a year from now they will be even cheaper, so by delaying I will save even more money.” LEDs are indeed going to get cheaper, but not fast enough to justify waiting. You will save so much in the first year after you replace an incandescent that unless LEDs are going to be virtually free a year from now (they aren’t), you’d still be better off doing it today rather than waiting.

“I really like the light quality from the traditional incandescent bulbs. I don’t like LEDs as much.” With the old CFLs, the difference was so obvious that even your hipster nephew who always wears sunglasses could tell the difference. Distinguishing LED from incandescent lighting is much more difficult, and the difference is much less likely to bother you. Still, if you are a photophile (the lighting equivalent of the audiophile who can’t stand listening to an MP3 file because it has less complexity than the larger digital file on a CD) then ignore everything I’ve said. You should get out there and scoop up all the incandescent bulbs the rest of us will be throwing away.

“My parents taught me `waste not, want not’.  I just can’t throw away a working lightbulb.”  OK, if it will make you feel better, put them all in a box and store them away until you meet the photophile from the previous paragraph.  But seriously,  that view misses the big point: by not wasting the visible and tangible incandescent bulb, you are instead wasting electricity, which may be invisible, but still uses more of the world’s scarce resources than the bulb.

“It’s a hassle to replace a lightbulb.  I know it costs more money, but I’m putting it off until it burns out and I have to replace it.”  Fair enough.  But waiting a year until it burns out is going to cost you a few dollars in additional electricity while saving you just the interest you earn over a year by delaying the purchase of the LED bulb, which at today’s interest rates is practically nothing.  Replacing all of the incandescents in your house is likely to save you $50 per year or more.  Of course, nearly all of us procrastinate on some task that would save us that kind of money.  Still, next time you have the ladder out to replace one bulb take the opportunity to do them all.  And then you won’t have to do it again for many years.

While you are at it, should you also throw away all those curlicue CFLs?  Not unless the light is really bugging you. They save almost as much money as an LED, until they burn out that is (and you have to deal with disposing of them).  Just don’t buy any more. Actually, that’s not really a concern: LEDs have taken over the market so completely that many stores no longer sell CFLs.

Decades after CFLs were supposed to displace the incandescent bulb, LEDs have finally done it!

I’m still tweeting interesting energy news articles, research, and stats @BorensteinS


[1] Yes, I am implicitly assuming that you and the LED bulb manufacturer pay the same price for electricity. That’s not right — industrial customers pay about 45% less on average — but energy is such a small part of the cost of manufacturing LED bulbs that even doubling it won’t come close to matching the energy savings in your home.


Addicted to Oil: U.S. Gasoline Consumption is Higher than Ever

August was the biggest month ever for U.S. gasoline consumption. Americans used a staggering 9.7 million barrels per day. That’s more than a gallon per day for every U.S. man, woman and child.

The new peak comes as a surprise to many. In 2012, energy expert Daniel Yergin said, “The U.S. has already reached what we can call ‘peak demand.’” Many others agreed. The U.S. Department of Energy forecast in 2012 that U.S. gasoline consumption would steadily decline for the foreseeable future.

Source: Constructed by Lucas Davis (UC Berkeley) using EIA data ‘Motor Gasoline, 4-Week Averages.’

This seemed to make sense at the time. U.S. gasoline consumption had declined for five years in a row and, in 2012, was a million barrels per day below its July 2007 peak. Also in August 2012, President Obama had just announced aggressive new fuel economy standards that would push average vehicle fuel economy to 54 miles per gallon.

Fast forward to 2016, and U.S. gasoline consumption has increased steadily four years in a row. We now have a new peak. This dramatic reversal has important consequences for petroleum markets, the environment and the U.S. economy.

How did we get here? A number of factors contributed, including the Great Recession and a spike in gasoline prices at the end of the last decade, which are unlikely to be repeated any time soon. But the rebound should come as no surprise. With incomes increasing again and low gasoline prices, Americans are back to buying big cars and driving more miles than ever before.

Gas is cheap and Americans are back in their cars and trucks. viriyincy/flickr, CC BY-SA

The Great Recession

The slowdown in U.S. gasoline consumption between 2007 and 2012 occurred during the worst global recession since World War II. The National Bureau of Economic Research dates the Great Recession as beginning December 2007, exactly at the beginning of the slowdown in gasoline consumption. The economy remained anemic, with unemployment above 7 percent through 2013, just about when gasoline consumption started to increase again.

Economists have shown in dozens of studies that there is a robust positive relationship between income and gasoline consumption – when people have more to spend, gasoline usage goes up. During the Great Recession, Americans traded in their vehicles for more fuel-efficient models, and drove fewer miles. But now, as incomes are increasing again, Americans are buying bigger cars and trucks with bigger engines, and driving more total miles.

Gasoline Prices

The other important explanation is gasoline prices. During the first half of 2008, gasoline prices increased sharply. It is hard to remember now, but U.S. gasoline prices peaked during the summer of 2008 above US$4.00 a gallon, driven by crude oil prices that had topped out above $140/barrel.

Gasoline prices in Washington D.C. top $4 a gallon in 2008. brownpau/flickr, CC BY

These $4.00+ prices were short-lived, but gasoline prices nonetheless remained steep during most of 2010 to 2014, before falling sharply during 2014. Indeed, it was these high prices that contributed to the decrease in U.S. gasoline consumption between 2007 and 2012. Demand curves, after all, do slope down. Economists have shown that Americans are getting less sensitive to gasoline prices, but there is still a strong negative relationship between prices and gasoline consumption.

Moreover, since gasoline prices plummeted in the last few months of 2014, Americans have been buying gasoline like crazy. Last year was the biggest year ever for U.S. vehicle sales, with trucks and SUVs leading the charge. This summer Americans took to the roads in record numbers. The U.S. average retail price for gasoline was $2.24 per gallon on August 29, 2016, the lowest Labor Day price in 12 years. No wonder Americans are driving more.

Can Fuel Economy Standards Turn the Tide?

It’s hard to make predictions. Still, in retrospect, it seems clear that the years of the Great Recession were highly unusual. For decades U.S. gasoline consumption has gone up and up – driven by rising incomes – and it appears that we are now very much back on that path.

This all illustrates the deep challenge of reducing fossil fuel use in transportation. U.S. electricity generation, in contrast, has become considerably greener over this same period, with enormous declines in U.S. coal consumption. Reducing gasoline consumption is harder, however. The available substitutes, such as electric vehicles and biofuels, are expensive and not necessarily less carbon-intensive. For example, electric vehicles can actually increase overall carbon emissions in states with mostly coal-fired electricity.

Americans are buying less fuel-efficient vehicles.

Can new fuel economy standards turn the tide? Perhaps, but the new “footprint”-based rules are yielding smaller fuel economy gains than expected. Under the new rules, the fuel economy target for each vehicle depends on its overall size (i.e., its “footprint”), so the shift toward trucks, SUVs and other large vehicles relaxes the overall stringency of the standard. So, yes, fuel economy has improved, but much less than it would have without this mechanism.

Also, automakers are pushing back hard, arguing that low gasoline prices make the standards too hard to meet. Some lawmakers have raised similar concerns. The EPA’s comment window for the standards’ midterm review ends Sept. 26, so we will soon have a better idea what the standards will look like moving forward.

Regardless of what happens, fuel economy standards have a fatal flaw that fundamentally limits their effectiveness. They can increase fuel economy, but they don’t increase the cost per mile of driving. Americans will drive 3.2 trillion miles in 2016, more miles than ever before. Why wouldn’t we? Gas is cheap.

This blog is available on The Conversation


I’m Not Really Down with Most Top Down Evaluations

Lunches at Berkeley are never boring. This week I had an engaging discussion with a colleague from out of town who asked me what I thought about statistical top down approaches to evaluating energy efficiency programs. In my excitement, I almost forgot about my local organic Persian chicken skewer.

For the uninitiated, California’s Investor Owned Utilities (the people who keep your lights on…if you live around here) spend ratepayer money to improve the energy efficiency of their customers’ homes and businesses. Think rebates for more efficient fridges, air conditioners, lighting, and furnaces. The more efficient customers are, the less energy gets consumed, which is especially valuable at peak times. For doing this, the utilities get rewarded financially for energy savings produced from the programs. The million kWh question of course is how much do these programs actually save? I’m glad you asked.


Multiple Ways of Looking at Energy Efficiency

The traditional way is to take the difference in energy consumption between the old and new gadget. If you’re really fancy, you adjust the estimated savings downward by a few percent to account for free riders like me, who send in energy efficiency rebates for things they would have bought anyway. These so-called “bottom up” analyses have been shown to provide decent estimates of what is possible in terms of savings, but completely ignore human behavior. Hence, when tested for accuracy, bottom up estimates have over and over again been shown to overestimate savings. There are many factors that contribute to this bias, but the most commonly cited one is the rebound effect.

Another way of course, as we have so often advocated, is using methods that have their origin in medical research. For a specific program, say a subsidy for a more efficient boiler, you give a random subset of your customers access to the subsidy and compare the energy consumption of people who had access to the program to that of the customers who didn’t. These methods have revolutionized (in a good way) the precision and validity of program evaluations. My colleagues at the Energy Institute are at the forefront of this literature and are currently teaching me (very patiently) how you do these. I am always a bit slow to the party. These methods are not easy to implement and require close collaborations with the utilities and significant upfront planning. But that is a small price to pay for high quality estimates that allow us to make the right decision as to whether to implement programs that cost ratepayers hundreds of millions of dollars.

A third option, which has given rise to a number of evaluation exercises, is called top down measurement. The idea here is to look at average energy consumption by households in a region (say census block group) for many such regions over a long time period and use statistical models to explain what share of changes in energy consumption over time can be explained by spending on energy efficiency programs. The proponents of these methods argue that this is an inexpensive way to do evaluation, the data requirements are small, the estimates can be updated frequently, and – maybe most importantly – that these estimates include some spillover effects (if your neighbor buys a unicorn powered fridge because you did). Sounds appealing.

The big problem with the majority of these studies is that they do not worry enough about what drives differences in the spending on these programs across households. I am sure you could come up with a better laundry list, but here is mine:

  • Differences in environmental attitudes (greenness)
  • Income
  • Targeting by the utilities of specific areas
  • Energy Prices
  • Weather
  • ….

What these aggregate methods do not allow you to do is to separate the effects of my laundry list from those of the program. Or in economics speak, they are fundamentally unidentified. No matter how fancy your computer program is, you will never be able to estimate the true effect. It’s in some sense like using an X-ray machine as a lie detector. In practice you are possibly attributing the effect of weather, for example, to programs. Cold winters make me want to be more energy efficient. It’s the winter, not the rebate that made me buy a more efficient furnace. Further, the statistical estimates are just that. They provide a point estimate (best guess) with an uncertainty band around it. And that uncertainty band, as Meredith Fowlie, Carl Blumstein and I showed, can be big enough to drive a double-wide trailer through.
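To make the identification problem concrete, here is a stylized simulation (my own toy example with made-up numbers, not drawn from any of the studies discussed). “Greenness” drives both program spending and lower consumption, so a naive top down regression of consumption on spending attributes the greenness effect to the program and overstates savings.

```python
# Toy illustration of the omitted-variable problem: greener households both claim
# more rebates and use less energy, so a naive regression of usage on spending
# overstates the program's true effect. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_effect = -0.2                                  # true savings per unit of program spending

greenness = rng.normal(size=n)                      # unobserved confounder
spending = greenness + rng.normal(size=n)           # greener households claim more rebates
usage = -1.0 * greenness + true_effect * spending + rng.normal(size=n)

# Naive top-down regression: usage on spending only (greenness omitted).
naive_slope = np.polyfit(spending, usage, 1)[0]

# A regression that also controls for the confounder recovers the true effect.
X = np.column_stack([np.ones(n), spending, greenness])
controlled_slope = np.linalg.lstsq(X, usage, rcond=None)[0][1]

print(f"true: {true_effect}, naive: {naive_slope:.2f}, controlled: {controlled_slope:.2f}")
# Typical output: the naive estimate is roughly -0.7, overstating savings by ~3x.
```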

Time to Stop Using 1950s Regression Models

So currently there is a lot of chatter about spending more ratepayer dollars on these studies, and I frankly think the majority will not be worth the paper they are printed on. To be clear, this is a problem with the method, not the people implementing them. What we have seen so far is that the estimates are often significantly bigger than bottom up estimates, which is sometimes attributed to spillover effects, but I just don’t buy it. I think we should stop blindly applying 1950s style regression models in this context.

I am also not advocating that everything has to be done by RCT. There are recent papers using observational data in non-experimental settings that try to estimate the impacts of programs on consumption. Matt Kotchen and Grant Jacobsen’s work on building codes in Gainesville, Florida is a great example. They do a very careful comparison of energy consumption by structures built pre- and post-building code and find significant and credible effects. Lucas Davis has a number of papers in Mexico that use regression techniques to back out the efficacy of rebate programs on adoption and consumption. Judd Boomhower has a nice paper on spillover effects. They all employ 21st century methods, which allow you to make causal statements about program efficiency. These can be much cheaper to do and produce credible numbers. Let’s do more of that and work closely with utilities on implementing RCTs. It’s been a great learning experience for me and a worthwhile investment!


Is the Regulatory Compact Broken in Sub-Saharan Africa?

(Today’s post is co-authored with Paul Gertler. Wolfram and Gertler direct the Applied Research Program on Energy and Economic Growth (EEG) in partnership with Oxford Policy Management. The program is funded by the Department for International Development in the UK.)

As we teach our students in econ 101, the prices of most goods and services reflect both demand and supply factors. So, to use a classic example, the price of snow shovels may go up during a blizzard, even if it costs no more to supply them when it’s snowing.


On the other hand, as we teach our students in regulatory economics 101, prices for regulated utilities are different. Their prices are driven almost purely by costs and, not just current costs, but costs incurred in the past that other businesses might write off as sunk.

In the textbook model, regulated utilities are what we call “natural monopolies.” They are supplying a good for which it makes the most economic sense to have a single supplier. This could be driven by the high fixed costs of building the transmission and distribution system to supply electricity, for example.

Regulated utilities are implicit signatories to what’s called the “regulatory compact.” Basically, the regulator gets to set prices for the utility, ensuring that the company won’t take advantage of its monopoly position to charge prices through the roof. And, the regulator requires that the company offer universal service to anyone who wants it at the regulated prices. In exchange, the company gets assurance that it will be allowed to collect revenues to cover reasonable costs of doing business.

In the US, this is formalized through decades of judicial and regulatory decisions, for example, describing “just and reasonable” rates and “prudently incurred” costs.

According to a fascinating report recently released by the World Bank, the regulatory compact appears seriously out of whack in Sub-Saharan Africa.

The figure below highlights the problem. Each bar reflects the situation in a single country. The red diamonds reflect the cash collected per kWh by the main electricity provider (most are vertically integrated monopolies), and the purple and green bars reflect the costs. Note that for all but two of the countries, the diamonds are to the left of the bars. This means that the companies’ revenues are not covering their costs.


But, who is breaking the deal? Are companies’ costs too high? Perhaps “imprudent” in some sense, maybe due to corruption? Or, are the local regulators setting prices that are too low? Or, is it some combination of the two?

It’s first worth noting that only some of the countries in Sub-Saharan Africa have regulatory agencies, and only a subset of those have any real power over prices, so we’re using the “regulatory” part of the “regulatory compact” broadly.

We recently saw these issues up close in Tanzania. As part of a DFID-funded research program on Energy and Economic Growth, we organized a policy conference in Dar es Salaam, together with our partners at Oxford Policy Management.

Tanzania’s local monopoly, the Tanzania Electric Supply Company (TANESCO), only collects revenues to cover 82% of its costs (14 out of 17 cents per kWh), based on the World Bank calculations above. TANESCO is a vertically integrated utility, and the government owns 100% of its shares.

According to people close to the company, the rate-setting process is highly politicized, so rates are poorly aligned with TANESCO’s claimed costs. They point out that on the day the new Minister of Energy was appointed, he announced his intention to initiate a rate cut.

On the other hand, the regulators seem to believe that TANESCO’s costs are not “prudently incurred,” though they didn’t use that phrase explicitly. They argue that the company is inefficient, and the rates would easily cover costs if they cut fat.

Ministry of Energy and Minerals, Dar es Salaam

It’s difficult to know who is right. Consider TANESCO’s recent experience procuring power from independent backup generators. Historically, over half of the country’s annual generation came from hydroelectric generators. In 2010, a severe drought led to persistent electricity shortages, so TANESCO signed several contracts for what’s been called “emergency” generation, including a contract with a company that owned two 50 megawatt diesel generators. Diesel prices were high in 2011 and 2012, though, and, under the contract, TANESCO had to pay the fuel costs. TANESCO’s losses during that period were reportedly more like 50% of their total costs.

Now, the company is saddled with debt from this period, but the regulator contends that the emergency generation costs were too high and is unwilling to raise rates to cover the accumulated debt. Also, the regulator increased rates by 40% in 2014, so may feel like it’s already done its part. Figuring out the right price for generation procured in an emergency is difficult, though. Presumably, the utility did not have much time to shop around. Then again, maybe it should have foreseen the emergency situation and planned to avoid it.

Tanzania, like many countries in the developing world, also experiences high levels of “nontechnical losses” (largely theft). So, even if rates are set to cover costs if most consumers pay, the companies will experience heavy losses. Theft appears to have a political component as well, though. This paper, by Brian Min and Miriam Golden, shows that nontechnical losses in India increase when elections are near.

The World Bank report divides each utility’s losses into four categories: underpricing (meaning the regulators are breaking the deal and setting prices lower than what would be required to cover reasonable costs), bill collection losses (meaning the utility bills for the consumption, but fails to collect), transmission and distribution losses (a combination of technical line losses above an acceptable limit and theft) and overstaffing (relative to a benchmark, suggesting the company’s costs are imprudently high). They do not attempt to identify other types of inefficiencies, such as purchased power costs that are too high.


Upgrading the distribution system in Tanzania

They find no underpricing in Tanzania – suggesting the regulators are upholding their side of the compact. They attribute 80% of the losses to bill collection and nontechnical losses – suggesting the company needs to improve their billing system and distribution network. The remaining 20% is due to over-staffing. In its current situation, though, TANESCO struggles to finance its ongoing operations, let alone the investments needed to achieve fewer billing and distribution losses, so more price increases may be needed in the short run.

The new Energy and Economic Growth program will sponsor research to address some of these key questions and issues. First, we suspect there are real costs in terms of economic growth and other development outcomes due to the kind of institutional breakdown documented in the World Bank report. We need to document the extent to which economic growth is constrained by unreliable power, for example. We aim to measure costs like this by collecting new data and conducting new analyses. Second, we will work with policymakers, regulators, the utilities and other stakeholders to learn about the best ways to improve the institutions.


If a Tree Falls in the Forest…Should We Use It to Generate Electricity?

Every summer vacation, we pack our tree-hugging family into the car and head for the Sierra Nevada mountains. In many respects, our trip this summer was just like any other year, complete with family bonding moments and awe-inspiring wilderness experiences:


I date myself with this reference (source)

But our 2016 photo album is not all happiness and light.  This year, we saw an unprecedented number of stressed and dying trees. Forest roads were lined with piles of dead wood.



These pictures break a tree hugger’s heart. But they barely scratch the surface of what has been dubbed the worst epidemic of tree mortality in California’s modern history. According to CAL FIRE, over 66 million trees have died since 2010. And it’s not over yet.

The underlying cause is climate change working through drought and bark beetles. Warmer winters and drier summers mean this pesky bark beetle has been reproducing faster and attacking harder.  Drought-stressed trees are more vulnerable to fungi and insects. The big-picture impacts are devastating.

Acres of dying trees raise fundamental questions about how to preserve and protect our national parks and forests in the face of climate change. These existential issues were at the heart of President Obama’s speech in Lake Tahoe last week. But the epidemic also raises some more material questions. This week’s blog looks at the heated debate over what to do with millions of dead trees in the forest.

 66 million trees and counting

I’m an economist, not a woody plant biologist, so I have a hard time thinking in terms of millions of trees. With some expert assistance, I made the following ballpark conversion from trees to some more familiar metrics.

  • 66 million trees hold approximately 68 million tons CO2e.[1] To put that in perspective, California emits about 447 mmt CO2e annually.
  • If all 66 million trees were used to make electricity at existing biomass facilities (a very unlikely scenario), this would generate about 38,600 GWh.[2] To put this in perspective, California’s biomass facilities generated 7,228 GWh (gross) in 2015. (The conversions behind these numbers are sketched below.)
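Here is a back-of-the-envelope sketch of those conversions, following the steps in footnotes [1] and [2] (roughly 1,800 green pounds per tree, 35% moisture content, carbon as half of dry weight, 3.67 tons of CO2e per ton of carbon, and about 1 MWh per bone dry ton). The results are approximate and sensitive to rounding and to short-ton versus metric-ton conventions.

```python
# Rough reconstruction of the footnote arithmetic; results are approximate and
# differ slightly from the post's figures because of rounding and ton conventions.

TREES = 66e6
GREEN_LBS_PER_TREE = 1_800
MOISTURE = 0.35            # fraction of green weight that is water
LBS_PER_TON = 2_000

green_tons = TREES * GREEN_LBS_PER_TREE / LBS_PER_TON   # ~59.4 million green tons
bone_dry_tons = green_tons * (1 - MOISTURE)             # ~38.6 million BDT
carbon_tons = bone_dry_tons * 0.5                       # ~19.3 million tons carbon
co2e_tons = carbon_tons * 3.67                          # ~70 million tons CO2e (post reports ~68)

gwh = bone_dry_tons * 1.0 / 1_000                       # ~1 MWh per BDT -> ~38,600 GWh

print(f"{bone_dry_tons/1e6:.1f} million BDT, {co2e_tons/1e6:.0f} Mt CO2e, {gwh:,.0f} GWh")
```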

Upshot is that 66 million dead trees is a big deal, no matter how you measure it.

There seems to be widespread – but not unanimous – agreement that leaving close to 40 million dry tons of wood (my rough estimate) in the forest will increase wildfire risk and intensity to unacceptable levels. So Governor Brown has declared a state of emergency and formed a tree-mortality task force to safely remove the dying trees, especially those that pose immediate danger. Having dragged these trees out of the forest, what to do with them? Right now, many trees are being burned in open piles or “air curtain incinerators”.

Wood burning in an air curtain incinerator

CAL FIRE plans to start running these incinerators 24 hours per day in the fall. Yikes. The thought of incinerating wood in the forest 24/7 raises the question: are we better off using these trees to generate electricity? Researchers, including some esteemed Berkeley colleagues and forest service scientists, have been collecting some of the information we need to answer this question.

Forest-fueled electricity generation – at what cost?

Teams of researchers have been documenting the costs of biomass generation versus “non-utilization” burning  (i.e., burning trees in the woods to reduce fire risk). The punchline: Unless trees are located quite close to biomass generation facilities, the cost of extracting the trees, processing the wood, and transporting it to biomass generation facilities exceeds the market value of the wood fuel for electricity generation.  And this market value is falling as biomass generators struggle to compete with low natural gas prices and falling solar and wind electricity generation costs.

Some stakeholders argue that current market prices and policy incentives are failing to capture all the benefits biomass generation has to offer. In particular, a growing body of research looks at relative environmental impacts. The table below summarizes some recent estimates of the quantity of pollution emitted per kg of dry wood across different wood burning alternatives:

Emissions (g/kg dry wood); the first column is CO2e, the remaining columns are local pollutants:

Biomass option                           CO2e    Local pollutants
Air curtain incineration                   –       –      –      –      –
Open pile burning                        1834     0.7    0.6    10     5
Open pile burning                        1894     7.5    5.0    62.5   3
Biomass to energy: gasification          1349     0.062  0.127  0.859  0.25
Biomass to energy: direct combustion     1349     0.111  0.028  0.768  0.45

Sources are here and the Placer County Biomass Program. Biomass-to-energy estimates assume trees are 40 miles from the site of generation.

The first thing to note is that the estimates of CO2e emissions from electricity generation (1349 g/kg) are lower than the emissions associated with burning wood in the woods, even though additional emissions are generated in the processing and transport of wood fuel. The reason is that these estimates are reported net of “avoided” CO2e emissions. In other words, researchers assume that if a kg of wood is used to fuel biomass generation, it will displace natural gas fired generation and the 506 g of CO2e emissions associated with that gas generation. So 1349 g ≈ 1856 g – 506 g.

It is standard to see avoided emissions from displaced electricity generation counted as an added benefit of biomass generation. Absent binding regulatory limits on GHG emissions, this can make sense. But in California, CO2e emissions are regulated under a suite of climate change policies, some of which are binding.  If the aggregate level of emissions is set by binding regulations, an increase in biomass generation will change the mix of fuels used to generate electricity, but not the level of CO2e emissions.

A quick walk in the policy weeds puts a finer point on this.  In California, an aggressive renewable portfolio standard (RPS) mandates the share of electricity generated by qualifying renewable resources (including forest-sourced biomass). So long as the RPS is binding, an increase in biomass generation will reduce demand for other qualifying renewable resources (such as wind or solar). But it should not reduce overall CO2e emissions from electricity if the biomass generation and the renewable resource it displaces are CO2e equivalent.

If avoided CO2e emissions are set to zero, the estimated CO2e emissions per kg of wood burned look fairly similar across non-utilization burning and biomass generation. In contrast, these alternatives differ significantly as far as harmful pollutants such as NOx and particulates are concerned. Aggregate emissions of these pollutants are not determined by mandated caps or binding standards.  And the quantity of pollution emitted per unit of wood burned differs by orders of magnitude across non-utilization versus electricity generation options.
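A tiny sketch of that accounting, using the figures above (the 1856 g gross and 506 g avoided-gas numbers, plus the open-pile row from the table): zeroing out the avoided-emissions credit, as the binding-cap argument suggests, makes biomass-to-energy and in-forest burning look similar on CO2e, while the local-pollutant gap remains.

```python
# Sketch of the avoided-emissions accounting discussed above, using the post's
# numbers (g CO2e per kg of dry wood). Not a full life-cycle model.

GROSS_BIOMASS_TO_ENERGY = 1856   # biomass generation, before the credit
AVOIDED_GAS_CREDIT = 506         # natural gas generation assumed displaced
OPEN_PILE_BURNING = 1894         # burning the wood in the forest instead

net_with_credit = GROSS_BIOMASS_TO_ENERGY - AVOIDED_GAS_CREDIT   # ~1350 (post reports 1349)
net_without_credit = GROSS_BIOMASS_TO_ENERGY                     # credit zeroed out

print(f"with credit:    {net_with_credit} vs open pile {OPEN_PILE_BURNING}")
print(f"without credit: {net_without_credit} vs open pile {OPEN_PILE_BURNING}")
```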

It is not clear how differences in these (and other) emissions translate into differences in health and environmental damage costs. But accounting for these environmental costs would presumably reduce the net cost of  biomass generation relative to the more polluting alternative.

Dead trees fuel biomass policy developments…

No matter how you measure it, there’s a lot at stake in California’s dead and dying trees. Some of the wood can be harvested for timber. Some of the wood will be left in the woods to provide benefits to soil and wildlife. But given the current trajectory, lots of wood will be burned.

Many of the forest managers and researchers I talked to despair that biomass generation facilities are closing down just as air curtain incinerators fire up. They feel strongly that more of this dead wood should be used to fuel electricity generation. In response to these kinds of concerns, the California legislature recently passed legislation to support biomass power from facilities that generate energy from wood harvested from high fire hazard zones.  The bill is awaiting the Governor’s signature.

Increased support for biomass generation (over and above existing climate change policies) makes sense if the benefits justify the added costs. On the one hand, burning more wood at biomass facilities will incur additional processing, transport, and operating costs. On the other hand, it will generate less local air pollution than non-utilization burning and may provide other benefits (such as reduced ancillary service requirements vis-à-vis intermittent renewables). Getting a better handle on these costs and benefits will be critical if we are going to make the best of this bad situation.



[1] Assuming a mix of conifer species (pine, Douglas-fir, true fir, cedar), we estimate 1,800 green pounds per tree. Multiplied by 66 million trees, that is 118.8 billion green pounds of wood, or 59.4 million green tons. If we assume 35% moisture content (dead trees have less moisture), we have 38.6 million BDT (bone dry tons). Multiply the dry tons by 0.5 to obtain a comparable weight of the entire tree’s sequestered carbon. This gets us to 19.3 million tons of carbon. Multiply tons of carbon by 3.67 to get the comparable weight in CO2e, and then convert to metric tons = 68.7 million tons. Thanks to Steve Eubanks, Tad Mason, and Bruce Springsteen for assisting with these calculations. All errors are mine.

[2] 1 bone dry ton generates approximately 1 MWh in existing biomass generation facilities.


Spying on You from Space

The chest thumping in economics about how big and cool our datasets are is becoming somewhat unbearable. Bigger is not always better. In fact, one of the many reasons why we love the field of statistics is that we don’t have to know everything about everyone, but we can infer information about the larger population based on a small (and hopefully random) sample. Big data were not useful when computers were essentially electrified wooden spoons holding hands being fed code and data on paper punch cards. Now that my current iPhone has a processor 10,000 times faster than the Mac Color Classic I wrote my undergraduate thesis on, there are few computational constraints and the opportunities are endless. I can connect to the Amazon Cluster and run my programs on thousands of computers at the same time. Many of the papers I read using big(ger) data, however, don’t really add a proportional amount of knowledge.

thinking machine

But, last week our former student Marshall Burke at Stanford jointly with the certified genius David Lobell and some colleagues in the Stanford Computer Science department published a paper in Science that still has me giddy. One of the big issues in trying to learn information about households is that you have to ask them questions. And that is really expensive. The 2010 US Census cost $13 billion. The World Bank spends millions on sending surveyors out across the world to learn about incomes, what homes look like, the health status of members of households, etc. Due to constrained budgets, you cannot ask everyone.

But, we are asking too few people, which leads to a devastating lack of knowledge. In the Burke-Lobell paper we learn that between 2000 and 2010, 25% of African countries did not conduct a survey from which one could construct nationally representative poverty estimates and close to half conducted only a single survey. This is problematic, since we are trying to eliminate poverty by 2030. If we don’t know where the poor are, this is going to be hard.


The paper proposes an approach that is likely going to provide high-resolution estimates of poverty at a tiny fraction of the costs of surveys.  The authors used the ubiquitous NASA imagery of earth at night, which shows night lights. Night lights are a decent indicator of energy wealth and higher incomes, since without electricity, no streetlights (usually). This is where the rest of us, me included, stopped our thinking. The Stanford brainiacs used the lower resolution night light data and a machine learning algorithm to look for features in the much higher resolution daytime imagery that predict night lights. They did not tell the machine what to look for in the way an econometrician would, but let the computer learn. The computer found that roads, cities, farming areas are features in the daytime imagery that are useful to predict night lights. The authors then discard the night lights data and use the identified features to predict indicators of wealth found in surveys. They show that the algorithm has very impressive predictive power (think Netflix challenge, but for poverty indicators instead of whether you chose the West Wing over the Gilmore Girls).
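For readers curious what the two-stage approach might look like in code, here is a highly schematic sketch of the idea. It is my own illustration, not the authors’ pipeline; the CNN feature extractor, image arrays, and survey data are hypothetical placeholders.

```python
# Schematic sketch of the two-stage "transfer learning" idea: a CNN tuned to
# predict night lights from daytime imagery supplies features, and a regularized
# regression maps those features to survey-measured wealth. Inputs are hypothetical.
import numpy as np
from sklearn.linear_model import RidgeCV

def extract_features(images, cnn):
    """Stage 1 idea: apply a CNN (already fine-tuned to predict night lights)
    to daytime tiles and stack its feature vectors."""
    return np.vstack([cnn(img) for img in images])

def fit_wealth_model(features, wealth):
    """Stage 2: ridge regression from image features to survey-measured wealth,
    so predictions can be made wherever imagery (but no survey) exists."""
    model = RidgeCV(alphas=np.logspace(-3, 3, 13))
    return model.fit(features, wealth)

# Usage, assuming a trained feature extractor `cnn` and hypothetical data arrays:
# X_surveyed = extract_features(images_at_survey_clusters, cnn)
# wealth_model = fit_wealth_model(X_surveyed, cluster_wealth)
# poverty_map = wealth_model.predict(extract_features(all_other_tiles, cnn))
```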

Once the model is trained, they use the daytime imagery to predict poverty indicators at a fine level of aggregation, covering areas we were totally missing before, and the maps are impressive. If you want to learn more about what they did, they made a video you can watch.

Satellite imagery has become a rapidly growing source of startups in the energy and retail sector. There are firms tracking ships carrying oil across the seven seas in real-time; there are firms tracking drilling activity at fracking sites; and, there are firms tracking the number of vehicles in shopping mall parking lots. All of these firms treat satellite imagery like the average American treats their TV: We watch the imagery presented to us. What Marshall et al. did here is leagues cooler. They combine two types of satellite imagery with some actual survey data to back out predictions of one of the most important economic indicators – poverty. This product is a game changer. I can’t wait to see the energy economic applications of this method.


King Coal is Dethroned in the US – and That’s Good News for the Environment

This is the worst year in decades for U.S. coal. During the first six months of 2016, U.S. coal production was down a staggering 28 percent compared to 2015, and down 33 percent compared to 2014. For the first time ever, natural gas overtook coal as the top source of U.S. electricity generation last year and remains that way. Over the past five years, Appalachian coal production has been cut in half and many coal-burning power plants have been retired.

This is a remarkable decline. From its peak in 2008, U.S. coal production has declined by 500 million tons per year – that’s 3,000 fewer pounds of coal per year for each man, woman and child in the United States. A typical 60-foot train car holds 100 tons of coal, so the decline is the equivalent of five million fewer train cars each year, enough to go twice around the earth.

This dramatic change has meant tens of thousands of lost coal jobs, raising many difficult social and policy questions for coal communities. But it’s an unequivocal benefit for the local and global environment. The question now is whether the trend will continue in the U.S. and, more importantly, in fast-growing economies around the world.

Health benefits from coal’s decline

Coal is about 50 percent carbon, so burning less coal means lower carbon dioxide emissions. More than 90 percent of U.S. coal is used in electricity generation, so as cheap natural gas and environmental regulations have pushed out coal, the carbon intensity of U.S. electricity generation has fallen. This is the main reason why U.S. carbon dioxide emissions are down 12 percent compared to 2005.

Perhaps even more important, burning less coal means less air pollution. Since 2010, U.S. sulfur dioxide emissions have decreased 57 percent, and nitrogen oxide emissions have decreased 19 percent. These steep declines reflect less coal being burned, as well as upgraded pollution control equipment at about one-quarter of existing coal plants in response to new rules from the U.S. Environmental Protection Agency.

Coal waits to be added to a train at the Hobet mine in Boone County, West Virginia. Jonathan Ernst/Reuters

These reductions are important because air pollution is a major health risk. Stroke, heart disease, lung cancer, respiratory disease and asthma are all associated with air pollution. Burning coal is about 18 times worse than burning natural gas in terms of local air pollution, so substituting natural gas for coal lowers health risks substantially.

Economists have calculated that the environmental damages from coal are US$28 per megawatt-hour for air pollution and $36 per megawatt-hour for carbon dioxide. U.S. coal generation is down from its peak by at least 700 million megawatt-hours annually, so the decline translates into roughly $45 billion per year in environmental benefits. The decline of coal is good for human health and good for the environment.
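That $45 billion figure follows directly from the per-megawatt-hour damage estimates. A quick check, using only the numbers quoted above:

```python
# Reproducing the ~$45 billion figure from the per-MWh damage estimates.
air_pollution_damage = 28      # $ per MWh
co2_damage = 36                # $ per MWh
generation_decline = 700e6     # MWh per year, decline from peak

annual_benefit = (air_pollution_damage + co2_damage) * generation_decline
print(f"${annual_benefit / 1e9:.0f} billion per year")   # ~$45 billion
```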

India and China

The global outlook for coal is more mixed. India, for example, has doubled coal consumption since 2005 and now exceeds U.S. consumption. Energy consumption in India and other developing countries has consistently exceeded forecasts, so don’t be surprised if coal consumption continues to surge upward in low-income countries.

In middle-income countries, however, there are signs that coal consumption may be slowing down. Low natural gas prices and environmental concerns are challenging coal not only in the U.S. but around the world, and forecasts from EIA and BP have global coal consumption slowing considerably over the next several years.

Particularly important is China, where coal consumption almost tripled between 2000 and 2012 but has slowed considerably more recently. Some are arguing that China’s coal consumption may have already peaked, as the Chinese economy shifts away from heavy industry and toward cleaner energy sources. If correct, this is an astonishing development, because China represents 50 percent of global coal consumption and previous projections had put China’s peak at 2030 or beyond.

A smoggy morning in Delhi, India. Anindito Mukherjee/Reuters

The recent experience in India and China points to what environmental economists call the “Environmental Kuznets Curve.” This is the idea that as a country grows richer, pollution follows an inverted “U” pattern, first increasing at low income levels, then eventually decreasing as the country grows richer still. India is on the steep upward part of the curve, while China is, perhaps, reaching the peak.
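For intuition, here is a purely stylized sketch of that inverted-U relationship. The functional form and the peak income are illustrative assumptions, not estimates from any study.

```python
# Stylized Environmental Kuznets Curve: pollution rises with income, peaks,
# then falls. Functional form and parameters are illustrative assumptions.
import numpy as np

income = np.linspace(1_000, 60_000, 200)                  # per-capita income, $
pollution = -(np.log(income) - np.log(20_000))**2 + 4     # inverted U, peaking near $20,000

peak_income = income[np.argmax(pollution)]
print(f"Pollution peaks around ${peak_income:,.0f} per capita in this toy example")
```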

Global health benefits of cutting coal

A global decrease in coal consumption would have enormous environmental benefits. Whereas most U.S. coal plants are equipped with scrubbers and other pollution control equipment, this is not the case in many other parts of the world. Thus, moving off coal could yield much larger reductions in sulfur dioxide, nitrogen oxides, and other pollutants than even the sizeable recent U.S. declines.

Of course, countries like China could also install scrubbers and keep using coal, thereby addressing local air pollution without lowering carbon dioxide emissions. But at some level of relative costs, it becomes cheaper to simply start with a cleaner generation source. Scrubbers and other pollution control equipment are expensive to install and expensive to run, which hurts the economics of coal-fired power plants relative to natural gas and renewables.

Broader declines in coal consumption would go a long way toward meeting the world’s climate goals. Globally, we still use more than 1.2 tons of coal per person annually. More than 40 percent of total global carbon dioxide emissions come from coal, so global climate change policy has correctly focused squarely on reducing coal consumption.

If the recent U.S. declines are indicative of what is to be expected elsewhere in the world, then this goal appears to be becoming more attainable, which is very good news for the global environment.
This blog post is available on The Conversation.


Fixing a major flaw in cap-and-trade

While many Californians are spending August burning fossil fuels to travel to vacation destinations, the state legislature is negotiating with Gov. Brown over whether and how to extend California’s cap-and-trade program to reduce carbon dioxide and other greenhouse gases (GHGs). The program, which began in 2013, is currently scheduled to run through 2020, so the state is now pondering what comes after 2020.

The program requires major GHG sources to buy “allowances” to cover their emissions, and each year it reduces the total number of allowances available – the “cap.” The allowances are tradeable, and their price is the incentive for firms to reduce emissions. A high price makes emitters very motivated to cut back, while a low price indicates that they can get down to the cap with modest effort.
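To see how a cap generates a price, consider a toy example: the allowance price settles at the marginal cost of the last ton of abatement needed to squeeze emissions under the cap. All numbers below are made up for illustration.

```python
# Toy illustration of how a cap produces an allowance price. The price settles
# at the marginal cost of the last ton abated to get emissions under the cap.
# All numbers are made up for illustration.
import numpy as np

bau_emissions = 100.0                      # million tons, business as usual
cap = 90.0                                 # million tons of allowances issued
required_abatement = bau_emissions - cap   # 10 million tons must be cut

# A crude upward-sloping marginal abatement cost curve: successive
# million-ton steps costing $1, $2, $3, ... per ton.
abatement_costs = np.arange(1, 51, dtype=float)

steps = int(required_abatement)
allowance_price = abatement_costs[steps - 1] if steps > 0 else 0.0
print(f"Allowance price: ${allowance_price:.0f}/ton")   # $10/ton with this cap

# A looser cap (say, 99 million tons) would require only one step of
# abatement and yield a $1/ton price - the "low price, little impact" outcome.
```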

Before committing to a post-2020 plan, however, policymakers must understand why the cap-and-trade program thus far has been a disappointment, yielding allowance prices at the administrative price floor and having little impact on total state GHG emissions.  California’s price is a little below $13/ton, which translates to about 13 cents per gallon at the gas pump and raises electricity prices by less than one cent per kilowatt-hour.

The low prices in the three major markets for GHGs mean little impact on behavior.

And it’s not just California. The two other major cap-and-trade markets for greenhouse gases – the EU’s Emissions Trading System and the Regional Greenhouse Gas Initiative in the northeastern U.S. – have also seen very low prices (about $5/ton in both markets) and scant evidence that the markets themselves have delivered emissions reductions. In fact, the low prices in the EU ETS and RGGI have persisted even after those programs effectively lowered their emissions caps to try to goose up the prices.

In all of these markets, some political leaders have argued the outcomes demonstrate that other policies – such as increased auto fuel economy and requiring more electricity from renewable sources – have effectively reduced emissions without much help from a price on GHGs. That view is partially right, but a study that Jim Bushnell, Frank Wolak, Matt Zaragoza-Watkins and I released last Tuesday shows that a major predictor of variation in GHG emissions is the economy.  While emissions aren’t perfectly linked to economic output, more jobs and more output mean generating more electricity and burning more gasoline, diesel and natural gas, the largest drivers of GHG emissions.

Accurately predicting California’s GSP 10-15 years in the future is extremely difficult.

Because it is extremely difficult to predict economic growth a decade or more in the future, there is huge uncertainty about how much greenhouse gas an economy will spew out over long periods, even in the absence of any climate policies (what climate wonks call the “Business As Usual,” or BAU, scenario).

If the economy grows more slowly than anticipated – as happened in all three cap-and-trade market areas after the goals of the programs were set – then BAU emissions will be low and reaching a prescribed reduction will be much easier than expected. But if the economy suddenly takes off – as happened in California’s boom of the late 1990s – emissions will be much more difficult to restrain. Our study finds that the impact of variation in economic growth on emissions is much greater than any predictable response to a price on emissions, at least for prices within the bounds of political acceptability.

California emissions since 1990 have fluctuated with economic growth.

Our finding has important implications for extending California’s program beyond 2020. If the state’s economy grows slowly, we will have no problem meeting the target, and the price in a cap-and-trade market will be very low. In that case, however, the program will do little to reduce GHGs, because BAU emissions will be below the cap. But if the economy does well, the cap will be very constraining and allowance prices could skyrocket, leading to calls for raising the emissions cap or shutting down the cap-and-trade program entirely.

Our study shows that the probability of hitting a middle ground — where allowance prices are not so low as to be ineffective, but not so high as to trigger a political backlash — is very low.  It’s like trying to guess how many miles you will drive over the next decade without knowing what job you’ll have or where you will live.
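A stylized Monte Carlo makes the point. In the sketch below, every parameter (starting emissions, growth rates, the cap) is an illustrative assumption rather than a number from our study, but it shows how uncertainty in growth alone scatters business-as-usual emissions both well below and well above any fixed cap.

```python
# Stylized Monte Carlo: growth uncertainty translates into a wide range of
# business-as-usual (BAU) emissions relative to a fixed cap.
# All parameters are illustrative assumptions, not numbers from our study.
import numpy as np

rng = np.random.default_rng(42)
years, draws = 10, 100_000

growth = rng.normal(loc=0.01, scale=0.03, size=(draws, years))  # annual GSP growth
bau = 400 * np.prod(1 + growth, axis=1)   # emissions after `years` of growth (MMT)
cap = 430                                 # allowances issued (MMT)

floor_binds = np.mean(bau <= cap)            # cap slack: price falls to the floor
ceiling_risk = np.mean(bau > cap * 1.05)     # cap binds hard: price spike risk
middle = 1 - floor_binds - ceiling_risk      # the narrow "just right" zone
print(f"Price at floor: {floor_binds:.0%}, spike risk: {ceiling_risk:.0%}, middle ground: {middle:.0%}")
```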

So, can California’s cap-and-trade program be saved? Yes. But it will require moderating the view that there is one single emissions target that the state must hit. Instead, the program should be revised to have a price floor that is substantially higher than the current level, which is so low that it does not significantly change the behavior of emitters. And the program should have a credible price ceiling at a level that won’t trigger a political crisis. The current program has a small buffer of allowances that can be released at high prices, but it would still have risked skyrocketing prices if California’s economy had experienced more robust growth.

The state would enforce the price ceiling and floor by changing the supply of allowances in order to keep the price within the acceptable range. California would refuse to sell additional allowances at a price below the floor. This is already state policy, but the floor is too low. California would also stand ready to sell any additional allowances that emitters need to meet their compliance obligation at the price ceiling.

Essentially, the floor and ceiling would be a recognition that if the cost of reducing emissions is low, we should do more reductions rather than just letting the price fall to zero, and if the cost is high, we should do less rather than letting the price of the program shoot up to unacceptable levels.
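Mechanically, the collar is simple: the state withholds allowances whenever the price would otherwise fall below the floor and sells any quantity demanded at the ceiling, so the market price is just the uncollared price clipped to that band. A minimal sketch, with hypothetical dollar figures:

```python
# Minimal sketch of a price collar: the state withholds allowances below the
# floor and sells unlimited allowances at the ceiling, so the market price is
# the uncollared price clipped to [floor, ceiling]. Dollar figures are hypothetical.
def collared_price(uncollared_price: float, floor: float, ceiling: float) -> float:
    """Allowance price after the state enforces a floor and a ceiling."""
    return min(max(uncollared_price, floor), ceiling)

floor, ceiling = 30.0, 60.0        # $/ton, hypothetical
for p in (5.0, 45.0, 150.0):       # low-, mid-, and high-demand outcomes
    print(f"uncollared ${p:>6.0f}/ton -> market ${collared_price(p, floor, ceiling):.0f}/ton")
```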

But should California’s cap-and-trade program be saved?  I think so.  My first choice would be to replace it with a tax on GHG emissions, setting a reliable price that would make it easier for businesses to plan and invest.  But cap-and-trade is already the law in California and with a credible price floor and ceiling it can still be an effective part of the state’s climate plan.

Putting a price on GHGs creates incentives for developing new technologies, and in the future might motivate large-scale switching from high-GHG to low-GHG energy sources as their relative costs change.  The magnitudes of these effects could be large, but they are extremely uncertain, which is why price ceilings and floors are so important in a cap-and-trade program.  With these adjustments, California can still demonstrate why market mechanisms should play a central role in fighting climate change while maintaining economic prosperity.

A shorter version of this post appeared in the Sacramento Bee on August 14 (online August 11).

I’m still tweeting energy news articles and studies @BorensteinS
