Why are California’s Gasoline Prices So High?

“What?” you may be saying. “Gas prices are lower than they have been in a long time.” That’s true, even in California, but that just reflects the collapse of world oil prices. And only partially. You see, while oil prices have been falling across the country, the gap between California gas prices and prices in the rest of the U.S. has climbed higher, and stayed high longer, than at any time in the last 20 years.

Why? I don’t know, but some people claim to, from consumer advocates arguing it’s collusion to industry representatives saying that it’s just a shortage of supply for California’s special cleaner-burning blend, known as CARB gasoline.

The figure above shows the difference between California’s average gas price and the U.S. average going back to 1995, when the state started requiring CARB gas.  For the decade from 2005 to the end of 2014, California’s retail price averaged about 31 cents above the national average.  That differential lines up well with the fact that our gas taxes were about 20 cents above the nationwide average during that time, and that making CARB gasoline adds about 10 cents a gallon to the cost.

On January 1, 2015 transportation fuels came under California’s Cap-and-Trade (CaT) program for greenhouse gas (GHG) emissions, as I discussed before.   It is now widely accepted that the CaT program should have, and has, increased gas prices by about ten cents a gallon.  Add that in, and we’d expect the differential between California and the rest of the country (where GHG emissions are still free) to average around 40 cents per gallon.
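The ten-cent figure is easy to sanity-check. Using EPA’s standard figure of roughly 8.9 kg of CO2 released per gallon of gasoline burned, and an assumed allowance price of about $12.50 per tonne (roughly where California allowances traded in early 2015 – an assumption on my part, not a number from the discussion above), the pass-through works out to about a dime:

```python
# Rough check of the cap-and-trade pass-through to gasoline prices.
# The allowance price is an assumption; the emissions factor is EPA's
# standard figure for gasoline combustion.
co2_tonnes_per_gallon = 0.00889   # ~8.89 kg CO2 per gallon burned
allowance_price = 12.50           # assumed $ per tonne CO2, early 2015

cost_per_gallon = co2_tonnes_per_gallon * allowance_price
print(f"Implied cost: ${cost_per_gallon:.3f} per gallon")  # ≈ $0.11
```

Any allowance price in the $11–14 range gives a number close to the widely accepted ten cents per gallon.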

That’s about where things were for the first month and a half of 2015, but then on February 18 a fire at Exxon’s Torrance refinery near LA shut down the plant’s gasoline production.   That refinery normally produces about 10% of the state’s CARB gasoline.  Since then, California’s gas price has averaged about 82 cents per gallon higher than the national average.  The extra 42 cent premium since February 17 totals up to nearly $4 billion in extra payments – more than $150 for every licensed driver in the state – and still growing.   As of yesterday, the average California price was 71 cents above the US, according to AAA.
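A back-of-the-envelope version of the “nearly $4 billion” figure is straightforward. The daily consumption, duration, and driver counts below are my own round-number assumptions, not figures from the price data:

```python
# Rough reconstruction of the "nearly $4 billion" estimate.
premium = 0.42            # $/gallon excess differential since Feb 18
gallons_per_day = 40e6    # assumed CA gasoline consumption, gallons/day
days = 235                # assumed length of the spike so far, in days
licensed_drivers = 24.5e6 # assumed number of licensed CA drivers

total_excess = premium * gallons_per_day * days
per_driver = total_excess / licensed_drivers
print(f"Total: ${total_excess/1e9:.1f} billion, or ${per_driver:.0f} per driver")
```

With these assumptions the total comes to roughly $3.9 billion, or a bit over $160 per licensed driver, in line with the figures above.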

The problem is worst in Southern California, where prices since mid-February have averaged 26 cents higher than in the North.   Over the previous decade, the North-South differential averaged around one cent.

High prices don’t necessarily mean that anyone is profiting unfairly or doing anything illegal.  Scarcity of a product drives up prices even in the most competitive markets.[1]

Events like the Torrance fire have caused price spikes in California before, but they generally have disappeared within 4-6 weeks, because that’s how long it takes to import CARB-specification gasoline from the many other refineries in the world that can produce it.  In 2012, when the Chevron refinery in Richmond had a major fire, prices jumped 50 cents for a couple of weeks, but within a month that excess differential was gone.   As the figure above shows, previous spikes have never lasted nearly as long as the current one.

So, this spike does suggest that something is amiss in this market.   Why is this spike so long lasting?  And what, if anything, should the state do about it?

$4 gas was common in Southern California this May

Some consumer advocates point to increased concentration among in-state producers of CARB gasoline in the last few years and allege these firms are now colluding to reduce competition.  But the evidence presented so far is thin, mainly just that refineries are making a boatload of money.   That could indeed be due to producers restricting the quantity they sell in order to boost prices, but it could instead just reflect refineries having insufficient capacity to replace the lost production capacity when one of the largest producers shuts down unexpectedly.  Either could cause the price to jump.  In a 2004 paper that Jim Bushnell, Matt Lewis and I wrote, we discussed competitive and non-competitive causes of high gasoline prices, how difficult it is to tell them apart, and policies that might address them.

Critics also point to the fact that California refineries have been exporting gasoline despite the high prices at home.  But not all of the gasoline made in our refineries can meet the strict specification for in-state sales.  Non-qualifying gasoline is regularly shipped from California to Nevada, Arizona, Mexico and other places with lower standards.   So, exporting gasoline doesn’t seal the deal on anti-competitive behavior.  Now if California refiners were exporting CARB-specification gasoline since February – or making a choice to produce less CARB gasoline — that would be much more difficult to reconcile with competitive behavior.

Nonetheless, while consumer advocates have not proven their case, their suspicions have merit.  With prices very sensitive to even a slight shortage, and with two companies producing about half the state’s CARB gasoline supply, it seems quite possible that firms might be able to make more money by making less CARB gasoline.  This could be particularly true when a supply shock like a large refinery fire has already tightened the market.  That doesn’t prove they are doing it, but it does – as the lawyers say – go to motive.

In the past, one response from the industry has been that such output restriction would just create an opening for imports of CARB gasoline that would steal their market share.  But that leads us to perhaps the biggest puzzle of the current price shock: where are the imports?  With California’s prices this high – regardless of whether due to real scarcity or insufficient competition among in-state producers — it seems there is ample money to be made bringing in CARB gasoline from afar, as has happened during past spikes.  Why isn’t that happening this time, or happening in sufficient quantity to bring California’s prices back in line with the rest of the country?

More than one of my environmentalist friends has responded to my concerns by asking what’s so bad about high gas prices.  After all, we need to move away from gasoline and this will help.  I think there are a couple reasons that this isn’t the way we want to get off gasoline.

First, high gas prices hurt lower-income working families, so if we were imposing high prices with, say, a carbon tax policy, I at least would want to pair it with some other tax relief for that group to help offset the higher cost of fuel.  This isn’t a government tax policy, just higher profits flowing to private companies, and there is no offsetting tax reduction.

Second, because California is a leader in all things enviro, our energy policies are scrutinized worldwide.  If our fuels policy is viewed as causing inexplicably high gasoline prices, that will undermine political support for similar policies in other jurisdictions.

A year ago, I was named a member of the California Energy Commission’s new Petroleum Market Advisory Committee, five industry experts charged with examining the state’s high and volatile gas prices, and suggesting policy responses.  Three weeks ago, I was appointed chair of the committee.  Working with CEC staff, I hope very soon to hold a workshop at which we can hear the views of all stakeholders – refiners, importers, retailers, consumer groups and others — and ask them detailed questions.   Such an open discussion will, I hope, bring more insight and common understanding than we have gotten from the media-targeted rhetoric that usually accompanies discussions of gas prices.

[1] If you buy a house just before the market rebounds and then sell it a few years later at a tidy profit, is that unfair?


VW’s Deepwater Horizon?

Last week one of the biggest environmental scandals since the Deepwater Horizon disaster made its way to somewhere near the bottom of page 11 of most major newspapers. VW admitted to systematically cheating on emissions tests of its Diesel vehicles. This might sound snoozy, until you read up on the details.

Vehicles across the US must satisfy emissions standards for criteria air pollutants (e.g., NOx, SOx, CO). California, of course, has the most stringent of these standards and enforces them for new and used vehicles. If you have an older car, you need to get it smog checked every few years to make sure your clunker is still clean enough to be allowed on California roads. It is for this reason that until recently the share of Diesel cars in California was extremely low, since almost no vehicles satisfied these stringent standards. In come the “clean Diesels,” pushed mainly by German manufacturers of both mass-market (e.g., VW) and luxury (e.g., Mercedes and BMW) vehicles. Diesel was finally salon-worthy! Look! It’s fuel efficient and clean! Many of my Birkenstock-wearing, dog-owning, El Capitan-summiting colleagues and graduate students ran out and traded in their Prii for the VW TDI wagon. So much space! So much torque! So much fuel efficiency! So much clean! Well, it turns out what sounded too good to be true was.

In a Lance Armstrongian feat of deception, VW has now admitted to having installed a piece of software called a “defeat device” that turns on the full suite of pollution control gadgets only when cars are being smog tested. As soon as you leave the testing station and head out for your Yosemite adventure with Fluffy barking in the back, your car emits 10-40 times (!!!!!!!!!!!) the amount of NOx you just reported on your smog check card. Just to put this in perspective – this is like that 215-calorie Snickers bar having 2,150-8,600 calories instead. The EPA will almost certainly sue VW. The penalties involved here are significant. The EPA can ask for $37,500 per violation – that is, per vehicle – which amounts to roughly $18 billion in fines. Plus there will likely be criminal charges filed against VW executives. Further, depending on whether these vehicles will continue to be sold in the US when all is said and done, this is a disaster for VW, as they rely heavily on the high fuel efficiency ratings of Diesels to satisfy CAFE.

In my eyes there are two interesting economic points to be made here. The first, maybe more headline-worthy, is trying to determine the optimal fine in order to deter other manufacturers from engaging in such behavior. An economist would argue that what we have here is a classic case of an externality. By selling the dirtier vehicles, VW exposed kids, adults and dogs to massive quantities of local air pollutants. VW is responsible and should be liable for this. Hence VW should correct this market failure by paying the full external costs it caused. This calculation would involve estimating the economic damages from this additional air pollution and passing the bill on to VW. My back-of-the-envelope calculation suggests that for the NOx portion this is about $232 per vehicle over three years (far from $37,500).

But there is a large law and economics literature on setting fines to achieve the optimal and efficient amount of deterrence. The problem with just passing on the external damages is that VW was not going to be caught with certainty. If the executives thought there was a 1% chance of getting caught, cheating would have looked far more worthwhile than if they thought they were certain to get caught. In this case, the penalty should be approximated by the external costs divided by the probability of getting caught, which of course would be significantly larger than the external costs alone. Getting the external costs right is hard to do (e.g., you need damage estimates for more pollutants than just NOx, and the damages vary across space), but it can be done with standard tools in the talented empirical economist’s toolkit.
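The deterrence logic can be sketched in a few lines. With the $232-per-vehicle damage estimate above, a 1% detection probability implies a fine two orders of magnitude larger than the damages alone (the detection probabilities here are purely illustrative assumptions):

```python
# Optimal-deterrence fine: external damages scaled up by the
# inverse of the probability of getting caught.
external_cost_per_vehicle = 232.0   # NOx damages per vehicle, from the text

def optimal_fine(damages, p_detect):
    """Fine that makes the expected penalty equal the damages caused."""
    return damages / p_detect

for p in (1.0, 0.10, 0.01):         # illustrative detection probabilities
    fine = optimal_fine(external_cost_per_vehicle, p)
    print(f"p = {p:>4}: fine = ${fine:,.0f} per vehicle")
# At p = 1% the fine is $23,200 per vehicle -- still below the $37,500
# statutory maximum mentioned above.
```

Note that even this scaled-up fine falls short of the EPA’s statutory maximum, which cuts against the view that $37,500 per vehicle is obviously excessive.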

The broader question is how did this happen? This is not one student cheating on an intermediate microeconomics exam and thinking (s)he would get away with it. This is the world’s largest car manufacturer intentionally deceiving the federal and state governments by gaming their enforcement strategy. While some cynic might remark that folks will always cheat when there’s a dollar to be made, I think we can rethink how we design regulations by building in evaluation from the get go.

Michael Greenstone, who spends his summers two doors down the hall, has thought a lot about this recently. In the US, we pass many of our major regulations based on ex ante cost-benefit analyses. Testifying on Capitol Hill, he recently made two suggestions that would significantly improve things. First, he argues that we should institutionalize the ex post review of economically significant rules “in a public way so that these reviews are automatic in nature”. Second, he argues that rules already in effect should start being reviewed using retrospective analysis. The relevant agencies should commit to changing or abandoning rules – or creating new ones – based on these evaluations.

The big issue here is, of course, who should review these policies? He argues in favor of creating a regulatory analysis division within the Congressional Budget Office. This division would conduct the regularly scheduled reviews as well as reviews at the request of lawmakers. I would go one step further and argue that these reviews should not be staffed only with government employees, but should also require the review and participation of independent academics. There is precedent for this model.

The certainty of independent review of policies and enforcement strategies significantly drives up the probability of detection, which would diminish the expected profits from cheating – by firms large and small. Plus, we are spending scarce public funds on environmental regulation. We should spend them on what works. And we need to figure out what that is.


Are We Too Fixated on Rural Electrification?

“Rural electrification” and “energy access” are catchphrases in many energy and development circles. Multilateral lending agencies, many NGOs and the UN are highlighting the 1.3 billion people who currently do not have electricity in their homes. For example, of the UN’s 17 Sustainable Development Goals, number 7 is to “Ensure access to affordable, reliable, sustainable and modern energy for all.” Similarly, the UN and the World Bank launched the Sustainable Energy For All initiative in 2011, whose name basically defines their vision.

Building out the grid in Kenya

Electricity is certainly a vital part of modern life. Without it, people can’t watch TV, refrigerate food and medicine, charge a cell phone, protect themselves from extreme heat, or do many of the things those of us in the developed world take for granted.

I’m concerned, however, that development efforts may be misdirected because of the near singular focus in the energy sphere on this particular goal. I’m worried about two potential outcomes – that we’ll stop short or that we’ll go too far. My concerns may seem contradictory, but I fear that both are made more likely by focusing too exclusively on one binary measurement. (I’ll leave for another blog post the messy ethics of focusing on sustainable energy access – achieving energy access with green sources.)

Perils of Stopping Short

Here’s the potential problem with stopping short. I worry that once a household has a small solar home system, the data collectors will declare it “electrified” and policy makers will put a checkmark in the electricity box and declare victory. But, solar home systems provide very limited services at high per kWh prices and don’t allow people to do many of the things we associate with modern energy access.

For example, mKopa, the leading solar provider in Kenya, sells a tiny 8 Watt system that comes with 3 LED bulbs, a radio and a cell phone charging station. (Unfortunately, this very system was championed in a New York Times op-ed last week.)

The world’s chief energy data collectors at the International Energy Agency recognize that “[a]ccess to electricity involves more than a first supply to the household,” and claim that an appropriate definition of electricity access would include a minimum annual kWh usage level. But, they conclude that:

[t]his definition cannot be applied to the measurement of actual data simply because the level of data required does not exist in a large number of cases. As a result, our energy access databases focus on a simpler binary measure of those that do not have access to electricity…

I am part of a working group at the Center for Global Development that’s advocating for better data and reporting on energy access. A report is due out soon.

Perils of Going Too Far

The dangers associated with going too far are subtler, and may not be empirically relevant, but let me describe my concerns. As the chart below demonstrates, there is clearly a strong positive relationship across countries between GDP per capita and electricity consumption per capita. (The figure plots the natural log of both variables, so you can think of the relationship as reflecting percent changes.) The same pattern holds for lots of other development indicators besides GDP per capita.

Let’s assume that we know the relationship in the above figure is causal, meaning that driving up electricity consumption in a country will cause its per capita GDP to grow. What the chart misses is that not all kWh are created equal. A kWh that replaces a kerosene lamp with a CFL for a month may not be the same as a kWh that helps power a factory that employs 10 people for an hour, and one may have a larger impact on development than the other.

What if governments are less likely to electrify schools if they’re focusing on homes? Or, what if utilities that spend more money on building out their electricity systems to reach homes can spend less money on ensuring factories or hospitals get reliable electricity?

None of the Sustainable Development Goals targets the number of schools with electricity or the number of industries with reliable electricity supply, and, to my knowledge, we don’t have a firm analytical grasp on whether spending money on rural versus industrial or health-sector electrification improves people’s lives by more.

I am not denying that rural electrification brings benefits. Nonetheless, any expenditure of public, World Bank or NGO money has an opportunity cost, so spending money on rural electrification means we can’t spend money somewhere else.

This struck me seeing the juxtaposition of a sleek new electricity meter on a Kenyan woman’s mud wall. She liked replacing her kerosene lamp with an electric light bulb and her neighbor liked having TV, but connecting her to the grid is not cheap. What else could the government have done with that money that may have helped this woman more than the electricity connection?

We asked another woman in the compound whether she would prefer her electricity connection or a new motorbike. She said electricity. But when we gave her the choice between better health services or electricity and better education for her kids or electricity, she chose both of those over electricity. If this woman is representative, electrification in rural households is not yet the right priority.

That said, there are reasons to believe I don’t need to worry about going too far. It is entirely possible that building out the electricity grid to reach homes will make countries more likely to connect health centers and schools, not less likely. Also, there could be a lot of benefits that come from rural electrification that wouldn’t be captured if the kWh were directed at factories – what economists call spillovers. For example, one person interviewed in an NPR story (which features my co-author Ted Miguel) described how getting an electricity connection made him feel “part of Kenya.” Similarly, introducing a young boy to the engineers installing electricity at his home may spark an interest in engineering and make him more likely to go to university. These indirect effects are difficult to measure, but they may be very important.

Governments and NGOs need to figure out how to get the biggest bang for their buck. So, we need more data and more analysis to figure out the best way to improve people’s lives and how big a role there is for rural electrification. It may turn out that rural electrification has high payoffs relative to alternatives, but there are risks to forging ahead without a richer understanding of how electricity drives economic growth, improved quality of life, and other development goals.


Are We Over-Air Conditioned?

Air conditioning made lots of headlines this summer. In newspaper accounts, internet memes, and office gripe sessions, there was much commiseration about the over-air conditioning of the American workplace.



It seems no one is spared from these polar working conditions – not even energy efficiency advocates!  Colleagues of mine recently attended an energy efficiency conference in the midst of a summer heat wave. Organizers courteously (but ironically) distributed blankets to help shivering attendees keep warm in heavily air conditioned meetings (about energy efficiency!). This is like handing out jelly doughnuts at a Weight Watchers meeting.

On the face of it, widespread accounts of frigid summer working conditions seem to point to an absurd waste of energy.  But thermal comfort is highly subjective. If complaints are all coming from the easily-chilled tail of the thermal preference distribution, is it possible that office temperatures are about right on average?

The Goldilocks question

In theory, indoor office temperatures should be about right. Temperature control standards for most office buildings are calibrated to optimize something called the “Predicted Mean Vote” (PMV). This is a formula that predicts how a large group of people would vote (too cold registers as a negative number; too hot registers as positive; zero = just right) as a function of indoor temperature, metabolic rate, clothing, etc.
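A toy simulation (emphatically not the actual PMV formula, which also accounts for humidity, air speed, and radiant temperature) shows how a setpoint can be right on average while still generating loud complaints from both tails of the preference distribution; the preference distribution below is an assumption:

```python
import random
import statistics

random.seed(0)
# Hypothetical preferred office temperatures (deg F): mean 72, s.d. 3.
preferred = [random.gauss(72, 3) for _ in range(100_000)]

setpoint = 72.0
# Crude "vote": negative means the occupant feels cold, positive feels hot.
votes = [(setpoint - p) / 3 for p in preferred]

mean_vote = statistics.mean(votes)
too_cold = sum(v < -1 for v in votes) / len(votes)  # prefer warmer than 75
too_hot = sum(v > 1 for v in votes) / len(votes)    # prefer cooler than 69
print(f"mean vote: {mean_vote:+.2f}, feel cold: {too_cold:.0%}, feel hot: {too_hot:.0%}")
```

Even with the mean vote essentially zero, roughly one occupant in six sits more than a standard deviation from their preferred temperature on each side – plenty of material for an office gripe session.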

But a provocative study released last month suggests this PMV formula needs updating. Its authors claim that one of the primary inputs to current standards – the metabolic rate – is calibrated to the average male. When the researchers measure the average metabolic rate for a small group of young adult women performing light office work, they find significantly lower metabolic rates.

The media spin on this paper played up the irresistible battle-of-the-sexes dimension. This set off a heated/amusing debate between freezing women who decry “sexist thermostats!” and sweaty men who point to the first fundamental law of clothing (you can always put more on, but there is a limit to how much you can appropriately remove!).

The industry response to the study looks quite different. The engineers at the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) – who know a thing or two about temperature control standards in buildings – say that the authors have misinterpreted how the standards are actually set. They assert that thermal comfort criteria are based on extensive laboratory studies of both men and women. These studies find that when men and women do the same kind of sedentary work in the same type of clothing, there are no relevant differences in preferred temperatures across the sexes.

To make sense of all of this, I went to find my Berkeley colleague Stefano Schiavon who studies indoor work environments and building energy consumption. Ask Stefano whether commercial office buildings in the US are kept too cold in the summer and you get an impassioned (he is Italian after all) Yes!!

By 3 degrees Celsius at least, he says. But, he argues, factors such as oversized HVAC systems – not sexist thermostats – are to blame. For example, a temperature check across a sample of U.S. office buildings finds that average summer temperatures are not only below the recommended ASHRAE standards, but colder than average winter temperatures!

How much energy is wasted?

Whereas office buildings are often cooled to around 72°F in the summer, experts suggest something in the range of 77°F can be maintained (assuming good air circulation) with no loss of worker satisfaction.  Lose the necktie, put on some linen shorts, and you have Japan’s “Cool Biz” campaign, which recommends a 28°C (about 82°F) set point for office air conditioners.


Super Cool Biz: Looking super cool in 28°C!

How much energy could be saved if we increased the average cooling set point in office buildings?  A team of Berkeley researchers recently looked at increasing cooling set points in office buildings from 72°F to 77°F.  Across climate zones and office building types, they estimate a reduction in cooling energy consumption of 29 percent on average.

By my very crude calculations, if we apply this reduction across all air conditioned office space in the United States (estimated at 14,095 million square feet in 2012), this amounts to a reduction in electricity consumption of 11,300 GWh/year.[1]

Compared to total annual electricity consumption, this does not amount to much (less than half a percent).  But the impact would be comparable with other climate change mitigation measures we get excited about. For example, the EIA estimates nationwide solar PV production in 2014 at 15,874 GWh (utility scale).  In other words, keeping indoor air temperatures too cool in the summer is working to offset hard-won emissions reductions achieved elsewhere.

Too much of a good thing

Air conditioning at the office – when used in moderation – is a very good thing.  There is plenty of research demonstrating that air conditioning reduces mortality, boosts productivity, and makes us happier, more agreeable people.  However, in many American workplaces, it seems this good thing is being taken to a wasteful extreme.

We should be paying more attention to how we (over-)cool our commercial buildings. But it can be very hard to get people excited about energy efficiency and/or conservation.  Several new technologies on the market (such as apps that help individuals control their cubicle climate) make energy conservation more accessible and more fun. Between the stereotypical shivering female office worker and her gadget-loving male counterpart, this could lead to smarter cooling – and some energy savings – in our office buildings next summer.

[1] EIA estimates the average cooling energy intensity for office buildings at 8,900 BTU/square foot – or 2.764 kWh/square foot accounting for line losses.  2.764 × 14,095 million × 29 percent ≈ 11,300 GWh per year.
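The footnote’s arithmetic, reproduced step by step (the ~6% line-loss factor is inferred from the footnote’s own numbers rather than stated in it):

```python
# Reproducing the back-of-the-envelope cooling savings estimate.
BTU_PER_KWH = 3412
cooling_intensity_btu = 8_900      # EIA figure, BTU per square foot per year
line_loss_factor = 1.06            # inferred ~6% transmission/distribution losses
floorspace_sqft = 14_095e6         # air conditioned office space, 2012
savings_share = 0.29               # estimated reduction from a 77 deg F set point

intensity_kwh = cooling_intensity_btu / BTU_PER_KWH * line_loss_factor  # ~2.76
savings_gwh = intensity_kwh * floorspace_sqft * savings_share / 1e6
print(f"{savings_gwh:,.0f} GWh/year")  # ≈ 11,300 GWh/year
```

Dividing by the 15,874 GWh of 2014 utility-scale solar PV output mentioned above shows the savings would be on the order of 70% of a full year of national solar production.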


Can California Ignore its Neighbors?

Today’s post is coauthored by Benjamin Hobbs from Johns Hopkins University

The U.S., almost alone among developed economies, has operated its power system as a collection of balkanized fiefdoms. Evidence from around the country, from work by Mansur and White, and ongoing work by Steven Cicala, indicates that merging utility system operations can be a big win economically by reducing the amount of fuel burned, improving network utilization, and increasing reliability. A more efficient system makes it easier to fully use varying wind and solar power sources, and lowers the cost of reducing pollution from fossil plants, as the Obama administration hopes to do with its recently announced Clean Power Plan.


Too Many Cooks? United States Electric Control Areas Circa 1999. (source: Cicala, 2015).

This long and twisted road toward more rational operation of our regional electricity grid is now approaching a new milestone. The next major proposed step is full integration of both the day-ahead and real-time markets of the California Independent System Operator (ISO) with the areas operated by PacifiCorp.  Essentially, the California ISO market would expand to embrace power plants and consumers in five states. Such moves are really long overdue.

The ISO has already linked its market for “real-time” sales of electricity (up to approximately a half-hour ahead of when power is produced by power plants and used by consumers) with PacifiCorp.  This “energy imbalance market” (EIM) allows western utilities for the first time to transact power on a more real-time basis. A Nevada utility will be the next member of the EIM, and utilities in Washington and Arizona are considering joining.  However, only some of the benefits of coordination can be gained in the last few minutes before power is produced and used. Only about 5% of the ISO’s volume is scheduled in the fifteen-minute-ahead and five-minute “real-time” markets of the EIM. Many decisions about how to produce power – for instance, from slow-moving large fossil-fueled plants – and how to use it – for example, to change the timing of shifts at a factory – need to be made twenty-four hours or more ahead of time.  Thus the logical next step is full integration of the day-ahead, hour-ahead, and real-time markets through an expanded ISO.

So why would California not jump at such an opportunity? There have been two concerns raised. One is that expanding the California ISO would require some sharing of governance and policy decisions with other states.   A second is that tighter integration would harm the environment by somehow making the west more hospitable for coal-based power plants. We’ll address this second concern first.

While it is true that the PacifiCorp system, and much of the western U.S., leans more heavily on coal generation than does California, it is hard to imagine a scenario in which ISO expansion increases coal output, and there are several ways in which expansion would likely expand the role of renewable energy.   To understand why, one needs to understand what ISOs do differently from other electric systems.

Utility systems have been trading power, albeit inefficiently, without ISOs for quite some time. California has imported almost a third of its power for decades, a greater fraction than any other state. Without an ISO, however, those trades can be time consuming and difficult to put together, and are vulnerable to the whims of a large transmission owner who may not want to share its grid. ISOs, like California’s, can provide an unbiased, transparent, and timely allocation of transmission resources that was impossible to achieve in the old informal trading regimes.

That means that, in the old world, transactions tended to involve relatively large and stable sources of power in order to make it worth the hassle. Large coal plants, which usually run flat out, have been able to make deals in this kind of ad-hoc environment. Life, though, was more difficult for renewable resources, which are smaller and subject to the vagaries of nature. Renewable resources require “balancing” services to fill in the fluctuations of their output, and these are much easier (and usually cheaper) to acquire in an ISO environment.    In this way, ISOs help renewables compete more successfully against fossil-fueled plants, including coal plants.

A second reason to expect that ISO expansion would not drive an increase in coal output is that western coal plants are already heavily utilized. This is especially true for PacifiCorp. Environmental Protection Agency data show that PacifiCorp’s coal plants were producing at three-quarters or more of their capacity in 2012; this fraction is typical for coal plants that are producing all they can, accounting for downtimes for maintenance and mechanical breakdowns. Therefore, even if California utilities were eager to buy coal output (which they are not), there is little extra supply to be provided right now.

On the other hand, there will be a growing amount of renewable generation coming online as a result of policies in California and other western states. ISO expansion would smooth the way for this extra renewable energy to be marketed in currently coal-heavy states. There are many times, mainly in the middle of the day when solar panels are producing at their maximum, when supply is so far in excess of demand that prices turn negative, and users are actually paid to consume power. This is because other power plants have to be operated in very inefficient ways to ensure that supply and demand are balanced throughout the day. As California moves towards meeting its 33% renewable electricity goal by 2020, and likely works toward a 50% goal by 2030, such periods will become much more frequent. Studies show that large amounts of wind and solar power will need to be “spilled” at such times—that is, such plants will be turned down and produce less than they could. Policy makers and the public are rightly concerned about this waste.
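To see how quickly spilling arises, here is a stylized midday snapshot. Every MW figure below is invented for illustration; the point is only the mechanics of a surplus.

```python
# Stylized midday hour (all MW figures invented): when inflexible "must-run"
# generation plus renewable output exceeds demand, the surplus renewable
# energy must be spilled unless it can be exported.

DEMAND_MW = 30_000        # midday system demand (assumed)
MUST_RUN_MW = 12_000      # thermal/nuclear/hydro that cannot ramp down (assumed)
RENEWABLES_MW = 22_000    # midday solar + wind output (assumed)

surplus = max(0, MUST_RUN_MW + RENEWABLES_MW - DEMAND_MW)
print(f"Midday surplus: {surplus:,} MW")
print(f"Spilled without export markets: {surplus:,} MW of zero-carbon power")
```

With an expanded market, that surplus could instead flow out of state and displace fossil generation elsewhere.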

There are triple benefits to being able to more easily export this excess power.  First, it will actually lower costs for Californians by improving the efficiency of those other plants. Second, it will lower costs for other states because they can reduce output from their conventional plants. Third, air pollution and greenhouse gas emissions will go down because California plants will operate more efficiently while plants elsewhere will operate less.

One way to estimate the impact of California’s renewable policy on the climate is to look at the energy sources likely to be displaced by renewable energy. A series of papers by Erin Mansur, Stephen Holland, and others, has estimated the marginal source of electricity that responds to increases in demand (say for electric cars) in different regions. They break down this response for California and the rest of the west.

Using their data, one can see that, on the margin, changes in demand in California have very little impact on coal output anywhere in the west. On average, about 5-10% of a change in California demand in the early morning, and less than 1% during mid-day, is met by coal. This implies that as we increase our renewable output, we will be displacing increasingly efficient natural gas plants. With more integration, though, that renewable output could get us much more carbon bang for the buck elsewhere. In particular, in the rest of the west, between 20 and 40% of a marginal change in demand is met by coal-based units. Therefore, exporting our renewables to the rest of the west (or promoting the development of renewables outside of California) would be much more likely to displace coal generation, roughly doubling the reduction in CO2 emissions. More studies addressing the specific impacts on CO2 emissions are likely coming.
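As a rough sketch of why the location of displacement matters, here is the arithmetic with illustrative marginal shares and emission rates. These are round numbers in the spirit of the estimates above, not the papers’ actual figures, and the exact multiple depends heavily on the assumed rates.

```python
# Illustrative sketch (not the papers' actual estimates): compare the CO2
# displaced by 1 MWh of renewable output inside California vs. in the
# wider west, using marginal generation shares like those described above.

CA_SHARES = {"coal": 0.03, "gas": 0.97}    # coal meets ~0-10% of CA margin (assumed)
WEST_SHARES = {"coal": 0.30, "gas": 0.70}  # coal meets ~20-40% elsewhere (assumed)

# Rough emission rates in tons CO2 per MWh (typical textbook values).
EMISSION_RATE = {"coal": 1.0, "gas": 0.45}

def displaced_co2(shares):
    """Tons of CO2 avoided when 1 MWh of renewables displaces the margin."""
    return sum(share * EMISSION_RATE[fuel] for fuel, share in shares.items())

ca = displaced_co2(CA_SHARES)
west = displaced_co2(WEST_SHARES)
print(f"CA margin:   {ca:.2f} tons CO2/MWh")
print(f"West margin: {west:.2f} tons CO2/MWh ({west / ca:.1f}x)")
```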

In sum, we think there are many reasons to believe that expanding power markets will be good for renewable energy and will reduce costs to consumers.   Speculation that coal plant pollution might temporarily increase because of greater trade should not stand in the way of achieving these benefits. This is especially true because it is more likely that pollution will instead decrease in the near term, and because it is widely agreed that integrating markets is essential for achieving all the economic and environmental benefits of renewable energy in the long term as state and federal environmental policy move our power sector towards sustainability.

The other concern about ISO expansion is that California would have to share some decision making with representatives from other states. This should be embraced as an opportunity, not used as a barrier. For one thing, it is important not to overstate California’s control over the CAISO. While it operates as a state-chartered non-profit corporation, the CAISO, like any other electricity system operator, is ultimately answerable to the Federal Energy Regulatory Commission. Nor is state control a guarantee against problems. Recall that California went through an electricity crisis with its own state-chartered Power Exchange and ISO.

Second, and most importantly, dealing with the energy and climate challenges facing us today requires that California and other western states work together. Electricity flows don’t stop at state borders, and neither do greenhouse gases. The U.S. EPA’s Clean Power Plan encourages states to coordinate their climate policies, but it can also create perverse incentives that could result in smaller carbon reductions and higher costs. Such a disappointing outcome can be avoided if western states work together.

Consider the alternative. In the long run, what is the benefit to the climate of California, alone, achieving emissions reductions through means that intentionally make our electric system run less efficiently, raise costs, and essentially waste a portion of our zero-carbon renewable production? That is not a model that is likely to appeal to other states and countries. In many dimensions, there is a pressing need for state-level policy coordination, and an expanded ISO would provide an institutional setting in which that can happen. Let’s hope provincial concerns don’t trump the potential for game-changing reforms of power markets and air pollution regulation.


The Decline of Sloppy Electricity Rate Making

Back in the “good old days” most customers had no choice about how to buy electricity and a regulator’s life was pretty easy.  The utility needed sufficient revenue to cover its costs, but the regulator approving rates was mostly just deciding whose ox to gore.    How much should industrial customers cover versus commercial or residential customers?  Is a fixed charge fair to those who don’t consume much?  That sort of thing.

Of course, even back in the old days some customers had choices, particularly large industrial firms.  They could self-generate if the utility tried to charge them a price that was too high.  And if they hadn’t already set up shop in the utility’s territory or weren’t too invested in the area, they could take their demand elsewhere.  Regulators were pressured not to foist too much cost on the large customers who had an option to bypass the utility in whole or in part.  That showed up in rate design and, sometimes, in customer-specific arrangements.


For those customers, rates were set to reduce so-called “inefficient bypass,” which described when a customer would find an alternative supplier (or self-generation) that wasn’t actually lower cost than the utility, but offered a lower price.  Avoiding inefficient bypass meant the utility tried to keep customers for which it was the lowest-cost supplier by setting price close to that cost.

Luckily for utilities — and for the stress level of regulators — few customers had real bypass opportunities in those days, certainly not residential or small commercial customers.  But that luck has run out; technology is now making every customer a potential bypasser.

Rooftop solar panels are the leading bypass mechanism for small customers.  They make economic sense for the customer so long as the retail price of the kilowatt-hours crowded out by the solar generation is greater than the cost of solar electricity.  But, as I’ve discussed previously, they are only efficient for society if they lower the overall cost of supplying the electricity needed on the grid.  The gap between retail price and avoided cost opens the door for inefficient bypass.
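The wedge is easy to see with made-up numbers. In the sketch below every price is hypothetical; the point is only the comparison between the retail price, the utility’s avoided cost, and the cost of rooftop solar.

```python
# Sketch with invented prices: rooftop solar is privately attractive when
# its cost per kWh is below the retail price it offsets, but socially
# efficient only when it is below the utility's avoided cost.

RETAIL_PRICE = 0.20   # $/kWh the customer avoids paying (assumed)
AVOIDED_COST = 0.09   # $/kWh the grid actually saves (assumed)
SOLAR_COST = 0.14     # $/kWh levelized cost of the rooftop system (assumed)

privately_attractive = SOLAR_COST < RETAIL_PRICE
socially_efficient = SOLAR_COST < AVOIDED_COST

print(f"Customer installs?  {privately_attractive}")  # True
print(f"Lowers system cost? {socially_efficient}")    # False
if privately_attractive and not socially_efficient:
    print("Inefficient bypass: the price-cost gap, not real savings, drives adoption.")
```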

And solar panels aren’t the only bypass news.   With low natural gas prices, combined heat and power (CHP) installations onsite can lower bills for some customers.   Fuel cell technology continues to advance, pushing closer to the retail cost of electricity.  Batteries can store power from the grid or from onsite generation at lower and lower costs, making it easier for customers to rely less on the grid or to choose when they want to rely on the grid.


Except for some very particular narrow applications, none of these technologies lowers total grid costs by as much as it lowers customer bills, which means bypass leaves the utility with a revenue shortfall.  The potential shortfall, and the need to then raise prices, and the resulting incentive for more bypass, has been dubbed “the utility death spiral.”

The drama, and implied visuals, of a utility spiraling into the abyss (exactly what? a power plant?) creates lots of excitement, but the phrase leads us away from the real issue.  No technology available today — or likely to be available for years to come — will lead more than a fraction of the customer base to fully cut the cord, and operate without the utility.   Decades from now, most customers will still want access to the grid, and will still need the utility.

Something is dying alright, just not the utility.  It’s the ability of regulators, utilities, and interest groups to push around revenue collection among customers without the customers pushing back.

  • Try to punish high-consuming households by raising their price many times above cost – as has been done in California for the last 15 years – and they will now install solar to reduce their grid purchases, undermining revenue collection.
  • Try to use “demand charges” that are based on a customer’s peak usage — regardless of whether its peak coincides with system peak — and soon they will be installing batteries to smooth their peak, but in many cases without helping to lower grid costs.
  • Try to raise retail rates for most customers in order to offer discount electricity to low-income households and the high-price customers will turn to all forms of distributed generation instead of subsidizing the poor.
  • Try to stick commercial and industrial customers with more of the utility costs and they will invest in CHP and other onsite technologies.
  • Try to encourage demand shifting to off peak with exaggerated peak-period prices during all summer weekdays and the customer will use batteries to shift not just on the hottest high-demand days, but also on days when there is no benefit to society, though still an arbitrage play for the customer.
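The battery example in the last bullet can be made concrete with invented numbers. On a mild day the retail peak/off-peak spread is large but the true system cost spread is tiny, so the customer profits while society gains essentially nothing.

```python
# Toy example (all prices and costs invented): a TOU rate with an
# exaggerated peak premium pays a customer to cycle a battery every
# weekday, even on mild days when the grid's marginal cost barely varies.

PEAK_PRICE, OFFPEAK_PRICE = 0.35, 0.12           # $/kWh retail TOU prices
TRUE_PEAK_COST, TRUE_OFFPEAK_COST = 0.10, 0.09   # $/kWh system cost, mild day
BATTERY_KWH = 10
ROUNDTRIP_EFF = 0.90

# Customer arbitrage: charge off-peak, discharge on-peak.
customer_gain = BATTERY_KWH * (PEAK_PRICE * ROUNDTRIP_EFF - OFFPEAK_PRICE)
# Social value: the shift in actual generation cost (losses eat all of it).
social_gain = BATTERY_KWH * (TRUE_PEAK_COST * ROUNDTRIP_EFF - TRUE_OFFPEAK_COST)

print(f"Customer gain: ${customer_gain:.2f}/day")  # positive: arbitrage pays
print(f"Social gain:   ${social_gain:.2f}/day")    # essentially zero
```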


You may agree with the equity goals behind some rate design choices and may disagree with others.  That’s not the point.  The point is that technology is making it ever easier for customers to respond to prices, and to arbitrage between price differences.  That’s great news when those prices and price differences reflect real cost impacts, because customers can respond to efficient cost-based prices with efficient actions.  But when the prices don’t reflect costs, customers are still going to respond, and that will undermine system efficiency.

That means that the flexibility regulators have had in designing retail rates to pursue other goals – whether helping the poor, subsidizing grid-scale renewables, paying for energy efficiency programs, or just keeping rate design “simple” – is going to come under increasing pressure by market participants ready to exploit any price wedge, whether it is based on a real cost differential or not.

Economists have for years argued that utility rate design should follow cost causation principles, because departures from cost will lead to inefficient customer response.  Regulators have often paid little heed largely because the inefficiency was small when customer ability to respond was limited.  That left regulators a free hand to harness rates for pursuit of other policy agendas.


Distributed generation, storage, electric vehicle charging, and smart customer-side usage technologies (think controllable communicating thermostats) mean that the inefficiencies from sloppy rate design – prices that depart substantially from cost – will be magnified.

But the flip side is that the opportunity to incent efficient customer-side participation in the market with smart rate design is greater than ever.  And that opportunity will grow exponentially in the next few years as we see continued improvement in generation technologies, batteries, and sensors that can control a panoply of household activities.  Accurate cost-based pricing can not only lower costs, but can also use customer-side participation to gain the flexibility that will be required to integrate more wind and solar power.

The pressures to align utility rates with costs are only going to increase.  Here’s hoping regulators will harness these changes to reduce total system costs and smooth the integration of intermittent generation.


Why the Pope is Wrong on Markets

On a recent speaking engagement in Germany I ran into Prof. John Schellnhuber, who was on his way to the Vatican to present Pope Francis’ major coming-out document on climate change. After I got over feeling oh so cool for being one degree of Kevin Bacon removed from one of the most powerful figures in the world, I did my homework and read Laudato Si’, which carries the subtitle “On care for our common home”. This is a well-researched position paper which touches on a variety of topics and makes it very clear that the pope cares much more about distributional issues than the average economist. This is not difficult, as we are too often obsessed with efficiency (maximizing the size of the pie) rather than equity (who gets what size slice). Even though I am a Bavarian protestant married to a lovely South African Jewish lady, I had been a big fan of Pope Francis until I got to point 190 of Laudato Si’:

“it should always be kept in mind that “environmental protection cannot be assured solely on the basis of financial calculations of costs and benefits. The environment is one of those goods that cannot be adequately safeguarded or promoted by market forces”. […] Once more, we need to reject a magical conception of the market, which would suggest that problems can be solved simply by an increase in the profits of companies or individuals. Is it realistic to hope that those who are obsessed with maximizing profits will stop to reflect on the environmental damage which they will leave behind for future generations?”

I went and sat in front of a wall and meditated on this statement for a little while (yes, my mom tried to make a Zen monk out of me). I agree with some of this sentiment. It is clear that profit/utility maximization has led to much of the environmental conundrum we find ourselves in. In a perfect world, firms pay the full costs of their activities (which we call the social cost of production). Consumers then only buy the product if these costs are at most as large as their willingness to pay for the good. If firms don’t have to pay the full cost of their production (e.g., they get to use the atmosphere as a free dumping ground for greenhouse gases), the cost of production is artificially low and consumers buy more than they should at artificially low prices. Does this happen? Well, yes! Most places in the world do not charge firms for their carbon emissions. California, Europe, and parts of Canada are some noteworthy exceptions, though even in these places the price is well below the environmental cost of the emissions. Most Chinese firms, for example, do not currently pay for their use of the atmosphere. Neither do India’s, Japan’s, Australia’s…. This leads to an overproduction of greenhouse gases.


Is the optimal level of greenhouse gas emissions zero? The economist’s answer is a clear no. We derive great benefits from the combustion of fossil fuels. Light to read, heat to cook, gasoline combustion for transport. But is the price of fossil fuels too low? Nearly everywhere, the answer is yes.

The clear way to fix this is to put a price on carbon, which makes producers (and in turn consumers) pay for the full cost of their use of the atmosphere. This ain’t rocket science. My undergrads get this. President Obama gets this. In fact, Michael Greenstone – one of the most prominent environmental economists in the world and a frequent EI visitor – led a federal working group to determine the social cost of carbon. The answer he and his coauthors came up with is approximately $40 per ton. What this means is that we should be adding approximately $40 to the price of each ton of CO2 produced. This would raise the price of gas by roughly 40 cents per gallon.
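The 40-cents figure is just arithmetic, using the standard approximation that burning a gallon of gasoline releases about 8.9 kg of CO2:

```python
# Back-of-envelope check of the "roughly 40 cents a gallon" figure: apply a
# $40/ton social cost of carbon to the CO2 from burning one gallon of gas.

SCC_PER_TON = 40.0         # $ per metric ton CO2 (federal working group figure)
KG_CO2_PER_GALLON = 8.9    # approximate CO2 from burning one gallon of gasoline

carbon_cost = SCC_PER_TON * KG_CO2_PER_GALLON / 1000  # kg -> metric tons
print(f"Implied carbon cost: ${carbon_cost:.2f} per gallon")  # ~$0.36
```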

Economists argue there are two ways to do this: charge a carbon tax or put in place a cap-and-trade system. The word tax is political suicide, so we are most optimistic about the prospects of cap and trade. You issue a permit for each ton of CO2 and let firms trade these permits. This has been shown to be quite effective at reaching a prescribed amount of pollution reduction at least cost. The trick is to issue just enough permits that the price in the market reflects the social cost of carbon. Most economists are on board with this. It’s a Nobel-worthy idea.
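A stylized two-firm example shows the least-cost property. All the cost numbers here are invented; the point is that trading moves abatement toward the firm that can abate cheaply, until marginal costs equalize.

```python
# Stylized illustration of why trading hits a cap at least cost: two firms
# with different linear marginal abatement costs, MAC_i(q) = c_i * q.
# All numbers are invented for the example.

c = {"A": 10.0, "B": 40.0}   # MAC slopes: firm A abates much more cheaply
TOTAL_ABATEMENT = 100.0      # tons the cap requires, jointly

def cost(q, slope):
    """Total cost of abating q tons with MAC = slope * q (area under MAC)."""
    return 0.5 * slope * q ** 2

# Uniform standard: each firm must abate half.
uniform = cost(50, c["A"]) + cost(50, c["B"])

# Trading: permits flow until marginal costs equalize, c_A*qA = c_B*qB
# with qA + qB = 100  ->  qA = 100 * c_B / (c_A + c_B).
qA = TOTAL_ABATEMENT * c["B"] / (c["A"] + c["B"])
qB = TOTAL_ABATEMENT - qA
trading = cost(qA, c["A"]) + cost(qB, c["B"])

print(f"Uniform standard cost: ${uniform:,.0f}")
print(f"Trading cost:          ${trading:,.0f} (qA={qA:.0f}, qB={qB:.0f})")
```

Same total abatement, but trading achieves it more cheaply by shifting the work to the low-cost abater.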

Well, the Pope does not agree.

“The strategy of buying and selling “carbon credits” can lead to a new form of speculation which would not help reduce the emission of polluting gases worldwide. This system seems to provide a quick and easy solution under the guise of a certain commitment to the environment, but in no way does it allow for the radical change which present circumstances require. Rather, it may simply become a ploy which permits maintaining the excessive consumption of some countries and sectors.”

And I just don’t agree with Pope Francis. I think this hostility towards market-based instruments comes from three possible lines of thought:

  1. Imperfect markets are the source of the current dire state of the environment, hence why would we use markets as a fix?
  2. There was evidence of some fraud in the ETS, showing that these markets are subject to manipulation.
  3. There is no way a cap and trade market will get us to 80% emissions reductions by 2050. You just have to tell people what to do. Command and control is better at that.

My response to 1) is simple. The reason the environment is in such bad shape is that some markets fail. We teach this to undergraduates as they walk through the door. You can use cap-and-trade markets to fix market failures! This is what they are designed to do. Markets to fix markets! We sometimes use dynamite to extinguish bad fires! My response to 2) is also simple. Yes, markets can be manipulated. But we learn and design better, more foolproof instruments over time. No regulation is perfect. My response to 3) is that standards are expensive, provide little incentive for technological innovation, and are a pain to enforce. Don’t get me wrong: emissions trading is not the only policy we should engage in. I am strongly in favor of significantly subsidizing R&D, for example.

What I wish the pope had said is that market failures are the source of environmental degradation and we need to do everything we can to fix them. Our own governor Jerry Brown, who left a Catholic seminary after three years to study classics at Berkeley, is a staunch supporter of cap and trade.

So I would humbly ask Pope Francis to leave it up to science not faith to help us figure out how to fix the biggest environmental market failure mankind has faced in its history.


If Someone Replaced Your Car with a Prius, Would You Drive More?

Source: Caranddriver.com

Most of us drive cars that are less fuel efficient than a Prius, but this is likely to change over the next decade as the new Corporate Average Fuel Economy (CAFE) standards are phased in. Regulators project that these new standards will increase the average fuel economy of new vehicles to 39 miles per gallon (MPG) by 2025, compared to 26 MPG in 2010. A new Prius C is rated at 46 MPG on the highway.

While the Clean Power Plan has been getting a lot of attention this past week, the new CAFE standards are another major component of the Obama Administration’s climate action plan. Increased vehicle fuel efficiency could account for nearly the same reductions in GHG emissions as the Clean Power Plan.

There is a lot of guesswork involved in coming up with the projected reductions in GHG’s achieved by the new CAFE regulations. Some of it involves estimating how much people will drive as they buy more fuel-efficient vehicles.

One factor is what’s known as the “rebound effect”: as cars get more fuel efficient, the price of driving a mile goes down. Economists project that a price reduction will lead people to consume more, which in this context means they will drive their fuel-efficient cars more. Regulators estimate that they will drive about 10% more.
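The back-of-envelope version of that projection looks like this. The gas price and the elasticity of driving below are assumptions for illustration, not the regulators’ actual model inputs.

```python
# Rough arithmetic behind a rebound estimate: better fuel economy lowers
# the fuel cost per mile, and driving responds with some price elasticity.
# Gas price and elasticity are illustrative assumptions.

GAS_PRICE = 3.50            # $/gallon (assumed)
OLD_MPG, NEW_MPG = 26, 39   # fleet averages cited above
ELASTICITY = -0.25          # assumed price elasticity of miles driven

old_cpm = GAS_PRICE / OLD_MPG                     # fuel cost per mile, before
new_cpm = GAS_PRICE / NEW_MPG                     # fuel cost per mile, after
pct_price_change = (new_cpm - old_cpm) / old_cpm  # about -33%

rebound = ELASTICITY * pct_price_change           # about +8% more driving
print(f"Cost per mile: {old_cpm:.3f} -> {new_cpm:.3f} $/mile")
print(f"Implied extra driving: {rebound:.0%}")
```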

I’m not sure this effect would hold in my family, though.

We rented a Prius one summer vacation in Maine. The back roads of Maine can be fun to drive, as they are built over the landscape and not through it, so they twist and turn over the rocky countryside. There’s an aptly named “thrill hill” near our vacation spot that leaves your stomach in your mouth better than most roller coasters.

A Prius in Maine?

My husband – who prides himself on being an “aggressive” driver – taught my kids some new adjectives to modify “engine” the summer we had the Prius. One was “lawn-mower” and the rest aren’t printable. Uncharacteristically in our family, he let me drive for some of the longer trips.

“Prius” has now become synonymous with “wimpy” in my household, as in, “Mom, why didn’t you pull out in front of that car? It’s a Prius.”

It’s not as though my husband goes out joyriding, but I would guess that if a Prius magically showed up to replace his higher horsepower car, there would be a couple instances each month where he would opt to carpool or ride his bike to work where he wouldn’t have otherwise.

Is this Monster Prius photoshopped?

A recent paper by Jeremy West, Mark Hoekstra, Jonathan Meer and Steve Puller (WHMP) suggests that my husband is not alone. It finds that higher fuel-efficiency cars similarly turn off other drivers. Counter to the predictions of the rebound effect, they find that drivers who were nudged into more fuel-efficient cars by the Cash for Clunkers program ended up driving, if anything, less than similar new-car owners who bought less efficient cars.

As WHMP point out, fuel efficiency is correlated with other vehicle attributes that drivers tend to dislike, including lower weight, which recent papers (see here and here) confirm is not good for occupants in an accident, and lower horsepower, which makes it harder to accelerate enough to get thrill-hill bumps.

As an aside, the authors use a clever empirical strategy to show that more fuel-efficient cars led people to drive less. The difficulty is that most people buy cars anticipating how much they are likely to drive. So, just looking at the raw correlation between the fuel efficiency of a new car and the number of miles it is subsequently driven may wildly over-state the rebound effect if people who know they have long commutes purposely buy fuel efficient cars.

WHMP look at Texans in the year following the Cash for Clunkers program, which gave large incentives to households that turned in a clunker – defined as a car that got less than 18 MPG – as long as they replaced it with something considerably more fuel efficient. WHMP compare two groups of new car purchasers – those who were barely eligible for the program because their old car was 18 MPG and those who barely missed being eligible because their old car was 19 MPG. The households with 18 MPG clunkers bought new cars that were more fuel efficient (plus smaller and less powerful), while the households who just missed being eligible did not get nudged into fuel-efficient cars, so they serve as a kind of control group. Other than the fact that one group is eligible for the program and the other isn’t, the two groups of households are very similar.
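To make the logic of that comparison concrete, here is a simulation with an invented data-generating process. This is not WHMP’s data; it only illustrates why the just-ineligible 19 MPG group can serve as a control.

```python
# Synthetic illustration (not WHMP's data): households whose clunker got
# 18 MPG were nudged into smaller, more efficient cars; households at
# 19 MPG were not, and serve as the control group.
import random

random.seed(0)

def simulate_household(eligible):
    """Annual miles driven in the new car (invented data-generating process)."""
    base = random.gauss(12000, 2000)
    # Assumed: eligible households got less powerful cars -> drive a bit less.
    return base - (600 if eligible else 0)

treated = [simulate_household(True) for _ in range(5000)]   # 18 MPG clunkers
control = [simulate_household(False) for _ in range(5000)]  # 19 MPG clunkers

effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated effect on miles driven: {effect:+.0f} per year")
```

Because eligibility is as good as random near the 18/19 MPG cutoff, the simple difference in means recovers the (here, built-in) causal effect.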

WHMP’s findings do not mean that the rebound effect is wrong – it’s just misapplied in this case. A pure rebound effect describes changes in the energy efficiency of a good, but leaves all other attributes unchanged.

Similarly, some people are quick to differentiate energy efficiency from energy conservation. Ideally, an energy efficiency investment leaves everything else unchanged, and simply reduces the energy consumed to perform a particular function, like driving a mile.

If WHMP’s result holds as the CAFE standards nudge more of us into fuel-efficient cars (at least given the current fleet), this will be good news for reducing greenhouse gas emissions. They don’t report an implied increase in savings, but, roughly, I would guess they’d be about 10% higher if the current estimates embed a 10% increase in driving.

While good news for the climate, this result is bad news for drivers like my husband who dislike driving less powerful cars. In econ-speak, there is lost welfare as people are pushed into cars they don’t like. That’s OK – solving the climate crisis is bound to involve some sacrifices, but we should aim to select regulations that minimize what people have to forego.

On this dimension, both CAFE and the Clean Power Plan are lacking. Both are examples of standards, which are a form of what’s known as “command and control” regulation. An ideal regulation would put a price on GHG emissions, either explicitly with a carbon tax or implicitly through a cap-and-trade program. Then, people and firms would make decisions appropriately embedding the damage they are imposing on the climate. (Note, however, that one compliance option under the Clean Power Plan is for states to join a cap-and-trade program.)

Regulation with standards may be the only device remaining in the Obama Administration’s climate tool chest given the political environment (e.g., the Senate’s failure to pass climate legislation in 2010), but the approach is necessarily worse. Figuring out how much worse is difficult, as WHMP’s paper points out. It involves estimating things like how people are driving in cars subject to the regulations and how much worse off they are. It’s important to keep these kinds of unintended consequences in mind, even if they are hard to quantify.


What We Can Learn from Germany’s Windy, Sunny Electric Grid

Today’s release of the final Clean Power Plan by President Obama ushers in an exciting period of change on US power grids. Wind and solar energy will get a big boost through the plan. The plan recognizes that to sustain public support for this strategy, the costs of integrating the new technologies need to be kept down and the grid needs to remain reliable. Several revisions to the draft plan were made specifically to provide time and flexibility to ensure the lights stay on while more clean energy sources join the grid. An important accommodation is carefully phasing in targets.

Much of the US and many countries rely on wholesale markets to help manage electric grids. It’s a good strategy. Well-designed markets can maintain reliability while also keeping costs down. However, policymakers and regulators are faced with a bewildering number of choices to ensure these markets function well.

Fortunately, the US can look to Germany. In Germany, wind and solar already represent 43% of installed generating capacity. While the US is at a modest 7%, some regions are approaching 20% (see graph). The Energy Information Administration projects that under the Clean Power Plan, US-wide wind and solar penetration could reach an overall average of 22% by 2030. Again, though, this masks significant regional variation.

SOURCE: Energy Information Administration (EIA) Electric Power Monthly, June 2015; EIA, Analysis of the Impacts of the Clean Power Plan (2015); Fraunhofer.

As Max discussed in a recent blog, Germany is also phasing out nuclear energy and accelerating coal’s exit.

German policymakers need to know that, with these changes, the country’s market will work harmoniously into the future — maintaining reliability and keeping costs reasonable. Fossil fuel power plant owners see problems on the horizon. German energy policies are pushing down wholesale energy prices and could potentially cause fossil fuel power plants to go out of business. As a result, say the plant owners, the country could experience shortages and blackouts.

Concerns about the future have prompted the Germans to review their wholesale markets. American and German policymakers shared their perspectives on this important topic at a recent joint US-Germany Electricity Market Design Workshop held in Berlin, Germany. With support from the US Department of State, I attended the event, which was co-hosted by the German Ministry of Economic Affairs and Energy and the U.S. Departments of State and Energy.

Germany’s electricity market is currently structured as what’s referred to as an “energy-only” market. In an energy-only market, power plants earn revenues primarily by selling energy. The German government is evaluating whether it should create an additional market, referred to as a “capacity market”.

Power market concepts can be a bit peculiar, so let me try to describe the capacity market concept with an analogy.

Think about milk before the era of affordable refrigeration. Buying energy is like buying milk. You buy it when you need it, perhaps by getting daily deliveries from the milkman. Buying capacity is like buying a cow. When you want milk, your cow can provide it. Except the cow’s milk production doesn’t exactly line up with your needs. You need to buy enough cows to make sure everyone in your family has enough milk for their breakfast cereal. When you have extra milk, hopefully you can find someone to sell it to. Otherwise it spoils. If all your neighbors are selling milk at the same time, you won’t get much for it.

A power market that compensates power plants for energy and capacity is like having consumers pay for milk and also pay an extra amount to sustain the herd of cows. They aren’t your cows, but you happily pay knowing the cows are out there somewhere. When you or anyone else needs milk, these herds of cows are supposed to make it.

SOURCE: “Wb deichh drei kuhs” by Dirk Ingo Franke – Own work. Licensed under CC BY-SA 2.0 de via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Wb_deichh_drei_kuhs.jpg#/media/File:Wb_deichh_drei_kuhs.jpg

On July 3rd the German Ministry of Economic Affairs and Energy completed its deliberations with the release of its White Paper. So far the paper has only been released in German, but an English-language summary is here and the predecessor Green Paper is here.

The Ministry has concluded that a slightly reformed energy-only market—what they are calling electricity market 2.0—will serve the country into the future. The new market reforms will enter legislation in the Fall and reforms will be implemented in 2016.

The Ministry does not buy into the narrative that fossil fuel plants need to be compensated through a capacity market to ensure grid reliability. The Ministry sees other ways to ensure reliability, such as by increasing links to neighboring countries. This would allow Germany and its neighbors to benefit from differences in demand and supply between countries.

The Ministry is also recommitting to not impose explicit or implicit price caps in the energy markets. Flexible power plants can earn revenues by selling energy if prices spike as wind and solar ramp up and down.
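A toy comparison of the two designs, with invented costs and prices, shows why uncapped scarcity prices can substitute for a capacity payment:

```python
# Toy comparison (all numbers invented): in an energy-only market a flexible
# plant's fixed costs must be recovered from a few high-priced scarcity
# hours; a capacity market instead pays a separate per-MW fee.

FIXED_COST = 60_000    # $/MW-year the plant must recover (assumed)
MARGINAL_COST = 40     # $/MWh fuel cost (assumed)

# Energy-only: price equals marginal cost except in 100 scarcity hours.
scarcity_hours, scarcity_price = 100, 700
energy_only_margin = scarcity_hours * (scarcity_price - MARGINAL_COST)

# Capacity market: energy prices capped near cost, plus a capacity payment.
capacity_payment = 60_000  # $/MW-year set by a capacity auction (assumed)

print(f"Energy-only scarcity margin: ${energy_only_margin:,}/MW-year")
print(f"Capacity-market payment:     ${capacity_payment:,}/MW-year")
print(f"Energy-only covers fixed cost? {energy_only_margin >= FIXED_COST}")
```

The catch, of course, is that the energy-only route only works if regulators tolerate those occasional very high prices, which is exactly what the Ministry is recommitting to.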

How will we know if the Germans made the “right” choice? Ideally, we could run an experiment with two Germanys: implement a capacity market in one, but not the other. Maybe we should designate the recently discovered Earth-like Kepler-452b for economics experiments like this!

Until NASA takes up my proposal, we’re stuck in the world of observation, theory, and conjecture.

As the German experience unfolds, policymakers in markets such as the US should take a hard look at their own markets. Should capacity market structures be introduced, as was recently debated in Texas? Or should price caps continue to be raised, as the US Federal Energy Regulatory Commission has been urging? Should markets be kept local as in Texas or better integrated with their neighbors, as is beginning to happen in California?

We should continue to keep an eye on Germany.

Membership has its Co-benefits

Last week marked the first “informal ministerial consultations” in the run-up to the UN climate talks in December. The objective of these informal meetings before The Meeting is to provide an opportunity to find common ground and an organizing framework for what the UN Climate Chief is ominously calling the “last chance for a meaningful agreement”.

Two decades’ worth of efforts to broker a binding global climate change treaty from the top down have largely failed. But hope springs eternal, and there is a belief that a more “bottom up” approach, which allows countries to define their own contributions, will break the impasse. Each country has been asked to submit a self-determined national “intention” to curb its greenhouse gas emissions. These pledges will provide the foundation for any climate deal reached in Paris.


Every time a country registers a pledge, this UNFCCC tree sprouts a new green leaf linked to the plan. As of today, 20 pledges have been submitted.

The prospects for free-riding make this process particularly daunting.  As Gernot Wagner and Marty Weitzman note in their recent book:

“Why act, if your actions cost you more than they benefit you personally? Total benefits of your actions may outweigh costs. Yet the benefits get spread across seven billion others, while you incur the full costs. The same logic holds for everybody else. Too few are going to do what is in the common interest. Everyone else free-rides.”

In other words, why would a country voluntarily commit to making significant, costly reductions in domestic greenhouse gas emissions (GHGs)?

Last week, I saw a paper presented by Ian Parry at a summer gathering of environmental economists that suggests this free-riding problem might not be quite as dire as it appears. The jumping-off point is that reducing the use of GHG-producing fuels can generate domestic co-benefits for the countries that undertake the reductions (such as improvements in local air quality or reduced traffic congestion). The authors set out to quantify these environmental co-benefits by country and evaluate the possible implications for GHG emissions. To my mind, the paper’s findings strengthen the pragmatic case for the new “bottom up” approach to global climate change mitigation.

Climate change policy from the bottom up

At the heart of the new approach to climate change negotiations is a new climate change acronym:  Intended Nationally Determined Contributions (INDCs). These national pledges describe steps that countries intend to take to reduce their GHG emissions. Countries have tremendous flexibility in drafting these plans.  They can pledge to cut emissions by a lot – or a little. Commitments can be binding – or voluntary.  The idea is to let countries decide what they are willing and able to contribute to this global effort and take it from there.

The figure below, taken from a special IEA report on climate change, projects the emissions associated with the climate pledges countries have already declared. This INDC trajectory (in blue) is contrasted with a “450 scenario” (in green) that would achieve the widely accepted target of limiting global warming to 2 degrees Celsius, or 450 parts per million of CO2 equivalent in the atmosphere.


The gap between the blue and green lines is downright depressing. Although the INDC scenario could improve as more countries sign on, even Al Gore acknowledges that these INDC pledges will fall short of the critical target.

But the glass half full (or at least not empty) view is that these initial pledges are the sign of meaningful global cooperation taking hold.  Ultimately, the success of an approach that relies on voluntary contributions will depend on whether countries find the political will to engage in this global effort and pursue significant GHG emissions reductions. Co-benefits could provide a leg up in this regard.

Membership (in the global climate change mitigation club) has its co-benefits

When a country takes steps to reduce GHG emissions, benefits beyond climate change mitigation often result. Domestic “co-benefits” of GHG reductions include, for example, reductions in the number of deaths caused by air pollutants that are emitted along with CO2 when fossil fuels are burned, and reductions in congestion, accidents, and other externalities from motor vehicle use.

The paper I saw last week calculates the domestic co-benefits of pricing carbon dioxide emissions for the top twenty emitting countries, which are responsible for about 80 percent of global CO2 emissions. Using country-level estimates of (non-CO2) environmental damages by fossil fuel from this study, together with fuel price data and fuel tax/subsidy information, the researchers derive efficient CO2 prices that reflect domestic (non-internalized) environmental benefits and costs:


The figure summarizes the nationally efficient carbon prices, which reflect domestic co-benefits and exclude climate benefits. The average (emissions-weighted) price is remarkably high: $57/ton CO2. This exceeds current mid-range estimates of global climate-change damages per ton of CO2.
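
The emissions weighting works the way you would expect: each country’s efficient price counts in proportion to its share of emissions. A minimal sketch, using made-up numbers rather than the paper’s data:

```python
# Emissions-weighted average carbon price across countries.
# All figures below are hypothetical, for illustration only.
prices = {"A": 80.0, "B": 40.0, "C": 20.0}    # efficient price, $/ton CO2
emissions = {"A": 10.0, "B": 5.0, "C": 5.0}   # annual emissions, Gt CO2

total_emissions = sum(emissions.values())
weighted_avg = sum(prices[c] * emissions[c] for c in prices) / total_emissions
print(weighted_avg)  # 55.0 — pulled toward big emitter A's price
```

A simple (unweighted) mean of these prices would be about $46.7/ton; the weighting pulls the average toward the prices of the largest emitters, which is why a few big countries with high efficient prices can dominate the $57/ton figure.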

The graph also shows that prices vary dramatically across countries. Some of this variation reflects differences in the extent to which fuels are subsidized or taxed across countries. The extremely high prices in Saudi Arabia and Iran, for example, are largely due to large subsidies on transportation fuels and natural gas. The analysis assumes that these subsidies remain, so the carbon tax works in part to offset the subsidy. The negative tax in Brazil reflects the fact that existing fuel taxes already exceed the authors’ calculations of non-carbon external costs per unit of fuel use.
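
The accounting behind these cross-country differences can be sketched in stylized form: the nationally efficient carbon price covers the non-internalized external cost, i.e., the domestic co-benefit damage minus whatever net tax is already in place (a subsidy counts as a negative tax). This is my simplified reading of the mechanism, with hypothetical numbers, not the paper’s actual formula or data:

```python
def efficient_price(co_benefit_damage, existing_net_tax):
    """Stylized nationally efficient carbon price, $/ton CO2.

    co_benefit_damage: domestic non-climate external cost per ton.
    existing_net_tax: net fuel tax already in place per ton
                      (negative if the fuel is subsidized).
    """
    return co_benefit_damage - existing_net_tax

# Subsidized fuel (e.g., the Saudi/Iran pattern): the efficient
# price must also offset the subsidy, so it comes out very high.
print(efficient_price(60.0, -30.0))  # 90.0

# Heavily taxed fuel (the Brazil pattern): existing taxes already
# exceed non-carbon external costs, so the price is negative.
print(efficient_price(25.0, 45.0))   # -20.0
```

This is why the subsidized countries sit at the top of the chart and Brazil falls below zero: the existing fiscal treatment of fuel is baked into the efficient price.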

To put these tax estimates in perspective, the authors ask: How would CO2 emissions change if these nationally efficient carbon taxes were implemented?  The figure below summarizes emissions reductions estimates for each country (relative to the 2010 emissions that were actually observed).


Across all 20 countries, the authors estimate a 14 percent reduction in 2010 emissions. The majority of the reductions come from reduced coal consumption. In countries where the tax effectively offsets transportation fuel subsidies, reductions in diesel and gasoline use play a larger role.

These numerical results are, of course, sensitive to some of the underlying (and sometimes uncertain) assumptions that are documented in the paper. Qualitatively, the takeaway is that domestic co-benefits from climate change mitigation appear to be significant on average and highly variable across countries.

Boosting local motivation for global cooperation

The mere existence of co-benefits need not imply that countries will implement climate policies to pursue them. There are political constraints, distributional concerns, and other considerations that help explain why most countries have neglected to address domestic externality problems and other distortions in the first place. These constraints will presumably limit a government’s ability to reduce greenhouse gas emissions via a carbon tax or other means.

But large co-benefits can make it easier for countries to drum up support for reductions in domestic GHGs among a wide range of domestic actors, not all of whom are motivated by the spirit of global cooperation or the will to lead. Here at home, President Obama introduced the proposed Clean Power Plan, the centerpiece of his Climate Action Plan, in the asthma ward of a children’s hospital. Health co-benefits from reductions in local air pollution, including avoided asthma attacks, were estimated to yield approximately 60 percent of the gross benefits under the proposed Clean Power Plan. China offers another example of a country where concerns about air pollution are accelerating action on climate change (and vice versa).

Many economists will cringe at the thought of using climate change policies to address other unpriced externality problems. This is not the ideal, first-best path forward. Climate change policy is an indirect tool for addressing related but different problems of air pollution, traffic congestion, etc.  However, until efficient corrective policies are implemented, countries can and should consider these co-benefits in the design and implementation of climate change policy. This will help to mitigate domestic damages associated with the burning of fossil fuels at home while greasing the wheels of the global response to climate change in Paris and beyond.
