King Coal is Dethroned in the US – and That’s Good News for the Environment

This is the worst year in decades for U.S. coal. During the first six months of 2016, U.S. coal production was down a staggering 28 percent compared to 2015, and down 33 percent compared to 2014. For the first time ever, natural gas overtook coal as the top source of U.S. electricity generation last year and remains that way. Over the past five years, Appalachian coal production has been cut in half and many coal-burning power plants have been retired.


This is a remarkable decline. From its peak in 2008, U.S. coal production has declined by 500 million tons per year – that’s 3,000 fewer pounds of coal per year for each man, woman and child in the United States. A typical 60-foot train car holds 100 tons of coal, so the decline is the equivalent of five million fewer train cars each year, enough to go twice around the earth.
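As a quick back-of-the-envelope check of those comparisons (the U.S. population and the Earth's circumference below are round figures I'm assuming, not numbers from the post):

```python
# Back-of-the-envelope check of the comparisons above.
# Assumed round numbers: ~320 million U.S. residents and a ~24,900-mile equatorial circumference.
decline_tons = 500_000_000            # annual decline in U.S. coal production, tons
us_population = 320_000_000           # assumed
pounds_per_person = decline_tons * 2000 / us_population
print(f"{pounds_per_person:,.0f} fewer pounds of coal per person per year")      # ~3,100

car_capacity_tons = 100               # a typical 60-foot coal car
cars = decline_tons / car_capacity_tons
train_length_miles = cars * 60 / 5280
print(f"{cars:,.0f} fewer train cars, a train about {train_length_miles:,.0f} miles long")
print(f"roughly {train_length_miles / 24_900:.1f} times around the Earth")       # ~2.3
```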

This dramatic change has meant tens of thousands of lost coal jobs, raising many difficult social and policy questions for coal communities. But it’s an unequivocal benefit for the local and global environment. The question now is whether the trend will continue in the U.S. and, more importantly, in fast-growing economies around the world.

Health benefits from coal’s decline

Coal is 50 percent carbon, so burning less coal means lower carbon dioxide emissions. More than 90 percent of U.S. coal is used in electricity generation, so as cheap natural gas and environmental regulations have pushed out coal, the carbon intensity of U.S. electricity generation has fallen. That is the main reason U.S. carbon dioxide emissions are down 12 percent compared to 2005.
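To see roughly what that carbon share implies, here is an illustrative calculation. The 50 percent carbon figure is from the paragraph above; the molecular-weight conversion and the decision to ignore the gas that replaced the coal are my own simplifications:

```python
# Rough illustration: the CO2 embodied in the ~500-million-ton drop in annual coal production.
# The 50% carbon share is from the post; the rest are assumed approximations, and this gross
# figure ignores the CO2 emitted by the natural gas that replaced much of the coal.
coal_decline_tons = 500_000_000
carbon_fraction = 0.50
co2_per_ton_carbon = 44 / 12                 # molecular weights: CO2 (44) over carbon (12)
co2_avoided_gross = coal_decline_tons * carbon_fraction * co2_per_ton_carbon
print(f"~{co2_avoided_gross / 1e6:,.0f} million tons of CO2 per year, before netting out gas")
```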

Perhaps even more important, burning less coal means less air pollution. Since 2010, U.S. sulfur dioxide emissions have decreased 57 percent, and nitrogen oxide emissions have decreased 19 percent. These steep declines reflect less coal being burned, as well as upgraded pollution control equipment at about one-quarter of existing coal plants in response to new rules from the U.S. Environmental Protection Agency.


Coal waits to be added to a train at the Hobet mine in Boone County, West Virginia. Jonathan Ernst/Reuters

These reductions are important because air pollution is a major health risk. Stroke, heart disease, lung cancer, respiratory disease and asthma are all associated with air pollution. Burning coal is about 18 times worse than burning natural gas in terms of local air pollution, so substituting natural gas for coal lowers health risks substantially.

Economists have calculated that the environmental damages from coal are US$28 per megawatt-hour for air pollution and $36 per megawatt-hour for carbon dioxide. U.S. coal generation is down from its peak by at least 700 million megawatt-hours annually, so this is $45 billion annually in environmental benefits. The decline of coal is good for human health and good for the environment.
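The $45 billion figure follows directly from the per-megawatt-hour damages and the generation decline quoted above; a minimal check:

```python
# Minimal check of the environmental-benefit arithmetic, using the figures quoted above.
air_pollution_damage = 28              # $ per MWh
co2_damage = 36                        # $ per MWh
generation_decline_mwh = 700_000_000   # at least 700 million MWh less coal generation per year
benefit = (air_pollution_damage + co2_damage) * generation_decline_mwh
print(f"~${benefit / 1e9:.0f} billion per year in avoided damages")   # ~$45 billion
```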

India and China

The global outlook for coal is more mixed. India, for example, has doubled coal consumption since 2005 and now exceeds U.S. consumption. Energy consumption in India and other developing countries has consistently exceeded forecasts, so don’t be surprised if coal consumption continues to surge upward in low-income countries.

In middle-income countries, however, there are signs that coal consumption may be slowing down. Low natural gas prices and environmental concerns are challenging coal not only in the U.S. but around the world, and forecasts from EIA and BP have global coal consumption slowing considerably over the next several years.

Particularly important is China, where coal consumption almost tripled between 2000 and 2012, but more recently has slowed considerably. Some are arguing that China’s coal consumption may have already peaked, as the Chinese economy shifts away from heavy industry and toward cleaner energy sources. If correct, this is an astonishing development, as China represents 50 percent of global coal consumption and because previous projections had put China’s peak at 2030 or beyond.


A smoggy morning in Delhi, India. Anindito Mukherjee/Reuters

The recent experience in India and China points to what environmental economists call the "Environmental Kuznets Curve." This is the idea that as a country grows richer, pollution follows an inverted "U" pattern, first increasing at low income levels, then eventually decreasing as the country grows richer still. India is on the steep upward part of the curve, while China is, perhaps, reaching the peak.
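One stylized way to write down the Kuznets-curve idea, with an invented functional form and made-up coefficients (purely illustrative, not estimates from any study):

```python
import numpy as np

# Stylized Environmental Kuznets Curve: emissions rise with (log) income, peak, then fall.
# The quadratic form and the coefficients are invented purely for illustration.
b1, b2 = 5.0, 0.25
log_income = np.linspace(6, 11, 6)                 # roughly $400 to $60,000 per capita
emissions = b1 * log_income - b2 * log_income**2   # inverted-U in log income

peak = b1 / (2 * b2)                               # where the quadratic turns over
print(np.round(emissions, 1))                      # rises, then flattens and falls
print(f"Emissions peak at income of about ${np.exp(peak):,.0f} per capita (illustrative only)")
```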

Global health benefits of cutting coal

A global decrease in coal consumption would have enormous environmental benefits. Whereas most U.S. coal plants are equipped with scrubbers and other pollution control equipment, this is not the case in many other parts of the world. Thus, moving off coal could yield much larger reductions in sulfur dioxide, nitrogen oxides, and other pollutants than even the sizeable recent U.S. declines.

Of course, countries like China could also install scrubbers and keep using coal, thereby addressing local air pollution without lowering carbon dioxide emissions. But at some level of relative costs, it becomes cheaper to simply start with a cleaner generation source. Scrubbers and other pollution control equipment are expensive to install and expensive to run, which hurts the economics of coal-fired power plants relative to natural gas and renewables.

Broader declines in coal consumption would go a long way toward meeting the world's climate goals. Globally, we still use more than 1.2 tons of coal per person annually. More than 40 percent of total global carbon dioxide emissions come from coal, so global climate change policy has correctly focused squarely on reducing coal consumption.

If the recent U.S. declines are indicative of what is to be expected elsewhere in the world, then this goal appears to be becoming more attainable, which is very good news for the global environment.
This blog post is available on The Conversation.


Fixing a major flaw in cap-and-trade

While many Californians are spending August burning fossil fuels to travel to vacation destinations, the state legislature is negotiating with Gov. Brown over whether and how to extend California's cap-and-trade program to reduce carbon dioxide and other greenhouse gases (GHGs). The program, which began in 2013, is currently scheduled to run through 2020, so the state is now pondering what comes after 2020.

The program requires major GHG sources to buy “allowances” to cover their emissions, and each year reduces the total number of allowances available, the “cap”.  The allowances are tradeable and their price is the incentive for firms to reduce emissions.  A high price makes emitters very motivated to cut back, while a low price indicates that they can get down to the cap with modest efforts.

Before committing to a post-2020 plan, however, policymakers must understand why the cap-and-trade program thus far has been a disappointment, yielding allowance prices at the administrative price floor and having little impact on total state GHG emissions.  California’s price is a little below $13/ton, which translates to about 13 cents per gallon at the gas pump and raises electricity prices by less than one cent per kilowatt-hour.
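A rough sanity check on how an allowance price maps into those retail figures; the gasoline and power-plant emission factors below are standard approximations I'm assuming, not numbers from the post:

```python
# Translate a carbon allowance price into rough retail price impacts.
# Assumed emission factors: ~8.9 kg CO2 per gallon of gasoline and ~0.4 tons CO2 per MWh
# for gas-fired generation (both approximate).
allowance_price = 13.0                     # $ per metric ton of CO2

gasoline_tons_per_gallon = 0.0089
cents_per_gallon = allowance_price * gasoline_tons_per_gallon * 100
print(f"~{cents_per_gallon:.0f} cents per gallon of gasoline")   # ~12 cents, close to the post's figure

gas_plant_tons_per_mwh = 0.4
cents_per_kwh = allowance_price * gas_plant_tons_per_mwh / 1000 * 100
print(f"~{cents_per_kwh:.1f} cents per kWh of electricity")      # ~0.5 cents, well under a penny
```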

The low prices in the three major markets for GHGs mean little impact on behavior

And it’s not just California. The two other major cap-and-trade markets for greenhouse gases – the EU’s Emissions Trading System and the Regional Greenhouse Gas Initiative (RGGI) in the northeastern U.S. – have also seen very low prices (about $5/ton in both markets) and scant evidence that the markets have delivered emissions reductions. In fact, the low prices in the EU ETS and RGGI have persisted even after program administrators effectively lowered the emissions caps to try to goose up the prices.

In all of these markets, some political leaders have argued the outcomes demonstrate that other policies – such as increased auto fuel economy and requiring more electricity from renewable sources – have effectively reduced emissions without much help from a price on GHGs. That view is partially right, but a study that Jim Bushnell, Frank Wolak, Matt Zaragoza-Watkins and I released last Tuesday shows that a major predictor of variation in GHG emissions is the economy.  While emissions aren’t perfectly linked to economic output, more jobs and more output mean generating more electricity and burning more gasoline, diesel and natural gas, the largest drivers of GHG emissions.

Accurately predicting California’s GSP 10-15 years in the future is extremely difficult

Because it is extremely difficult to predict economic growth a decade or more in the future, there is huge uncertainty about how much GHGs an economy will spew out over long periods, even in the absence of any climate policies, what climate wonks call the “Business As Usual” (BAU) scenario.

If the economy grows more slowly than anticipated — as happened in all three cap-and-trade market areas after the goals of the programs were set – then BAU emissions will be low and reaching a prescribed reduction will be much easier than expected.  But if the economy suddenly takes off — as happened in California’s boom of the late 1990s — emissions will be much more difficult to restrain.  Our study finds that the impact of variation in economic growth on emissions is much greater than any predictable response to a price on emissions, at least to a price that is within the bounds of political acceptability.

California emissions since 1990 have fluctuated with economic growth

Our finding has important implications for extending California’s program beyond 2020.     If the state’s economy grows slowly, we will have no problem and the price in a cap-and-trade market will be very low.  In that case, however, the program will do little to reduce GHGs, because BAU emissions will be below the cap.  But if the economy does well, the cap will be very constraining and allowance prices could skyrocket, leading to calls for raising the emissions cap or shutting down the cap-and-trade program entirely.

Our study shows that the probability of hitting a middle ground — where allowance prices are not so low as to be ineffective, but not so high as to trigger a political backlash — is very low.  It’s like trying to guess how many miles you will drive over the next decade without knowing what job you’ll have or where you will live.

So, can California’s cap-and-trade program be saved? Yes. But it will require moderating the view that there is one single emissions target that the state must hit. Instead, the program should be revised to have a price floor that is substantially higher than the current level, which is so low that it does not significantly change the behavior of emitters.   And the program should have a credible price ceiling at a level that won’t trigger a political crisis.  The current program has a small buffer of allowances that can be released at high prices, but would have still risked skyrocketing prices if California’s economy had experienced more robust growth.

The state would enforce the price ceiling and floor by changing the supply of allowances in order to keep the price within the acceptable range. California would refuse to sell additional allowances at a price below the floor. This is already state policy, but the floor is too low. California would also stand ready to sell any additional allowances that emitters need to meet their compliance obligation at the price ceiling.

Essentially, the floor and ceiling would be a recognition that if the cost of reducing emissions is low, we should do more reductions rather than just letting the price fall to zero, and if the cost is high, we should do less rather than letting the price of the program shoot up to unacceptable levels.
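A minimal sketch of the supply rule a floor-and-ceiling collar implies, with invented numbers and deliberately simplified logic rather than the actual auction mechanics:

```python
# Illustrative price-collar logic for an allowance auction: withhold supply at the floor,
# release extra allowances at the ceiling. All numbers are made up for illustration.
def auction_supply(demand_at_floor, demand_at_ceiling, scheduled_cap,
                   price_floor=30.0, price_ceiling=60.0):
    """Return (allowances sold, clearing price); price is None when the market sets it."""
    if demand_at_floor < scheduled_cap:
        # Weak demand: sell only what clears at the floor and withhold the rest.
        return demand_at_floor, price_floor
    if demand_at_ceiling > scheduled_cap:
        # Strong demand: sell extra allowances so the price never exceeds the ceiling.
        return demand_at_ceiling, price_ceiling
    # Otherwise the scheduled cap binds and the price lands somewhere inside the collar.
    return scheduled_cap, None

print(auction_supply(demand_at_floor=50, demand_at_ceiling=30, scheduled_cap=80))   # slack market
print(auction_supply(demand_at_floor=120, demand_at_ceiling=95, scheduled_cap=80))  # tight market
```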

But should California’s cap-and-trade program be saved?  I think so.  My first choice would be to replace it with a tax on GHG emissions, setting a reliable price that would make it easier for businesses to plan and invest.  But cap-and-trade is already the law in California and with a credible price floor and ceiling it can still be an effective part of the state’s climate plan.

Putting a price on GHGs creates incentives for developing new technologies, and in the future might motivate large-scale switching from high-GHG to low-GHG energy sources as their relative costs change.  The magnitudes of these effects could be large, but they are extremely uncertain, which is why price ceilings and floors are so important in a cap-and-trade program.  With these adjustments, California can still demonstrate why market mechanisms should play a central role in fighting climate change while maintaining economic prosperity.

A shorter version of this post appeared in the Sacramento Bee August 14 (online Aug 11)

I’m still tweeting energy news articles and studies @BorensteinS


What the Heck Is Happening in the Developing World?

One of the most important energy graphs these days shows actual and projected energy consumption in the world, separated between developed and developing countries. A version based on data from the Energy Information Administration (EIA) is below.

The vertical axis measures total energy consumption, including gasoline, diesel, natural gas, electricity from all sources, etc. – all converted to a common unit of energy (the Btu, or British Thermal Unit). It reflects commercial energy sources, but excludes things like firewood that people collect on their own. The horizontal axis plots time, and the solid lines reflect historical (actual) data while the dotted lines reflect projections.

Strikingly, the developing world – approximated on the graph as countries that are not members of the OECD – has already passed the developed world (in 2007) and is projected to consume almost twice as much energy by 2040.

To me, this suggests strongly that anyone worried about world energy issues – including climate change, oil prices, etc. – should be focusing on the developing world.

Unfortunately, I fear that we know woefully little about energy consumption in the developing world. The series of graphs below depicts our ignorance starkly.

Let’s start with China, which single-handedly consumed 22% of world energy in 2013 (still far less per capita than in the US). The vertical axis again plots total energy consumption, but this time it’s measured relative to 1990 levels. The black line plots actual numbers. For example, since the black line is at 3.5 in 2010, that means that by 2010, China was consuming 3.5 times as much energy as it had in 1990. Pretty amazing growth! By comparison, US consumption in 2010 was only 15% higher than 1990 levels.

 

[Figure: China’s energy consumption relative to 1990, actual vs. successive EIA projections]

The colored lines on the graph depict the EIA’s projections, published in different annual issues of the International Energy Outlook (IEO). If you stare at 2010, 2015 and 2020, you see that the EIA has revised its projections upward considerably over a relatively short time period.

Start with the light blue line at the bottom, which reflects projections that were part of the 2002 IEO. At that time, the EIA thought China would only consume twice as much energy in 2010 as it did in 1990. But, China’s actual consumption surpassed that level midway through 2003, 6.5 years earlier than projected. So, by 2005, the EIA had increased its projection for 2010 by 30%. That’s a huge upward revision.

But it wasn’t nearly enough. The EIA continued to increase its projection, struggling to keep up with China’s actual growth.

Ah, you say. This is just a story about China, where there are lots of possible explanations for underestimated growth in energy, including faster than expected GDP growth, rapid industrialization, etc.

But, similar stories emerge for Africa and India. The EIA has recently revised projections pretty dramatically, and most of the revisions are upwards.

[Figure: Africa’s energy consumption relative to 1990, actual vs. EIA projections]

Finally, for India more than Africa, the projections have been too low.

[Figure: India’s energy consumption relative to 1990, actual vs. EIA projections]

And, this is not a problem in the developed world. The figure below contains a similar graph for the US. Note that the scale is different from that for the developing regions, so the revisions have been pretty minuscule in comparison. Also, they’ve generally been downward.

[Figure: U.S. energy consumption relative to 1990, actual vs. EIA projections]

A couple points to keep in mind:

  • It may seem like I’m picking on the EIA. I’m not trying to. They are doing an incredibly important job with very few resources. (The International Energy Outlook was recently demoted from an annual publication to roughly every other year.) Also, the EIA is not alone. The International Energy Agency and BP – two other big names in world energy reporting – have also had to revise projections upward to keep up with energy demand in developing countries.
  • The EIA and other organizations are careful not to describe their projections as forecasts. The EIA, for example, notes that, “potential impacts of pending or proposed legislation, regulations, and standards are not reflected in the projections.” I doubt that omission explains the discrepancies in the developing regions, though. I have tried to back out how much of the underestimate is due to misjudged GDP growth, and I don’t think that’s a big share either, at least in China. I suspect that we need a better underlying model for how GDP translates to energy consumption in the developing world, the point of this academic paper.
  • Policymakers in the developing world appear to appreciate this issue. We recently launched a 5-year research project, funded by the Department for International Development (the UK’s analog of USAID) and joint with Oxford Policy Management, to study energy in the developing world, focusing on sub-Saharan Africa and South Asia. As part of this project, we hosted a policy conference in Dar es Salaam to hear from East African policymakers about the pressing issues they faced. One of the main themes that emerged was the difficulty of planning without better demand forecasts.
  • Some might argue that markets will solve this problem. The EIA is just some government agency that few are paying attention to, or so the argument might go. If you have real money at stake in understanding future energy consumption in the developing world, you would not hire a forecaster who was off by 75% (actual consumption reached 3.5 times 1990 levels against a projected 2 times).

I do not know who is using the EIA projections for what, but I believe this logic breaks down for several reasons. For one, in many parts of the world, the private sector is not investing in energy infrastructure and the public sector may be relying on organizations like the EIA. Also, most investors don’t really care about 2040. Their discount rates are high enough that it doesn’t really matter what’s happening 25 years out. But, from the perspective of climate change, the world should care about energy consumption in 2040, 2050 and 2100.

This brings us back to the first graph in the post, which contained projections out to 2040. I fear that we are underestimating the 25-year out projections, just like we’ve underestimated recent trends. As researchers, we need to get under the hood and understand more about what is driving energy consumption in the developing world.


Evaluating Evaluations – Energy Efficiency in California

Last year, Governor Jerry Brown signed a law, Senate Bill 350, that sets out to double energy efficiency savings by 2030. Last week at the Democratic National Convention, Governor Brown focused his remarks on the importance of policies such as this to tackle climate change.

California Governor Jerry Brown at the California Science Center, Oct. 30, 2012. Photo Credit: (NASA/Bill Ingalls)

The precise energy efficiency targets haven’t been finalized, but they will be ambitious.

Meeting these targets will require an expansion of energy efficiency policymaking. Policymakers need to understand which programs work in energy efficiency and which don’t.

This is a daunting task. The California Public Utilities Commission’s (CPUC’s) energy efficiency efforts fund roughly 200 programs. The California Energy Commission (CEC) is regularly introducing new appliance and building standards. The evaluations of these activities are made public, but they can be hard to find and difficult to interpret. Additionally, policymakers may not have the time or training to critically assess the methodologies being used.

As a result, individual programs may not be getting enough scrutiny.

Many people working on energy efficiency may think the last thing we need is MORE evaluation. Energy efficiency is heavily evaluated.

I disagree. Today we have an opportunity to step up our game. We have access to more data and more rigorous evaluation techniques than ever before. It’s time for more evaluation, not less. In particular, it’s time to evaluate the evaluations.

To illustrate what I’m talking about, let’s look at an example from another heavily evaluated sector, criminal justice. The context is quite different, but the basic lessons are instructive.

In the 1980s many US states enacted stricter laws to reduce domestic violence. Rather than putting every offender in jail, courts began to mandate that offenders go through batterer intervention programs (BIPs). The initial evaluations of these programs found they were highly effective. These evaluations contributed to the justice system’s growing reliance on BIPs. In a 2009 report, the Family Violence Prevention Fund and US government’s National Institute of Justice estimated that between 1,500 and 2,500 such programs were operating.

As the cumulative number of evaluations grew, researchers began to undertake reviews that evaluated the evaluations, referred to as meta-analyses or systematic reviews. What they found was disappointing.

Many of the past evaluations that showed positive effects had methodological shortcomings. While some men completed a BIP and did not reoffend, others failed to complete court-mandated BIPs. Many men also became difficult to track down for surveys. The positive evaluations left out these populations, who were the people most likely to re-offend. More recent, careful studies that account for the systematic differences between men who stuck with the programs and those who didn’t have found that mandating the programs has little or no effect.

There is disagreement on what to do next. Some researchers and practitioners have argued that BIPs could still be effective for some people. What is needed is better targeting and tailoring of the BIPs, coupled with evaluation. Others have taken the position that policymakers should stop relying on these programs because they waste valuable resources and create a false sense of security for women who think their batterer will be reformed through the programs. This is a really important evidence-based debate that should result in more effective policy.

This example is not unique. Evaluations of evaluations, known as systematic reviews, are becoming prevalent in many sectors including medicine, international development, education and crime and justice.

 

The way a systematic review works is that a team of reviewers focuses on a specific policy intervention. The reviewers do an exhaustive search for all the evaluations of that intervention, including academic and consultant evaluations and evaluations from other geographies. Then the reviewers carefully consider each study. They particularly focus on how carefully each study considered what would have happened in the absence of the intervention – the counterfactual – and whether there is a risk that the results may be skewed one way or another.

The systematic review report discusses each study’s risk of bias and then reaches a conclusion about the intervention based on the studies with the lowest risk of bias. In some cases a systematic review may conclude that a program is effective, or that it is not. In other cases a review finds that there is insufficient evidence to reach a conclusion. In these cases the review recommends how evaluations should be performed in the future to reach a firmer conclusion.

There are several reasons why now is the time to begin doing systematic reviews of energy efficiency evaluations. First, a very large number of evaluations have been completed across the country and world. There is value in reviewing and synthesizing these evaluations so that policymakers everywhere have access to the best evidence. Second, new statistical approaches are taking hold in energy, fueled in part by smart meter data. Systematic reviews can help policymakers make sense of the diversity of approaches. Third, energy efficiency is taking on increasing importance, as reflected in ambitious goals and growing spending. The evidence base needs to be strong to ensure the resources are being used effectively.

Research conducted at The E2e Project points to questions that systematic reviews could help answer. When are ground-up engineering estimates most appropriate to use? How important is the rebound effect? What considerations are most important when embedding evaluations into program design? What can interval smart meter data tell us about the effectiveness of programs that other approaches cannot?

Several of these were highlighted by agency staff at an energy efficiency workshop held by the CEC last month.

California produces only 1% of global greenhouse gas emissions. Given that, as Severin emphasized in a prior blog, the state’s policies can’t possibly have a meaningful direct impact on climate change. Instead, the way California can best address the climate change challenge is through invention and learning, then exporting the knowledge to the world.

In the case of energy efficiency, California should focus on finding which policy interventions are most effective and sharing the findings. Policymakers should take a look at systematic reviews as a tool to accomplish this.


The Promise and Perils of Linking Carbon Markets

The theme of the week is “We’re stronger together“.  This rallying cry applies in lots of places, including climate change mitigation!   So this week’s blog looks at how this theme is playing out in carbon markets. A good place to start is California’s recent proposal to extend its GHG cap-and-trade program beyond 2020. One of the many notable developments covered by this proposal is a new linkage between California’s carbon market and the rest of the world.


Notes: The graph plots 2020 emissions caps. Quebec and California have been linked since 2014.  The proposed link with Ontario would take effect in 2017.  Emissions numbers summarized in the graph come from here, and here.

Admittedly, I am uniquely positioned to get really excited about linking the province of Ontario (where I was born and raised) with the state of California (my home of 10+ years) under the auspices of the California carbon market (an institution I spend a lot of time thinking about).  But excitement and interest in this “Ontegration” extends well beyond the Canadian economist diaspora. Why?  Because many see this kind of linkage between independent climate change policies as the most promising – albeit circuitous – means to an elusive end (meaningful climate change mitigation).

How did we get here?

After years of work to establish a globally coordinated “top-down” climate policy with very limited success, there’s been an important pivot towards a more decentralized, bottom-up strategy.  This change in course is motivated by the idea that more progress can be made if each jurisdiction is free to tailor its climate change mitigation efforts to match its own appetite for climate policy action.  Whether, how, and when these independent carbon policies should link together so that regulated entities in one region can use allowances from another is viewed as “one of the most important questions facing researchers and policy-makers.”

To grease the wheels of this coming-together process, the Paris agreement provides a framework to support bottom-up policy linkages. International organizations such as the World Bank are working hard to translate this framework into on-the-ground success stories.  But so far, real-world carbon market policy linkages are few and far between.

I can count the number of linkages between independent trading programs on one hand (the  EU ETS is  linked to Norway, Iceland, Switzerland, and Liechtenstein. California is linked with Quebec).  Post-Brexit, we’ll probably see one more (after Brexiting, a likely outcome is that the UK will establish its own carbon market to link with the EU ETS).  The California-Ontario link is a good news addition to this list, which is why Ontegration is generating both hope and headlines.

Why link?

The most fundamental argument for linking emissions trading programs boils down to simple economics.  Why pay $20 to reduce a metric ton of carbon in California when you can pay $1 to reduce a metric ton in China?  If marginal abatement costs differ across regional cap-and-trade programs, allowing emissions permits to flow between programs to seek out the least cost abatement options will reduce the overall cost of meeting a collective emissions target. Of course, how this net gain is allocated across linkers will depend on how the linkage is implemented.
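A toy two-region example of that logic, with invented linear marginal abatement cost curves: linking lets permits flow until marginal costs are equal, which lowers the total bill for the same combined reduction.

```python
# Toy gains-from-trade example with two regions and linear marginal abatement cost (MAC) curves:
# MAC_A(q) = a*q and MAC_B(q) = b*q, where q is tons abated. All numbers are invented.
a, b = 2.0, 0.5                      # region A is the high-cost abater
target = 90.0                        # combined tons that must be abated

def total_cost(qa):                  # area under each MAC curve
    qb = target - qa
    return 0.5 * a * qa**2 + 0.5 * b * qb**2

# No linking: each region abates its own half of the target.
autarky = total_cost(target / 2)

# Linking: permits flow until marginal costs are equal, i.e. a*qa = b*qb.
qa_link = target * b / (a + b)
linked = total_cost(qa_link)

print(f"Cost without linking: {autarky:.0f}, with linking: {linked:.0f}")
print(f"Common permit price after linking: {a * qa_link:.1f} $/ton")
```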

Other benefits include:

  • More integrated carbon markets are more liquid and can be less volatile, although market linkages can also propagate shocks more directly from one country to another. The EU ETS provides a case in point. The chaos that followed the Brexit referendum has directly (and significantly) impacted the price of carbon in 31 countries.
  • Economies of scale. Some jurisdictions are simply too small to support a well-functioning carbon market. If program operations are combined, administrative costs and effort can be shared across multiple jurisdictions. Larger markets also reduce risk of market power – a major concern for small jurisdictions trying to go it alone.
  • Political considerations. Politics are critical in determining whether a linkage will fly or die. Ontegration offers a case in point. California is happy to demonstrate that its climate policy initiative has brought other jurisdictions onto the carbon market board. In Ontario, the case for moving ahead with cap-and-trade is easier to make when the proposal involves plugging in to an established carbon market operation versus building a market from the ground up.

Market linkage comes with strings attached

The appeal of a bottom-up climate policy is that individual jurisdictions have the autonomy to pick and choose their own policy parameters.  But I am not going to link my carbon market with yours if I’m worried you’re going to introduce rogue policy changes that drive my carbon price and/or carbon emissions in an unpalatable direction. In other words, mutually acceptable linkage agreements will almost certainly impose limits on autonomy because the policy design choices in one jurisdiction affect outcomes in others.

Linkage does not require that all market design features are perfectly harmonized, but it does require careful coordination of design elements deemed to be critical. The Quebec-California linkage agreement provides a well documented example.   These kinds of deliberations get increasingly complex as the number of jurisdictions increases. Negotiations also become much more complicated when the benefits from linkage are distributed unequally across regions.

An important, related concern is that a linked network of carbon markets is only as strong as its weakest link. If one region lacks the capacity to monitor and enforce market rules effectively, this can undermine the environmental integrity of the entire system.


Limits to linkage?

Recent developments in Europe and California are demonstrating how carbon markets can be linked when partners see (mostly) eye to eye, market designs are similar, and political objectives are aligned.  Given current carbon market conditions, linkages have yet to deliver much (if anything) in terms of economic gains from trade.  But they have expanded the scope of carbon markets and laid down foundations for future cooperation. Some good news for a change.

Forging linkages between less compatible systems will require more effort and ingenuity.   It has been suggested, for example, that regions with more aggressive caps might be convinced to link with countries imposing less aggressive caps if “carbon exchange rates”  define favorable  terms of permit trade for regions with more ambitious mandated reductions.  Distorting market incentives in this way might help eliminate political barriers to linkage – but this would also undermine a fundamental economic reason for linking markets in the first place.  Mitigation costs will not be minimized if linkage agreements drive a wedge between regional mitigation incentives. At some point, the costs of policy coordination start to outweigh the economic and environmental benefits of linking.

We’re stronger when we work together. This is particularly true in fighting a global threat like climate change. But the explicit linking of carbon markets is only one way to join together and move global climate change mitigation forward.  We should celebrate recent carbon market  linkages, but realize they are one means to an end – not an end in themselves.

 


Who’s Stranded Now?

Utility costs are like taxes.  Everyone knows they have to be paid, but most people have a reason that their own share should be smaller.  And, just as with taxes, there are limitless ways to divide up the revenue burden.

It’s been 20 years since electricity deregulation raised the specter of stranded utility costs – past investments that have turned out to deliver less value than was originally expected — and the question of who should pay those costs: Electricity ratepayers? Customers switching to buy from a competing electricity supplier? Utility shareholders?

So now it’s 2016 and we are back to the same question.  Electricity customers are leaving or are greatly reducing purchases.  Some customers are installing rooftop solar while still buying some power from the utility.  Others are switching to a community choice provider (as I discussed in February) or proposing municipalization.   As utility sales decline, once again we are debating who should pay for utility investments that are less valuable in the new regime.

Utilities are responding mostly as they did in the 1990s, arguing that their investments were deemed prudent by regulators at the time they were made, so their own shareholders should not be on the hook.  In somber tones they invoke a “regulatory compact” that is supposed to assure them a reasonable return on investments in exchange for an obligation to provide safe, affordable, reliable service.  Basically, they argue, a deal’s a deal, even when the market or regulatory environment changes in ways that devalue their installed capital.

Opponents respond by saying “Not so fast.  Utility shareholders have received investment returns comparable to the rates earned by unregulated companies while bearing far less risk. Yes, the market is changing and that is hurting your company. Welcome to a world with some risk.”  And furthermore, the reply continues, the utility commissions that approved those investments were too cozy or politically connected with the utilities, so the deals made shouldn’t be binding.

Both arguments have some merit.  Regulators should try to fulfill commitments, out of fairness, to maintain credibility, and to create a financial environment that can support investment.  But if the regulatory process that made those commitments was so broken that it was not legitimate, then the argument for sticking with unfair commitments is less compelling.

So it has been ironic to now see the arguments of each side flip as regulators reconsider some of the terms set for the current wave of partially exiting customers.

In December 2015, regulators in Nevada changed the rules for rooftop solar, abandoning net metering both for new installations and for customers who already have panels on their roofs.  The decision followed another ruling in Arizona that reduced incentives to install rooftop solar.  The howls of protest from solar customers included many references to unfairly changing the rules, even though there was no explicit long-term commitment to those rules, just expectations.

While Nevada was rocking the distributed generation (DG) world, California was considering many of the same issues, and the solar advocates’ arguments on net metering policies followed along similar lines: the state has made a commitment to building rooftop solar and breaking that commitment would have dire consequences.

In California, the advocates were mostly victorious.  Net metering was extended for a few years, though the commissioners suggested they could follow Nevada next time if they don’t see more evidence solar customers are paying their fair share of costs.

While solar customers viewed these unwritten commitments in California, Nevada and elsewhere as sacrosanct, utilities argued they are over-the-top subsidies that don’t make public policy sense and should be scaled back.  More than one utility executive, or manager at a grid-scale renewables company, has complained that subsidies for distributed generation are being driven by the outsize political influence of DG solar companies, and that state regulators have lost sight of the original goals of reducing greenhouse gases while maintaining affordable electricity.

At about the same time as they were reviewing policies towards distributed generation, the California Public Utilities Commission was also resetting exit fees for departing customers who join community choice providers, using a formula that had been established in a previous decision.  These fees were created to compensate utilities for the power contracts they signed at what are now above-market prices — many for renewable power contracts in the early, expensive days — and to protect remaining customers from having to cover an unfair share of those contracts.  Community choice advocates argued for delaying or abandoning the increase, while the utilities returned to the view that a deal’s a deal.

Watching the different sides repeatedly invoke and abandon the imperative of sticking with policy directions set in previous decisions is a bit like watching the Republicans and Democrats in the Senate fight over legislative procedures. Whichever side is in ascendancy uses the rules to support their agenda, while the opposing side is shocked by the blatant abuse of power.  And then instantly the roles reverse when power shifts.

The big difference, of course, is there is no regulatory agency overseeing Congress that can call them on their hypocritical arguments. Electricity regulators can, and should, do so when market participants selectively argue the sanctity of whatever existing policy they support.

That’s not to say that regulators should blithely switch policies ignoring the cost of the uncertainty it creates.  Policy consistency is important, up to a point.  New information, new analysis, and new technologies, however, constantly alter the energy landscape. Policies that are written with clear dates of future review and potential off ramps may discourage some investment, but they seem just as likely to maintain pressure for verifiable high performance.  Given the dynamism in energy technology and climate science, regulators should be extremely cautious about making inflexible policy commitments to specific technologies.

When policies are re-evaluated it is crucial to separate the determination of whether overall a policy merits continuation from the allocation of gains and losses if it is halted.    Some party will always lose when policy changes.  The regulatory or legal process can determine if losers are due compensation, but that mustn’t be allowed to lock policy into the status quo.

In the next 10 years, we will likely see more change in energy systems than we have seen in the last 50.  While government policy should be fair to market participants it must also be nimble and adaptive to a changing landscape.  Only with such flexibility will we be able to address the growing environmental impact and affordability challenges that we face.


Move Over PEMEX

Gasoline stations in Mexico have all been exactly the same for decades. PEMEX, the state-owned behemoth, has been the only show in town. Pull up to any of 11,400 stations nationwide and the experience is very similar: PEMEX stations selling PEMEX gasoline.


This is all changing.  Starting April 1, 2016, private companies can now import, transport, store, and distribute gasoline and diesel. The change is part of a broader set of energy reforms aimed at increasing private investment throughout the Mexican energy sector. Already, non-PEMEX stations like “La Gas” and “Hidrosina” are starting to appear.


Making this a competitive market will not be easy, but the reforms have great potential to improve service quality and, eventually, to increase efficiency and reduce prices.

The reaction thus far has been positive. Twitter users like Rubén Linares have reported that the new stations are better illuminated, cleaner, and more modern. The new stations are also introducing other innovations like electronic payment systems. If these stations can earn a reputation for better quality service they will pull business away from PEMEX stations.

PEMEX stations are franchised. So there is already some incentive for station owners to provide good service. The problem is, however, that because all stations are branded PEMEX, there is also severe free riding. Why improve your station’s service quality when your efforts will mostly benefit other owners? Moreover, PEMEX’s franchising rules severely limit the scope for differentiation. For example, all PEMEX stations have the exact same limited snack and drink options.

Eventually, retail competition will also put downward pressure on prices. Currently, retail prices for gasoline and diesel are set nationally by the Mexican finance ministry. The current price for non-premium (“magna”) gasoline is $2.75 per gallon compared to an average price in the United States of $2.29 per gallon. All stations in Mexico charge this price, including the non-PEMEX stations.

However, these price controls for retail gasoline and diesel will be removed starting January 1, 2018. It will be very interesting to see what happens to prices. As in any market, there will be stations that enjoy local market power. But there will also be stations that cut prices to increase market share and new stations that open in high-demand locations.


In the longer run, a competitive retail market will also help increase efficiency upstream.  You might ask, where do gas stations in Mexico buy gasoline? For the moment, they buy it from PEMEX. This vertical structure raises a couple of serious concerns. Probably most importantly, you worry about input foreclosure, i.e., that PEMEX will favor PEMEX-branded stations. PEMEX could try to charge lower prices to PEMEX stations, or could try to refuse to sell products to non-PEMEX stations. It will be important for the Mexican regulator to keep a close watch on this type of non-competitive behavior, perhaps with the assistance of an independent advisory panel like California’s Petroleum Market Advisory Committee.

We are also beginning to see private investment in these upstream sectors. In particular, some major petroleum consumers are starting to import their own petroleum products. It will be important to ensure that new products conform with Mexico’s environmental regulations (e.g. low sulfur gasoline), but this end run around PEMEX is extremely promising. Not only can this reduce costs for consumers, but it will also put competitive pressure on PEMEX to reduce costs.


These upstream markets will not become competitive overnight. PEMEX has long dominated petroleum production, refining, imports, transport, and storage, so price regulation will be necessary for wholesale petroleum products for the foreseeable future. But, over time, this price regulation is going to become less and less necessary as private investment expands. And moving forward, these investment decisions will be increasingly driven by market factors, presumably leading to more efficient choices.

So move over PEMEX.  It is going to be an exciting next couple of years in the Mexican petroleum sector.

 

 


Mitigation Bingo

I am clearly not a historian, but has there ever been a more dynamic, physically fit and forward-looking trio in charge of North America’s future? It’s not gender balanced, but hey, maybe we can fix that in November. At their recent summit, the TOP powerhouse (get it? Trudeau, Obama, Peña Nieto) announced that by 2025, 50 percent of the continent’s electricity will come from “clean” sources. Clean here means hydro, wind, solar, geothermal, CCS, demand reduction, and of course nuclear power. Currently these sources provide 37 percent of the three countries’ power. So this is an ambitious goal over a relatively short time horizon. It provides lots of flexibility to meet the goal, as Canada has ample potential hydro resources and Mexico’s solar potential is vast.

Don’t get me wrong, I am excited about this proposal, but I would like to take a step back. Climate change will affect the power sector in a number of ways. We have largely focused on reducing greenhouse gas emissions (mitigation), which is of course the root of the problem itself. Regulators outdo themselves with ambitious goals. California is going 80% below baseline by 2050! Everywhere you look there are sexy combinations of targets and timelines. Now it’s 50/25 for the electricity sector nationally! On this blog, my economics friends and I have argued repeatedly that it would be nice to set one GHG target and achieve it in a least-cost fashion, not in this game of mitigation bingo. Such a beautifully coordinated abatement effort across sectors and sources would make me happier than Kevin Durant moving to the Golden State Warriors.

Unfortunately, some relative who clearly doesn’t get me already got me “Mitigation Bingo” for my birthday. So I am going to spend the rest of my money on a game of “Adaptation Pursuit”.  A comprehensive private and public sector plan on climate change for the power sector should look more broadly at what’s coming down the atmosphere and think of plans to deal with those impacts. Many impacts of climate change could make reducing greenhouse gas emissions more difficult. To clarify thoughts, I drew you the picture below (another reason no one likes economists is the fact that we can’t really draw).


The four plagues of the climate apocalypse in this context are fire (more frequent, and likely more intense), heat, floods (sea level rise) and drought (related to heat and in some areas less rainfall). Here are a number of ways these four will negatively affect the power sector:

1)      Wildfires will affect transmission capacity. Lines do not like to get hot or dusty (fires generate a ton of dust). More fires may lead to lower transmission capacity at peak times. So maintenance and construction plans for existing and new transmission lines (which will ship those green electrons from the middle of nowhere to you) will have to take into account the new normal. Wildfires may also affect substation and generation facilities directly, of course.

2)      Many power plants are built near bodies of water, which is needed for cooling. Sea level rise will lead to higher flood events, which might negatively affect these plants and substations and require investments in additional seawalls. Or maybe one would want to build plants elsewhere.

3)      Nuclear power plants are also frequently located near the ocean. As we lack permanent storage deep underground for spent fuel rods in the US, spent fuel rods live in secure facilities at the plants. If sea levels rise, and the higher 100-year flood events change the new normal, we have to think about storage more carefully in the intermediate run. In the very long run, if there is really drastic sea level rise, the value of deep underground storage goes up.

4)      As we have recently experienced, droughts are bad for California and just about everywhere else. In agricultural areas, drought is fought by pumping water from wells, which requires lots of energy (e.g., electricity or diesel). If in the long run we have to drill deeper and deeper for water, these energy costs will rise.

5)      As streams and small rivers run dry and hot during drought periods, this may lead to insufficient supplies of cooling water necessary for both fossil and nuclear power plants. Nobody likes to think about overheating power plants. Of course, less water in streams is bad for hydropower generation as well.

6)      But the big Kahuna is heat. The biggest economic impacts will likely come from increased demand for cooling during hot days. Lucas, Catherine and I have written extensively on the subject. More people will install air conditioners and operate them more frequently. The big costs here are from more electricity consumption. Given the possibly sizable necessary investments in peak capacity, which costs about $750 per kW, higher demand during peak load will be a pricey endeavor.

7)      We already know that transmission lines lose capacity at high temperatures. But on top of that some types of gas fired power plants generate less power per unit of gas. This means that when it’s hot outside, not only is demand higher, but many plants see decreases in output.

None of these seven points requires us to reach for our anti-anxiety medication. There is time to address all of these issues. Which is what the average 20-year-old thinks about heart conditions while biting into a triple bacon cheeseburger. It’s time to come up with a comprehensive plan for how we are going to deal with these issues and act accordingly. This will save us a lot of headaches later on.


Finding Energy Efficiency in an Unexpected Place – The Cockpit

I suspect that most energy economists think there are more unexploited opportunities for energy efficiency in homes than in firms. Firms are cost-minimizers, after all – they’re in the business of making things with the fewest possible inputs. And, energy is an important input for many firms, particularly airlines. So, not even McKinsey – in its exhaustive catalog of potential energy efficiency measures – identifies improved airline fuel efficiency as an opportunity.

Surprisingly, a new NBER working paper finds a significant opportunity for fuel savings in the airline industry. In a research coup that makes people like me drool with envy, Greer Gosnell, John List, and Rob Metcalfe convinced Virgin Atlantic Airways to let them run an experiment. And, not just any experiment – one that involved their captains, the head honchos, as in, “This is your captain speaking.”

The cockpit of a Virgin Atlantic Airbus A340

The researchers sent three randomly selected groups of Virgin Atlantic captains either (a) information about average fuel efficiency on the flights the captain made in the previous month, (b) the same personalized information as group (a) plus personalized targets for the coming month, or (c) personalized information and targets, plus an offer to donate 10 GBP (which was worth more pre-Brexit…) to a charity for each target achieved. A fourth set of captains was in the control group and knew the experiment was going on, but didn’t receive a monthly mailing.

Pilots who only received information – group (a) – significantly improved their fuel efficiency, while pilots who received personalized targets improved their fuel efficiency even more, achieving about the same gains as the pilots who could donate savings bonuses to charity.

Perhaps most surprisingly, even pilots in the control group improved their fuel efficiency considerably relative to the months before the study began. The authors speculate that this is an example of the so-called Hawthorne Effect, which suggests that people behave differently when they know they’re being studied.

The adjustments yielded substantial savings, netting the airline more than $5.4 million in fuel savings over the 8-month pilot period. Presumably, this would scale up to about $8 million over a year – not too shabby considering that the company’s profits (before taxes and exceptional items) were around $20 million in the year the study took place. And, since the costs of running the experiment were very low – less than $3,000 to send a couple hundred letters a month – this appears to be an example of a negative cost abatement strategy. The authors calculate approximately -$250 per ton of CO2 (yes, negative, since the company saved money on net).
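Working through those figures (the implied CO2 tonnage below is simply back-solved from the numbers in this paragraph, not a statistic reported here):

```python
# Rough check of the fuel-savings arithmetic above. The implied CO2 tonnage is back-solved
# from the post's own figures, not a number reported in the paper or this post.
savings_8_months = 5_400_000          # $ saved over the 8-month experiment
annualized = savings_8_months * 12 / 8
print(f"~${annualized / 1e6:.0f} million per year")             # ~$8 million

experiment_cost = 3_000               # roughly, for the letters
abatement_cost_per_ton = -250         # $/ton CO2 (negative: the airline saved money on net)
implied_tons = (experiment_cost - savings_8_months) / abatement_cost_per_ton
print(f"implied abatement: ~{implied_tons:,.0f} tons of CO2")   # ~21,600 tons
```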

In my mental model, pilots are super-humans. They’re better than super-computers, and could probably even beat Garry Kasparov at chess. In other words, they always, always make the right decisions. This might be what my psyche requires to get on a plane, plus a function of being a teenager when the movie Top Gun came out.


So, what adjustments did these super-humans make to save fuel? And, do we really want them to be thinking about fuel efficiency and not MY SAFETY? As it turns out, most of the improvements involved simply – ahem – following the rules.

For example, when they’re taxiing, pilots are usually supposed to turn off half of the engines (e.g., one on a two-engine plane and two on a four-engine plane). Before the experiment, pilots did this on 35% of the flights, and after they knew they were being observed, this increased to 51%. Other adjustments involved following advice from air traffic control about more efficient routes or using real-time information about the baggage onboard to adjust fuel levels.

So, if even super-human pilots can be nudged into saving fuel, can we expect to find lots of similar opportunities across other industries? Think of the energy inputs controlled by power plant operators, truck drivers, or building supervisors. I’m personally optimistic, but two points temper my enthusiasm:

  • Nudging people to do things they weren’t doing before likely imposes additional costs. (Hunt Allcott and Judd Kessler explore this point in more detail here.) So, even if the cost of sending letters to the pilots is low, there may be other, unobserved costs. For example, whatever extra time it takes the pilots to update the fuel calculations is time they were previously spending doing something else. And, even if it was just getting a cup of coffee at the airport, it was something they must have enjoyed, because they chose to do it.

Gosnell, List and Metcalfe thought about this, and surveyed pilots after the experiment. They found that, if anything, pilots reported higher job satisfaction, especially those who met their personal goals and donated to charity. While we can’t rule out additional costs (unobserved costs are notoriously hard to quantify…), that result at least suggests they are small and possibly negative.

  • While the fuel savings are decent sized and statistically significant, they amount to small tweaks to the way an industry does business. But, climate scientists suggest that we need to reduce GHG emissions by around 80% to avoid dramatic disruptions, which we can’t do with only small tweaks. Every little bit certainly helps, but we can’t stop looking for more fundamental changes within sectors. (The Wall Street Journal had a recent piece on electric and hybrid planes, which still sound a bit futuristic, but you never know.)

As the researchers point out, providing feedback to the pilots involved large amounts of data (on over 40,000 flights), which were collected, quickly analyzed and sent to the pilots. At the most fundamental level, I see this experiment as an example of how using lots of personalized, high frequency data can give valuable feedback to decision-makers. Let’s hope more companies take similar opportunities to find savings this way – it’s in their best interest, as well as the climate’s.


Time to Unleash the Carbon Market?

What’s a ton of carbon (dioxide equivalent) worth? Not much if you ask the world’s carbon markets. The graph below summarizes prices and quantities covered by existing carbon emissions trading programs (green) and carbon taxes (blue).  Nearly all carbon market prices are below $13/ton.

[Figure: prices and covered emissions for existing carbon taxes and emissions trading programs. Source: State and Trends of Carbon Pricing 2015]

These low carbon prices have been making headlines, particularly in the context of the two largest carbon trading programs: Europe’s Emissions Trading Scheme (ETS) and California’s GHG emissions trading program. In California last month, the carbon allowance auction price hit the floor of $12.73, with only 11 percent of the 77.75 million allowances up for sale finding a willing buyer. European allowance prices have averaged around €6  in the first half of 2016, far below the €30 or more needed to encourage a shift away from coal-fired electricity generation.

The European lawmaker in charge of overhauling the EU ETS post-2020 has compared his carbon market without a price to “a car without an engine”. I would spin this automotive analogy a little differently. As far as I can tell, these carbon markets have a working engine. We’re just not allowing them enough room to drive.

Here in California, the GHG emissions trading program (covering 85 percent of the state’s GHG emissions) has been cast in a supporting role. The updated scoping plan projects that over 70 percent of emissions abatement required under the 2020 target will be driven by “complementary measures” (e.g. mandated investments in low carbon technologies) rather than the permit price.  Once you factor in offsets and the potential for emissions leakage and reshuffling, there’s not much work left for the carbon market to do.

In Europe the story is a little different. Because the ETS is less comprehensive (covering approximately 45 percent of emissions), many complementary measures are designed to tap abatement potential that lies outside the reach of the carbon market. But there are also important prescriptive measures mandating emissions reductions that fall within the scope of the EU ETS. To put this into some perspective, the value of interventions (i.e. subsidies, feed-in tariffs, etc.) designed to accelerate investments in renewable energy has significantly exceeded the market value of emissions allowances in recent years (thanks to Carolyn Fischer for highlighting this fact).

Prescriptive policies come at a cost

This preference for using prescriptive policies, rather than market mechanisms, to coordinate abatement helps explain why carbon prices are so low. Some simple graphs summarize the basics behind this cause and effect.

In the cartoon graph below, each colored block represents a different abatement activity (e.g. coal-to-gas fuel switching, renewable energy investments, energy efficiency improvements, etc.). Think Sesame Street meets the McKinsey curve. The width of each block measures the achievable emissions reductions; the height measures the cost per ton of emissions reduced.

[Figure: stylized abatement supply curve, with colored blocks A–E ordered from cheapest to most expensive]

In this cartoon cap-and-trade story, suppose baseline emissions are 200 and policy makers are seeking a 25% reduction. If we rely entirely on a permit market to get us there, we’d allocate 150 permits and let the market figure out where the 50 units of abatement will come from. An efficient market would drive investment in the lowest cost options: A + B + 1/2 C. The total abatement cost incurred to meet the target would be (20 x $10) + (20 x $20) + (10 x $50) = $1100. The market clearing price (and the marginal abatement cost per ton) would be $50.

Now imagine that, in addition to the permit market, complementary measures are introduced to mandate deployment of options D and E. These mandates take us 80% of the way toward meeting the emissions target. The role of the carbon market has been seriously diminished; we need only 10 more units of abatement to hit the target.

[Figure: the same abatement supply curve with options D and E mandated by complementary measures]

Under this scenario, the carbon market will incentivize investment in 10 units of A. The permit price drops to $10. The total cost of meeting the emissions target rises to (10 x $10) + (20 x $100) + (20 x $150) = $5100. And we wring our hands about low carbon prices and broken carbon markets.
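To make the block arithmetic concrete, here’s a minimal sketch of the two scenarios. The block widths and costs are just the cartoon numbers above; the little `meet_target` helper is my own illustration of “mandates first, then merit order,” not anything out of the papers linked here.

```python
# Cartoon abatement supply: options A-E each deliver 20 units of abatement
# at $10, $20, $50, $100 and $150 per unit. Baseline emissions are 200 and
# the cap requires 50 units of abatement.
BLOCKS = {"A": (20, 10), "B": (20, 20), "C": (20, 50), "D": (20, 100), "E": (20, 150)}

def meet_target(required, mandated=()):
    """Return (total abatement cost, market clearing price).

    Mandated blocks are deployed in full regardless of cost; the carbon
    market then fills whatever abatement is left, cheapest blocks first.
    """
    total_cost, remaining, clearing_price = 0, required, 0
    for name in mandated:                      # prescriptive measures go first
        width, cost = BLOCKS[name]
        total_cost += width * cost
        remaining -= width
    for name, (width, cost) in sorted(BLOCKS.items(), key=lambda kv: kv[1][1]):
        if name in mandated or remaining <= 0:
            continue
        used = min(width, remaining)           # market dispatches in merit order
        total_cost += used * cost
        remaining -= used
        clearing_price = cost                  # set by the marginal block used
    return total_cost, clearing_price

print(meet_target(50))                       # (1100, 50): market-only scenario
print(meet_target(50, mandated=("D", "E")))  # (5100, 10): D and E mandated first
```

Same $1100-versus-$5100 punchline as above: the mandates don’t just lower the permit price, they raise the total cost of hitting the cap.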

Of course, this cartoon picture omits lots of real-world complexities (see this important EI paper for a more detailed analysis of California’s abatement supply and allowance demand). But it illustrates two real-world considerations. First, when complementary measures mandate relatively expensive abatement options, the carbon price we observe in the market will not reflect the marginal cost of reducing emissions. Second, a reliance on complementary measures to reduce emissions can significantly drive up the costs of hitting a given emissions target.

In California and in Europe, there is growing evidence that low allowance prices in the carbon market belie much higher abatement costs associated with complementary policies. For example, this paper estimates that the California Solar Initiative delivered emissions reductions at a cost of $130–$196 per metric ton of CO2. California’s LCFS credit price (which reflects the marginal incentive to reduce a metric ton of CO2e) is currently averaging around $120 per metric ton of CO2. In Europe, researchers estimate that the implicit costs of renewable energy targets per metric ton of CO2 are on the order of hundreds of euros for solar (and for wind in some locations).

Time to unleash the carbon markets?

Looking out past 2020, more ambitious targets are being set and the process of charting a course to meet these targets is now underway.  This could be a turning point for carbon markets. How heavily are we going to lean on prescriptive policies versus carbon markets to meet these future emissions abatement goals?  If the increased stringency of future emissions targets is met with increasingly aggressive mandates and measures, we may be signing up for another round of low carbon prices.

I’m not suggesting we should leave *all* of the driving to the carbon markets. There are good reasons for complementing carbon markets with some truly complementary policies and mandates (some of which are fleshed out here). But there are also costs associated with keeping the carbon market mechanism on a tight leash while chasing emissions reductions with prescriptive mandates and programs (see, for example, Jim’s recent post here). Right now, carbon markets are hamstrung by a growing medley/cacophony of policies that drive allowance prices down. If we want to see carbon markets really work, we need to give them more work to do.
