Addicted to Oil: U.S. Gasoline Consumption is Higher than Ever

August was the biggest month ever for U.S. gasoline consumption. Americans used a staggering 9.7 million barrels per day. That’s more than a gallon per day for every U.S. man, woman and child.
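For readers who like to check that per-person figure, here is the back-of-envelope arithmetic in code (the population number is my assumption, not from the post):

```python
# Back-of-envelope check of the per-person figure above. The population
# number is a rough 2016 estimate I am assuming, not a figure from the post.
barrels_per_day = 9.7e6
gallons_per_barrel = 42            # one barrel of oil = 42 U.S. gallons
us_population = 322e6

gallons_per_person_per_day = barrels_per_day * gallons_per_barrel / us_population
print(f"{gallons_per_person_per_day:.2f} gallons per person per day")  # ~1.27
```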

The new peak comes as a surprise to many. In 2012, energy expert Daniel Yergin said, “The U.S. has already reached what we can call ‘peak demand.’” Many others agreed. The U.S. Department of Energy forecast in 2012 that U.S. gasoline consumption would steadily decline for the foreseeable future.


Source: Constructed by Lucas Davis (UC Berkeley) using EIA data ‘Motor Gasoline, 4-Week Averages.’

This seemed to make sense at the time. U.S. gasoline consumption had declined for five years in a row and, in 2012, was a million barrels per day below its July 2007 peak. Also in August 2012, President Obama had just announced aggressive new fuel economy standards that would push average vehicle fuel economy to 54 miles per gallon.

Fast forward to 2016, and U.S. gasoline consumption has increased steadily four years in a row. We now have a new peak. This dramatic reversal has important consequences for petroleum markets, the environment and the U.S. economy.

How did we get here? A number of factors contributed, including the Great Recession and a spike in gasoline prices at the end of the last decade, both of which are unlikely to be repeated any time soon. But the rebound should come as no surprise. With incomes increasing again and gasoline prices low, Americans are back to buying big cars and driving more miles than ever before.

Gas is cheap and Americans are back in their cars and trucks. viriyincy/flickr, CC BY-SA

The Great Recession

The slowdown in U.S. gasoline consumption between 2007 and 2012 occurred during the worst global recession since World War II. The National Bureau of Economic Research dates the Great Recession as beginning December 2007, exactly at the beginning of the slowdown in gasoline consumption. The economy remained anemic, with unemployment above 7 percent through 2013, just about when gasoline consumption started to increase again.

Economists have shown in dozens of studies that there is a robust positive relationship between income and gasoline consumption – when people have more to spend, gasoline usage goes up. During the Great Recession, Americans traded in their vehicles for more fuel-efficient models, and drove fewer miles. But now, as incomes are increasing again, Americans are buying bigger cars and trucks with bigger engines, and driving more total miles.

Gasoline Prices

The other important explanation is gasoline prices. During the first half of 2008, gasoline prices increased sharply. It is hard to remember now, but U.S. gasoline prices peaked during the summer of 2008 above US$4.00 per gallon, driven by crude oil prices that had topped out above $140/barrel.


Gasoline prices in Washington D.C. top $4 a gallon in 2008. brownpau/flickr, CC BY

These $4.00+ prices were short-lived, but gasoline prices nonetheless remained high through most of 2010 to 2014, before falling sharply in late 2014. Indeed, it was these high prices that contributed to the decrease in U.S. gasoline consumption between 2007 and 2012. Demand curves, after all, do slope down. Economists have shown that Americans are getting less sensitive to gasoline prices, but there is still a strong negative relationship between prices and gasoline consumption.

Moreover, since gasoline prices plummeted in the last few months of 2014, Americans have been buying gasoline like crazy. Last year was the biggest year ever for U.S. vehicle sales, with trucks and SUVs leading the charge. This summer Americans took to the roads in record numbers. The U.S. average retail price for gasoline was $2.24 per gallon on August 29, 2016, the lowest Labor Day price in 12 years. No wonder Americans are driving more.

Can Fuel Economy Standards Turn the Tide?

It’s hard to make predictions. Still, in retrospect, it seems clear that the years of the Great Recession were highly unusual. For decades U.S. gasoline consumption has gone up and up – driven by rising incomes – and it appears that we are now very much back on that path.

This all illustrates the deep challenge of reducing fossil fuel use in transportation. U.S. electricity generation, in contrast, has become considerably greener over this same period, with enormous declines in U.S. coal consumption. Reducing gasoline consumption is harder, however. The available substitutes, such as electric vehicles and biofuels, are expensive and not necessarily less carbon-intensive. For example, electric vehicles can actually increase overall carbon emissions in states with mostly coal-fired electricity.


Americans are buying less fuel-efficient vehicles. http://www.shutterstock.com

Can new fuel economy standards turn the tide? Perhaps, but the new “footprint”-based rules are yielding smaller fuel economy gains than expected. Under the new rules, the fuel economy target for each vehicle depends on its overall size (i.e., its “footprint”), so as Americans purchase more trucks, SUVs and other large vehicles, the overall stringency of the standard is relaxed. So, yes, fuel economy has improved, but much less than it would have without this mechanism.
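A minimal numerical sketch of that mechanism, using a made-up linear target curve and made-up footprints and sales shares (the actual standards use a different, more complicated target function and harmonic averaging):

```python
# Illustrative only: how a footprint-based standard relaxes as the sales mix
# shifts toward larger vehicles. The target curve and all numbers are hypothetical.
def mpg_target(footprint_sqft):
    # Hypothetical rule: larger footprint -> lower required MPG
    return 55 - 0.45 * (footprint_sqft - 40)

vehicles = {"compact car": 42.0, "large SUV": 55.0}   # footprints in sq ft (assumed)

def fleet_target(sales_shares):
    # Sales-weighted fleet-wide MPG requirement
    return sum(share * mpg_target(vehicles[v]) for v, share in sales_shares.items())

car_heavy_mix = {"compact car": 0.6, "large SUV": 0.4}
suv_heavy_mix = {"compact car": 0.4, "large SUV": 0.6}   # more trucks and SUVs

print(f"Fleet MPG target, car-heavy mix: {fleet_target(car_heavy_mix):.1f}")
print(f"Fleet MPG target, SUV-heavy mix: {fleet_target(suv_heavy_mix):.1f}")
# The required fleet average falls even though no individual target changed.
```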

Also, automakers are pushing back hard, arguing that low gasoline prices make the standards too hard to meet. Some lawmakers have raised similar concerns. The EPA’s comment window for the standards’ midterm review ends Sept. 26, so we will soon have a better idea what the standards will look like moving forward.

Regardless of what happens, fuel economy standards have a fatal flaw that fundamentally limits their effectiveness. They can increase fuel economy, but they don’t increase the cost per mile of driving. Americans will drive 3.2 trillion miles in 2016, more miles than ever before. Why wouldn’t we? Gas is cheap.

This blog post is available on The Conversation.


I’m Not Really Down with Most Top Down Evaluations

Lunches at Berkeley are never boring. This week I had an engaging discussion with a colleague from out of town who asked me what I thought about statistical top down approaches to evaluating energy efficiency programs. In my excitement, I almost forgot about my local organic Persian chicken skewer.

For the uninitiated, California’s Investor Owned Utilities (the people who keep your lights on…if you live around here) spend ratepayer money to improve the energy efficiency of their customers’ homes and businesses. Think rebates for more efficient fridges, air conditioners, lighting, and furnaces. The more efficient customers are, the less energy gets consumed, which is especially valuable at peak times. In return, the utilities are rewarded financially for the energy savings their programs produce. The million-kWh question, of course, is how much do these programs actually save? I’m glad you asked.


Multiple Ways of Looking at Energy Efficiency

The traditional way is to take the difference in energy consumption between the old and new gadget. If you’re really fancy, you adjust the estimated savings downward by a few percent to account for free riders like me, who send in energy efficiency rebates for things they would have bought anyway. These so-called “bottom up” analyses have been shown to provide decent estimates of what is possible in terms of savings, but completely ignore human behavior. Hence, when tested for accuracy, bottom up estimates have over and over again been shown to overestimate savings. There are many factors that contribute to this bias, but the most commonly cited one is the rebound effect.
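As a concrete, entirely hypothetical illustration of how those adjustments shave down an engineering estimate:

```python
# Minimal sketch of a bottom-up ("engineering") savings estimate with the
# adjustments described above. All numbers are made up.
engineering_savings_kwh = 500    # rated savings: old gadget minus new gadget, per year
free_rider_share = 0.15          # rebate recipients who would have bought anyway
rebound_share = 0.10             # extra usage because running the gadget is now cheaper

adjusted_savings = engineering_savings_kwh * (1 - free_rider_share) * (1 - rebound_share)
print(f"Claimed: {engineering_savings_kwh} kWh/yr, adjusted: {adjusted_savings:.0f} kWh/yr")
```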

Another way of course, as we have so often advocated, is using methods that have their origin in medical research. For a specific program, say a subsidy for a more efficient boiler, you give a random subset of your customers access to the subsidy and compare the energy consumption of people who had access to the program to that of the customers who didn’t. These methods have revolutionized (in a good way) the precision and validity of program evaluations. My colleagues at the Energy Institute are at the forefront of this literature and are currently teaching me (very patiently) how you do these. I am always a bit slow to the party. These methods are not easy to implement and require close collaboration with the utilities and significant upfront planning. But that is a small price to pay for high quality estimates that allow us to make the right decision as to whether to implement programs that cost ratepayers hundreds of millions of dollars.

A third option, which has given rise to a number of evaluation exercises, is called top down measurement. The idea here is to look at average energy consumption by households in a region (say census block group) for many such regions over a long time period and use statistical models to explain what share of changes in energy consumption over time can be explained by spending on energy efficiency programs. The proponents of these methods argue that this is an inexpensive way to do evaluation, the data requirements are small, the estimates can be updated frequently, and – maybe most importantly – that these estimates include some spillover effects (if your neighbor buys a unicorn powered fridge because you did). Sounds appealing.

The big problem with the majority of these studies is that they do not worry enough about what drives differences in the spending on these programs across households. I am sure you could come up with a better laundry list, but here is mine:

  • Differences in environmental attitudes (greenness)
  • Income
  • Targeting by the utilities of specific areas
  • Energy Prices
  • Weather
  • ….

What these aggregate methods do not allow you to do is to separate the effects of my laundry list from those of the program. Or in economics speak, they are fundamentally unidentified. No matter how fancy your computer program is, you will never be able to estimate the true effect. It’s in some sense like using an X-ray machine as a lie detector. In practice you are possibly attributing the effect of weather, for example, to programs. Cold winters make me want to be more energy efficient. It’s the winter, not the rebate that made me buy a more efficient furnace. Further, the statistical estimates are just that. They provide a point estimate (best guess) with an uncertainty band around it. And that uncertainty band, as Meredith Fowlie, Carl Blumstein and I showed, can be big enough to drive a double-wide trailer through.
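To see why this matters, here is a small synthetic-data sketch (my own illustration, not drawn from any of these studies) in which the program truly does nothing, yet a naive aggregate regression attributes the effect of weather to program spending:

```python
# Synthetic example of the identification problem: program spending has zero
# true effect, but cold weather drives both spending and energy savings, so a
# naive "top down" regression still finds a large "program effect".
import numpy as np

rng = np.random.default_rng(0)
n = 500                                        # census block groups (synthetic)
cold = rng.normal(size=n)                      # colder-than-usual winters
spending = 1.0 * cold + rng.normal(size=n)     # utilities spend more where it is cold
true_program_effect = 0.0
savings = true_program_effect * spending + 2.0 * cold + rng.normal(size=n)

# Naive regression of savings on spending alone (weather omitted)
X_naive = np.column_stack([np.ones(n), spending])
naive = np.linalg.lstsq(X_naive, savings, rcond=None)[0][1]

# Controlling for weather recovers (roughly) the true zero effect
X_ctrl = np.column_stack([np.ones(n), spending, cold])
controlled = np.linalg.lstsq(X_ctrl, savings, rcond=None)[0][1]

print(f"true effect: {true_program_effect}, naive: {naive:.2f}, with weather control: {controlled:.2f}")
```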

Time to Stop Using 1950s Regression Models

So currently there is a lot of chatter about spending more ratepayer dollars on these studies, and I frankly think the majority of them will not be worth the paper they are printed on. To be clear, this is a problem with the method, not the people implementing it. What we have seen so far is that the estimates are often significantly bigger than bottom up estimates, which is sometimes attributed to spillover effects, but I just don’t buy it. I think we should stop blindly applying 1950s style regression models in this context.

I am also not advocating that everything has to be done by RCT. There are recent papers using observational data in non-experimental settings that try to estimate the impacts of programs on consumption. Matt Kotchen and Grant Jacobsen’s work on building codes in Gainesville, Florida is a great example. They do a very careful comparison of energy consumption by structures built pre- and post-building code and find significant and credible effects. Lucas Davis has a number of papers in Mexico that use regression techniques to back out the efficacy of rebate programs on adoption and consumption. Judd Boomhower has a nice paper on spillover effects. They all employ 21st century methods, which allow you to make causal statements about program efficiency. These can be much cheaper to do and produce credible numbers. Let’s do more of that and work closely with utilities on implementing RCTs. It’s been a great learning experience for me and a worthwhile investment!


Is the Regulatory Compact Broken in Sub-Saharan Africa?

(Today’s post is co-authored with Paul Gertler. Wolfram and Gertler direct the Applied Research Program on Energy and Economic Growth (EEG) in partnership with Oxford Policy Management. The program is funded by the Department for International Development in the UK.)

As we teach our students in econ 101, the prices of most goods and services reflect both demand and supply factors. So, to use a classic example, the price of snow shovels may go up during a blizzard, even if it costs no more to supply them when it’s snowing.


On the other hand, as we teach our students in regulatory economics 101, prices for regulated utilities are different. Their prices are driven almost purely by costs – not just current costs, but costs incurred in the past that other businesses might write off as sunk.

In the textbook model, regulated utilities are what we call “natural monopolies.” They are supplying a good for which it makes the most economic sense to have a single supplier. This could be driven by the high fixed costs of building the transmission and distribution system to supply electricity, for example.

Regulated utilities are implicit signatories to what’s called the “regulatory compact.” Basically, the regulator gets to set prices for the utility, ensuring that the company won’t take advantage of its monopoly position to charge prices through the roof. And, the regulator requires that the company offer universal service to anyone who wants it at the regulated prices. In exchange, the company gets assurance that it will be allowed to collect revenues to cover reasonable costs of doing business.

In the US, this is formalized through decades of judicial and regulatory decisions, for example, describing “just and reasonable” rates and “prudently incurred” costs.

According to a fascinating report recently released by the World Bank, the regulatory compact appears seriously out of whack in Sub-Saharan Africa.

The figure below highlights the problem. Each bar reflects the situation in a single country. The red diamonds reflect the cash collected per kWh by the main electricity provider (most are vertically integrated monopolies), and the purple and green bars reflect the costs. Note that for all but two of the countries, the dots are to the left of the bars. This means that the companies’ revenues are not covering their costs.

(Figure: cash collected per kWh versus cost per kWh for the main electricity provider in each country, from the World Bank report.)

But, who is breaking the deal? Are companies’ costs too high? Perhaps “imprudent” in some sense, maybe due to corruption? Or, are the local regulators setting prices that are too low? Or, is it some combination of the two?

It’s first worth noting that only some of the countries in Sub-Saharan Africa have regulatory agencies, and only a subset of those have any real power over prices, so we’re using the “regulatory” part of the “regulatory compact” broadly.

We recently saw these issues up close in Tanzania. As part of a DFID-funded research program on Energy and Economic Growth, we organized a policy conference in Dar es Salaam, together with our partners at Oxford Policy Management.

Tanzania’s local monopoly, the Tanzania Electric Supply Company (TANESCO), only collects revenues to cover 82% of its costs (14 out of 17 cents per kWh), based on the World Bank calculations above. TANESCO is a vertically integrated utility, and the government owns 100% of its shares.

According to people close to the company, the rate-setting process is highly politicized, so rates are poorly aligned with TANESCO’s claimed costs. They point out that on the day the new Minister of Energy was appointed, he announced his intention to initiate a rate cut.

On the other hand, the regulators seem to believe that TANESCO’s costs are not “prudently incurred,” though they didn’t use that phrase explicitly. They argue that the company is inefficient, and the rates would easily cover costs if they cut fat.

Ministry of Energy and Minerals, Dar es Salaam

It’s difficult to know who is right. Consider TANESCO’s recent experience procuring power from independent backup generators. Historically, over half of the country’s annual generation came from hydroelectric generators. In 2010, a severe drought led to persistent electricity shortages, so TANESCO signed several contracts for what’s been called “emergency” generation, including a contract with a company that owned two 50 megawatt diesel generators. Diesel prices were high in 2011 and 2012, though, and, under the contract, TANESCO had to pay the fuel costs. TANESCO’s losses during that period were reportedly more like 50% of their total costs.

Now, the company is saddled with debt from this period, but the regulator contends that the emergency generation costs were too high and is unwilling to raise rates to cover the accumulated debt. Also, the regulator increased rates by 40% in 2014, so may feel like it’s already done its part. Figuring out the right price for generation procured in an emergency is difficult, though. Presumably, the utility did not have much time to shop around. Then again, maybe it should have foreseen the emergency situation and planned to avoid it.

Tanzania, like many countries in the developing world, also experiences high levels of “nontechnical losses” (largely theft). So, even if rates are set to cover costs if most consumers pay, the companies will experience heavy losses. Theft appears to have a political component as well, though. This paper, by Brian Min and Miriam Golden, shows that nontechnical losses in India increase when elections are near.

The World Bank report divides each utility’s losses into four categories: underpricing (meaning the regulators are breaking the deal and setting prices lower than what would be required to cover reasonable costs), bill collection losses (meaning the utility bills for the consumption, but fails to collect), transmission and distribution losses (a combination of technical line losses above an acceptable limit and theft) and overstaffing (relative to a benchmark, suggesting the company’s costs are imprudently high). They do not attempt to identify other types of inefficiencies, such as power purchase costs that are too high.
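To make that accounting concrete, here is a hypothetical version of the four-way decomposition in code. None of these numbers come from the report; they are placeholders chosen only to show how the categories add up.

```python
# Hypothetical World Bank-style decomposition of a utility's losses into the four
# categories described above, in US cents per kWh billed. All figures are assumed.
cost_recovery_tariff = 17.0    # cents/kWh needed to cover reasonable costs (assumed)
actual_tariff = 17.0           # cents/kWh allowed by the regulator (assumed: no underpricing)
billed_but_uncollected = 2.0   # cents/kWh billed but never collected (assumed)
excess_td_losses = 1.5         # technical losses above benchmark plus theft (assumed)
overstaffing_cost = 0.9        # staffing costs above a benchmark utility (assumed)

losses = {
    "underpricing": max(cost_recovery_tariff - actual_tariff, 0),
    "bill collection": billed_but_uncollected,
    "T&D losses / theft": excess_td_losses,
    "overstaffing": overstaffing_cost,
}
total = sum(losses.values())
for category, value in losses.items():
    print(f"{category:20s} {value:4.1f} c/kWh  ({value / total:5.1%} of losses)")
```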


Upgrading the distribution system in Tanzania

They find no underpricing in Tanzania – suggesting the regulators are upholding their side of the compact. They attribute 80% of the losses to bill collection and nontechnical losses – suggesting the company needs to improve its billing system and distribution network. The remaining 20% is due to over-staffing. In its current situation, though, TANESCO struggles to finance its ongoing operations, let alone the investments needed to reduce billing and distribution losses, so more price increases may be needed in the short run.

The new Energy and Economic Growth program will sponsor research to address some of these key questions and issues. First, we suspect there are real costs in terms of economic growth and other development outcomes due to the kind of institutional breakdown documented in the World Bank report. We need to document the extent to which economic growth is constrained by unreliable power, for example. We aim to measure costs like this by collecting new data and conducting new analyses. Second, we will work with policymakers, regulators, the utilities and other stakeholders to learn about the best ways to improve the institutions.


If a Tree Falls in the Forest…Should We Use It to Generate Electricity?

Every summer vacation, we pack our tree-hugging family into the car and head for the Sierra Nevada mountains. In many respects, our trip this summer was just like any other year, complete with family bonding moments and awe-inspiring wilderness experiences:


I date myself with this reference

But our 2016 photo album is not all happiness and light.  This year, we saw an unprecedented number of stressed and dying trees. Forest roads were lined with piles of dead wood.



These pictures break a tree hugger’s heart. But they barely scratch the surface of what has been dubbed the worst epidemic of tree mortality in California’s modern history. According to CAL FIRE, over 66 million trees have died since 2010. And it’s not over yet.

The underlying cause is climate change working through drought and bark beetles. Warmer winters and drier summers mean this pesky bark beetle has been reproducing faster and attacking harder.  Drought-stressed trees are more vulnerable to fungi and insects. The big-picture impacts are devastating.

Acres of dying trees raise fundamental questions about how to preserve and protect our national parks and forests in the face of climate change. These existential issues were at the heart of President Obama’s speech in Lake Tahoe last week. But the epidemic also raises some more material questions. This week’s blog looks at the heated debate over what to do with millions of dead trees in the forest.

66 million trees and counting

I’m an economist, not a woody plant biologist, so I have a hard time thinking in terms of millions of trees. With some expert assistance, I made the following ballpark conversion from trees to some more familiar metrics.

  • 66 million trees hold approximately 68 million tons of CO2e.[1] To put that in perspective, California emits about 447 mmt CO2e annually.
  • If all 66 million trees were used to make electricity at existing biomass facilities (a very unlikely scenario), this would generate about 38,600 GWh.[2] To put this in perspective, California’s biomass facilities generated 7,228 GWh (gross) in 2015.

The upshot is that 66 million dead trees are a big deal, no matter how you measure it.

There seems to be widespread – but not unanimous – agreement that leaving close to 40 million dry tons of wood (my rough estimate) in the forest will increase wildfire risk and intensity to unacceptable levels. So Governor Brown has declared a state of emergency and formed a tree-mortality task force to safely remove the dying trees, especially those that pose immediate danger. Having dragged these trees out of the forest, what to do with them? Right now, many trees are being burned in open piles or “air curtain incinerators”.

Wood burning in an air curtain incinerator

CAL FIRE plans to start running these incinerators 24 hours per day in the fall. Yikes. The thought of incinerating wood in the forest 24/7 raises the question: are we better off using these trees to generate electricity? Researchers, including some esteemed Berkeley colleagues and forest service scientists, have been collecting some of the information we need to answer this question.

Forest-fueled electricity generation – at what cost?

Teams of researchers have been documenting the costs of biomass generation versus “non-utilization” burning  (i.e., burning trees in the woods to reduce fire risk). The punchline: Unless trees are located quite close to biomass generation facilities, the cost of extracting the trees, processing the wood, and transporting it to biomass generation facilities exceeds the market value of the wood fuel for electricity generation.  And this market value is falling as biomass generators struggle to compete with low natural gas prices and falling solar and wind electricity generation costs.

Some stakeholders argue that current market prices and policy incentives are failing to capture all the benefits that biomass generation has to offer. In particular, a growing body of research looks at relative environmental impacts. The table below summarizes some recent estimates of the quantity of pollution emitted per kg of dry wood across different wood burning alternatives:

Biomass option                          Emissions (g/kg dry wood)
                                        CO2e    PM10    NMOC    CO      NOx
Air curtain incineration                1834    0.7     0.6     10      5
Open pile burning                       1894    7.5     5.0     62.5    3
Biomass to energy: gasification         1349    0.062   0.127   0.859   0.25
Biomass to energy: direct combustion    1349    0.111   0.028   0.768   0.45

Sources are here and the Placer County Biomass Program. Biomass to energy conversions assume trees are 40 miles from the site of generation.

The first thing to note is that the estimates of CO2e emissions from electricity generation (1349 g/kg) are lower than emissions associated with burning wood in the woods, even though additional emissions are generated in the processing and transport of wood fuel. The reason is that these estimates are reported net of “avoided” CO2e emissions. In other words, researchers assume that if a kg of wood is used to fuel biomass generation, it will displace natural gas-fired generation and the 506 g of CO2e emissions associated with that gas generation. So 1349 g = 1856 g – 506 g.
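The accounting is simple enough to spell out. A sketch, using the numbers quoted above, of that netting and of what changes when the displaced generation is itself under a binding cap:

```python
# Netting of "avoided" emissions, using the numbers quoted above, plus the
# zero-avoided-emissions case that applies when the displaced generation is capped.
gross_biomass_co2e = 1856   # g CO2e per kg dry wood, including processing and transport
avoided_gas_co2e = 506      # g CO2e per kg assumed displaced from natural gas generation

net_if_gas_displaced = gross_biomass_co2e - avoided_gas_co2e   # ~ the 1349 g/kg in the table
net_under_binding_cap = gross_biomass_co2e - 0                 # nothing is actually displaced

print(net_if_gas_displaced, net_under_binding_cap)   # 1350 vs 1856
```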

It is standard to see avoided emissions from displaced electricity generation counted as an added benefit of biomass generation. Absent binding regulatory limits on GHG emissions, this can make sense. But in California, CO2e emissions are regulated under a suite of climate change policies, some of which are binding.  If the aggregate level of emissions is set by binding regulations, an increase in biomass generation will change the mix of fuels used to generate electricity, but not the level of CO2e emissions.

A quick walk in the policy weeds puts a finer point on this.  In California, an aggressive renewable portfolio standard (RPS) mandates the share of electricity generated by qualifying renewable resources (including forest-sourced biomass). So long as the RPS is binding, an increase in biomass generation will reduce demand for other qualifying renewable resources (such as wind or solar). But it should not reduce overall CO2e emissions from electricity if the biomass generation and the renewable resource it displaces are CO2e equivalent.

If avoided CO2e emissions are set to zero, the estimated CO2e emissions per kg of wood burned look fairly similar across non-utilization burning and biomass generation. In contrast, these alternatives differ significantly as far as harmful pollutants such as NOx and particulates are concerned. Aggregate emissions of these pollutants are not determined by mandated caps or binding standards.  And the quantity of pollution emitted per unit of wood burned differs by orders of magnitude across non-utilization versus electricity generation options.

It is not clear how differences in these (and other) emissions translate into differences in health and environmental damage costs. But accounting for these environmental costs would presumably reduce the net cost of  biomass generation relative to the more polluting alternative.

Dead trees fuel biomass policy developments…

No matter how you measure it, there’s a lot at stake in California’s dead and dying trees. Some of the wood can be harvested for timber. Some of the wood will be left in the woods to provide benefits to soil and wildlife. But given the current trajectory, lots of wood will be burned.

Many of the forest managers and researchers I talked to despair that biomass generation facilities are closing down just as air curtain incinerators fire up. They feel strongly that more of this dead wood should be used to fuel electricity generation. In response to these kinds of concerns, the California legislature recently passed legislation to support biomass power from facilities that generate energy from wood harvested from high fire hazard zones.  The bill is awaiting the Governor’s signature.

Increased support for biomass generation (over and above existing climate change policies) makes sense if the benefits justify the added costs. On the one hand, burning more wood at biomass facilities will incur additional processing, transport, and operating costs. On the other hand, it will generate less local air pollution than non-utilization burning, along with other potential benefits (such as reduced ancillary service requirements vis-à-vis intermittent renewables). Getting a better handle on these costs and benefits will be critical if we are going to make the best of this bad situation.

 

 

[1] Assuming a mix of conifer species (pine, Douglas-fir, true fir, cedar), we estimate 1800 green pounds per tree × 66 million trees = 118.8 billion green pounds of wood available, or 59.4 million green tons. If we assume 35% moisture content (dead trees have less moisture), we have 38.6 million BDT (bone dry tons). Multiply the dry tons by 0.5 to obtain a comparable weight of the entire tree’s sequestered carbon. This gets us to 19.3 million tons of carbon. Multiply tons of carbon by 3.67 to get the comparable weight in CO2e, and then convert to metric tons = 68.7 million tons. Thanks to Steve Eubanks, Tad Mason, and Bruce Springsteen for assisting with these calculations. All errors are mine.

[2] 1 bone dry ton generates approximately 1 MWh in existing biomass generation facilities.
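For anyone who wants to trace that arithmetic, here is the conversion chain from footnotes [1] and [2] in code form. The inputs are the assumptions stated in the footnotes; my CO2e total lands a few percent from the quoted 68.7 million tons, which I attribute to rounding and short-versus-metric ton conventions.

```python
# Rough reproduction of the conversion chain in footnotes [1] and [2].
trees = 66e6
green_lbs_per_tree = 1800
moisture_content = 0.35
carbon_fraction_of_dry_wood = 0.5
co2_per_ton_carbon = 3.67
mwh_per_bone_dry_ton = 1.0

green_tons = trees * green_lbs_per_tree / 2000               # ~59.4 million green tons
bone_dry_tons = green_tons * (1 - moisture_content)          # ~38.6 million BDT
tons_carbon = bone_dry_tons * carbon_fraction_of_dry_wood    # ~19.3 million tons of carbon
tons_co2e = tons_carbon * co2_per_ton_carbon                 # ~71 million (short) tons CO2e

gwh_if_burned_for_power = bone_dry_tons * mwh_per_bone_dry_ton / 1000   # ~38,600 GWh

print(f"{bone_dry_tons / 1e6:.1f} million BDT, {tons_co2e / 1e6:.1f} million tons CO2e, "
      f"{gwh_if_burned_for_power:,.0f} GWh")
```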


Spying on You from Space

The chest thumping in economics about how big and cool our datasets are is becoming somewhat unbearable. Bigger is not always better. In fact, one of the many reasons why we love the field of statistics is that we don’t have to know everything about everyone, but we can infer information about the larger population based on a small (and hopefully random) sample. Big data were not useful when computers were essentially electrified wooden spoons holding hands being fed code and data on paper punch cards. Now that my current iPhone has a processor 10,000 times faster than the Mac Color Classic I wrote my undergraduate thesis on, there are few computational constraints and the opportunities are endless. I can connect to the Amazon Cluster and run my programs on thousands of computers at the same time. Many of the papers I read using big(ger) data, however, don’t really add a proportional amount of knowledge.


But, last week our former student Marshall Burke at Stanford, jointly with the certified genius David Lobell and some colleagues in the Stanford Computer Science department, published a paper in Science that still has me giddy. One of the big issues in trying to learn information about households is that you have to ask them questions. And that is really expensive. The 2010 US Census cost $13 billion. The World Bank spends millions on sending surveyors out across the world to learn about incomes, what homes look like, the health status of members of households, etc. Due to constrained budgets, you cannot ask everyone.

But, we are asking too few people, which leads to a devastating lack of knowledge. In the Burke-Lobell paper we learn that between 2000 and 2010, 25% of African countries did not conduct a survey from which one could construct nationally representative poverty estimates and close to half conducted only a single survey. This is problematic, since we are trying to eliminate poverty by 2030. If we don’t know where the poor are, this is going to be hard.

Earth at night (DMSP composite, 1994–1995)

The paper proposes an approach that is likely going to provide high-resolution estimates of poverty at a tiny fraction of the costs of surveys.  The authors used the ubiquitous NASA imagery of earth at night, which shows night lights. Night lights are a decent indicator of energy wealth and higher incomes, since without electricity, no streetlights (usually). This is where the rest of us, me included, stopped our thinking. The Stanford brainiacs used the lower resolution night light data and a machine learning algorithm to look for features in the much higher resolution daytime imagery that predict night lights. They did not tell the machine what to look for in the way an econometrician would, but let the computer learn. The computer found that roads, cities, farming areas are features in the daytime imagery that are useful to predict night lights. The authors then discard the night lights data and use the identified features to predict indicators of wealth found in surveys. They show that the algorithm has very impressive predictive power (think Netflix challenge, but for poverty indicators instead of whether you chose the West Wing over the Gilmore Girls).
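For intuition, here is a toy, synthetic-data sketch of that two-stage idea. The real paper trains a deep convolutional network on actual satellite images; here, made-up numeric features and plain ridge regressions stand in, just to show the structure of the approach:

```python
# Toy version of the two-stage approach: (1) learn which daytime-image features
# predict night lights, then (2) reuse that learned signal to predict survey wealth.
# Everything here is synthetic; it only illustrates the structure of the method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_villages, n_features = 1000, 20
daytime_features = rng.normal(size=(n_villages, n_features))  # stand-in for image features
true_weights = rng.normal(size=n_features)
wealth = daytime_features @ true_weights + rng.normal(size=n_villages)
night_lights = 0.8 * wealth + rng.normal(size=n_villages)     # lights roughly track wealth

# Stage 1: abundant night-lights data teach us which features are informative
stage1 = Ridge(alpha=1.0).fit(daytime_features, night_lights)
learned_signal = stage1.predict(daytime_features)

# Stage 2: calibrate the learned signal against the scarce survey data
surveyed = rng.choice(n_villages, size=100, replace=False)    # only 100 villages surveyed
stage2 = Ridge(alpha=1.0).fit(learned_signal[surveyed, None], wealth[surveyed])

predicted_wealth = stage2.predict(learned_signal[:, None])
print(f"correlation with 'true' wealth: {np.corrcoef(predicted_wealth, wealth)[0, 1]:.2f}")
```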

Once the model is trained, they then use the daytime imagery to predict poverty indicators at a fine level of aggregation covering areas we were totally missing before, and the maps are impressive. If you want to learn more about what they did, they made a video you can watch: https://www.youtube.com/watch?v=DafZSeIGLNE

Satellite imagery has become a rapidly growing source of startups in the energy and retail sector. There are firms tracking ships carrying oil across the seven seas in real-time; there are firms tracking drilling activity at fracking sites; and, there are firms tracking the number of vehicles in shopping mall parking lots. All of these firms treat satellite imagery like the average American treats their TV: We watch the imagery presented to us. What Marshall et al. did here is leagues cooler. They combine two types of satellite imagery with some actual survey data to back out predictions of one of the most important economic indicators – poverty. This product is a game changer. I can’t wait to see the energy economic applications of this method.


King Coal is Dethroned in the US – and That’s Good News for the Environment

This is the worst year in decades for U.S. coal. During the first six months of 2016, U.S. coal production was down a staggering 28 percent compared to 2015, and down 33 percent compared to 2014. For the first time ever, natural gas overtook coal as the top source of U.S. electricity generation last year and remains that way. Over the past five years, Appalachian coal production has been cut in half and many coal-burning power plants have been retired.

This is a remarkable decline. From its peak in 2008, U.S. coal production has declined by 500 million tons per year – that’s 3,000 fewer pounds of coal per year for each man, woman and child in the United States. A typical 60-foot train car holds 100 tons of coal, so the decline is the equivalent of five million fewer train cars each year, enough to go twice around the earth.
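A quick back-of-envelope check of those comparisons (the population figure and the Earth’s circumference are my assumptions):

```python
# Back-of-envelope check of the per-person and train-car comparisons above.
decline_tons_per_year = 500e6
us_population = 320e6                  # rough U.S. population (assumed)
lbs_per_person = decline_tons_per_year * 2000 / us_population    # ~3,100 lbs

tons_per_car, car_length_ft = 100, 60
cars = decline_tons_per_year / tons_per_car                      # 5 million cars
train_length_miles = cars * car_length_ft / 5280                 # ~57,000 miles
earth_circumference_miles = 24_900                               # assumed

print(f"{lbs_per_person:,.0f} lbs per person, {cars / 1e6:.0f} million cars, "
      f"{train_length_miles / earth_circumference_miles:.1f}x around the earth")
```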

This dramatic change has meant tens of thousands of lost coal jobs, raising many difficult social and policy questions for coal communities. But it’s an unequivocal benefit for the local and global environment. The question now is whether the trend will continue in the U.S. and, more importantly, in fast-growing economies around the world.

Health benefits from coal’s decline

Coal is 50 percent carbon, so burning less coal means lower carbon dioxide emissions. More than 90 percent of U.S. coal is used in electricity generation, so as cheap natural gas and environmental regulations have pushed out coal, this has decreased the carbon intensity of U.S. electricity generation and is the main reason why U.S. carbon dioxide emissions are down 12 percent compared to 2005.

Perhaps even more important, burning less coal means less air pollution. Since 2010, U.S. sulfur dioxide emissions have decreased 57 percent, and nitrogen oxide emissions have decreased 19 percent. These steep declines reflect less coal being burned, as well as upgraded pollution control equipment at about one-quarter of existing coal plants in response to new rules from the U.S. Environmental Protection Agency.


Coal waits to be added to a train at the Hobet mine in Boone County, West Virginia. Jonathan Ernst/Reuters

These reductions are important because air pollution is a major health risk. Stroke, heart disease, lung cancer, respiratory disease and asthma are all associated with air pollution. Burning coal is about 18 times worse than burning natural gas in terms of local air pollution so substituting natural gas for coal lowers health risks substantially.

Economists have calculated that the environmental damages from coal are US$28 per megawatt-hour for air pollution and $36 per megawatt-hour for carbon dioxide. U.S. coal generation is down from its peak by at least 700 million megawatt-hours annually, so this is $45 billion annually in environmental benefits. The decline of coal is good for human health and good for the environment.
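The arithmetic behind that $45 billion figure, spelled out:

```python
# Quick check of the environmental-benefits figure cited above.
damages_per_mwh = 28 + 36             # $/MWh: air pollution plus carbon dioxide
generation_decline_mwh = 700e6        # annual decline in coal generation from its peak
benefits = damages_per_mwh * generation_decline_mwh
print(f"${benefits / 1e9:.1f} billion per year")   # ~$44.8 billion, i.e. roughly $45 billion
```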

India and China

The global outlook for coal is more mixed. India, for example, has doubled coal consumption since 2005 and now exceeds U.S. consumption. Energy consumption in India and other developing countries has consistently exceeded forecasts, so don’t be surprised if coal consumption continues to surge upward in low-income countries.

In middle-income countries, however, there are signs that coal consumption may be slowing down. Low natural gas prices and environmental concerns are challenging coal not only in the U.S. but around the world, and forecasts from EIA and BP have global coal consumption slowing considerably over the next several years.

Particularly important is China, where coal consumption almost tripled between 2000 and 2012, but more recently has slowed considerably. Some are arguing that China’s coal consumption may have already peaked, as the Chinese economy shifts away from heavy industry and toward cleaner energy sources. If correct, this is an astonishing development, as China represents 50 percent of global coal consumption and because previous projections had put China’s peak at 2030 or beyond.


A smoggy morning in Delhi, India. Anindito Mukherjee/Reuters

The recent experience in India and China points to what environmental economists call the “Environmental Kuznets Curve.” This is the idea that as a country grows richer, pollution follows an inverted “U” pattern, first increasing at low income levels, then eventually decreasing as the country grows richer. India is on the steep upward part of the curve, while China is, perhaps, reaching the peak.

Global health benefits of cutting coal

A global decrease in coal consumption would have enormous environmental benefits. Whereas most U.S. coal plants are equipped with scrubbers and other pollution control equipment, this is not the case in many other parts of the world. Thus, moving off coal could yield much larger reductions in sulfur dioxide, nitrogen oxides, and other pollutants than even the sizeable recent U.S. declines.

Of course, countries like China could also install scrubbers and keep using coal, thereby addressing local air pollution without lowering carbon dioxide emissions. But at some level of relative costs, it becomes cheaper to simply start with a cleaner generation source. Scrubbers and other pollution control equipment are expensive to install and expensive to run, which hurts the economics of coal-fired power plants relative to natural gas and renewables.

Broader declines in coal consumption would go a long way toward meeting the world’s climate goals. Globally, we still use more than 1.2 tons of coal per person annually. More than 40 percent of total global carbon dioxide emissions come from coal, so global climate change policy has correctly focused squarely on reducing coal consumption.

If the recent U.S. declines are indicative of what is to be expected elsewhere in the world, then this goal appears to be becoming more attainable, which is very good news for the global environment.
This blog post is available on The Conversation.


Fixing a major flaw in cap-and-trade

While many Californians are spending August burning fossil fuels to travel to vacation destinations, the state legislature is negotiating with Gov. Brown over whether and how to extend California’s cap-and-trade program to reduce carbon dioxide and other greenhouse gases (GHGs). The program, which began in 2013, is currently scheduled to run through 2020, so the state is now pondering what comes after 2020.

The program requires major GHG sources to buy “allowances” to cover their emissions, and each year reduces the total number of allowances available, the “cap”.  The allowances are tradeable and their price is the incentive for firms to reduce emissions.  A high price makes emitters very motivated to cut back, while a low price indicates that they can get down to the cap with modest efforts.

Before committing to a post-2020 plan, however, policymakers must understand why the cap-and-trade program thus far has been a disappointment, yielding allowance prices at the administrative price floor and having little impact on total state GHG emissions.  California’s price is a little below $13/ton, which translates to about 13 cents per gallon at the gas pump and raises electricity prices by less than one cent per kilowatt-hour.

The low prices in the three major markets for GHGs mean little impact on behavior

And it’s not just California. The two other major cap-and-trade markets for greenhouse gases – the EU’s Emissions Trading System and the Regional Greenhouse Gas Initiative in the northeastern U.S. – have also seen very low prices (about $5/ton in both markets) and scant evidence that the markets have delivered emissions reductions. In fact, the low prices in the EU-ETS and RGGI have persisted even after those programs effectively lowered their emissions caps to try to goose up the prices.

In all of these markets, some political leaders have argued the outcomes demonstrate that other policies – such as increased auto fuel economy and requiring more electricity from renewable sources – have effectively reduced emissions without much help from a price on GHGs. That view is partially right, but a study that Jim Bushnell, Frank Wolak, Matt Zaragoza-Watkins and I released last Tuesday shows that a major predictor of variation in GHG emissions is the economy.  While emissions aren’t perfectly linked to economic output, more jobs and more output mean generating more electricity and burning more gasoline, diesel and natural gas, the largest drivers of GHG emissions.

Accurately predicting California’s GSP 10-15 years in the future is extremely difficult

Because it is extremely difficult to predict economic growth a decade or more in the future, there is huge uncertainty about how much GHGs an economy will spew out over long periods, even in the absence of any climate policies, what climate wonks call the “Business As Usual” (BAU) scenario.

If the economy grows more slowly than anticipated – as happened in all three cap-and-trade market areas after the goals of the programs were set – then BAU emissions will be low and reaching a prescribed reduction will be much easier than expected. But if the economy suddenly takes off – as happened in California’s boom of the late 1990s – emissions will be much more difficult to restrain. Our study finds that the impact of variation in economic growth on emissions is much greater than any predictable response to a price on emissions, at least to a price that is within the bounds of political acceptability.

California emissions since 1990 have fluctuated with economic growth

Our finding has important implications for extending California’s program beyond 2020. If the state’s economy grows slowly, we will have no problem and the price in a cap-and-trade market will be very low. In that case, however, the program will do little to reduce GHGs, because BAU emissions will be below the cap. But if the economy does well, the cap will be very constraining and allowance prices could skyrocket, leading to calls for raising the emissions cap or shutting down the cap-and-trade program entirely.

Our study shows that the probability of hitting a middle ground — where allowance prices are not so low as to be ineffective, but not so high as to trigger a political backlash — is very low.  It’s like trying to guess how many miles you will drive over the next decade without knowing what job you’ll have or where you will live.

So, can California’s cap-and-trade program be saved? Yes. But it will require moderating the view that there is one single emissions target that the state must hit. Instead, the program should be revised to have a price floor that is substantially higher than the current level, which is so low that it does not significantly change the behavior of emitters.   And the program should have a credible price ceiling at a level that won’t trigger a political crisis.  The current program has a small buffer of allowances that can be released at high prices, but would have still risked skyrocketing prices if California’s economy had experienced more robust growth.

The state would enforce the price ceiling and floor by changing the supply of allowances in order to keep the price within the acceptable range. California would refuse to sell additional allowances at a price below the floor. This is already state policy, but the floor is too low. California would also stand ready to sell any additional allowances that emitters need to meet their compliance obligation at the price ceiling.

Essentially, the floor and ceiling would be a recognition that if the cost of reducing emissions is low, we should do more reductions rather than just letting the price fall to zero, and if the cost is high, we should do less rather than letting the price of the program shoot up to unacceptable levels.
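A stylized sketch of that collar logic, with made-up numbers rather than the study’s estimates, shows the three regimes at issue: the price sitting at the floor when growth is slow, an interior price in a middle scenario, and the ceiling binding in a boom.

```python
# Stylized price collar under growth uncertainty. All numbers are hypothetical.
# Units: emissions in MMT CO2e, prices in $/ton.
def clearing_price(bau_emissions, cap, floor, ceiling, abatement_per_dollar=0.5):
    """Allowance price that closes the gap between BAU emissions and the cap,
    clipped to the floor/ceiling the state enforces by adjusting allowance supply."""
    gap = bau_emissions - cap
    if gap <= 0:
        return floor                   # cap not binding; no sales below the floor
    return min(max(gap / abatement_per_dollar, floor), ceiling)

cap, floor, ceiling = 300, 20, 60
for scenario, bau in [("slow growth", 290), ("middling growth", 320), ("boom", 360)]:
    price = clearing_price(bau, cap, floor, ceiling)
    print(f"{scenario:15s} BAU = {bau} MMT  ->  allowance price = ${price:.0f}/ton")
```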

But should California’s cap-and-trade program be saved?  I think so.  My first choice would be to replace it with a tax on GHG emissions, setting a reliable price that would make it easier for businesses to plan and invest.  But cap-and-trade is already the law in California and with a credible price floor and ceiling it can still be an effective part of the state’s climate plan.

Putting a price on GHGs creates incentives for developing new technologies, and in the future might motivate large-scale switching from high-GHG to low-GHG energy sources as their relative costs change.  The magnitudes of these effects could be large, but they are extremely uncertain, which is why price ceilings and floors are so important in a cap-and-trade program.  With these adjustments, California can still demonstrate why market mechanisms should play a central role in fighting climate change while maintaining economic prosperity.

A shorter version of this post appeared in the Sacramento Bee August 14 (online Aug 11)

I’m still tweeting energy news articles and studies @BorensteinS


What the Heck Is Happening in the Developing World?

One of the most important energy graphs these days shows actual and projected energy consumption in the world, separated between developed and developing countries. A version based on data from the Energy Information Administration (EIA) is below.
The vertical axis measures total energy consumption, including gasoline, diesel, natural gas, electricity from all sources, etc. – all converted to a common unit of energy (the Btu, or British Thermal Unit). It reflects commercial energy sources, but excludes things like firewood that people collect on their own. The horizontal axis plots time, and the solid lines reflect historical (actual) data while the dotted lines reflect projections.

Strikingly, the developing world – approximated on the graph as countries that are not members of the OECD – has already passed the developed world (in 2007) and is projected to consume almost twice as much energy by 2040.

To me, this suggests strongly that anyone worried about world energy issues – including climate change, oil prices, etc. – should be focusing on the developing world.

Unfortunately, I fear that we know woefully little about energy consumption in the developing world. The series of graphs below depicts our ignorance starkly.

Let’s start with China, which single-handedly consumed 22% of world energy in 2013 (still far less per capita than in the US). The vertical axis again plots total energy consumption, but this time it’s measured relative to 1990 levels. The black line plots actual numbers. For example, since the black line is at 3.5 in 2010, that means that by 2010, China was consuming 3.5 times more energy than it had in 1990. Pretty amazing growth! By comparison, US consumption in 2010 was only 15% higher than 1990 levels.

 


The colored lines on the graph depict the EIA’s projections, published in different annual issues of the International Energy Outlook (IEO). If you stare at 2010, 2015 and 2020, you see that the EIA has revised its projections upward considerably over a relatively short time period.

Start with the light blue line at the bottom, which reflects projections that were part of the 2002 IEO. At that time, the EIA thought China would only consume twice as much energy in 2010 as it did in 1990. But, China’s actual consumption surpassed that level midway through 2003, 6.5 years earlier than projected. So, by 2005, the EIA had increased its projection for 2010 by 30%. That’s a huge upward revision.

But it wasn’t nearly enough. The EIA continued to increase its projection, struggling to keep up with China’s actual growth.

Ah, you say. This is just a story about China, where there are lots of possible explanations for underestimated growth in energy, including faster than expected GDP growth, rapid industrialization, etc.

But, similar stories emerge for Africa and India. The EIA has recently revised projections pretty dramatically, and most of the revisions are upwards.


Finally, for India more than Africa, the projections have been too low.


And, this is not a problem in the developed world. The figure below contains a similar graph for the US. Note that the scale is different from that for the developing regions, so the revisions have been pretty minuscule in comparison. Also, they’ve generally been downward.


A couple points to keep in mind:

  • It may seem like I’m picking on the EIA. I’m not trying to. They are doing an incredibly important job with very few resources. (The International Energy Outlook was recently demoted from an annual publication to a roughly biennial one.) Also, the EIA is not alone. The International Energy Agency and BP – two other big names in world energy reporting – have also had to revise projections upward to keep up with energy demand in developing countries.
  • The EIA and other organizations are careful not to describe their projections as forecasts. The EIA, for example, notes that, “potential impacts of pending or proposed legislation, regulations, and standards are not reflected in the projections.” I doubt that omission explains the discrepancies in the developing regions, though. I have tried to back out how much of the underestimate is due to misjudged GDP growth, and I don’t think that’s a big share either, at least in China. I suspect that we need a better underlying model for how GDP translates to energy consumption in the developing world, the point of this academic paper.
  • Policymakers in the developing world appear to appreciate this issue. We recently launched a 5-year research project, funded by the Department for International Development (the UK’s analog of USAID) and joint with Oxford Policy Management, to study energy in the developing world, focusing on sub-Saharan Africa and South Asia. As part of this project, we hosted a policy conference in Dar es Salaam to hear from East African policymakers about the pressing issues they faced. One of the main themes that emerged was the difficulty of planning without better demand forecasts.
  • Some might argue that markets will solve this problem. The EIA is just some government agency that few are paying attention to, or so the argument might go. If you have real money at stake in understanding future energy consumption in the developing world, you would not hire someone who was off by 75% (3.5 divided by 2).

I do not know who is using the EIA projections for what, but I believe this logic breaks down for several reasons. For one, in many parts of the world, the private sector is not investing in energy infrastructure and the public sector may be relying on organizations like the EIA. Also, most investors don’t really care about 2040. Their discount rates are high enough that it doesn’t really matter what’s happening 25 years out. But, from the perspective of climate change, the world should care about energy consumption in 2040, 2050 and 2100.

This brings us back to the first graph in the post, which contained projections out to 2040. I fear that we are underestimating the 25-year out projections, just like we’ve underestimated recent trends. As researchers, we need to get under the hood and understand more about what is driving energy consumption in the developing world.


Evaluating Evaluations – Energy Efficiency in California

Last year, Governor Jerry Brown signed a law, Senate Bill 350, that sets out to double energy efficiency savings by 2030. Last week at the Democratic National Convention, Governor Brown focused his remarks on the importance of policies such as this to tackle climate change.

California Governor Jerry Brown at the California Science Center, Oct. 30, 2012. Photo Credit: (NASA/Bill Ingalls)

The precise energy efficiency targets haven’t been finalized, but they will be ambitious.

Meeting these targets will require an expansion of energy efficiency policymaking. Policymakers need to understand which programs work in energy efficiency and which don’t.

This is a daunting task. The California Public Utilities Commission’s (CPUC’s) energy efficiency efforts fund roughly 200 programs. The California Energy Commission (CEC) is regularly introducing new appliance and building standards. The evaluations of these activities are made public, but they can be hard to find and difficult to interpret. Additionally, policymakers may not have the time or training to critically assess the methodologies being used.

As a result, individual programs may not be getting enough scrutiny.

Many people working on energy efficiency may think the last thing we need is MORE evaluation. Energy efficiency is heavily evaluated.

I disagree. Today we have an opportunity to step up our game. We have access to more data and more rigorous evaluation techniques than ever before. It’s time for more evaluation, not less. In particular, it’s time to evaluate the evaluations.

To illustrate what I’m talking about, let’s look at an example from another heavily evaluated sector, criminal justice. The context is quite different, but the basic lessons are instructive.

In the 1980s many US states enacted stricter laws to reduce domestic violence. Rather than putting every offender in jail, courts began to mandate that offenders go through batterer intervention programs (BIPs). The initial evaluations of these programs found they were highly effective. These evaluations contributed to the justice system’s growing reliance on BIPs. In a 2009 report, the Family Violence Prevention Fund and US government’s National Institute of Justice estimated that between 1,500 and 2,500 such programs were operating.

As the cumulative number of evaluations grew, researchers began to undertake reviews that evaluated the evaluations, referred to as meta-analyses or systematic reviews. What they found was disappointing.

Many of the past evaluations that showed positive effects had methodological shortcomings. While some men completed a BIP and did not reoffend, others failed to complete court-mandated BIPs. Many men also became difficult to track down for surveys. The positive evaluations left out these populations, who were the people most likely to re-offend. More recently, careful studies that recognized the systematic differences between men who stuck with the programs and those who didn’t found that mandating the programs had little or no effect.

There is disagreement on what to do next. Some researchers and practitioners have argued that BIPs could still be effective for some people. What is needed is better targeting and tailoring of the BIPs, coupled with evaluation. Others have taken the position that policymakers should stop relying on these programs because they waste valuable resources and create a false sense of security for women who think their batterer will be reformed through the programs. This is a really important evidence-based debate that should result in more effective policy.

This example is not unique. Evaluations of evaluations, known as systematic reviews, are becoming prevalent in many sectors, including medicine, international development, education, and crime and justice.

 

The way a systematic review works is that a team of reviewers focuses on a specific policy intervention. The reviewers do an exhaustive search for all the evaluations of that intervention, including academic and consultant evaluations and studies from other geographies. Then the reviewers carefully assess each study. In particular, they focus on how carefully each study considered what would have happened in the absence of the intervention (the counterfactual) and whether there is a risk that the results are skewed one way or another.

The systematic review report discusses each study’s risk of bias and then reaches a conclusion about the intervention based on the studies with the lowest risk of bias. In some cases a systematic review may conclude that a program is effective, or that it is not. In other cases a review finds that there is insufficient evidence to reach a conclusion. In these cases the review recommends how evaluations should be performed in the future to reach a firmer conclusion.
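To give a flavor of the pooling step, here is a stylized sketch in Python. The effect sizes, standard errors, and bias ratings are invented, and real systematic reviews involve far more judgment than this arithmetic; the point is simply that studies judged to have a low risk of bias can be combined with inverse-variance weights to reach an overall conclusion.

```python
import numpy as np

# Hypothetical study results: estimated effect (say, percent energy savings),
# its standard error, and the reviewers' risk-of-bias rating.
studies = [
    {"effect": 8.0,  "se": 1.5, "risk_of_bias": "low"},
    {"effect": 2.5,  "se": 0.8, "risk_of_bias": "low"},
    {"effect": 12.0, "se": 2.0, "risk_of_bias": "high"},  # set aside below
]

# Keep only the low-risk-of-bias studies and pool them with
# fixed-effect, inverse-variance weights (weight = 1 / variance).
kept = [s for s in studies if s["risk_of_bias"] == "low"]
weights = np.array([1 / s["se"] ** 2 for s in kept])
effects = np.array([s["effect"] for s in kept])

pooled = (weights * effects).sum() / weights.sum()
pooled_se = (1 / weights.sum()) ** 0.5
print(f"Pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")
```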

There are several reasons why now is the time to begin doing systematic reviews of energy efficiency evaluations. First, a very large number of evaluations have been completed across the country and world. There is value in reviewing and synthesizing these evaluations so that policymakers everywhere have access to the best evidence. Second, new statistical approaches are taking hold in energy, fueled in part by smart meter data. Systematic reviews can help policymakers make sense of the diversity of approaches. Third, energy efficiency is taking on increasing importance, as reflected in ambitious goals and growing spending. The evidence base needs to be strong to ensure the resources are being used effectively.

Research conducted at The E2e Project points to questions that systematic reviews could help answer. When are ground-up engineering estimates most appropriate to use? How important is the rebound effect? What considerations are most important when embedding evaluations into program design? What can interval smart meter data tell us about the effectiveness of programs that other approaches cannot?

Several of these were highlighted by agency staff at an energy efficiency workshop held by the CEC last month.

California produces only 1% of global greenhouse gas emissions. Given that, as Severin emphasized in a prior blog, the state’s policies can’t possibly have a meaningful direct impact on climate change. Instead, the way California can best address the climate change challenge is through invention and learning, then exporting the knowledge to the world.

In the case of energy efficiency, California should focus on finding which policy interventions are most effective and sharing the findings. Policymakers should take a look at systematic reviews as a tool to accomplish this.


The Promise and Perils of Linking Carbon Markets

The theme of the week is “We’re stronger together”. This rallying cry applies in lots of places, including climate change mitigation! So this week’s blog looks at how this theme is playing out in carbon markets. A good place to start is California’s recent proposal to extend its GHG cap-and-trade program beyond 2020. One of the many notable developments covered by this proposal is a new linkage between California’s carbon market and the rest of the world.

[Figure: 2020 emissions caps under California’s linked cap-and-trade program]

Notes: The graph plots 2020 emissions caps. Quebec and California have been linked since 2014. The proposed link with Ontario would take effect in 2017. Emissions numbers summarized in the graph come from here and here.

Admittedly, I am uniquely positioned to get really excited about linking the province of Ontario (where I was born and raised) with the state of California (my home of 10+ years) under the auspices of the California carbon market (an institution I spend a lot of time thinking about). But excitement and interest in this “Ontegration” extend well beyond the Canadian economist diaspora. Why? Because many see this kind of linkage between independent climate change policies as the most promising (albeit circuitous) means to an elusive end: meaningful climate change mitigation.

How did we get here?

After years of work to establish a globally coordinated “top-down” climate policy, with very limited success, there has been an important pivot towards a more decentralized, bottom-up strategy. This change in course is motivated by the idea that more progress can be made if each jurisdiction is free to tailor its climate change mitigation efforts to match its own appetite for climate policy action. Whether, how, and when these independent carbon policies should link together, so that regulated entities in one region can use allowances from another, is viewed as “one of the most important questions facing researchers and policy-makers.”

To grease the wheels of this coming-together process, the Paris agreement provides a framework to support bottom-up policy linkages. International organizations such as the World Bank are working hard to translate this framework into on-the-ground success stories.  But so far, real-world carbon market policy linkages are few and far between.

I can count the number of linkages between independent trading programs on one hand (the EU ETS is linked to Norway, Iceland, Switzerland, and Liechtenstein; California is linked with Quebec). Post-Brexit, we’ll probably see one more: a likely outcome is that the UK will establish its own carbon market and link it with the EU ETS. The California-Ontario link is a good-news addition to this list, which is why Ontegration is generating both hope and headlines.

Why link?

The most fundamental argument for linking emissions trading programs boils down to simple economics.  Why pay $20 to reduce a metric ton of carbon in California when you can pay $1 to reduce a metric ton in China?  If marginal abatement costs differ across regional cap-and-trade programs, allowing emissions permits to flow between programs to seek out the least cost abatement options will reduce the overall cost of meeting a collective emissions target. Of course, how this net gain is allocated across linkers will depend on how the linkage is implemented.
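Here is a back-of-the-envelope sketch of that logic in Python. The linear marginal abatement cost curves are hypothetical, calibrated only so that each region’s standalone marginal cost matches the $20 and $1 figures above.

```python
# Two regions must jointly abate 200 tons (100 tons each under separate caps).
# Assume linear marginal abatement costs: MAC_i(a) = c_i * a dollars per ton.
c_high, c_low = 0.20, 0.01        # hypothetical slopes: MACs of $20 and $1 at 100 tons
a_cap_high, a_cap_low = 100.0, 100.0

def total_cost(c, a):
    """Cost of abating a tons when MAC = c * a (area under the MAC curve)."""
    return 0.5 * c * a ** 2

# No linking: each region meets its own cap at home.
autarky = total_cost(c_high, a_cap_high) + total_cost(c_low, a_cap_low)

# Linking: same 200 tons of abatement, reallocated until marginal costs equalize,
# i.e. c_high * a_high = c_low * a_low with a_high + a_low = 200.
total = a_cap_high + a_cap_low
a_high = total * c_low / (c_high + c_low)
a_low = total - a_high
linked = total_cost(c_high, a_high) + total_cost(c_low, a_low)

print(f"Cost without linking: ${autarky:,.0f}")
print(f"Cost with linking:    ${linked:,.0f}")
print(f"Common permit price:  ${c_high * a_high:.2f}/ton")
```

Same total abatement, a fraction of the cost; as noted above, how that saving is split between the two regions depends on how the linkage is implemented.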

Other benefits include:

  • Liquidity and stability. More integrated carbon markets are more liquid and can be less volatile, although market linkages can also propagate shocks more directly from one country to another. The EU ETS provides a case in point: the chaos that followed the Brexit referendum has directly (and significantly) impacted the price of carbon in 31 countries.
  • Economies of scale. Some jurisdictions are simply too small to support a well-functioning carbon market. If program operations are combined, administrative costs and effort can be shared across multiple jurisdictions. Larger markets also reduce the risk of market power, a major concern for small jurisdictions trying to go it alone.
  • Political considerations. Politics are critical in determining whether a linkage will fly or die. Ontegration offers a case in point. California is happy to demonstrate that its climate policy initiative has brought other jurisdictions on board. In Ontario, the case for moving ahead with cap-and-trade is easier to make when the proposal involves plugging into an established carbon market operation rather than building a market from the ground up.

Market linkage comes with strings attached

The appeal of a bottom-up climate policy is that individual jurisdictions have the autonomy to pick and choose their own policy parameters.  But I am not going to link my carbon market with yours if I’m worried you’re going to introduce rogue policy changes that drive my carbon price and/or carbon emissions in an unpalatable direction. In other words, mutually acceptable linkage agreements will almost certainly impose limits on autonomy because the policy design choices in one jurisdiction affect outcomes in others.

Linkage does not require that all market design features are perfectly harmonized, but it does require careful coordination of design elements deemed to be critical. The Quebec-California linkage agreement provides a well-documented example. These kinds of deliberations get increasingly complex as the number of jurisdictions increases. Negotiations also become much more complicated when the benefits from linkage are distributed unequally across regions.

An important, related concern is that a linked network of carbon markets is only as strong as its weakest link. If one region lacks the capacity to monitor and enforce market rules effectively, this can undermine the environmental integrity of the entire system.


Limits to linkage?

Recent developments in Europe and California are demonstrating how carbon markets can be linked when partners see (mostly) eye to eye, market designs are similar, and political objectives are aligned.  Given current carbon market conditions, linkages have yet to deliver much (if anything) in terms of economic gains from trade.  But they have expanded the scope of carbon markets and laid down foundations for future cooperation. Some good news for a change.

Forging linkages between less compatible systems will require more effort and ingenuity. It has been suggested, for example, that regions with more aggressive caps might be convinced to link with countries imposing less aggressive caps if “carbon exchange rates” define favorable terms of permit trade for regions with more ambitious mandated reductions. Distorting market incentives in this way might help eliminate political barriers to linkage, but this would also undermine a fundamental economic reason for linking markets in the first place. Mitigation costs will not be minimized if linkage agreements drive a wedge between regional mitigation incentives. At some point, the costs of policy coordination start to outweigh the economic and environmental benefits of linking.
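To see the wedge concretely, here is a tiny numerical sketch in Python. The two-for-one exchange rate and the permit price are invented, and the arbitrage condition is assumed to hold exactly.

```python
# Hypothetical exchange rate: one permit from the ambitious region (A)
# counts as two permits in the less ambitious region (B).
exchange_rate = 2.0
price_A = 15.0                      # assumed permit price in region A, $/ton

# If permits are freely convertible, arbitrage ties the two prices together.
price_B = price_A / exchange_rate

print(f"Marginal incentive to abate in A: ${price_A:.2f}/ton")
print(f"Marginal incentive to abate in B: ${price_B:.2f}/ton")
# An abatement option costing $10/ton is taken up in A but skipped in B,
# even though a ton is a ton to the atmosphere, so the linked system no
# longer exploits the cheapest reductions first and costs are not minimized.
```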

We’re stronger when we work together. This is particularly true in fighting a global threat like climate change. But the explicit linking of carbon markets is only one way to join together and move global climate change mitigation forward. We should celebrate recent carbon market linkages, but realize they are one means to an end, not an end in themselves.

 
