Will Smog in China Spur Climate Solutions?

I have read a number of news stories about air pollution in the major Chinese cities recently. A soupy smog of particulates, ozone, sulfur and nitrogen oxides hangs over Beijing, Tianjin and other northern cities. The concentration of particulate matter (PM2.5) in Beijing recently registered at 501 μg/m3, more than 15 times the highest recorded value in Los Angeles County.

Beijing Smog

Expats are fleeing the country, while the lifespans of those who remain are falling. The primary culprits are coal-fired power plants, which produce roughly 80 percent of China’s electricity.

Some of my clean tech colleagues seem to be almost cheering for Chinese smog, though. They seem to believe that the Chinese will be forced to invest in renewables and clean up their energy sector to address the local pollution. Because it is visible to the naked eye, has a distinctive smell and has immediate impacts on quality of life, smog, unlike greenhouse gases, will spur a clean energy transformation. Or so some argue.

I love the idea of killing two birds with one stone as much as the next person, but I’m skeptical of this particular application. I worry about the greenhouse gas implications of both demand- and supply-side responses to smog.

On the demand side, I worry that people will react to air pollution by consuming more energy. I was in Singapore recently and stunned to learn that 30% of the households do not have air conditioning — this in a country with the third highest average income and beastly hot (to my Minnesota-born tastes) weather. If I had to live in Singapore without air conditioning, I might never sleep.

Natural Air Conditioning in Singapore?

But, a good share of the local Singaporeans seem to think that “air conditioning” involves opening the windows wide and capturing any wisp of a breeze.

As air pollution increases, the natural, low-energy approach to air conditioning becomes less attractive. My colleague at the National University of Singapore, Alberto Salvo, is working on a study that will document just how much air conditioner purchases and electricity consumption increased during a recent episode of poor air quality.

Similarly, wealthy Chinese are investing in air conditioners and air purifiers, and more people are spending time in the miles and miles of air conditioned underground shopping centers that connect seamlessly with above-ground buildings. If the air is hot, muggy and polluted, why ever go outdoors?

But if smog encourages governments to adopt renewables for energy production, it won’t matter that city-dwellers are consuming more energy. Will that work? I have concerns about the supply-side responses to smog as well.

Unfortunately, most commercial-scale technologies that remove local pollution from the energy sector create more greenhouse gases. In other words, greenhouse gases and local pollutants are typically substitutes, not complements, in the production process.

Consider coal gasification, a process that transforms coal into methane. Power plants that burn natural gas emit many fewer criteria pollutants than coal plants, so turning coal into natural gas and then burning the gas to make electricity can reduce local air pollution significantly.

China currently has one operating coal gasification plant and four under construction. The government recently announced plans to produce the equivalent of more than 10% of its total gas demand using the technology by 2020. In fact, if the gas created from those five plants plus four others that are already permitted were all used to generate electricity in efficient combined-cycle natural gas plants, it would produce more electricity than China’s wind turbines do.

So, coal gasification will help reduce local pollution and it appears commercially viable, at least in China. Unfortunately, it’s a disaster for climate change.

This study reports that, “If all 40 or so of the projected [coal to gas] facilities are built, the GHG emissions would be an astonishing ~110 billion tonnes of CO2 over 40 years.” To put this in context, all of China currently emits less than 10 billion tons annually. Gasifying coal to burn in a natural gas power plant can produce almost twice as much greenhouse gas as a coal power plant.
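To see the scale, here is a rough back-of-envelope check in Python; the only inputs are the figures quoted above.

```python
# Back-of-envelope check on the coal-to-gas (CTG) emissions arithmetic.
total_ctg_gt = 110      # projected CO2 from ~40 CTG plants over 40 years (Gt)
horizon_years = 40
china_annual_gt = 10    # China currently emits a bit less than this (Gt/yr)

annual_ctg_gt = total_ctg_gt / horizon_years
print(f"CTG plants would emit ~{annual_ctg_gt:.2f} Gt of CO2 per year,")
print(f"i.e. ~{annual_ctg_gt / china_annual_gt:.0%} on top of China's current annual total.")
```

Even averaged over four decades, the projected coal-to-gas build-out would add more than a quarter to China’s current annual emissions.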

As far as I’m concerned, the only potential silver lining is that it appears much easier to sequester the CO2 emitted from coal that has first been converted to gas than to sequester the CO2 from a coal power plant.

But this will involve convincing the Chinese government that it needs to address both climate change, by investing in sequestration, and local smog, by gasifying its coal. Unfortunately, there’s no free lunch in addressing smog.

Of course, coal gasification is not the only, nor necessarily even the cheapest, means of reducing local air pollution. Other options include building more nuclear plants, accessing Chinese shale gas reserves and burning gas instead of coal, and replacing old, inefficient coal plants with newer, more efficient plants fitted with pollution control technology (scrubbers, baghouses, etc.). But, other than nuclear, these options will go much further toward reducing local air pollution than toward reducing greenhouse gases.

So, we need to continue pushing for real climate solutions as we are unlikely to see a silver bullet emerge as the by-product of some other goal.


Shipping oil by rail: A modern-day problem of social cost

While environmental groups and other stakeholders have been working hard to delay – if not derail – major pipeline projects like Keystone, oil companies have been working hard to find alternative ways to get their crude oil to market:

A single unit train can move up to 90,000 barrels of oil. (Photo source)

Rail transportation is viewed by some as a stop-gap measure because it is more costly per gallon-mile than pipeline transport. But it has some clear advantages. Trains can directly access virtually any market in North America. Producers can more quickly shift where the oil is shipped. And, in general, expanding rail transport capacity requires less regulatory oversight.

The graph below documents the striking increase in the number of oil-filled tank cars in recent years. To put these numbers in perspective, the share of domestic crude oil production carried by rail has risen from approximately 1 percent in 2010 to 10 percent in 2013. In California, the Energy Commission is projecting that rail deliveries of crude oil could account for 25 percent of the state’s total by 2016.

[Graph: oil-filled tank cars moved by U.S. railroads, by year]

Source: Association of American Railroads

As the number of oil-filled tank cars has increased, so has the number of train accidents. Last year more oil spilled from trains in the United States than in the previous four decades combined. A series of tragic, high-profile derailments has focused attention on the damages that can result when crude oil is transported by rail. The debate over what – if anything – the government should do to reduce the risks of further damage is both old and new.

Railways and the problem of social cost

Almost a century ago, trains throwing sparks into neighboring fields and forests helped ignite a canonical debate in economics. These railroad sparks sometimes set fire to farms and woodlands. Writing in 1920, Arthur Pigou observed that if railroads fail to account for these damages, profit-maximizing operating decisions will not be socially optimal. He proposed taxation as a means of aligning private and social interests.

In 1960, Ronald Coase revisited this example in a famous paper titled The Problem of Social Cost. He observed that if property rights are well defined and costless to enforce, private bargaining between railroads and landowners should result in a socially efficient outcome. (Interested readers should see Severin’s post celebrating Coase’s influential insights).

Current debates about transporting oil by rail bring us back to the question of how to internalize this canonical social cost. There are at least two reasons why modern-day costs of transporting crude oil might not be fully internalized by the firms making key operating decisions.

More than sparks

When Pigou and Coase were pondering “railway nuisances,” they had in mind relatively small sparks thrown off along railroad tracks. The damages those sparks caused were presumably far less devastating than those that can result when a unit train carrying crude oil runs off the rails.

Tragic derailment in Lac-Mégantic, Quebec (Paul Chiasson / THE CANADIAN PRESS)

Today’s railroads are liable for the crude they carry. But a recent Wall Street Journal article reports that current insurance coverage does not begin to cover the damages associated with a worst-case scenario.

Take, for example, the tragic derailment in Lac-Mégantic, Quebec. Cleanup costs alone are estimated to exceed $200 million. The train’s operator had liability insurance of $25 million. The railway has sought bankruptcy protection, and the government has had to step in to cover the remaining expenses.

So long as taxpayers serve as the backstop, some fraction of the damages will remain external to railway operating decisions. This can result in under-investment in risk mitigation. (This “judgment proof” problem is not unique to the railroad industry. See, for example, Lucas’s recent paper, which discusses how existing bond requirements provide inadequate incentives to protect against accidents in the natural gas industry.)
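To make the under-investment logic concrete, here is a stylized numerical sketch in Python. The $200 million damage and $25 million coverage figures echo the Lac-Mégantic example above; the accident probability and the mitigation-effectiveness parameter are invented purely for illustration.

```python
import numpy as np

# Stylized sketch of the "judgment proof" problem: a railroad chooses
# mitigation spending s, and an accident causing damage D occurs with
# probability p(s) = p0 * exp(-k * s). Parameters p0 and k are made up.
D = 200e6      # full accident damage ($)
cap = 25e6     # liability effectively capped at insurance coverage ($)
p0, k = 0.05, 0.5e-6

s = np.linspace(0, 10e6, 10_001)       # candidate mitigation budgets
p = p0 * np.exp(-k * s)                # accident probability at each budget

social_cost = s + p * D                # internalizes the full damage
private_cost = s + p * min(D, cap)     # internalizes only the capped share

print(f"Socially optimal mitigation:  ${s[social_cost.argmin()] / 1e6:.1f}M")
print(f"Privately optimal mitigation: ${s[private_cost.argmin()] / 1e6:.1f}M")
```

With full liability, expected damages justify spending millions on prevention; with the cap, the first dollar of mitigation buys less than a dollar of expected liability reduction, so the privately optimal level is zero.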

A principal-agent problem?

Railway operators own the locomotives, employ the crews, and provide the infrastructure (tracks, signals, etc.). But there has been a relatively recent shift in rail car ownership: over 50 percent of the tons shipped on North American railroads now move in cars owned by non-railroad leasing companies. These railcar lessors make important decisions about the safety attributes of the tank cars that ride the rails.

Common-carrier obligations prevent railroads from refusing hazardous cargo. This can give rise to a principal-agent problem when a railway is liable for damages caused by accidents involving rail cars that it does not own or maintain. Moreover, industry analysts have argued that railroads are limited in their ability to pass along product-specific insurance costs in their rates. If rail prices cannot fully signal the insurance costs of carrying risky cargo, carriers may under-invest in risk mitigation measures.

Last month, the nation’s largest hauler of crude oil (BNSF Railway Company) made headlines when it announced that it would purchase its own fleet of 5,000 oil tank cars with safety features that exceed the latest industry standards. This was hailed as an important voluntary commitment to improving safety standards for transporting crude by rail. The case is unusual, however, in that the parent company stands to benefit from the railroad’s voluntary investment in safer cars: BNSF is owned by Warren Buffett’s Berkshire Hathaway, which also happens to own one of the largest U.S. railcar makers. Other railroad companies with less to gain privately from such a commitment seem unlikely to follow suit.

An old problem in need of a new solution

The punchline here is that current levels of investment in railway and rail car safety are almost certainly too low.  Ideally, the government would intervene with policies designed to incentivize efficient investments in risk mitigation. But to implement these policies cost-effectively, we need to know how and where rail accidents are most likely to happen, and how costly these accidents are likely to be.

The Congressional Research Service recently noted that data tracking oil spills from rail transport do not currently exist. This seems like an important – and highly policy relevant – area for future research. Given current – and projected – volumes of oil moved by rail, a better understanding of how damages from rail transport of hazardous materials manifest could play a critical role in informing new policy solutions.


It’s Time to Refocus California’s Climate Strategy

You know this already, but let’s review:

  • Climate change is a global emissions problem.
  • California produces about 1% of the world’s greenhouse gas emissions.
  • Over the next few decades, the majority of emissions will come from developing countries.
  • If we don’t solve the problem in the developing world, we don’t solve the problem.

And lastly,

  • The world is making negative progress on climate change. Evidence of the potential for drastic climate change is growing, but worldwide GHG emissions and concentrations of GHGs in the atmosphere are still rising. Exxon’s just-released Energy Outlook predicts that over the next 25 years world oil consumption will rise 19% and natural gas consumption 66%, while coal will stay flat, with no decline.

Nearly all of this was known back in 2006, when California passed the Global Warming Solutions Act, though the massive growth in China’s coal consumption was just getting momentum.  Back then, the argument for California emissions targets was “leadership” and that is still the word one hears most often from defenders of the state’s current package of GHG markets and mandates.

[Figure: coal consumption by region]

I’ve heard many different meanings of leadership in the context of California emissions targets:

  1. Showing that the regulations and cap & trade market are logistically feasible, and developing implementation models that could be adopted at national and international levels
  2. Showing that people are willing to sacrifice or change their way of life to fight climate change
  3. Showing that people won’t have to sacrifice because reducing GHGs will improve the economy
  4. Recognizing that someone has to move first to start a worldwide movement to reduce GHGs

There is something to each of these arguments (well, maybe not #3. Most economists think addressing climate change will be a small drag on the economy—if you don’t count the worldwide economic value of averting climate change).

But it’s 2014 now.  The U.S. is further from adopting a price on GHG emissions than it was in 2006.  Fewer members of Congress than 8 years ago even believe climate change is a problem.  The three largest market mechanisms for reducing GHGs (California’s cap-and-trade, the EU-ETS, and the eastern U.S. RGGI program for utility emissions) all have very low prices that are doing little to change the course of emissions.

For these reasons, I think it’s time to have a frank review of California’s climate policy.  We need to refocus on how California can realistically contribute to solving the problem of global climate change.  Reaching emissions targets for California may be part of that strategy, but that should not be the singular or even the primary goal.

The primary goal of California climate policy should be to invent and develop the technologies that can replace fossil fuels, allowing the poorer nations of the world – where most of the world’s population lives – to achieve low-carbon economic growth.  If we can do that, we can avert the fundamental risk of climate change.  If we don’t do that, reducing California’s carbon footprint won’t matter.

Focusing on solving global climate change would mean that a major test of any policy proposal would be whether it is exportable to the developing world.  It’s always hard to predict what will work, but “working” in California isn’t particularly valuable if the approach doesn’t work where most of the planet’s emissions will be coming from in the 21st century.  GHG-reduction strategies that are very expensive – but bearable for a rich country – only make sense if they have a plausible path for getting to near cost competitiveness in poor countries.

That means less emphasis on numerical measures of California emissions and more emphasis on learning. What more are we likely to know at the end of a program, and will that knowledge be applicable in other parts of the world?

Implications of a learning-driven strategy to tackle global climate change include:

  • In procuring renewables, California’s current “least cost, best fit” approach should be augmented with “most learning.”  That means a new technology about which we (and the rest of the world) will learn a lot may get funded even if it is likely to be more expensive than replicating a mature technology.
  • We need greater emphasis on technology creation, both in the lab and downstream, where a lot of the learning goes on.  California should consider creating a Climate Change Solutions Institute akin to the California Institute for Regenerative Medicine.  The goal would be to research and develop approaches that could be applied by a large share of the world’s population.
  • Every California energy efficiency program needs rigorous evaluation of what worked and why, and what didn’t work and why not.  And we need to study where else in the world the same sort of efficiency policies would (or wouldn’t) be effective.  The greatest value from the state’s energy efficiency leadership is likely to be knowledge creation, not GHG reduction.

This does not mean California should abandon pricing GHG emissions.  Putting a price on emissions helps boost green technologies across the board.  In addition, substituting cap-and-trade revenues (or GHG taxes) for income or sales taxes is a clear move towards improving economic efficiency and welfare.

California’s current strategy may eventually allow us to say “we’ve done our share; now the rest of you need to step up.”   But that isn’t leadership when more than 80% of the “rest of you” are living at less than one-quarter of our standard of living.  It’s time to make our Global Warming Solutions Act about global solutions.


Too Big to Fail?

(This post is co-authored by Catie Hausman)

The San Onofre Nuclear Generating Station (SONGS) was closed abruptly in February 2012. During the previous decade, SONGS had produced about 8% of the electricity generated in California, so its closure had a pronounced impact on California’s wholesale electricity market, requiring large and immediate increases in generation from other sources.

In a new EI@Haas Working Paper titled “The Value of Transmission in Electricity Markets: Evidence from a Nuclear Power Plant Closure,” we use publicly available data to examine the impact of the closure on economic and environmental outcomes. Because of the plant’s size and prominence, the closure provides a valuable natural experiment for learning about firm behavior in electricity markets.

[Photo: aerial view of the San Onofre Nuclear Generating Station, May 2012]

We find that the SONGS closure increased the cost of electricity generation by $370 million during the first twelve months. This is a large change, equivalent to a 15% increase in total generation costs. The SONGS closure also had important implications for the environment, increasing carbon dioxide emissions by 9.2 million tons over the same period. Valued at $35 per ton (IWG 2013), this is $330 million worth of emissions, the equivalent of putting more than 2 million additional cars on the road.
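These headline figures are easy to reproduce with back-of-envelope arithmetic. In the sketch below, the per-car figure of roughly 4.6 tons of CO2 per year is an assumption (a typical EPA-style estimate for a passenger vehicle); everything else comes from the numbers above.

```python
# Back-of-envelope reproduction of the SONGS closure figures.
cost_increase = 370e6    # increase in generation costs, first 12 months ($)
cost_share = 0.15        # reported as a 15% increase in total generation costs
extra_co2 = 9.2e6        # additional CO2 emissions (tons)
scc = 35                 # social cost of carbon, $/ton (IWG 2013)
tons_per_car = 4.6       # assumed annual CO2 from a typical car (tons)

print(f"Implied baseline generation costs: ${cost_increase / cost_share / 1e9:.1f} billion")
print(f"Value of the extra emissions: ${extra_co2 * scc / 1e6:.0f} million")
print(f"Car equivalents: {extra_co2 / tons_per_car / 1e6:.1f} million cars")
```

The arithmetic gives roughly $2.5 billion in baseline generation costs and about $320 million in emissions damages; the small gap from the $330 million cited above presumably reflects rounding in the underlying estimates.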

The closure was particularly challenging because of SONGS’ location in a load pocket between Los Angeles and San Diego. Transmission constraints and other physical limitations of the grid mean that a substantial portion of Southern California’s electricity demand must be met with local generation. When SONGS closed, these constraints began to bind, essentially segmenting the California market. The figure below shows the price difference at 3 p.m. on weekdays between Southern and Northern California. After the closure there were many more days with positive differentials, including a small number of days on which prices in the South exceeded prices in the North by more than $40 per megawatt hour.

[Figure: weekday 3 p.m. price difference between Southern and Northern California, before and after the closure]

These binding transmission constraints meant that it was not always possible to meet the lost output from SONGS using the lowest cost available generating resources. Southern plants were used too much, and Northern plants weren’t used enough. Of the $370 million in increased generation costs, we attribute about $40 million to transmission constraints and other physical limitations of the grid. This number is less precisely estimated than the overall impact, but is particularly interesting in that it provides a measure of the value of transmission.

The paper provides all the gory details about how we made these calculations. It turns out to be more difficult than a simple before-and-after comparison because during this period the California market was also experiencing a whole set of simultaneous changes to hydroelectric resources, renewables, demand, and fuel prices. What is helpful, however, is that transmission constraints were rarely binding prior to the closure. This means that observed behavior during the pre-period provides a good sense of how firms would have behaved during the post-period had there not been transmission constraints.

Our findings provide empirical support for long-held views about the importance of transmission constraints in electricity markets (Bushnell 1999; Borenstein, Bushnell and Stoft 2000; Joskow and Tirole 2000), and contribute to a growing broader literature on the economic impacts of infrastructure investments (Jensen 2007, Banerjee, Duflo and Qian 2012, Borenstein and Kellogg 2014).

The episode also illustrates the challenges of designing deregulated electricity markets. A new book chapter by Frank Wolak (here) argues that while competition may improve efficiency, it also introduces costs in the form of greater complexity and a need for monitoring. Transmission constraints add another layer to this complexity by implicitly shrinking the size of the market. Constraints increase the scope for non-competitive behavior, but only for certain plants during certain high-demand periods, so understanding and mitigating market power in these contexts is difficult and requires a sophisticated system operator.


It just doesn’t add up. Why I think not building Keystone XL will likely leave a billion barrels’ worth of bitumen in the ground.

I am not a fan of blanket statements. Whenever oil sands come up in casual conversation, many of my economist friends argue that “the stuff will come out of the ground whether we like it or not”. When the discussion turns to Keystone XL, the general attitude is that “it simply doesn’t matter. The Canadians are just going to build pipelines to the East and West and ship the stuff to Asia and elsewhere.” So I started reading and learned a number of interesting things (which I have written up in more detail here).

Alberta’s oil sands reserves are estimated at 168.7 billion barrels, which eclipses the reserves of Iran, Iraq, Kuwait and Russia. What makes these reserves different from those of Saudi Arabia, Venezuela and the countries named above is that they take the form of crude bitumen. As has been widely discussed, the mining, upgrading, transport and refining of this resource is very energy intensive. As a consequence, well-to-wheel emissions from these oil sands are 14-20% higher than those of a weighted average of transportation fuels used in the United States.

The problem the owners of this precious resource have is that there is simply so much of it, and currently there is nowhere near enough transport capacity to get the desired number of barrels to refineries. This is not news. What I argue below, however, is that even if every pipeline project on record is built on time and rail capacity is expanded aggressively, there still is not enough transport capacity to meet industry-projected supply. This means, of course, that Keystone XL matters for how much of the oil sands will be extracted over the next 26 years (the “official” time horizon adopted by the State Department). I think that even under the best-case scenario in terms of supply, where all other pipeline projects are approved and built, not permitting Keystone XL will likely leave 1 billion barrels in the ground by 2030. If other projects are not built, Keystone becomes marginal earlier and that number becomes even bigger. Now on to the pesky details.

Projected Supply and Capacity

The Canadian Association of Petroleum Producers (CAPP), an industry alliance, anticipates rapid growth in the production of oil sands until 2030. According to its projections, the supply of total crude, both heavy and light, will grow from 3.438 million barrels per day (mbpd) to 7.846 mbpd, and 97% of this growth is projected to come from the development of oil sands. This represents a 3.8-fold increase in the supply of oil sands compared to today.

As local refinery capacity is severely limited (Goldman Sachs, 2013), the majority of the additional, industry-projected 4.285 mbpd by 2030 will have to be shipped out of Northern Alberta. This can be done by pipeline to the West, East or South, or by surface transport (rail or barge). Pipelines are the least expensive mode of transportation and allow the shipment of both light and heavy crude. Goldman Sachs estimated total takeaway capacity via pipeline at 2.9 mbpd in 2013, plus another 0.454 mbpd of local refining capacity. Rail capacity to the US is currently estimated at 150,000 bpd and could reach 500,000 bpd by 2017/18. The required additional takeaway or local refining capacity in 2030 is therefore 4.492 mbpd.

In order to meet this demand, several pipeline projects have been proposed, which I list below with their proposed starting dates and capacities as provided by Goldman Sachs:

  • Alberta Clipper 1 (2014): 0.120 mbpd
  • Alberta Clipper 2 (2015): 0.230 mbpd
  • Keystone XL (2015): 0.830 mbpd
  • Northern Gateway (2017): 0.525 mbpd
  • Energy East (2017): 0.850 mbpd
  • Transmountain (2017): 0.590 mbpd

Each of these projects faces regulatory hurdles, and none is certain to be approved. I consider the following four scenarios (a rough tally of each scenario’s 2030 capacity follows the list):

1. All pipelines get built, rail capacity is ramped up to 500,000 bpd by 2018 and continues to grow by 76,000 bpd per year thereafter.

2. All pipelines except Keystone XL get built, rail capacity is ramped up to 500,000 bpd by 2018 and continues to grow by 76,000 bpd per year thereafter.

3. All pipelines except Keystone XL, Alberta Clipper 1 & 2 get built, rail capacity is ramped up to 500,000 bpd by 2018 and continues to grow by 76,000 bpd per year thereafter.

4. No pipelines get built, rail capacity is ramped up to 500,000 bpd by 2018 and continues to grow by 76,000 bpd per year thereafter.
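Here is a minimal sketch in Python of the 2030 end-point arithmetic implied by the numbers above. It only tallies capacity in the final year; the year-by-year paths plotted in the figure require CAPP’s annual supply projections, which are not reproduced here.

```python
# 2030 takeaway-capacity tally per scenario, using only the figures above (mbpd).
supply_2030 = 7.846         # CAPP-projected total crude supply
existing_pipeline = 2.900   # Goldman Sachs estimate of 2013 pipeline capacity
local_refining = 0.454

pipelines = {"Alberta Clipper 1": 0.120, "Alberta Clipper 2": 0.230,
             "Keystone XL": 0.830, "Northern Gateway": 0.525,
             "Energy East": 0.850, "Transmountain": 0.590}

rail_2030 = 0.500 + 0.076 * (2030 - 2018)   # 0.5 mbpd by 2018, +0.076/yr after

need = supply_2030 - existing_pipeline - local_refining
print(f"New takeaway/refining capacity needed by 2030: {need:.3f} mbpd")  # 4.492

scenarios = {
    "1 (all pipelines)": list(pipelines),
    "2 (no Keystone XL)": [p for p in pipelines if p != "Keystone XL"],
    "3 (no Keystone XL or Clippers)": ["Northern Gateway", "Energy East", "Transmountain"],
    "4 (no pipelines)": [],
}
for name, built in scenarios.items():
    capacity = sum(pipelines[p] for p in built) + rail_2030
    verdict = "meets" if capacity >= need else "falls short of"
    print(f"Scenario {name}: {capacity:.3f} mbpd, {verdict} the need")
```

Only scenario 1 clears the bar in 2030; the cumulative wedge between the supply path and each capacity path is what generates the billion-plus barrels left in the ground discussed below.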

[Figure: available takeaway capacity under each scenario vs. projected takeaway need]

The figure above plots available takeaway capacity for each scenario against projected takeaway need. The first thing to note is that the only scenario providing sufficient capacity to ship the 2030 level of producer-projected bitumen out of Alberta is scenario 1, which assumes that all proposed projects are approved and rail capacity is aggressively built out. This scenario also has plenty of spare capacity until 2030 and would minimize the need for rail transport until the end of the period. Scenario 2, a world sans Keystone XL, has plenty of capacity until 2024, at which point both high- and low-cost transport modes are filled. From then on, there is no currently proposed pathway able to ship out the difference between the red line and the black dotted line. That triangle amounts to roughly one billion barrels that would “stay in the ground” by 2030 – assuming no additional projects. In scenario 3, the green dotted line, producers run out of shipping capacity by 2023, and the missing capacity leaves 1.9 billion barrels in the ground. If no pipelines are built within or out of Canada and shippers must rely on rail alone, capacity would run out this year and roughly 10 billion barrels would stay in the ground. This last scenario would require that all pipeline projects be denied and no alternative projects proposed and granted. What is noteworthy about it is that, without new pipelines, rail does not provide sufficient capacity to meet projected takeaway demand even in the short run. Not building Keystone XL would make the rail capacity constraint binding and therefore slow extraction even in the short run.

Other factors affecting investment in tar sands

As discussed in the previous section, the first factor that will slow development of oil sands in the absence of Keystone XL is that, under the scenarios described above, there is simply not enough transport capacity to realize CAPP’s supply projections out to 2030, even if all other projects are built and rail capacity grows at its most rapid pace. It is important to note that this argument is independent of the marginal cost of resource extraction, which is addressed in a variety of other reports.

The second factor is regulatory uncertainty. As none of the projects has yet been approved, and as it is far from certain that Northern Gateway or any of the others will gain regulatory approval, large-scale investment in oil sands extraction remains risky. The path forward does not look smooth for many of these projects, largely due to local and regional resistance and multiple court challenges. Goldman Sachs has in the past downgraded the resource for this reason.

The third factor, which is of key importance, is that oil sands currently enjoy an unfair advantage: they are a very carbon-intensive source of transportation fuel, yet in the absence of a carbon tax or other price-based mechanism their price is artificially lower than is socially optimal. Should a global or US-only carbon tax based on life cycle analysis (LCA) emerge in the next decade, it would decrease the price per barrel received by producers of oil sands and lower their profit margins. An LCA-based carbon tax would also mean that product shipped by rail carries a higher penalty than product shipped by pipeline. Building Keystone XL would therefore give oil sands an advantage even in a world with a carbon tax. All of this again would lower returns to investment in oil sands development.

Fourth, the future demand for petroleum-based transportation fuels depends heavily on overall demand and on the availability and cost competitiveness of renewable low-carbon alternatives. Fuel efficiency regulations already on the books, and further tightening of those standards, will shift the demand for transportation fuels inward. More competitive renewables will shift the demand for fossil transportation fuels in further. While it is not clear whether these alternative fuels will become cost competitive by 2030, delaying extraction now means that oil sands coming to market later may face lower demand.

The fifth factor is environmental regulation of rail transport. Shipping crude oil, be it heavy or light, by rail is risky, as accidents carry significant environmental and economic costs. A significant increase in rail transport (in my scenarios, an 11-fold increase from today) would likely bring increased safety and environmental regulation, further driving up the costs of rail transport. Higher rail transport costs would mean lower profit margins, since in my calculations rail capacity is fully used in all scenarios.

Finally, one uncertain factor is the very costly development of local refining capacity in Alberta. The refined product would still have to find its way to market by pipeline or rail, which, as shown above, are already significantly constrained.

Final Thoughts

If we use these industry projections of light, medium and heavy crude oil supply out of Western Canada, my calculations suggest that shipping the projected supply in 2030 requires all proposed pipeline projects to be built, along with a significant increase in rail transport. My calculations also suggest that not permitting Keystone XL will result in a binding transport constraint by 2024 at the very latest. If all planned pipeline projects are significantly delayed, not permitting Keystone XL will very likely reduce production in the short run and continue to do so until additional pipeline capacity comes online, which is less than certain. While this post does not conduct an industry-wide equilibrium analysis, it suggests that not permitting Keystone XL will keep a minimum of one billion barrels of heavy crude from Canadian bitumen in the ground through 2030 – in the absence of additional transport or refining projects. Globally speaking, 1 billion barrels sounds like a lot, but the US consumes that amount in about 50 days.

As carbon is a stock pollutant on human time scales, not permitting Keystone XL “buys time” for alternative transportation fuels and climate policies to develop. The comprehensive policy solution is a level playing field on which all transportation fuels compete while carbon is taxed at its marginal external cost. Trying to cure this large-scale burn with thousands of Band-Aids is simply not an efficient approach.


Why Aren’t We Talking About Net Energy Metering for LEDs?

The fights over net energy metering have gotten loud and heated. For those of you who have missed the drama, here, in a nutshell, is what “net metering” means. Say I install enough solar panels on my roof to provide about half of my electricity over the course of a year. On a sunny afternoon, if I’ve turned off my TiVo and my refrigerator and dryer aren’t running, my system might be generating more electricity than my house is consuming.

Net metering means that my utility will credit me for the “extra” power my system generates at times like this and charge me based on the difference between my total consumption and my total solar production, i.e. my net consumption. I will be selling back to the grid during the sunny afternoons when my own consumption is low. (See Severin’s previous post on this.)

Will your neighbors admire your solar panels?

So, how does this apply to LEDs? From a purely technical perspective, it doesn’t. An LED would never bring a house below zero consumption so that it’s selling back to the grid. But the zero consumption threshold isn’t what all the fights are about. If my consumption without the solar panels puts me on the fourth tier, where my rates are as high as 36 cents per kWh, my solar system helps me avoid paying for some pretty expensive electricity. Even if I never go below zero, a solar system will keep me down on the lower tiers, paying only 13 cents per kWh. (On a side note, the solar installers who have approached me understand this and size their systems to avoid the expensive power but not the cheaper stuff.)

But, PG&E has fixed costs, which won’t disappear just because I install solar – it still has to run distribution wires to my house and pay for my meter. It’s paying property taxes and financing costs for its power plants no matter how much electricity they produce.

The basic problem is that utilities are collecting fixed costs – which by definition do not vary as a function of how many kWh customers consume – on a volumetric basis. So, every time someone installs  solar panels, the remaining ratepayers have to pay slightly more to cover those costs. Yes, my neighbors (and all of PG&E’s residential customers) would have to pay for my meter and my “share” of the distribution lines running down our street if I got a large solar system.

The opponents of net metering argue that it provides unfair incentives for people to install solar, which leave the rest of the users on the hook to cover the utilities’ fixed costs. They cite the “death spiral,” meaning that rates get higher for non-solar customers, which induces more of them to switch to solar. The more sophisticated opponents note that solar installations are generally on rich people’s houses, so net metering is regressively subsidizing the rich.

This brings us to the question posed in the title. Why isn’t anyone complaining (at least very loudly) about unfair cost shifting or the death spiral when I buy an LED bulb? Just like a solar system, my LED bulb will help me avoid the 36-cent power, and, given that some of that price is collecting fixed costs, my neighbors will be left paying a tiny bit more – if not tomorrow, then after PG&E’s next rate case.

My first guess was that we’re not talking about an energy efficiency death spiral since we’re still talking about the energy efficiency gap, which implies that customers are not investing in seemingly cost-effective energy efficiency measures. In other words, customers have been leaving the proverbial $20 bills on the sidewalk and bypassing energy efficiency opportunities, like replacing their incandescent bulbs with LEDs, even though the switch could save them money.

But, I did some rough calculations and concluded that annual energy savings from LEDs could be on par with, or even larger than, output from distributed solar, especially when you bring in commercial lighting. Very roughly, California added approximately 1,000 MW of solar in 2013. At an estimated capacity factor of 17%, that’s roughly 1,500 GWh of annual solar production. This report estimates nearly 1,000 GWh of annual savings from light bulb standards in California, climbing to 11,000 GWh by 2018.
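The comparison is straightforward to check (a rough sketch; the 17% capacity factor and the GWh savings estimates are simply the figures quoted above):

```python
# Rough check: annual output of 1,000 MW of solar at a 17% capacity factor,
# compared with estimated annual savings from light bulb standards.
solar_mw = 1_000
capacity_factor = 0.17
hours_per_year = 8_760

solar_gwh = solar_mw * hours_per_year * capacity_factor / 1_000
print(f"Annual solar output: ~{solar_gwh:,.0f} GWh")   # ~1,500 GWh

led_now_gwh, led_2018_gwh = 1_000, 11_000              # cited estimates
print(f"Light bulb standards: ~{led_now_gwh:,} GWh now, ~{led_2018_gwh:,} GWh by 2018")
```

By 2018, the estimated savings from lighting standards alone would be several times the annual output of the solar capacity California added in 2013.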

Lots of potential?

And, we’re talking about potentially large amounts of money shifted across customers, as a large share of the typical utility’s costs is fixed. A California Public Utilities Commission study on net energy metering calculated that the typical solar customer’s bills were 54% higher than the utility’s incremental cost of serving them before they installed solar, and 12% less than that incremental cost afterward. This is suggestive of just how much fixed cost is collected on a volumetric basis, particularly on the higher tiers.

So, I predict many more years of heated discussions about rate restructuring. I’d guess that we will look back with amusement at the contentious debates about whether to add a monthly fixed fee to PG&E rates, because the typical customer will be paying a much larger share of their bill as a fixed charge.


Why the cool kids are flocking to energy and not water economics

Why do kids like to go to birthday parties? Because there is lots of sugar and other kids. Academic economists are not that different. Energy economics has attracted a lot of bright new minds, both young and not so young. The reason is simple: it’s an important topic, the people working in this space are people you want to hang out with (yes, Severin, Lucas, Catherine and Meredith, I mean you), and there are lots of really good data (the sugar).

What surprises me is that water economics has not experienced the same influx. Although there are some world-class economists working on this important resource, there is no rush into the field. The reason is simple: the data are terrible. Why am I thinking about this? The repeated calls for me to conserve water during this drought. They go something like this: “Dear Dr. Auffhammer, please reduce your water consumption by 20% this summer. There’s a drought. Thanks a lot. Your utility.” Well, here is a letter I am sending to my water utility:

Dear Sir/Madam:

I received your very nice note asking me to conserve water. It was printed on recycled paper and all future generations and I thank you for it. But I have three issues with your call for conservation:

1) I get a letter every three months telling me how much water I consumed. I have no idea of, and no way to figure out, how much water the drip irrigation I put in with my bare hands uses (compared to the inefficient spray system the previous owner had). Trust me, I tried. I lifted the 30-pound concrete plate over my water meter and chased away a few black widows the size of chickens, only to find that my water meter is analog (yes, with a needle). Even running my irrigation system at full blast for 15 minutes did not move it noticeably. There must be a better way. I can monitor my house’s electricity load in real time using my cell phone and a Rainforest gateway. It must be possible to replace the 19th-century meter on my house with something that uploads consumption data to my phone. The black widows want to be left alone.

2) How about charging me more for water when it’s scarce? In summers when there is a drought, charge me more! It does not have to be perfect, but if you raised my second-tier rate by 25%, I would decrease my consumption significantly. Even without a meter. I am scared of second tiers.

3) How about you roll out some real-time meters and let us energy people run some experiments in your service territory? The electric utilities (i.e., the cool kids) are doing it and are learning a lot from the essentially free consulting (I know how much McKinsey charges) they get from us academics. Some of the best work in energy economics has exploited randomized controlled trials to help us better understand households’ responses to quasi-shaming (Hunt Allcott’s work on OPOWER), used service-territory borders to better understand the price elasticity of electricity demand, analyzed the magnitude of the rebound effect, and so on.

While I understand that this would be a major change in the way we study water demand, now is the time. Monitoring has become much cheaper, and the information we gain from better monitoring will lead to a better understanding and potentially a more efficient allocation of water. We will gladly help you look at the effects of shaming your neighbors. We could have a startup called H2Opower! We will help you run experiments to evaluate the effectiveness of different pricing strategies. The opportunities are endless.

Best wishes,

Dr. Max
