Building Codes Don’t Save Electricity… or Do They?

It’s been blistering hot in the western U.S. Last week’s record-breaking temperatures had roads buckling, planes grounded, and electricity demand soaring.

Temperatures in California last week

High temperatures drive up peak electricity demand, which drives up investment in costly power lines and power plants. Looking ahead, it will be important to better understand the relationship between hot weather and energy demand as system operators gear up for climate change.

The relationship between heat and electricity demand can also help us look back in time to assess some demand-side investments we’ve already made. A new working paper uses this relationship to estimate energy savings from California’s building energy codes.

If you’re a regular reader of this blog, you might be having a déjà vu moment. Haven’t we tackled the “Do California building codes actually save energy?” question before? Do we really need to go down this path again? The answer is yes and yes.  This paper shows how ‘big data’ can shed new light on this old (but still relevant) question.

Do Building Energy Codes Save Energy?

If you are looking for a landmark energy efficiency program to evaluate, California’s Title 24 is a good place to start. It boldly went where no building code had gone before, and it has served as an energy efficiency model and inspiration for the nation and the world.

The original Title 24 code (circa 1977) for new buildings was designed to reduce the energy required to heat in the winter and cool in the summer. Given the size of the projected energy savings (significant for electricity, even larger for natural gas), we might expect to see an impact on energy consumption when we look at energy use today in homes built just before and just after these codes took effect. But annual electricity consumption at California homes built in the early 1980s is slightly higher, on average, than at homes built just before Title 24 was implemented.

Of course, there are all sorts of problems with this simple before/after comparison. New homes changed in many ways between the 1970s and 1980s (e.g. size and location, air conditioning penetration, the popularity of shag carpet). And some pre-1977 homes have presumably been updated.

 Shag carpet can add a cooler feel to a hot day

Constructing a what-would-have-happened-without-Title-24 benchmark that accounts for all the other time-varying factors that determine energy consumption is not easy. First, it’s hard, if not impossible, to find homes that are truly identical but for the Title 24 “treatment”. Second, even if you manage to find comparable houses, it can be hard to detect the effect of building codes amidst noisy electricity consumption.

 The signal in the noise

One way to cut down on the electricity consumption noise is to focus on the energy uses that were targeted by these building codes: heating in the winter and cooling in the summer. If Title 24 is delivering real energy savings, we should see energy consumption responding differently to changes in outdoor temperatures.

Howard Chong was the first to use this clever strategy, using electricity data from Riverside, California. He found that newer homes subject to more stringent building codes use more electricity in hot weather. However, he had fairly limited information about these homes and the people who live in them, so his estimates could be picking up the effects of other house characteristics that changed over time.

Cue Arik Levinson and his provocative 2016 paper, which brings more detailed data from over 14,000 California households to this building code question. He looks at how monthly energy consumption responds to temperature after removing the effects of neighborhood characteristics (e.g. average income, education) and home characteristics (e.g. house size, air conditioning, number of rooms). The graph below shows his estimates of weather sensitivity of electricity use for houses built in different eras (relative to a pre-1940 home).

The markers summarize how electricity consumption increases per 10 cooling degree days (CDD). Pre-1940s houses are the baseline category.
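For readers new to degree days: a cooling degree day simply accumulates how far temperatures rise above a base level. A minimal sketch, using the standard U.S. base of 65°F (the base and the example days are our illustration, not Arik’s data):

```python
# Cooling degree days (CDD): sum of how far each day's mean temperature
# exceeds a base temperature (65 degrees F is the common U.S. convention).
def cooling_degree_days(daily_mean_temps_f, base=65.0):
    return sum(max(0.0, t - base) for t in daily_mean_temps_f)

# Four summer days with mean temperatures of 70, 80, 90, and 60 degrees F
print(cooling_degree_days([70, 80, 90, 60]))  # 5 + 15 + 25 + 0 = 45.0
```

The estimates in the figure are expressed per 10 CDD, i.e. per ten such degree-day units.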

Like Howard, Arik estimates that electricity use in homes constructed after the Title 24 code increases faster when temperatures rise as compared to the pre-code homes. In other words, he finds no evidence that Title 24 building codes reduced electricity consumption.

You might have thought Arik’s important paper would be the last word on Title 24 energy savings. But an econometrician’s work is never done, especially as data quality continues to improve. When our UC Davis colleagues Aaron Smith, Kevin Novan, and Tianxia Zhou got their hands on hourly electricity smart meter data from homes in Sacramento, they were eager to take another look at this temperature response/home vintage relationship. They released their working paper last week (presented at our recent POWER conference).

Moving from monthly billing data to hourly smart meter data is like increasing the magnification power on your statistical microscope. You can see patterns in the data you could not see before. With high-frequency smart meter data, the authors can flexibly estimate the relationship between temperature and electricity consumption for each individual home and look much more precisely at how these relationships vary with home age and vintage.
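As an illustration of the general approach (a simplified sketch on simulated data, not the authors’ actual specification), one can regress each home’s hourly consumption on cooling degrees and then compare the estimated slopes across construction vintages:

```python
import numpy as np

# Illustrative sketch: estimate a home's temperature response by regressing
# its hourly consumption (kWh) on degrees above a cooling base.
def temperature_response(temps_f, kwh, base=65.0):
    cooling_degrees = np.maximum(0.0, np.asarray(temps_f, dtype=float) - base)
    X = np.column_stack([np.ones_like(cooling_degrees), cooling_degrees])
    coef, *_ = np.linalg.lstsq(X, np.asarray(kwh, dtype=float), rcond=None)
    return coef[1]  # extra kWh per degree above the base

# Simulated home with a true slope of 0.04 kWh per cooling degree
rng = np.random.default_rng(0)
temps = rng.uniform(50, 105, size=2000)  # hourly temperatures
usage = 0.5 + 0.04 * np.maximum(0, temps - 65) + rng.normal(0, 0.05, 2000)
print(round(temperature_response(temps, usage), 3))  # close to the true slope, 0.04
```

Run home-by-home, the slope becomes a per-home measure of temperature sensitivity that can then be related to the home’s construction vintage.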

The graph below helps to illustrate the value added by higher resolution data. The black squares estimate the electricity response to an increase in summer temperatures relative to a 1977 home using the SMUD smart grid data. To construct comparable estimates from prior work, the authors go back to Arik’s monthly data, isolate homes that can be exactly matched to the construction year, and re-estimate impacts using monthly data from across California (in blue). The bars measure the precision (or imprecision) of these estimates.


One clear takeaway is that high-frequency smart meter data can allow us to see an effect that could not be detected with ye olde fashioned monthly data. Estimates using smart meter data are much more precise.

A second takeaway is that past failure to detect an effect of these building codes on electricity consumption could be chalked up to data limitations (e.g. noisy data, measurement error). In other words, it could be that these electricity savings have been there all along; we just couldn’t see them.

Estimated impacts on energy consumption may look tiny in the graph…but they add up! The authors estimate that a house built just after 1978 uses 8-13% less electricity for cooling than a similar house built just before 1978. The good news is that these are significant reductions, particularly when you consider that cooling drives peak consumption in California. The not-so-good news is that these reductions fall far short of engineering estimates (which project ~40% cooling reduction for a Sacramento house).

The authors argue that their estimated savings are in the right engineering ballpark IF we assume 42 percent of new homes would have reached Title 24 standards and another 27 percent would have added some insulation even if Title 24 had not been imposed. The steps the authors must take to square their estimates with savings projections highlight the nuanced relationship between engineering estimates and real-world policy impacts. Engineering estimates evaluate differences in projected energy use between buildings with and without particular efficiency measures. Policy advocates often take those estimates and present them as the energy savings that would be realized with a policy mandating these measures. This is not the right interpretation if some of the targeted measures would be adopted without the policy and/or if the policy cannot be perfectly enforced.

How many economists does it take to measure energy savings?

California’s energy efficiency policies serve as a model for other jurisdictions so it’s really important to separate the success stories from the not-what-we’d-hoped. This new evidence suggests that California’s original Title 24 building codes delivered more savings than prior research had found (although not as much as the program architects had originally envisioned).

Is this the last stop on the do-building-codes-save-energy line of inquiry? I hope not. More than half of U.S. households are now equipped with smart meters monitoring electricity consumption as temperatures rise and fall. As these new data accumulate, there’s room for more research that plumbs the depths of this important question.



Creative Pie Slicing To Address Climate Policy Opposition

There are two fundamental, and fundamentally different, barriers to pricing greenhouse gases. The one economists tend to focus on is the economy-wide cost of reducing emissions: substituting to lower-carbon and (for now) higher cost production of energy and other products. That is, the hit to the size of the economic pie. The barrier less discussed by economists, but dominating the attention of politicians and voters, is how those costs are distributed among the population, that is, how the pie gets sliced. Today, I want to take a break from discussing pie-size issues to consider pie slicing.

To illustrate, think about a gasoline tax, as California legislators have been doing a lot lately. In April they passed, and Governor Brown signed, a new law (SB1) that raises transportation funding, including increasing the state’s gas tax by 12 cents a gallon. As with any tax, the cost to the economy as a whole is not the revenue raised; that money comes out of the taxpayer’s pocket, but goes into fixing roads, building bridges, supporting public transit or other activities that benefit people in the economy. The cost to society as a whole is the adjustments consumers make to use less gasoline: switching to alternative transportation that costs more or is less convenient, or just traveling less.

If the tax is on pollution, there is also a benefit to society from reduced pollution. The point of reducing pollution – whether through taxation, cap-and-trade, or direct regulation – is that the benefits to society can outweigh the costs, making the total pie larger.

Still, to any one individual, the cost is measured in their own additional expenditures, which can be much larger for some people than for others. If you drive a lot more miles than average or drive a lot more car than average (in weight or horsepower), you obviously pay a greater share of the additional tax payments.

An obvious point to every politician, but one often ignored in economic analysis, is that individuals thinking about their costs will compare their lot after the tax is imposed to whatever they had before, whether or not the previous status quo made any sense. Even if large polluters previously paid nothing for the damage their emissions were doing, they will still feel worse off if they bear a high share of the new tax payments, and they will likely push back against the increase.

And that’s where the fundamental separation of pie size and slice size comes in. Setting aside whether you think it is fair, the pie size goal of a gasoline or GHG tax increase, or a higher cap-and-trade price, can be achieved while compensating losers who would otherwise make the largest share of the new payments. This doesn’t eliminate the economy-wide cost of adjusting to, for instance, an increased GHG price, but it can spread the cost more evenly, likely reducing opposition from those who would otherwise be the biggest losers.

Many California state legislators are currently reluctant to support an extension of the state’s GHG cap-and-trade program beyond 2020 if it might noticeably raise the greenhouse gas price. They are responding to strong disapproval of the gas tax increase passed in April.  In fact, even though it won’t go into effect until November, that tax increase has already led to a recall campaign against one California Senate member. This, despite the fact that the tax increase will raise the cost of living for an average family of 4 by less than 50 cents per day, and by less than that for the average lower-income family.[1]

Still, the cost to some people is much more than the average. And many voters’ reactions to the 12 cent gas tax increase, as well as to paying for GHGs, seem to be that it’s a complete waste of funds, which will never come back to benefit them. To respond to these concerns, there have been many proposals at the state and federal level to return the funds from a price on GHGs to individuals through a per-capita rebate.

Many economists (myself included) wince a bit at such proposals, because we would like to see that money redistributed to individuals through a more value-creating process, by reducing some other distortionary tax and making the pie larger. To particularly benefit lower-income households, we could reduce payroll taxes or lower tax rates for lower income brackets.

Nonetheless, even GHG pricing proposals that return the revenue through per-capita rebates meet with strong resistance from areas or individuals who will be hurt more than the average (or believe so), but will only receive their equal share of the revenue. The “simple solution” of rebates based on how much you paid in taxes doesn’t work: if you know the tax you pay will come right back to you, it doesn’t give you an incentive to reduce your use of a good, which was the whole point. Still, it’s possible to do better than a uniform rebate without undermining the incentive that the tax is supposed to create.

For instance, the U.S. federal gas tax is 18.4 cents per gallon, unchanged since 1993. That’s in large part because politicians from Wyoming and other rural states recognize that their constituents consume much more gas than in urban areas. In fact, Wyoming consumes 63% more gasoline per capita than California, and more than twice as much as New York.

The efficiency (pie size) effects from raising the gas tax have nothing to do with who gets the revenue, but the political resistance sure does. So, how about increasing the federal gas tax by, say, 50 cents per gallon and rebating most of the money based on statewide average consumption? Wyoming residents would bear no more burden than New Yorkers on average, but Wyomingites would still have an increased incentive to buy more fuel efficient cars. The heaviest drivers everywhere would still take the biggest hit, but no state could claim that it is disproportionately harmed by the tax increase.
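A toy version of the arithmetic, with made-up numbers (the 50-cent increase is the hypothetical from the text; the per-capita figure is invented), shows how the average driver breaks even while the marginal incentive survives:

```python
# Toy pie-slicing arithmetic (all numbers hypothetical): a 50-cent tax
# increase rebated at the state's average consumption. The average driver
# breaks even, yet every extra gallon still costs 50 cents more at the margin.
TAX = 0.50        # hypothetical $/gallon tax increase
STATE_AVG = 650   # hypothetical state per-capita gallons per year

rebate = STATE_AVG * TAX  # everyone in the state receives the same rebate

for gallons in (400, 650, 900):  # light, average, and heavy driver
    net_burden = gallons * TAX - rebate
    print(f"{gallons} gal/yr -> net burden of {net_burden:+.0f} dollars")
```

Light drivers come out ahead, heavy drivers pay, the average resident nets zero, and no state with average consumption near the rebate benchmark can claim it is disproportionately harmed.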

And once we go down this road, why stop at states? Rebates could be based on county-level average gasoline consumption. To the extent that county gasoline consumption is very heterogeneous within states, the rebates would even more accurately offset the increased tax burden, though cross-county gasoline purchases undermine this a bit. The point is that tailoring rebates to the impact in smaller areas can improve the match between revenue burdens and rebates without undermining the incentive effect of the tax.

(It’s worth noting that this is another advantage of using market mechanisms – taxes or cap-and-trade – to meet environmental goals. Unlike direct regulation – technology or emissions standards — they generate revenue that can be used to compensate losers.)

The same idea could be used in California, where the greatest opposition to gas taxes and pricing GHGs comes from rural counties. Gasoline consumption varies substantially across counties, from densely packed San Francisco, where fewer people even own a car, to the rural areas in the northern and eastern parts of the state, where it’s hard to live without one. Rebates could be based on 2016 per capita fuel consumption so the high-use counties still have incentives to adopt public policies that reduce future consumption, such as EV chargers.

Likewise, rather than uniform per capita rebates of cap-and-trade revenues across the states, the rebates could be higher in GHG-intensive counties. After all, unless your goal is to punish people who have lived more GHG-intensive lives, this moves us in the right direction on climate policy while reducing political resistance, and resentment.

Emotionally, it may not be easy to separate policies that reduce fossil fuel use from penalties on those who currently emit the most, but practically it can be done and making progress politically on climate change depends on doing so.

[1] California’s economy uses about one gallon of gasoline per person per day (including personal and business vehicles). Average gasoline consumption increases with household income, though the estimates on how much vary.


Coal Mining: Jobs to Die For?

For every one to two coal mining jobs, a non-miner dies each year.

[This post is coauthored with Carolyn Fischer at RFF].

If you open the papers, it seems like the current administration thinks that coal is the next hot thing. As Governor Schwarzenegger put it, this is a bit like getting excited about the comeback of Blockbuster VHS rentals.


There are lots of promises of bringing back coal employment to the economically devastated areas of Appalachia, improving the quality of life in these former mining communities. There were popular claims that 43,000 mining jobs had already been added during this new presidency (which turned out not to be true). I think we can all agree that higher employment in sectors that improve both the welfare of employees and society is a great and desirable thing. The problem with coal mining jobs – not the people holding these jobs – is that coal costs lives, both under and above ground.

First, coal mining is well known to be a higher-risk job than office work. In the past 5 years, roughly 15 miners have died per year, or about .0001 deaths per job (although Dept. of Labor statistics now include mining office workers as coal miners, so the real ratio is likely higher). But in a dismal science sense, coal miners are compensated for these extra risks with a “risk premium” that is reflected in the high wages offered in the coal mining sector. There is a massive literature on something called the “value of a statistical life”, which has this risk/reward notion as its underpinning.

The second problem is that the mining of coal results in the production of a highly carbon-intensive energy source. Some folks (or municipalities or states) care about these consequences, even though many of the damages from climate change occur half a world away. Of course, the current U.S. administration has looked at this tradeoff and clearly prefers American jobs over improving the global environment, as signified by its withdrawal from the voluntary and non-binding Paris Accord last month.

The third problem with coal has received less attention lately: energy generated from coal (and high-sulfur Appalachian coal in particular) is really bad for local air quality. Each ton of coal dug up from below ground will ultimately be burned somewhere, resulting in damages to people, plants and animals near the point of combustion. (Also the mining process itself has some nasty consequences for the local environment.) Say what you will about climate change, but we still see broad agreement that local air and water quality are important to protect. After all, even under the current administration, the EPA’s stated purpose is “to ensure that all Americans are protected from significant risks to human health and the environment where they live, learn and work.”

Numerous studies have attempted to quantify the health consequences of coal combustion. If we knew the external costs from a ton of coal combusted, we could translate this into the external cost of a coal mining job. This sounds like a crazy calculation, but if we are using public policies to incentivize the creation of these jobs, we should think about the total societal costs of doing so.

For example, the IMF in 2014 calculated that the social costs of coal from air pollution (not including CO2) were $5.5/GJ of energy. There were about 50,000 jobs in coal mining last year in the US, more or less (more if you include related jobs, less if you just count miners). Each ton of coal contains roughly 22 GJ of energy. US production in 2016 was 738 million short tons. Put those together and you get external costs of 1.79 million dollars per miner. Let that number sink in for a second. To the extent that these costs are not priced or regulated, they are considered an implicit subsidy to fossil fuels, and that’s in a publication dedicated to Gary Becker (a famously conservative “Chicago” economist).
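That number can be checked directly from the figures just quoted (a back-of-the-envelope script, nothing more):

```python
# Back-of-envelope check using only the figures cited in the text.
cost_per_gj = 5.5       # IMF: $ of local air-pollution damage per GJ (excl. CO2)
gj_per_ton = 22         # rough energy content of a ton of coal
tons_2016 = 738e6       # 2016 U.S. production, short tons
mining_jobs = 50_000    # approximate U.S. coal mining employment

external_cost = cost_per_gj * gj_per_ton * tons_2016  # ~$89 billion per year
per_miner = external_cost / mining_jobs
print(f"${per_miner / 1e6:.2f} million per mining job")  # $1.79 million
```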

But those statistics are pretty impersonal. A more telling (and tolling) calculation comes from studies looking at the health – or rather death – consequences of pollution. A 2013 study from MIT found that pollution (specifically particulate matter, SO2, and NOx, an ozone precursor) from electricity generation causes 52,000 premature deaths annually, mostly from the fine particles associated with coal-fired generation. They have a nifty graphic showing that the largest impact hovers over the east-central United States and the Midwest, where the power plants tend to use coal with high sulfur content. This study only gets at how many people die every year from power sector emissions and leaves out morbidity and damages to ecosystems, agricultural production, etc.

Coal-fired generation creates on average 5 times the pollution of natural gas. At the time of the MIT study (2005), given the generation shares, roughly 90% of the power sector emissions were coming from coal. Put these numbers together and you can ballpark an estimate of what these studies suggest in terms of mortality alone. It’s very much back of the envelope and maybe we’ll write a paper to do this more precisely, but we were shocked by the outcome. Someone dies each year for every one to two coal mining jobs. Yes. You read that right. Let that sink in. To be completely fair here, we are assuming that coal is being replaced with some happy shiny non-polluting renewable energy source.
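For the skeptical reader, here is the envelope itself, built only from the figures above (and leaning on our assumption that the roughly 90% coal share of power-sector pollution applies):

```python
# Rough mortality arithmetic from the figures in the text.
annual_deaths = 52_000  # MIT study: premature deaths from power-sector pollution
coal_share = 0.90       # share of power-sector emissions from coal (2005 mix)
mining_jobs = 50_000    # approximate U.S. coal mining jobs

deaths_from_coal = annual_deaths * coal_share  # ~46,800 deaths per year
jobs_per_death = mining_jobs / deaths_from_coal
print(f"about one death per {jobs_per_death:.1f} mining jobs")  # ~1.1
```

Counting related jobs beyond miners themselves pushes the ratio toward the “one to two jobs” range quoted above.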

This fact is clearly not the fault of the miners. These are great jobs to have: they pay well and do not require hugely costly training. But, what this does mean is that if we keep on pushing the further extraction of dirty coal (clean coal is fiction and if you like fiction, call us, we have recommendations), we are implicitly subsidizing the deaths of the many people living within the range of power plant emissions. And this is not a good thing.

Why the focus on coal jobs? We are not political scientists by training, but even we understand these mining jobs are in politically important areas. But from a societal welfare point of view, we are making a huge deal out of a profession that is clearly dying out. The fast-food chain Arby’s now employs one and a half times the number of people the US coal mining industry does. This does not mean we should subsidize hamburgers and fries. (Those may kill more people than coal, but that is for another blog.)

The issue, of course, is that something has to be done about the structural economic crisis in the mining communities. This is a global issue, not just a US one. There is evidence from Poland that miners, once unemployed, stay unemployed longer than people in other professions. The goal has to be retraining miners in these communities for jobs of the present or the future – not the energy equivalent of Blockbuster.


Stop Blaming Drivers for Mexico City’s Smog

This Spring, Mexico City has been choking under some of its worst smog conditions in years. The problem is ozone. Pollution alerts for ozone have been issued repeatedly, triggering “double” driving restrictions that have pulled hundreds of thousands of cars off the road, twice as many as usual. But what if cars are not the problem? I’ve been combing through recent data from Mexico City and the relationship between cars and ozone is tenuous at best.

Big Data

If you want to look at a pollutant that is tightly related to driving, take carbon monoxide. As the figure below shows, carbon monoxide levels in Mexico City tend to peak at 8am or 9am, when the roads are jammed with commuters trying to get to work. Emissions inventories show that 99% of carbon monoxide in Mexico City comes from cars, and you can see this in the daily pattern.

The pattern for carbon monoxide also differs across days of the week. The figure shows only Friday through Monday, but this is enough to be able to compare weekdays to weekends. Friday, Monday, and other weekdays have the biggest peaks. Saturday, and in particular, Sunday, have lower peaks. Again, this reflects driving. After battling traffic all week, people in Mexico City are happy to drive less on weekends.


Ozone has a very different pattern, peaking in the middle of the day when the sun is highest in the sky. There is no peak during the morning commute like you see with carbon monoxide. But even more revealing, notice that the peak for ozone is similar across all days of the week. Weekend ozone levels are just as high as weekday levels, even though many fewer cars are on the road. When people drive less, carbon monoxide levels go down.  Ozone levels? Not so much.


Note: These figures were constructed by Lucas Davis (UC Berkeley) using hourly pollution data from Mexico City’s Automated Environmental Monitoring Network. Each observation is mean pollution averaged over all monitoring stations for a specific hour and day, and the figures use all hours between January 2015 and April 2017.

What’s Going On?

I’m an economist, not an atmospheric scientist, but I think the data give a clear picture of what is going on here. Ozone is the classic “secondary” pollutant, meaning that it is not emitted directly but instead is formed in the atmosphere as a product of other pollutants.  The basic recipe for ozone is simple. Take volatile organic compounds (VOCs) and nitrogen oxides (NOx).  Add sun. Chemical reactions happen. Voila!! You get ozone, a pollutant that not only causes smog but also is very dangerous for human health.

But here’s the deal: You need both VOCs and NOx. If you are short one of these two ingredients, you can’t substitute more of the other. In econ-speak, we’d describe this as a “Leontief” production function. In areas where there are lots of VOCs, ozone formation is “NOx-limited”. A reduction in VOCs will have little impact, because the process already has more VOCs than it can use. Similarly, in areas where there is lots of NOx, ozone formation is “VOC-limited”, and a reduction in NOx will have little impact.
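A stylized sketch of that Leontief logic (illustrative only; real ozone chemistry is far messier than a pure minimum, as the figure below makes clear):

```python
# Stylized Leontief production function for ozone: the formation rate is
# governed by whichever ingredient (VOC or NOx) is scarcer.
def ozone_rate(voc, nox):
    return min(voc, nox)

print(ozone_rate(voc=3, nox=10))  # VOC-limited: rate is 3
print(ozone_rate(voc=3, nox=5))   # cutting NOx from 10 to 5: rate is still 3
```

In the VOC-limited case, halving NOx leaves ozone output unchanged, which is exactly the weekend pattern in the Mexico City data.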


Note: Figure 1 from Sillman (1999).  The solid lines represent ozone production rates of 1, 2.5, 5, 10, 15, 20, and 30 parts per billion per hour.  Thus there are different combinations of NOx and VOC that yield the same ozone production rates.

Surprisingly, there is no consensus in the scientific literature on whether Mexico City is NOx- or VOC-limited. See here and here.  But the recent data provide pretty clear evidence that Mexico City is, in fact, VOC-limited. NOx levels are much lower on weekends, but ozone levels are not.


In fact, Sunday ozone levels in Mexico City are actually somewhat higher than other days of the week, consistent with the backward bending part of the curves above. For places that are severely VOC-limited, ozone production can actually increase when NOx concentrations drop. In other words, when VOCs are the constraint to ozone production, more cars on the road can actually reduce ozone levels!

Given this evidence, it seems crazy to try to reduce ozone levels by restricting driving.  Sunday is, in some sense, an extreme version of what could be achieved through driving restrictions. And while many pollutants are indeed lower on Sundays, ozone is not. Driving is not the problem.

Policy Implications

What are the policy implications? First, drop the double driving restrictions on high ozone days. There is no evidence that this has any impact on ozone levels. And, more generally, driving restrictions have been widely shown to be an expensive and ineffective approach to addressing air quality.

If you want to reduce ozone in Mexico City, you have to reduce VOCs. VOCs come from all kinds of things. Paints. Solvents. Adhesives. Cleaning products. Cosmetics. Even dog poop. Yes, dog poop. Carlos Álvarez, a chemical engineer at Mexico’s National Polytechnic Institute, has calculated that 250,000 tons of dog poop are “deposited” annually on Mexico City’s sidewalks, significantly contributing to VOC emissions.


Note:  Watch your step.  A dog walker in Mexico City’s Hipódromo neighborhood.  Source: Airbnb.

In addition to targeting these sources, it would be worth looking again at transportation.  But rather than restricting driving, it’s time to look at gasoline regulations.

California provides a particularly useful point of comparison. Los Angeles is similar to Mexico City in that both suffer from high ozone levels and both are VOC-limited. Since 1996, California Air Resources Board (CARB) gasoline has been required throughout the state. Considerably more stringent than U.S. national fuel standards, CARB gasoline must meet strict content requirements for olefins and other highly reactive VOCs.

CARB gasoline has been shown to be very effective at reducing ozone. When CARB gasoline was introduced, it reduced Los Angeles ozone levels by 16%, according to research from the Energy Institute’s Max Auffhammer and Ryan Kellogg. California has also achieved additional ozone reductions by enforcing strict requirements for vapor recovery systems which reduce VOC emissions at gas stations when drivers are filling up their tanks.

Similar requirements could work in Mexico City too. Whatever approach is taken, let’s then evaluate the policy using data. Too much is at stake to continue rolling out the same tired policies. Let’s use modern data techniques to quickly and credibly figure out what works and what doesn’t work.


The Developing World Is Connecting to the Power Grid, but Reliability Lags

Most discussions about energy in the developing world quickly turn to the 1.3 billion people who don’t have access to electricity. Many initiatives are focused on serving these people, such as Kenya’s Last Mile Connectivity Project or the US’s ambitions of supporting 60 million new electricity connections through the Power Africa program.

I hear a lot less about the people who have an electricity connection but for whom reliability is bad. And, as anyone who has spent time in the developing world knows, poor reliability doesn’t just mean that the lights occasionally flicker. Whole neighborhoods can be left in darkness for hours and even days on end.

The importance of reliability – and not new connections – is particularly meaningful in cities, where nearly everyone has a connection, but reliability is poor. For example, the figure below, from a forthcoming working paper by my colleagues Ken Lee, Paul Gertler and Mushfiq Mobarak, summarizes data from 21 Sub-Saharan African cities. The share of city dwellers who have access is represented in the graph on the left – many are close to 100 percent, and almost all are above 75 percent (except the capital of Malawi). The graph on the right represents the share of those city dwellers who report that their electricity connection works “all the time” or “most of the time.” It’s less than 20 percent in Lagos, Nigeria, home to nearly 20 million people.

Source: Gertler, Lee and Mobarak (2017). Circle size proportional to city’s population density.

Cities are important because one of the starkest demographic trends in the developing world is urbanization. And, in my experience, a lot of rural families are tied to the urban economy – relying on remittances from the husband, older son, or uncle who has moved to the city.

The Reliability Measurement Gap

So, why is reliability the poor stepsister to energy connections? Why aren’t there more programs to promote reliability?

I believe a major reason is that reliability is really hard to measure. As my colleague Jay Taneja says in a recent working paper, “Before electricity reliability can be improved, it needs to be accurately measured.” You can’t fix what you can’t see.

Jay’s paper has some super interesting figures highlighting the difficulty of measuring reliability in the developing world. For example, the graph below compares a common metric of outages (called SAIDI, for System Average Interruption Duration Index) measured two ways – as reported by utilities on the vertical axis and by their customers on the horizontal axis. Each point represents a country. If the utilities and their customers were reporting the same thing, the points would all lie on the dashed line. In fact, almost all of the points are below the line, indicating that the utilities are reporting one-seventh as many outages as their customers on average!

Source: Taneja (2017), Figure 1

Luckily, Jay and a team of engineers at Berkeley are working on some really innovative, inexpensive ways to measure reliability using smartphones and low-cost sensors.

The Cost of Poor Reliability

Why might reliability be a big issue?

For one, poor reliability doesn’t just impact households, but also hospitals, factories, telecom systems, government buildings, etc., all of which are important to economic development. Around the world, non-residential customers use well over 50 percent of electricity, and over 70 percent in some of the major developing countries, including India, China and Brazil.

I spoke to an entrepreneur from Lagos last year who was trying to make a go of a company selling beauty products designed for the local market. It’s a business that has very little to do with energy. And yet, the conversation quickly turned to Lagos’ electricity problems. Not only did he have a backup generator, but his internet service provider had a backup generator, his accountant had a backup generator – you get the picture. To start a business in Lagos, you have to invest in a generator. This is a tax on doing business, which makes it hard for new businesses to start and for existing ones to grow.

We need more research to document just how much of a drag this is on the local economy, but I suspect it could be a big hindrance to growth.

Within the residential sector, this isn’t necessarily a story about allowing rich city dwellers to watch TV and keep their apartments air-conditioned. In fact, reliability may impact the poor more than the rich. The chart below was made by an enterprising Accra resident, who hired people to stand on street corners in different neighborhoods and record whether the nearby lights were on. (Another indication of just how starved people are for data on reliability.) The neighborhoods at the bottom of his chart, with 3 or fewer outages over the two-week period, are where the rich people live. I believe that the human observers were able to distinguish grid outages from local backup generators, although I’m not positive.

“Disco Lights” because they go on and off

Finally, the pollution created by all of the backup generators is a major contributor to poor air quality. For example, my former graduate student Fiona Burlig pointed me to estimates from India suggesting that diesel generators contribute 10 to 20 percent of cities’ pollution, depending on the pollutant (here’s another source).

Considering the Trade-Offs Between Reliability and New Connections

Don’t get me wrong. I care about the 1.3 billion people who do not have electricity connections. They are no doubt some of the world’s poorest people. But, that’s only part of the energy access problem. And, I believe that we need to be open to the possibility that connecting fewer people and increasing reliability for existing customers is better for economic development than putting all our eggs in the connection basket. Remember, the already-connected customers include hospitals, schools, etc.

Kenya, for example, has made great strides connecting households to the grid in recent years. According to the latest reports, 15 million more people are connected to the grid now than 7 years ago. But, these newly connected consumers aren’t using much electricity – only one-fifth of what the average household was using in 2009. This means that Kenya Power’s revenues per customer are dropping, just as it has added a bunch of new infrastructure to its system, infrastructure that it will need to maintain for years to come. It’s possible that building out the system will take resources away from improving reliability.

There’s a lot of work to do. We need to figure out cost-effective ways to improve measurement technologies, identify the many varied causes of poor reliability, and work with utilities to improve their systems for both preventative maintenance and triaging when they face reliability incidents. Those seem like jobs for engineers. Economists can provide estimates of the development benefits to investments in reliability as well as energy access, and they can identify ways to provide utilities with better incentives and more capital to invest in reliability. There are big payoffs to getting these answers right.

Posted in Uncategorized | 12 Comments

California’s Carbon Border Wall

With all that’s been happening in Washington DC, you may have taken your finger off the pulse of California climate change policy. But now’s a good time to check back in. There’s a new cap-and-trade proposal in town, and it’s turning lots of heads in the state capital.

California is deep in deliberations over cap-and-trade as it prepares to meet a new and ambitious GHG emissions reduction target. The state is aiming to reduce emissions to 40% below 1990 levels by 2030. This makes the GHG emissions reductions we’ve achieved so far look timid.

While the state has charted an emissions reduction path out to 2030, the existing GHG cap-and-trade program sunsets in 2020. This means the legislature needs to reauthorize – or replace – the current program to meet this post-2020 ambition. The 260 million metric ton question: What policy can most effectively deliver on this target?

The new cap-and-trade proposal, SB 775, would replace the existing cap-and-trade program in 2021. It has some pundits swooning. David Roberts of Vox writes a glowing endorsement of what he sees as a “clean break” from California’s existing GHG emissions trading program. But other policy experts offer a different view. Economist Rob Stavins argues there’s no need to “repeal and replace” the state’s effective cap-and-trade system. Professor Ann Carlson warns that it could “cause many more problems than those it attempts to solve”.

The proposal covers a lot of ground in 19 pages (Dallas Burtraw provides a great review here). I want to unpack one key piece: the proposed border adjustment. This may sound wonky and weedy, but it’s really important because it aims to bring imports under California’s cap-and-trade program. There’s a lot to like about this idea in theory. But the reality could be a different story.

The Leakage Problem

To put this border adjustment into context, let’s quickly review the problem it’s trying to address. The problem is that California’s climate change policy applies to only a small subset of the sources contributing to the global climate change problem. Pricing carbon only within the state could potentially send business – and associated emissions – out of state.

Suppose you are a California-based producer of an emissions-intensive product, such as cement, glass, or refined oil products. Under a statewide cap-and-trade program, you are required to purchase emissions permits for your GHG emissions. In other words, the policy increases your production costs. If out-of-state producers can supply the California market, this could mean you lose California market share to out-of-state rivals who don’t face the same cost increase. If you are a California-based operation that exports its products, this could make it harder for you to compete in out-of-state markets. In either scenario, the policy can shift production out of California. The associated emissions “leakage” erodes emissions reductions achieved within the state.


The Current Response

Concerns about leakage loom large, so it is essential that California’s cap-and-trade program incorporate some meaningful response to this problem. Right now, the response comes in the form of free permit allocation. A share of permits (approximately 15%) is distributed free to those industries that are deemed to be at leakage risk.

You may be wondering how requiring firms to purchase permits – and then handing them back for free – achieves anything at all. The key is that emitters are required to turn in permits to cover their emissions, but these same firms are allocated free permits based on production. So you (the producer) see both an emissions tax (which provides an incentive to invest in emissions abatement) and a production incentive (which helps to ‘level the carbon playing field’ with out-of-state producers and thus mitigate leakage).
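The two-sided incentive described above can be made concrete with a toy calculation. This is a minimal sketch with made-up numbers (the permit price and benchmark below are my own illustrative assumptions, not values from California’s program):

```python
# Toy illustration of output-based free allocation: a firm pays the
# permit price on its emissions, but earns free permits in proportion
# to its output. Numbers are hypothetical.

PERMIT_PRICE = 15.0  # $/ton CO2 (assumed)
BENCHMARK = 0.8      # free permits granted per unit of output, tons/unit (assumed)

def net_permit_cost(emissions_tons, output_units):
    """Permit bill net of the output-based free allocation."""
    return PERMIT_PRICE * (emissions_tons - BENCHMARK * output_units)

# A firm emitting 1 ton per unit of output pays only on the 0.2 tons
# per unit above the benchmark: $3 per unit rather than the full $15.
print(net_permit_cost(emissions_tons=100, output_units=100))  # 300.0
```

Note how abatement still pays at the full permit price (each avoided ton saves $15), while the production-linked rebate offsets most of the cost of staying in-state – exactly the balance the post describes.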

If we are concerned about emissions leakage (and we should be), this output-based free permit allocation approach can strike a balance between incentivizing emissions abatement and mitigating leakage. That’s the good news. The less-good news is that this strategy comes with side effects. For one thing, it dilutes the carbon price signal that California consumers receive when they are making their consumption decisions. It also allocates the revenue from the sale of valuable permits to industry when this revenue could alternatively be put towards other good uses.

The Proposed Alternative

There’s more than one way to skin this leaky cat. SB 775 proposes an alternative that I think most (all?) economists would prefer in theory. The idea is simple and elegant. First, identify imported products whose price would be materially impacted by the carbon permit price. Then require importers of these products into California to purchase permits for the emissions baked into their product. As for California-based exporters, they are exempt from the obligation to purchase permits for emissions associated with products sold outside the state (the SB 775 language on this is hard to parse… thanks to Michael Wara for clarifying this important point!)

Why is this the theoretically preferred approach? For one thing, consumer prices in California rise to fully reflect the carbon price signal. This helps us consumers account for the full costs of our consumption, and adjust our behavior in response. Second, California can use the revenues from the sale of permits for other purposes, versus freely allocating to industry (although exempting exports means no revenues are collected from exporting firms).

The upshot is that this border adjustment seems like a winning proposal in theory. But the winning horse, in theory, need not be the most fit to ride through the real-world challenges that lie ahead.

Comparisons between an elegant proposal-on-paper and the existing workhorse that’s spent years slogging through messy policy implementation can be misleading. It’s easy to find flaws in the current permit allocation approach to leakage mitigation when compared against some theoretical ideal. But the more relevant point of comparison is the border adjustment after it hits the buzzsaw of reality.


Here’s my wet-blanket list of reality-bites concerns:

  • We import lots of stuff from lots of places: Under the border adjustment, California will need to estimate the carbon emissions baked into all the emissions-intensive products we import. There is already some precedent for this kind of accounting exercise covering one product under the state’s low carbon fuel standard (LCFS). Nine full-time staff have been hard at work estimating the GHG emissions factors for transportation fuels consumed in the state. This table summarizes the hundreds of “carbon pathways” (e.g. “South Dakota corn ethanol”, “Brazilian molasses ethanol”) that span the space of transportation fuels. It can take months to estimate a single pathway. The number of source-product combinations would increase dramatically under the border adjustment.
  • A cap on consumption emissions is harder to measure: It’s worth pointing out that, under a border adjustment, the state’s emissions targets and the associated emissions cap would have to be redefined. California currently caps emissions from in-state production. But under a border adjustment, the cap-and-trade regulation would cap emissions associated with in-state consumption. This means using the aforementioned emissions factors to estimate emissions in our imports, and subtracting the emissions from in-state production that get exported outside the state.
  • Export reshuffling? Under the SB 775 proposal, emissions associated with California production destined for export markets would be eligible for a border tax “refund”, whereas emissions associated with production that stays in California would remain under the cap. This asymmetric treatment of what stays home and what gets sent outside of California creates an incentive to re-allocate more emissions-intensive production to the export market in order to avoid the carbon price.
  • Legal challenges: The border adjustment could pose a triple threat to the program: challenges from within the state, challenges under the commerce clause, and challenges from the WTO. The legal resources required to defend this provision could be large. Notably, the SB 775 language does include an escape clause. If a judicial opinion, settlement, or other legally binding decision reduces the state’s authority to implement the border adjustment, the legislation authorizes a return to free allowance allocation for the affected products. But this return would be messy, in part because it would require a re-adjustment of the emissions cap.


California is demonstrating a working example of how emissions leakage can be mitigated in a regional emissions trading program. There’s no question that the current approach falls short of the theoretical ideal. But the real question is:  could an alternative approach work better? SB 775 has raised the profile of an important conversation about what those alternatives could look like.

My concern with the border adjustment proposal is that it seems to put the cart before the proverbial horse. Success hinges critically on our ability to come up with legally-defensible measures of greenhouse gas emissions intensities for all the carbon-intensive products we import. It’s worth noting that exploratory work along these lines is already underway (Resolution 10-42 directed the Air Resources Board to review the technical and legal issues related to a border adjustment for the cement sector). Given all that’s at stake, we should double down on these efforts to develop and test this approach before we bet the farm on its real-world durability.

Posted in Uncategorized | 16 Comments

One Stone, No Birds

Capping greenhouse gas (GHG) emissions at individual facilities is a bad idea whose time, unfortunately, may have come in California. Unlike a statewide cap or tax on emitting GHGs, facility-specific caps have essentially zero support among environmental economists, as I discussed in a blog in January.

Capping GHGs at specific facilities would undermine California’s leadership in creating cost-effective mechanisms for fighting climate change. If the caps are binding, their primary effect will be to drive GHG-intensive industry out of the state, moving the emissions, not reducing them.

Nonetheless, recent changes to a bill in the California legislature (Assembly Bill 378) suggest that’s where we’re headed. AB 378 is one of (at least) two competing bills that would extend California’s GHG cap-and-trade program past 2020, which is when the current legislated program ends. Previously, AB 378 had stated that GHG reductions should be achieved in a way that also addresses public health issues from local air pollution, particularly in disadvantaged communities. This is a great idea for which there are many possible policy options.

Unfortunately, AB 378 was amended in April to add specific requirements for facility-specific caps, which will directly conflict with cost-effective climate change mitigation:

(c) The state board [California Air Resources Board] shall not permit a facility to increase its annual emissions of greenhouse gases compared to the annual average of emissions of greenhouse gases reported from 2014 to 2016, inclusive.

(d) The state board may adopt no-trade zones or facility-specific declining greenhouse gas emissions limits where facilities’ emissions contribute to a cumulative pollution burden that creates a significant health impact.

Wait, if legislators are worried about local air pollution in disadvantaged areas (known as environmental justice or EJ communities), why would they cap GHGs instead of regulating the local air pollution? After all, it’s the local pollutants (NOx, microscopic particulates, and toxics like benzene and formaldehyde) that create health impacts in surrounding communities.  The impact of greenhouse gases is the same regardless of where on earth they are released.

The answer goes back to a paper by Cushing, Wander, Morello-Frosch, Pastor, Zhu and Sadd that was released last September, which Meredith discussed last October. The paper shows (figure 3, reproduced here) a significant, though very imperfect, correlation between GHGs and one measure of local pollution released from industrial facilities.

The paper also shows that GHG emissions are higher on average in EJ communities than in those that are not considered disadvantaged. And, the paper suggests that total GHG emissions from industrial sources in California were higher in 2013-14 than in 2011-12, before California’s cap-and-trade program began. A longer time-series look at industrial GHG emissions confirms the claim of Cushing and co-authors. But it also shows that the largest change occurred between 2011 and 2012, before cap-and-trade started, so it is hard to know if cap-and-trade accelerated or slowed the trend.

A new paper by Kyle Meng of UC Santa Barbara sheds more light on the question of GHG emissions in EJ communities. Meng’s paper confirms that GHG emissions have been about 40% higher, on average, in EJ communities than other areas in California. But his analysis shows that GHG changes since the beginning of cap and trade have not differed on average between EJ and non-EJ communities. Meng looks at the changes in GHG emissions in 2013-2015 (data for 2016 have not yet been released) compared to 2012, the year before the program started. He finds no statistically significant difference between EJ and other communities over the three cap-and-trade years in aggregate, though if anything emissions have fallen slightly more in disadvantaged communities. He also finds substantial GHG drops in 2015 in both EJ and non-EJ communities.

Even if EJ communities have seen about the same GHG change as non-EJ areas, if GHGs at a facility generally move in tandem with local pollutants, isn’t restricting GHGs a tool that could kill two birds with one stone? Unfortunately, the answer almost certainly is no.

The reason is that restricting GHGs at specific facilities gives companies incentives to make changes that just shift the GHG emissions elsewhere, particularly out of state. The California oil refining industry has been at the center of these facility-cap discussions, and provides a good illustration of the problems.[1]

If a California refinery were faced with a binding GHG cap, the two most likely ways it would comply are by reducing the amount of oil it refines (and thus the amount of gasoline and other refined products it produces) and/or by changing the type of oil it refines.

Reducing the amount of oil it refines means that there is less in-state production of gasoline and other products. But that does not reduce the amount of gasoline we consume in California, at least not by much, because (with an extra 10-20 cents per gallon for shipping) California-specification gasoline can be brought in from refineries around the world. So, the reduction in in-state production just creates “leakage” of production to out-of-state facilities, generally taking high-paying jobs with them.

In fact, that is exactly what happened after the February 2015 fire at Exxon’s Torrance refinery, which caused gasoline prices to spike. It takes about a month to order and receive delivery of imported gasoline that meets California specifications. As the figure below shows, gasoline imports to the west coast (the vast majority of which are to California) skyrocketed about a month after the mid-February fire (the blue lines). The fire drastically reduced GHG emissions from the Torrance refinery, but those reductions were likely more than offset (due to additional transportation) by increases in emissions from refineries elsewhere in the world.

Some advocates for restricting refinery GHG emissions have argued that we just need to get off of gasoline, and this would be a first step. I completely agree that we need to replace gasoline, but facility-specific GHG caps are not a step in the right direction. The Torrance refinery fire took out about 10% of the state’s capacity to produce California-specification gasoline, but as the figure below shows, it did not put a noticeable dent in California gasoline consumption, which has continued to trend upward since 2013. Consumption in March through December 2015, after the fire, was 2.6% higher than in the same months of 2014.

Other supporters of capping GHGs from in-state refineries argue for the need to prevent imports of crude oil from the Canadian tar sands, which is substantially more GHG intensive (in production and refining) than other crude. (Though let’s not forget that the GHGs emitted when you burn a gallon of gasoline in your car are still much greater than all the upstream GHG emissions from creating that gallon.) But just as reduced in-state production will push that production to more distant refineries, reducing California purchases of Canadian tar sands crude will push purchases of that crude to more distant refineries. The effect of this supply “reshuffling” on world GHG emissions will likely be very small and may not even be a net reduction.

If capping GHGs at California facilities won’t do much to lower world GHGs, might it still lower local pollution? It might, because sometimes lowering a facility’s GHGs is indeed associated with lowering its local pollutants. But the fact that this association is very imperfect suggests that squeezing GHGs to reduce local pollutants will miss many of the opportunities for reducing local pollutants, opportunities that the facilities won’t have an incentive to pursue unless their local pollution is regulated directly.

And capping GHGs at industrial facilities will do nothing to reduce by far the largest source of dangerous local emissions, which is exhaust from trucks, ships and construction equipment.

Moreover, because designers of the state’s climate change programs understand that leakage and reshuffling are not really reducing global GHG emissions, and that they are likely to hurt the California economy, the programs include incentives (and some restrictions) to prevent these responses.  So, to the extent that facility-specific caps reduce in-state GHG emissions, they do so in ways that other state policies are specifically designed to prevent.

Hazardous air quality in disadvantaged communities is a very serious problem, but capping GHG emissions at facilities in those communities is not a serious solution.  And in the process it will undermine California’s programs and leadership in addressing climate change.  Let’s solve local air pollution by regulating (and taxing) it directly.

[1] The refining industry is also the subject of a proposed rule of the Bay Area Air Quality Management District (rule 12-16) that would cap GHG emissions from each refiner in the bay area. The Advisory Council to BAAQMD (of which I am a member) has put out a report recommending against adoption of rule 12-16.

I tweet energy articles, research and blogs (and a few opinions) most days @BorensteinS

Posted in Uncategorized | 7 Comments

Evidence of a Decline in Electricity Use by U.S. Households

It has been slowing down for decades, but is electricity use by American households now going down?

Americans tend to use more and more of everything.  As incomes have risen, we buy more food, live in larger homes, travel more, spend more on health care, and, yes, use more energy. Between 1950 and 2010, U.S. residential electricity consumption per capita increased 10-fold, an annual growth rate of about 4%.

But that electricity trend has changed recently. American households use less electricity than they did five years ago. The figure below plots U.S. residential electricity consumption per capita 1990-2015. Consumption dipped significantly in 2012 and has remained flat, even as the economy has improved considerably.

Source: Constructed by Lucas Davis at UC Berkeley using residential electricity consumption from EIA, and population statistics from the U.S. Census Bureau.

Broad Decreases

The decrease has been experienced broadly, in virtually all U.S. states. The figure below shows that between 2010 and 2015, per capita residential electricity consumption declined in 48 out of 50 states. Only Rhode Island, Maine, and the District of Columbia experienced increases.

Source: Constructed by Lucas Davis at UC Berkeley using residential electricity consumption from EIA, and population statistics from the U.S. Census Bureau.  Electricity use per capita is measured in megawatt hours.

This pattern stands in sharp contrast to previous decades. During the 1990s and 2000s, for example, residential electricity consumption per capita increased by 12% and 11%, respectively, with increases in almost all states. Previous decades experienced much larger increases.

Energy-Efficient Lighting

So what is different? Energy-efficient lighting. Over 450 million LEDs have been installed to date in the United States, up from less than half a million in 2009, and nearly 70% of Americans have purchased at least one LED bulb. Compact fluorescent lightbulbs (CFLs) are even more common, with more than 70% of households owning at least one. All told, energy-efficient lighting now accounts for 80% of all U.S. lighting sales.

It is no surprise that LEDs have become so popular. LED prices have fallen 94% since 2008, and a 60-watt equivalent LED lightbulb can now be purchased for about $2. LEDs use 85% less electricity than incandescent bulbs, are much more durable, and work in a wide-range of indoor and outdoor settings.

Source: Energy.Gov, “Revolution…Now”, September 2016.

Is this really big enough to matter? Yes! Suppose that between LEDs and CFLs there are now one billion energy-efficient lightbulbs installed in U.S. homes. If operated 3 hours per day, this implies savings of 50 million megawatt hours per year, or 0.16 megawatt hours per capita, about the size of the decrease above. Thus, a simple bottom-up, back-of-the-envelope calculation yields a decrease similar to the decline visible in the aggregate data.
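For readers who want to reproduce the back-of-the-envelope arithmetic, here is a minimal sketch. The per-bulb wattage saved (an efficient bulb replacing a 60-watt incandescent) and the population figure are my own round-number assumptions:

```python
# Back-of-the-envelope check of the lighting savings estimate.
# Assumption: each efficient bulb replaces a 60 W incandescent with a
# ~13 W CFL/LED equivalent, saving roughly 47 W while lit.

BULBS = 1_000_000_000        # efficient bulbs installed in U.S. homes
HOURS_PER_DAY = 3            # average daily use per bulb
WATTS_SAVED = 47             # 60 W incandescent minus ~13 W efficient bulb
US_POPULATION = 320_000_000  # approximate 2015 population (assumed)

kwh_per_year = BULBS * HOURS_PER_DAY * 365 * WATTS_SAVED / 1000
mwh_per_year = kwh_per_year / 1000
print(f"Total savings: {mwh_per_year / 1e6:.0f} million MWh/year")
print(f"Per capita:    {mwh_per_year / US_POPULATION:.2f} MWh/year")
```

With these inputs the total comes out at roughly 50 million MWh per year and about 0.16 MWh per capita, matching the figures in the text.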

Alternative Hypotheses

No other household technology is as disruptive as lighting. Incandescent bulbs don’t last long, so the installed stock turns over quickly. Air conditioners, refrigerators, dishwashers, and other appliances, in contrast, all have 10+ year lifetimes. Thus, although these other technologies have also become more energy-efficient, this can’t explain the aggregate decrease. The turnover is too slow, and the gains in energy-efficiency for these other appliances have been too gradual for these changes to explain the aggregate pattern.

Traditional economic factors like income and prices also can’t explain the decrease in electricity use. Household incomes have increased during this period, so if anything, income effects would have led electricity use to go up. Moreover, between 2010 and 2015, the average U.S. residential electricity price was virtually unchanged in real terms, so the pattern does not seem to be the result of prices.

Another potential explanation is weather. The summer of 2010 was unusually hot, so this partly explains why electricity consumption was so high in that year. But the broader pattern in the figure above is clear even if one ignores 2010 completely. Moreover, I’ve looked at these data more closely and there is a negative trend in all four seasons of the year: Summer, Fall, Winter, and Spring.

Rebound Effect?

This is not the first time in history that lighting has experienced a significant increase in energy-efficiency. In one of my all-time favorite papers, economist Bill Nordhaus examines the history of light from open fires, to candles, to petroleum lamps, to electric lighting. Early incandescent lightbulbs circa 1900 were terribly inefficient compared to modern incandescent bulbs, but marked a 10-fold increase in lumens per watt compared to petroleum lamps. However, as lighting has become cheaper, humans have increased their consumption massively, consuming thousands of times more lumens than they did in the past.

Economists refer to this price effect as the “rebound effect”. As lighting becomes more energy-efficient, this reduces the “price” of lighting, leading to increased consumption. An important unanswered question about LEDs is the extent to which these energy efficiency gains will be offset by increased usage. Will households install more lighting now that the price per lumen has decreased? Will households leave their lights on more hours a day? Outdoor lighting, in particular, would seem particularly ripe for price-induced increases in consumption. These behavioral changes may take many years to manifest, as homeowners retrofit their outdoor areas to include additional lighting.


It is not clear yet whether U.S. household electricity use has indeed peaked or this is just a temporary reprieve. Probably the biggest unknown in the near future is electric vehicles. Currently only a small fraction of vehicles are EVs, but widespread adoption would significantly increase electricity demand. It is worth highlighting, though, that this would be substitution away from a different energy source (petroleum), so the implications are very different from most other energy services.

Source: Pexels.

Over a longer time horizon there will also be entirely new energy-using services that become available, including services that are not yet even imagined. The 10-fold increase in electricity consumption since 1950 reflects, to a large degree, that U.S. households now use electricity for many more things than they did in the past. The recent decrease is historic and significant, but over the long-run it would be a mistake to bet against our ability to consume more energy.


For more see Davis, Lucas W. “Evidence of a Decline in Electricity Use by U.S. Households,” Economics Bulletin, 2017, 37(2), 1098-1105.

Posted in Uncategorized | 71 Comments

Save the California Waiver!

How a “little” California vehicle standard prevented an urban “airmaggedon”

My midlife crisis did not lead me to buy a German convertible — which would assist me in tanning my bald head and at the same time increasing the earth’s albedo — but rather to a rigorous exercise regimen. I have discovered my love for running. Yesterday morning I left my hotel room in Berlin on a sunny day and headed towards the Reichstag along the river Spree. And I nearly choked. The stench of diesel was pretty much unbearable. European cities are living through an “airmageddon”, with concentrations of some of the most toxic particulate matter in major urban centers breaking record levels in recent years on bad days.

Part of the problem is the incredibly high penetration of diesel engines in passenger cars. Germany registered 3.35 million new cars last year, of which 45.9% had a diesel engine and 2% had alternative (read: hybrid or CNG) engines. The reason for this is that regular gasoline is expensive. The cheapest gas I could find in Berlin is $5.56 per gallon. Diesel is $4.58 per gallon. This difference is not due to the underlying cost of the fuel but to the fact that diesel is taxed at a significantly lower rate. Also, using a VW Golf as an example, the diesel engine uses roughly 20% fewer gallons (excuse me, liters) per mile than a power-equivalent Otto (read: regular) engine. The upfront price of diesel cars is higher, but for said Golf you break even after 18,000 miles and are printing money thereafter. Plus, diesel-powered cars are fun to drive, as even a small Golf has the torque of a small tractor.
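The back-of-the-envelope arithmetic here can be sketched out. The fuel prices come from the paragraph above; the Golf’s gasoline fuel economy (~35 mpg) and the diesel’s upfront price premium (~$1,000) are illustrative assumptions, not figures from the post:

```python
# Break-even mileage for a diesel vs. gasoline VW Golf (illustrative sketch).
GAS_PRICE = 5.56             # $/gallon, Berlin price quoted above
DIESEL_PRICE = 4.58          # $/gallon
GAS_MPG = 35.0               # assumed gasoline fuel economy (not from the post)
DIESEL_MPG = GAS_MPG / 0.8   # diesel uses ~20% fewer gallons per mile
DIESEL_PREMIUM = 1000.0      # assumed upfront price premium (not from the post)

gas_cost_per_mile = GAS_PRICE / GAS_MPG
diesel_cost_per_mile = DIESEL_PRICE / DIESEL_MPG
savings_per_mile = gas_cost_per_mile - diesel_cost_per_mile

break_even_miles = DIESEL_PREMIUM / savings_per_mile
print(f"Savings: ${savings_per_mile:.3f}/mile; break even at {break_even_miles:,.0f} miles")
```

Under these assumed numbers the break-even point lands near the ~18,000 miles cited above; every mile after that, the diesel driver is ahead.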

In theory, diesel engines with the right filtering technology and regular checkups and adjustments are “clean.” But not everyone brings their diesel in for an annual checkup. Further, there is the little problem of criminal and reckless lying by companies like VW about the true emissions of these vehicles. Many European cities have introduced Low Emission Zones, where only the cleanest cars get to drive into the urban core, and some major cities are now contemplating banning diesel cars outright from their downtowns. As my former student Hendrik Wolff points out, these policies have been reasonably successful at improving air quality. Really fixing this problem for the Europeans is going to require a U-turn on diesel. A straightforward policy intervention would be abandoning the favorable tax treatment of this fuel. This is politically difficult, as French manufacturers have specialized in the production of small diesels. Any punishment of diesels would be regarded as a failure to make Peugeot great again.

But why do we not have this problem in the United States? We know that we Californians are just a bunch of regulation-loving outdoor fanatics having massaged kale for breakfast. But this bunch of hippies has historically had the most stringent tailpipe emissions standards in the world. This was made possible by the so-called “California Waiver,” which allows California under certain conditions to set stricter standards than the ones required at the federal level. These tailpipe emissions standards were impossible to attain with the small popular diesel engines until very recently. And VW only managed to get there by committing fraud. European manufacturers had historically pushed to radically ramp up sales of diesels in the United States. California regulation stopped this invasion of the diesels in its tracks. California was an appealing market for diesel vehicles because at the time they appeared to be more fuel-efficient and somewhat more greenhouse-gas-efficient. Further, under the Clean Air Act other states could adopt California’s standards without seeking approval from EPA (in fact, by 2007, Connecticut, Maine, Maryland, Massachusetts, New Jersey, New York, Oregon, Pennsylvania, Rhode Island, Vermont, and Washington had done so, which is of course a large share of the US market for passenger vehicles).

Does this mean diesel emissions are not a big deal in California and elsewhere in the US? Of course they are. Big trucks are largely powered by diesels, and there is a massive number of trucks on US roads. CARB and the US EPA have done a lot to make sure that the diesel fuel going into trucks has become cleaner by requiring low-sulfur diesel. But is this regulation efficient? Is it working? The answer is that I have no idea. Economists have largely ignored regulation of big-rig trucks. The externalities from these trucks are likely significant in terms of pollution, congestion, and accidents. But I am aware of next to no papers in the economics literature that have attempted to quantify these externalities.

So what would I like economists to do? Get to work on quantifying the externalities from big diesel trucks. And everyone should light an LED candle in support of the California Waiver. It will need all the support it can get for the foreseeable future.


Is the Duck Sinking?

This has been a spring of leaks. Most of you probably heard about the hole at the Oroville Dam. In my house, we’ve had leaks in both our skylight and our car. Yes, it’s great to be out of the drought, but like other Californians, we’re feeling a bit waterlogged.

All this water means that the hydro dams are cranking out lots of electricity. Reservoirs are at high levels, even before the major snow melt, so we’re letting a lot of the water run through the dams and producing cheap hydropower morning, noon and night.

If you believe the saying, ducks take to water well. But in the electricity world, the bountiful water is creating problems for the industry’s favorite waterfowl.

Long-time blog readers have heard several mentions of the “duck curve” – the aptly named graph that depicts energy demand net of wind and solar generation over the course of a day. I’ve reproduced one of the original versions below, which was created circa 2013 and shows projections out to 2020. Much of the focus has been on the duck’s neck – the rapid increase in non-renewable electricity demand as the sun sets on solar plants and people turn on lights.
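The curve described above is just load minus wind and solar output, hour by hour. A minimal sketch with made-up hourly numbers (not CAISO data) shows where the duck’s belly and neck come from:

```python
# Net load = demand - solar - wind, hour by hour (illustrative numbers, not CAISO data).
hours  = list(range(24))
demand = [22, 21, 20, 20, 21, 23, 26, 28, 29, 29, 29, 29,
          29, 29, 29, 30, 31, 33, 35, 36, 34, 31, 27, 24]   # GW
solar  = [0, 0, 0, 0, 0, 0, 1, 4, 8, 11, 13, 14,
          14, 13, 11, 8, 4, 1, 0, 0, 0, 0, 0, 0]            # GW
wind   = [2] * 24                                           # GW, held flat for simplicity

net_load = [d - s - w for d, s, w in zip(demand, solar, wind)]

# The "belly": midday minimum, when solar output is highest.
belly_hour = min(hours, key=lambda h: net_load[h])
# The "neck": steepest hour-over-hour increase as the sun sets and lights come on.
ramp = max(net_load[h + 1] - net_load[h] for h in range(23))
print(f"Belly at hour {belly_hour} ({net_load[belly_hour]} GW); max ramp {ramp} GW/hour")
```

Add more solar and the belly sinks further while the neck steepens, which is exactly the pattern in the graphs that follow.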


As of last spring, the projections in the duck curve were materializing on schedule, as Meredith’s blog post described. During 2016, however, utility-scale solar PV capacity in the state grew by another 50%. As a result, net load in the middle of the day on a recent Sunday (April 9) bottomed out at 10,000 MW (see the green line in the graph below), instead of the 14,000 MW projected for 2017 in the forecast duck (the dark orange line labeled “2017” in the graph above).

Source: Daily Renewables Watch, CAISO (Thanks to the ISO for this and other great data sources.)

All the solar and hydropower have led to a new phenomenon – negative prices in the middle of the day. The blue line in the graph below depicts day-ahead prices for Sunday, April 9 in Southern California. For comparison purposes, the red line depicts day-ahead prices at the same location on the second Sunday in April 2012. Looks like another version of the duck, albeit drawn by a preschooler, and this time with price on the vertical axis.

Source: California ISO OASIS

Note that I picked April 9 as an example. Through yesterday, there were 19 days during March and April 2017 with negative midday prices in the day-ahead market in the South. They’re certainly more common on weekends, when people take breaks but the sun doesn’t. But, 7 of those 19 days were weekdays. Also, I’m focusing on the South, as that’s where most of the grid-scale solar is located. For the three days I checked, though, prices were also negative in the North.

Let’s first wrap our heads around what it means to have a negative price. On these days, if you were in southern California, the ISO was willing to pay you to consume electricity. Nearly all retail customers are on fixed tariffs that do not vary with wholesale prices, so they were still paying positive prices for electricity. But, if you were exposed to wholesale prices, you would have made more money the more electricity you consumed – just plug in your least efficient electric space heater and watch the dollars roll in.

You may wonder why an electricity generator would be selling into the market when prices are negative. If you’re the owner of a large solar plant in the desert, for example, can’t you just turn off your connection to the grid, instead of having to pay to feed electricity into the market? Similarly, why would a gas or nuclear plant use costly fuel to sell into a market with negative prices?

There are a few reasons generators might be willing to sell at negative prices:

  • The production tax credit. Some renewables owners (mainly wind) are eligible for a production tax credit, which essentially pays them, in the form of a tax credit, for every MWh they produce. So, not producing means that they have to forego this credit. In theory, producers will pay to sell into the wholesale market as long as they’re paying less than the tax credit.
  • The Renewable Portfolio Standard. Under California’s Renewable Portfolio Standard (RPS), utilities are on the hook to provide 33% of their electricity from renewable sources by 2020 and 50% by 2030. The utilities sign contracts with renewable providers and, while terms likely vary, the utilities want to meet their RPS targets. In the extreme, the utilities are on the hook to pay a penalty (which was $50/MWh early on) if they don’t. So, they generally want to encourage the renewable providers to produce. For example, under a very simple power purchase agreement, the utility pays the renewable provider a pre-specified price per MWh irrespective of the wholesale market price, leaving the provider no incentive to shut down when prices are negative.
  • Operating constraints. For some power plants, varying the output level entails high costs, particularly starting and stopping the plant. I think of those as analogous to the extra fuel, plus wear and tear, planes expend taking off. So, if it costs a lot to restart a nuclear plant, for example, you’re willing to pay not to have to turn it off to avoid incurring those costs.
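The production tax credit bullet implies a simple decision rule: a producer with near-zero marginal cost keeps selling until the wholesale price falls below the negative of the credit. A sketch, using an illustrative $23/MWh credit (an assumption, not a figure from the post):

```python
# Willingness to produce at negative prices (sketch; the $23/MWh credit is illustrative).
def produces(price, marginal_cost=0.0, tax_credit=23.0):
    """A producer runs whenever price plus credit covers marginal cost."""
    return price + tax_credit > marginal_cost

print(produces(-10))  # True: paying $10/MWh to sell still nets $13/MWh after the credit
print(produces(-30))  # False: a $30 negative price swamps the $23 credit
```

Operating constraints work the same way, with the avoided restart cost playing the role of the credit: a plant will accept a negative price for a few hours if that is cheaper than shutting down and starting back up.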

In the graph below, we can see that the state’s lone nuclear plant, and even some thermal (essentially, fossil-fuel) plants, were still operating on April 9 when prices were negative.

Source: Daily Renewables Watch, CAISO

The cost of turning plants on is also reflected in the real-time prices from April 9. Just like the day-ahead prices, they were negative in the middle of the day. But, they really spiked during the morning and evening ramps (to $1000/MWh!) when plants needed to turn on to meet the additional demand.

What do the negative prices tell us? At a fundamental level, they tell us that we have too much of a good thing, and suppliers need to pay people to take it off their hands. Right now, California has too much renewable electricity. Emphasizing this point, a recent briefing from the California Independent System Operator noted that renewable “curtailments” were at record levels in March 2017, amounting to over 80 GWh, which is more than a typical day’s worth of solar production that month.

Is there anything to do about the negative prices? Negative prices certainly highlight the value of storage, where the basic idea is to buy low and sell high. Buying when prices are negative is especially lucrative. Standalone storage is still expensive, but the costs are rapidly declining. Increased electrification of transportation may provide one type of storage or at least flexible demand.
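The buy-low/sell-high logic is easy to sketch. Assuming a battery with an 85% round-trip efficiency (an illustrative figure, not from the post), charging at a negative midday price pays you twice: once to charge, and again when you discharge at the evening peak:

```python
# One charge/discharge cycle of storage arbitrage (illustrative prices and efficiency).
def cycle_profit(buy_price, sell_price, mwh=1.0, round_trip_eff=0.85):
    """Profit from buying `mwh` at `buy_price` and selling back the recovered energy."""
    cost = buy_price * mwh                       # negative price => you are PAID to charge
    revenue = sell_price * mwh * round_trip_eff  # conversion losses shrink what you sell back
    return revenue - cost

# Charge at -$20/MWh midday, discharge at $60/MWh in the evening ramp.
print(cycle_profit(buy_price=-20, sell_price=60))  # → 71.0
```

Note that with negative purchase prices, the arbitrage is profitable even after round-trip losses that would wipe out a thinner positive-price spread.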

Another solution is to expose more retail consumers to wholesale prices, or find other ways to encourage customers to respond to real-time prices. Economists have bemoaned the disconnect between wholesale and retail pricing for years—maybe the prospect of being paid to consume electricity will help more people see the value of this?!?

In addition, generators that historically operated through the belly of the duck, including nuclear, combined heat and power, and conventional natural gas plants, might be encouraged to reduce their output. For example, while it may not be practical to cycle nuclear generation on a day-to-day basis, maybe refueling outages could be scheduled for the spring, when excess supply problems are generally the worst.

Proponents of western grid integration note that removing barriers to exporting electricity will help California share some of its renewable electricity, especially when in-state demand is low and hydro supplies are high. (This is not intended as a comprehensive list of the solutions – an ISO discussion includes more here.)

To round out the post with another duck-ism, the duck may look calm, but we need to pay attention to what’s going on below the water line – the zero price line in this case. The duck is paddling furiously, as operating an electricity system with a lot of renewables isn’t easy.
