What the Heck Is Happening in the Developing World?

One of the most important energy graphs these days shows actual and projected energy consumption in the world, separated between developed and developing countries. A version based on data from the Energy Information Administration (EIA) is below.
The vertical axis measures total energy consumption, including gasoline, diesel, natural gas, electricity from all sources, etc. – all converted to a common unit of energy (the Btu, or British Thermal Unit). It reflects commercial energy sources, but excludes things like firewood that people collect on their own. The horizontal axis plots time; the solid lines reflect historical (actual) data while the dotted lines reflect projections.

Strikingly, the developing world – approximated on the graph as countries that are not members of the OECD – has already passed the developed world (in 2007) and is projected to consume almost twice as much energy by 2040.

To me, this suggests strongly that anyone worried about world energy issues – including climate change, oil prices, etc. – should be focusing on the developing world.

Unfortunately, I fear that we know woefully little about energy consumption in the developing world. The series of graphs below depicts our ignorance starkly.

Let’s start with China, which single-handedly consumed 22% of world energy in 2013 (still far less per capita than in the US). The vertical axis again plots total energy consumption, but this time it’s measured relative to 1990 levels. The black line plots actual numbers. For example, since the black line is at 3.5 in 2010, that means that by 2010, China was consuming 3.5 times more energy than it had in 1990. Pretty amazing growth! By comparison, US consumption in 2010 was only 15% higher than 1990 levels.
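To put those multiples in annualized terms, here is a quick back-of-the-envelope conversion (my own arithmetic, not a number from the EIA):

```python
# Convert a cumulative growth multiple into an average annual growth
# rate: multiple = (1 + g)^years, so g = multiple**(1/years) - 1.

def annual_growth(multiple: float, years: int) -> float:
    return multiple ** (1.0 / years) - 1.0

# China: 3.5x its 1990 consumption by 2010; US: 1.15x over the same 20 years.
print(f"China: {annual_growth(3.5, 20):.1%} per year")   # roughly 6.5% per year
print(f"US:    {annual_growth(1.15, 20):.1%} per year")  # under 1% per year
```

Sustained growth of about 6.5% per year means consumption doubles roughly every 11 years, which helps explain why projections fell behind so quickly.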


[Figure: China’s total energy consumption relative to 1990, actual vs. successive EIA projections]

The colored lines on the graph depict the EIA’s projections, published in different annual issues of the International Energy Outlook (IEO). If you stare at 2010, 2015 and 2020, you see that the EIA has revised its projections upward considerably over a relatively short time period.

Start with the light blue line at the bottom, which reflects projections from the 2002 IEO. At that time, the EIA thought China would only consume twice as much energy in 2010 as it did in 1990. But China’s actual consumption surpassed that level midway through 2003, 6.5 years earlier than projected. So, by 2005, the EIA had increased its projection for 2010 by 30%. That’s a huge upward revision.

But it wasn’t nearly enough. The EIA continued to increase its projection, struggling to keep up with China’s actual growth.

Ah, you say. This is just a story about China, where there are lots of possible explanations for underestimated growth in energy, including faster than expected GDP growth, rapid industrialization, etc.

But similar stories emerge for Africa and India. The EIA has recently revised its projections pretty dramatically, and most of the revisions are upward.

[Figure: Africa’s total energy consumption relative to 1990, actual vs. successive EIA projections]

The projections have been too low for India even more than for Africa.

[Figure: India’s total energy consumption relative to 1990, actual vs. successive EIA projections]

And this is not a problem in the developed world. The figure below contains a similar graph for the US. Note that the scale is different from that for the developing regions, so the revisions have been pretty minuscule in comparison. Also, they’ve generally been downward.

[Figure: US total energy consumption relative to 1990, actual vs. successive EIA projections]

A couple points to keep in mind:

  • It may seem like I’m picking on the EIA. I’m not trying to. They are doing an incredibly important job with very few resources. (The International Energy Outlook was recently demoted from an annual publication to roughly a biennial one.) Also, the EIA is not alone. The International Energy Agency and BP – two other big names in world energy reporting – have also had to revise projections upward to keep up with energy demand in developing countries.
  • The EIA and other organizations are careful not to describe their projections as forecasts. The EIA, for example, notes that, “potential impacts of pending or proposed legislation, regulations, and standards are not reflected in the projections.” I doubt that omission explains the discrepancies in the developing regions, though. I have tried to back out how much of the underestimate is due to misjudged GDP growth, and I don’t think that’s a big share either, at least in China. I suspect that we need a better underlying model for how GDP translates to energy consumption in the developing world, the point of this academic paper.
  • Policymakers in the developing world appear to appreciate this issue. We recently launched a 5-year research project, funded by the Department for International Development (the UK’s analog of USAID) and joint with Oxford Policy Management, to study energy in the developing world, focusing on sub-Saharan Africa and South Asia. As part of this project, we hosted a policy conference in Dar es Salaam to hear from East African policymakers about the pressing issues they faced. One of the main themes that emerged was the difficulty of planning without better demand forecasts.
  • Some might argue that markets will solve this problem. The EIA is just some government agency that few are paying attention to, or so the argument might go. If you had real money at stake in understanding future energy consumption in the developing world, you would not hire someone who was off by 75% (actual consumption at 3.5 times 1990 levels versus a projected 2).

I do not know who is using the EIA projections for what, but I believe this logic breaks down for several reasons. For one, in many parts of the world, the private sector is not investing in energy infrastructure and the public sector may be relying on organizations like the EIA. Also, most investors don’t really care about 2040. Their discount rates are high enough that it doesn’t really matter what’s happening 25 years out. But, from the perspective of climate change, the world should care about energy consumption in 2040, 2050 and 2100.
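To see just how little a 25-year horizon matters at private-sector discount rates, here is a small illustrative calculation (the discount rates are my own assumptions, chosen only to make the point):

```python
# Back-of-the-envelope: how much does an investor value a dollar of
# revenue 25 years out, at different (assumed) discount rates?

def pv_factor(rate: float, years: int) -> float:
    """Present value today of $1 received `years` from now."""
    return 1.0 / (1.0 + rate) ** years

for rate in (0.03, 0.10):
    print(f"discount rate {rate:.0%}: $1 in 25 years is worth "
          f"${pv_factor(rate, 25):.2f} today")
```

At a 3% social discount rate a 2040 dollar is still worth about 48 cents today; at a 10% private hurdle rate it is worth about 9 cents, which is why commercial forecasters have weak incentives to get 2040 right even though society cares a great deal.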

This brings us back to the first graph in the post, which contained projections out to 2040. I fear that we are underestimating the 25-year out projections, just like we’ve underestimated recent trends. As researchers, we need to get under the hood and understand more about what is driving energy consumption in the developing world.

Posted in Uncategorized | Tagged | 26 Comments

Evaluating Evaluations – Energy Efficiency in California

Last year, Governor Jerry Brown signed a law, Senate Bill 350, that sets out to double energy efficiency savings by 2030. Last week at the Democratic National Convention, Governor Brown focused his remarks on the importance of policies such as this to tackle climate change.

California Governor Jerry Brown at the California Science Center, Oct. 30, 2012. Photo Credit: (NASA/Bill Ingalls)


The precise energy efficiency targets haven’t been finalized, but they will be ambitious.

Meeting these targets will require an expansion of energy efficiency policymaking. Policymakers need to understand which programs work in energy efficiency and which don’t.

This is a daunting task. The California Public Utilities Commission’s (CPUC’s) energy efficiency efforts fund roughly 200 programs. The California Energy Commission (CEC) is regularly introducing new appliance and building standards. The evaluations of these activities are made public, but they can be hard to find and difficult to interpret. Additionally, policymakers may not have the time or training to critically assess the methodologies being used.

As a result, individual programs may not be getting enough scrutiny.

Many people working on energy efficiency may think the last thing we need is MORE evaluation. Energy efficiency is heavily evaluated.

I disagree. Today we have an opportunity to step up our game. We have access to more data and more rigorous evaluation techniques than ever before. It’s time for more evaluation, not less. In particular, it’s time to evaluate the evaluations.

To illustrate what I’m talking about, let’s look at an example from another heavily evaluated sector, criminal justice. The context is quite different, but the basic lessons are instructive.

In the 1980s many US states enacted stricter laws to reduce domestic violence. Rather than putting every offender in jail, courts began to mandate that offenders go through batterer intervention programs (BIPs). The initial evaluations of these programs found they were highly effective. These evaluations contributed to the justice system’s growing reliance on BIPs. In a 2009 report, the Family Violence Prevention Fund and US government’s National Institute of Justice estimated that between 1,500 and 2,500 such programs were operating.

As the cumulative number of evaluations grew, researchers began to undertake reviews that evaluated the evaluations, referred to as meta-analyses or systematic reviews. What they found was disappointing.

Many of the past evaluations that showed positive effects had methodological shortcomings. While some men completed a BIP and did not reoffend, others failed to complete court-mandated BIPs. Many men also became difficult to track down for surveys. The positive evaluations left out these populations, who were the people most likely to reoffend. More recently, careful studies that accounted for the systematic differences between men who stuck with the programs and those who didn’t found that mandating the programs had little or no effect.

There is disagreement on what to do next. Some researchers and practitioners have argued that BIPs could still be effective for some people. What is needed is better targeting and tailoring of the BIPs, coupled with evaluation. Others have taken the position that policymakers should stop relying on these programs because they waste valuable resources and create a false sense of security for women who think their batterer will be reformed through the programs. This is a really important evidence-based debate that should result in more effective policy.

This example is not unique. Evaluations of evaluations, known as systematic reviews, are becoming prevalent in many sectors including medicine, international development, education and crime and justice.


The way a systematic review works is that a team of reviewers focuses on a specific policy intervention. The reviewers do an exhaustive search for all the evaluations of the intervention, including academic and consultant evaluations and studies from other geographies. Then the reviewers carefully consider each study. They particularly focus on how carefully each study considered what would have happened in the absence of the intervention (the counterfactual), and whether there is a risk that the results may be skewed one way or another.

The systematic review report discusses each study’s risk of bias and then reaches a conclusion about the intervention based on the studies with the lowest risk of bias. In some cases a systematic review may conclude that a program is effective, or that it is not. In other cases a review finds that there is insufficient evidence to reach a conclusion. In these cases the review recommends how evaluations should be performed in the future to reach a firmer conclusion.
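For readers curious what the pooling step looks like mechanically, here is a minimal sketch of a fixed-effect (inverse-variance weighted) meta-analysis. The effect sizes and standard errors are hypothetical, not taken from any BIP study:

```python
# Minimal sketch of the pooling step in a meta-analysis: combine
# per-study effect estimates using inverse-variance weights.
# (Hypothetical effect sizes and standard errors, for illustration only.)

studies = [
    # (effect estimate, standard error)
    (-0.30, 0.10),   # small, precise study
    (-0.05, 0.05),   # larger, near-null study
    (-0.40, 0.25),   # small, noisy study
]

weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note that the pooled estimate sits much closer to the large precise study than to the noisy ones. A real systematic review would first grade each study’s risk of bias and pool only the credible ones; the arithmetic above is just the final aggregation step.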

There are several reasons why now is the time to begin doing systematic reviews of energy efficiency evaluations. First, a very large number of evaluations have been completed across the country and world. There is value in reviewing and synthesizing these evaluations so that policymakers everywhere have access to the best evidence. Second, new statistical approaches are taking hold in energy, fueled in part by smart meter data. Systematic reviews can help policymakers make sense of the diversity of approaches. Third, energy efficiency is taking on increasing importance, as reflected in ambitious goals and growing spending. The evidence base needs to be strong to ensure the resources are being used effectively.

Research conducted at The E2e Project points to questions that systematic reviews could help answer. When are ground-up engineering estimates most appropriate to use? How important is the rebound effect? What considerations are most important when embedding evaluations into program design? What can interval smart meter data tell us about the effectiveness of programs that other approaches cannot?

Several of these were highlighted by agency staff at an energy efficiency workshop held by the CEC last month.

California produces only 1% of global greenhouse gas emissions. Given that, as Severin emphasized in a prior blog, the state’s policies can’t possibly have a meaningful direct impact on climate change. Instead, the way California can best address the climate change challenge is through invention and learning, then exporting the knowledge to the world.

In the case of energy efficiency, California should focus on finding which policy interventions are most effective and sharing the findings. Policymakers should take a look at systematic reviews as a tool to accomplish this.


The Promise and Perils of Linking Carbon Markets

The theme of the week is “We’re stronger together“.  This rallying cry applies in lots of places, including climate change mitigation!   So this week’s blog looks at how this theme is playing out in carbon markets. A good place to start is California’s recent proposal to extend its GHG cap-and-trade program beyond 2020. One of the many notable developments covered by this proposal is a new linkage between California’s carbon market and the rest of the world.


Notes: The graph plots 2020 emissions caps. Quebec and California have been linked since 2014.  The proposed link with Ontario would take effect in 2017.  Emissions numbers summarized in the graph come from here, and here.

Admittedly, I am uniquely positioned to get really excited about linking the province of Ontario (where I was born and raised) with the state of California (my home of 10+ years) under the auspices of the California carbon market (an institution I spend a lot of time thinking about).  But excitement and interest in this “Ontegration” extends well beyond the Canadian economist diaspora. Why?  Because many see this kind of linkage between independent climate change policies as the most promising – albeit circuitous – means to an elusive end (meaningful climate change mitigation).

How did we get here?

After years of work to establish a globally coordinated “top-down” climate policy with very limited success, there’s been an important pivot toward a more decentralized, bottom-up strategy.  This change in course is motivated by the idea that more progress can be made if each jurisdiction is free to tailor its climate change mitigation efforts to match its own appetite for climate policy action.  Whether, how, and when these independent carbon policies should link together so that regulated entities in one region can use allowances from another is viewed as “one of the most important questions facing researchers and policy-makers.”

To grease the wheels of this coming-together process, the Paris agreement provides a framework to support bottom-up policy linkages. International organizations such as the World Bank are working hard to translate this framework into on-the-ground success stories.  But so far, real-world carbon market policy linkages are few and far between.

I can count the number of linkages between independent trading programs on one hand (the EU ETS is linked to Norway, Iceland, Switzerland, and Liechtenstein; California is linked with Quebec).  Post-Brexit, we’ll probably see one more: a likely outcome is that the UK will establish its own carbon market and link it with the EU ETS.  The California-Ontario link is a good-news addition to this list, which is why Ontegration is generating both hope and headlines.

Why link?

The most fundamental argument for linking emissions trading programs boils down to simple economics.  Why pay $20 to reduce a metric ton of carbon in California when you can pay $1 to reduce a metric ton in China?  If marginal abatement costs differ across regional cap-and-trade programs, allowing emissions permits to flow between programs to seek out the least cost abatement options will reduce the overall cost of meeting a collective emissions target. Of course, how this net gain is allocated across linkers will depend on how the linkage is implemented.
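The logic can be made concrete with a stylized two-region example (the cost curves and numbers below are made up purely for illustration):

```python
# Stylized gains from linking two cap-and-trade programs.
# Assume linear marginal abatement cost (MC) curves mc(q) = slope * q,
# with made-up slopes for a high-cost and a low-cost region.

def total_cost(slope: float, q: float) -> float:
    """Total abatement cost = area under the linear MC curve up to q."""
    return 0.5 * slope * q**2

slope_hi, slope_lo = 2.0, 0.5   # assumed $/ton-per-ton slopes

# Unlinked: each region must abate 10 tons within its own borders.
autarky = total_cost(slope_hi, 10) + total_cost(slope_lo, 10)

# Linked: same 20 tons in total, but permits flow until marginal costs
# equalize: 2*qa = 0.5*qb with qa + qb = 20  ->  qa = 4, qb = 16.
qa, qb = 4.0, 16.0
linked = total_cost(slope_hi, qa) + total_cost(slope_lo, qb)

print(f"unlinked cost: ${autarky:.0f}, linked cost: ${linked:.0f}, "
      f"savings: ${autarky - linked:.0f}")
```

In this toy example linking cuts total abatement cost from $125 to $80 for the same 20 tons; how the $45 of gains splits between the two regions depends on the initial permit allocation, which is exactly the implementation question noted above.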

Other benefits include:

  • More integrated carbon markets are more liquid and can be less volatile, although market linkages can also propagate shocks more directly from one country to another. The EU ETS provides a case in point. The chaos that followed the Brexit referendum has directly (and significantly) impacted the price of carbon in 31 countries.
  • Economies of scale. Some jurisdictions are simply too small to support a well-functioning carbon market. If program operations are combined, administrative costs and effort can be shared across multiple jurisdictions. Larger markets also reduce risk of market power – a major concern for small jurisdictions trying to go it alone.
  • Political considerations. Politics are critical in determining whether a linkage will fly or die. Ontegration offers a case in point. California is happy to demonstrate that its climate policy initiative has brought other jurisdictions on board with carbon markets. In Ontario, the case for moving ahead with cap-and-trade is easier to make when the proposal involves plugging in to an established carbon market operation versus building a market from the ground up.

Market linkage comes with strings attached

The appeal of a bottom-up climate policy is that individual jurisdictions have the autonomy to pick and choose their own policy parameters.  But I am not going to link my carbon market with yours if I’m worried you’re going to introduce rogue policy changes that drive my carbon price and/or carbon emissions in an unpalatable direction. In other words, mutually acceptable linkage agreements will almost certainly impose limits on autonomy because the policy design choices in one jurisdiction affect outcomes in others.

Linkage does not require that all market design features are perfectly harmonized, but it does require careful coordination of design elements deemed to be critical. The Quebec-California linkage agreement provides a well documented example.   These kinds of deliberations get increasingly complex as the number of jurisdictions increases. Negotiations also become much more complicated when the benefits from linkage are distributed unequally across regions.

An important, related concern is that a linked network of carbon markets is only as strong as its weakest link. If one region lacks the capacity to monitor and enforce market rules effectively, this can undermine the environmental integrity of the entire system.



Limits to linkage?

Recent developments in Europe and California are demonstrating how carbon markets can be linked when partners see (mostly) eye to eye, market designs are similar, and political objectives are aligned.  Given current carbon market conditions, linkages have yet to deliver much (if anything) in terms of economic gains from trade.  But they have expanded the scope of carbon markets and laid down foundations for future cooperation. Some good news for a change.

Forging linkages between less compatible systems will require more effort and ingenuity.   It has been suggested, for example, that regions with more aggressive caps might be convinced to link with countries imposing less aggressive caps if “carbon exchange rates”  define favorable  terms of permit trade for regions with more ambitious mandated reductions.  Distorting market incentives in this way might help eliminate political barriers to linkage – but this would also undermine a fundamental economic reason for linking markets in the first place.  Mitigation costs will not be minimized if linkage agreements drive a wedge between regional mitigation incentives. At some point, the costs of policy coordination start to outweigh the economic and environmental benefits of linking.
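To see why an exchange rate undermines cost minimization, consider a stylized sketch (made-up linear cost curves, and total abatement held fixed for comparability, so this is an illustration rather than a full equilibrium model):

```python
# Stylized effect of a "carbon exchange rate" r on the allocation of
# abatement between two regions with linear MC curves mc(q) = slope * q.
# (Assumed slopes; total abatement fixed at 20 tons for comparability.)

def total_cost(slope: float, q: float) -> float:
    # Area under the linear marginal cost curve up to q.
    return 0.5 * slope * q**2

slope_a, slope_b = 2.0, 0.5   # high- and low-cost regions
Q = 20.0                      # total required abatement (tons)

def allocate(r: float):
    """Trading pushes allocations toward mc_a(qa) = r * mc_b(qb).
    Solve slope_a*qa = r*slope_b*(Q - qa) for qa."""
    qa = r * slope_b * Q / (slope_a + r * slope_b)
    return qa, Q - qa

for r in (1.0, 2.0):
    qa, qb = allocate(r)
    cost = total_cost(slope_a, qa) + total_cost(slope_b, qb)
    print(f"exchange rate {r}: region A abates {qa:.1f} tons, "
          f"region B {qb:.1f} tons, total cost ${cost:.1f}")
```

With r = 1 (a plain link) marginal costs equalize and the 20 tons cost $80 in total; with r = 2 the allocation shifts toward the high-cost region and the same 20 tons cost roughly $89. That wedge between regional incentives is precisely the cost-minimization loss described above.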

We’re stronger when we work together. This is particularly true in fighting a global threat like climate change. But the explicit linking of carbon markets is only one way to join together and move global climate change mitigation forward.  We should celebrate recent carbon market  linkages, but realize they are one means to an end – not an end in themselves.



Who’s Stranded Now?

Utility costs are like taxes.  Everyone knows they have to be paid, but most people have a reason that their own share should be smaller.  And, just as with taxes, there are limitless ways to divide up the revenue burden.

It’s been 20 years since electricity deregulation raised the specter of stranded utility costs – past investments that have turned out to deliver less value than was originally expected – and the question of who should pay those costs: Electricity ratepayers? Customers switching to buy from a competing electricity supplier? Utility shareholders?

So now it’s 2016 and we are back to the same question.  Electricity customers are leaving or are greatly reducing purchases.  Some customers are installing rooftop solar while still buying some power from the utility.  Others are switching to a community choice provider (as I discussed in February) or proposing municipalization.   As utility sales decline, once again we are debating who should pay for utility investments that are less valuable in the new regime.

Utilities are responding mostly as they did in the 1990s, arguing that their investments were deemed prudent by regulators at the time they were made, so their own shareholders should not be on the hook.  In somber tones they invoke a “regulatory compact” that is supposed to assure them a reasonable return on investments in exchange for an obligation to provide safe, affordable, reliable service.  Basically, they argue, a deal’s a deal, even when the market or regulatory environment changes in ways that devalue their installed capital.

Opponents respond by saying “Not so fast.  Utility shareholders have received investment returns comparable to the rates earned by unregulated companies while bearing far less risk. Yes, the market is changing and that is hurting your company. Welcome to a world with some risk.”  And furthermore, the reply continues, the utility commissions that approved those investments were too cozy or politically connected with the utilities, so the deals made shouldn’t be binding.

Both arguments have some merit.  Regulators should try to fulfill commitments, out of fairness, to maintain credibility, and to create a financial environment that can support investment.  But if the regulatory process that made those commitments was so broken that it was not legitimate, then the argument for sticking with unfair commitments is less compelling.

So it has been ironic to now see the arguments of each side flip as regulators reconsider some of the terms set for the current wave of partially exiting customers.

In December 2015, regulators in Nevada changed the rules for rooftop solar, abandoning net metering both for new installations and for customers who already have panels on their roofs.  The decision followed another ruling in Arizona that reduced incentives to install rooftop solar.  The howls of protest from solar customers included many references to unfairly changing the rules, even though there was no explicit long-term commitment to those rules, just expectations.

While Nevada was rocking the DG world, California was considering many of the same issues, and the solar advocates’ arguments on net metering policies followed along similar lines: the state has made a commitment to building rooftop solar and breaking that commitment would have dire consequences.

In California, the advocates were mostly victorious.  Net metering was extended for a few years, though the commissioners suggested they could follow Nevada next time if they don’t see more evidence solar customers are paying their fair share of costs.

While solar customers viewed these unwritten commitments in California, Nevada and elsewhere as sacrosanct, utilities argued they are over-the-top subsidies that don’t make public policy sense and should be scaled back.  More than one utility executive, or manager at a grid-scale renewables company, has complained that subsidies for distributed generation are being driven by the outsize political influence of DG solar companies, and that state regulators have lost sight of the original goals of reducing greenhouse gases while maintaining affordable electricity.

At about the same time as it was reviewing policies toward distributed generation, the California Public Utilities Commission was also resetting exit fees for departing customers who join community choice providers, using a formula that had been established in a previous decision.  These fees were created to compensate utilities for the power contracts they signed at what are now above-market prices – many for renewable power contracts in the early, expensive days – and to protect remaining customers from having to cover an unfair share of those contracts.  Community choice advocates argued for delaying or abandoning the increase, while the utilities returned to the view that a deal’s a deal.

Watching the different sides repeatedly invoke and abandon the imperative of sticking with policy directions set in previous decisions is a bit like watching the Republicans and Democrats in the Senate fight over legislative procedures. Whichever side is in ascendancy uses the rules to support their agenda, while the opposing side is shocked by the blatant abuse of power.  And then instantly the roles reverse when power shifts.

The big difference, of course, is there is no regulatory agency overseeing Congress that can call them on their hypocritical arguments. Electricity regulators can, and should, do so when market participants selectively argue the sanctity of whatever existing policy they support.

That’s not to say that regulators should blithely switch policies ignoring the cost of the uncertainty it creates.  Policy consistency is important, up to a point.  New information, new analysis, and new technologies, however, constantly alter the energy landscape. Policies that are written with clear dates of future review and potential off ramps may discourage some investment, but they seem just as likely to maintain pressure for verifiable high performance.  Given the dynamism in energy technology and climate science, regulators should be extremely cautious about making inflexible policy commitments to specific technologies.

When policies are re-evaluated it is crucial to separate the determination of whether overall a policy merits continuation from the allocation of gains and losses if it is halted.    Some party will always lose when policy changes.  The regulatory or legal process can determine if losers are due compensation, but that mustn’t be allowed to lock policy into the status quo.

In the next 10 years, we will likely see more change in energy systems than we have seen in the last 50.  While government policy should be fair to market participants it must also be nimble and adaptive to a changing landscape.  Only with such flexibility will we be able to address the growing environmental impact and affordability challenges that we face.


Move Over PEMEX

Gasoline stations in Mexico have all been exactly the same for decades. PEMEX, the state-owned behemoth, has been the only show in town. Pull up to any of its 11,400 stations nationwide and the experience is very similar: PEMEX stations selling PEMEX gasoline.


This is all changing.  Since April 1, 2016, private companies can import, transport, store, and distribute gasoline and diesel. The change is part of a broader set of energy reforms aimed at increasing private investment throughout the Mexican energy sector. Already, non-PEMEX stations like “La Gas” and “Hidrosina” are starting to appear.


Making this a competitive market will not be easy, but the reforms have great potential to improve service quality and, eventually, to increase efficiency and reduce prices.

The reaction thus far has been positive. Twitter users like Rubén Linares have reported that the new stations are better illuminated, cleaner, and more modern. The new stations are also introducing other innovations like electronic payment systems. If these stations can earn a reputation for better quality service they will pull business away from PEMEX stations.

PEMEX stations are franchised, so there is already some incentive for station owners to provide good service. The problem, however, is that because all stations are branded PEMEX, there is also severe free riding. Why improve your station’s service quality when your efforts will mostly benefit other owners? Moreover, PEMEX’s franchising rules severely limit the scope for differentiation. For example, all PEMEX stations have the exact same limited snack and drink options.

Eventually, retail competition will also put downward pressure on prices. Currently, retail prices for gasoline and diesel are set nationally by the Mexican finance ministry. The current price for non-premium (“magna”) gasoline is $2.75 per gallon compared to an average price in the United States of $2.29 per gallon. All stations in Mexico charge this price, including the non-PEMEX stations.

However, these price controls for retail gasoline and diesel will be removed starting January 1, 2018. It will be very interesting to see what happens to prices. As in any market, there will be stations that enjoy local market power. But there will also be stations that cut prices to increase market share and new stations that open in high-demand locations.


In the longer run, a competitive retail market will also help increase efficiency upstream.  You might ask: where do gas stations in Mexico buy gasoline? For the moment, they buy it from PEMEX. This vertical structure raises a couple of serious concerns. Probably most importantly, you worry about input foreclosure, i.e., that PEMEX will favor PEMEX-branded stations. PEMEX could try to charge lower prices to PEMEX stations, or could refuse to sell products to non-PEMEX stations. It will be important for the Mexican regulator to keep a close watch on this type of anti-competitive behavior, perhaps with the assistance of an independent advisory panel like California’s Petroleum Market Advisory Committee.

We are also beginning to see private investment in these upstream sectors. In particular, some major petroleum consumers are starting to import their own petroleum products. It will be important to ensure that new products conform with Mexico’s environmental regulations (e.g. low sulfur gasoline), but this end run around PEMEX is extremely promising. Not only can this reduce costs for consumers, but it will also put competitive pressure on PEMEX to reduce costs.


These upstream markets will not become competitive overnight. PEMEX has long dominated petroleum production, refining, imports, transport, and storage, so price regulation will be necessary for wholesale petroleum products for the foreseeable future. But, over time, this price regulation is going to become less and less necessary as private investment expands. And moving forward, these investment decisions will be increasingly driven by market factors, presumably leading to more efficient choices.

So move over PEMEX.  It is going to be an exciting next couple of years in the Mexican petroleum sector.




Mitigation Bingo

I am clearly not a historian, but has there ever been a more dynamic, physically fit and forward-looking trio in charge of North America’s future? It’s not gender balanced, but hey, maybe we can fix that in November. At their recent summit, the TOP powerhouse (get it? Trudeau, Obama, Peña Nieto) announced that by 2025, 50 percent of the continent’s electricity will come from “clean” sources. Clean here means hydro, wind, solar, geothermal, CCS, demand reduction, and of course nuclear power. Currently these sources provide 37 percent of the three countries’ power. So this is an ambitious goal over a relatively short time horizon. It provides lots of flexibility to meet the goal, as Canada has ample potential hydro resources and Mexico’s solar potential is vast.

Don’t get me wrong, I am excited about this proposal, but I would like to take a step back. Climate change will affect the power sector in a number of ways. We have largely focused on reducing greenhouse gas emissions (mitigation), which of course attacks the root of the problem itself. Regulators outdo themselves with ambitious goals. California is going 80% below baseline by 2050! Everywhere you look there are sexy combinations of targets and timelines. Now it’s 50/25 for the electricity sector nationally! On this blog, my economics friends and I have argued repeatedly that it would be better to set one GHG target and achieve it in a least-cost fashion, rather than playing this game of mitigation bingo. Such a beautifully coordinated abatement effort across sectors and sources would make me happier than Kevin Durant moving to the Golden State Warriors.

Unfortunately, some relative who clearly doesn’t get me already got me “Mitigation Bingo” for my birthday. So I am going to spend the rest of my money on a game of “Adaptation Pursuit”. A comprehensive private- and public-sector plan on climate change for the power sector should look more broadly at what’s coming down the atmosphere and make plans to deal with it. Many impacts of climate change could make reducing greenhouse gas emissions more difficult. To clarify thoughts, I drew you the picture below (another reason no one likes economists is the fact that we can’t really draw).

climate change

The four plagues of the climate apocalypse in this context are fire (more frequent, and likely more intense), heat, floods (sea level rise) and drought (related to heat and in some areas less rainfall). Here are a number of ways these four will negatively affect the power sector:

1)      Wildfires will affect transmission capacity. Lines do not like to get hot or dusty (fires generate a ton of dust). More fires may lead to lower transmission capacity at peak times. So maintenance and construction plans for existing and new transmission lines (which will ship those green electrons from the middle of nowhere to you) will have to take into account the new normal. Wildfires may also affect substation and generation facilities directly, of course.

2)      Many power plants are built near bodies of water, which is needed for cooling. Sea level rise will lead to higher flood events, which might negatively affect these plants and substations and require investments in additional seawalls. Or maybe one would want to build plants elsewhere.

3)      Nuclear power plants are also frequently located near the ocean. Because the US lacks permanent deep underground storage for spent fuel rods, they live in secure facilities at the plants. If sea levels rise and higher 100-year flood events become the new normal, we have to think about storage more carefully in the intermediate run. In the very long run, if there is really drastic sea level rise, the value of deep underground storage goes up.

4)      As we have recently experienced, droughts are bad for California and just about everywhere else. In agricultural areas, drought is fought by pumping water from wells, which requires lots of energy (e.g., electricity or diesel). If in the long run we have to drill deeper and deeper for water, these energy costs will rise.

5)      As streams and small rivers run dry and hot during drought periods, this may lead to insufficient supplies of cooling water necessary for both fossil and nuclear power plants. Nobody likes to think about overheating power plants. Of course, less water in streams is bad for hydropower generation as well.

6)      But the big Kahuna is heat. The biggest economic impacts will likely come from increased demand for cooling during hot days. Lucas, Catherine and I have written extensively on the subject. More people will install air conditioners and operate them more frequently. The big costs here are from more electricity consumption. Given the possibly sizable necessary investments in peak capacity, which costs about $750 per kW, higher demand during peak load will be a pricey endeavor.

7)      We already know that transmission lines lose capacity at high temperatures. But on top of that some types of gas fired power plants generate less power per unit of gas. This means that when it’s hot outside, not only is demand higher, but many plants see decreases in output.

None of these seven points requires us to reach for our anti-anxiety medication. There is time to address all of these issues. Then again, that is what the average 20-year-old thinks about heart disease while biting into a triple bacon cheeseburger. It’s time to come up with a comprehensive plan for how we are going to deal with these issues and act accordingly. This will save us a lot of headaches later on.


Finding Energy Efficiency in an Unexpected Place – The Cockpit

I suspect that most energy economists think there are more unexploited opportunities for energy efficiency in homes than in firms. Firms are cost-minimizers, after all – they’re in the business of making things with the fewest possible inputs. And energy is an important input for many firms, particularly airlines. So, not even McKinsey – in its exhaustive catalog of potential energy efficiency measures – identifies improved airline fuel efficiency as an opportunity.

Surprisingly, a new NBER working paper finds a significant opportunity for fuel savings in the airline industry. In a research coup that makes people like me drool with envy, Greer Gosnell, John List, and Rob Metcalfe convinced Virgin Atlantic Airways to let them run an experiment. And, not just any experiment – one that involved their captains, the head honchos, as in, “This is your captain speaking.”

The cockpit of a Virgin Atlantic Airbus A340

The researchers sent three randomly selected groups of Virgin Atlantic captains either (a) information about average fuel efficiency on the flights the captain made in the previous month, (b) the same personalized information as group (a) plus personalized targets for the coming month, or (c) personalized information and targets, plus an offer to donate 10 GBP (which was worth more pre-Brexit…) to a charity for each target achieved. A fourth set of captains was in the control group and knew the experiment was going on, but didn’t receive a monthly mailing.

Pilots who only received information – group (a) – significantly improved their fuel efficiency, while pilots who received personalized targets improved their fuel efficiency even more, achieving about the same gains as the pilots who could donate savings bonuses to charity.

Perhaps most surprisingly, even pilots in the control group improved their fuel efficiency considerably relative to the months before the study began. The authors speculate that this is an example of the so-called Hawthorne Effect, which suggests that people behave differently when they know they’re being studied.

The adjustments yielded substantial savings, netting the airline more than $5.4 million in fuel savings over the 8-month pilot period. Presumably, this would scale up to about $8 million over a year – not too shabby considering that the company’s profits (before taxes and exceptional items) were around $20 million in the year the study took place. And, since the costs of running the experiment were very low – less than $3,000 to send a couple hundred letters a month – this appears to be an example of a negative-cost abatement strategy. The authors calculate approximately -$250 per ton of CO2 (yes, negative, since the company saved money on net).
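The negative cost-per-ton figure is just (program cost − fuel savings) divided by tons of CO2 avoided. Here is that arithmetic as a quick sketch. The savings and mailing-cost figures come from the numbers above; the tonnage is an invented placeholder (the paper reports its own estimate), so only the sign and the mechanics matter:

```python
# Back-of-envelope for a "negative cost" abatement measure.
fuel_savings = 5_400_000   # $ saved over the 8-month study (from the text)
program_cost = 8 * 3_000   # $ upper bound on mailing costs (from the text)
tons_co2 = 21_500          # hypothetical tons of CO2 avoided, NOT the paper's number

cost_per_ton = (program_cost - fuel_savings) / tons_co2
print(round(cost_per_ton))  # negative: the airline saves money on net
```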

In my mental model, pilots are super-humans. They’re better than super-computers, and could probably even beat Garry Kasparov at chess. In other words, they always, always make the right decisions. This might be what my psyche requires to get on a plane, plus a function of my being a teenager when the movie Top Gun came out.


So, what adjustments did these super-humans make to save fuel? And, do we really want them to be thinking about fuel efficiency and not MY SAFETY? As it turns out, most of the improvements involved simply – ahem – following the rules.

For example, when they’re taxiing, pilots are usually supposed to turn off half of the engines (e.g., one on a two-engine plane and two on a four-engine plane). Before the experiment, pilots did this on 35% of the flights, and after they knew they were being observed, this increased to 51%. Other adjustments involved following advice from air traffic control about more efficient routes or using real-time information about the baggage onboard to adjust fuel levels.

So, if even super-human pilots can be nudged into saving fuel, can we expect to find lots of similar opportunities across other industries? Think of the energy inputs controlled by power plant operators, truck drivers, or building supervisors. I’m personally optimistic, but two points temper my enthusiasm:

  • Nudging people to do things they weren’t doing before likely imposes additional costs. (Hunt Allcott and Judd Kessler explore this point in more detail here.) So, even if the cost of sending letters to the pilots is low, there may be other, unobserved costs. For example, whatever extra time it takes the pilots to update the fuel calculations is time they were previously spending doing something else. And, even if it was just getting a cup of coffee at the airport, it was something they must have enjoyed, because they chose to do it.

Gosnell, List and Metcalfe thought about this, and surveyed pilots after the experiment. They found that, if anything, pilots reported higher job satisfaction, especially those who met their personal goals and triggered donations to charity. While we can’t rule out additional costs (unobserved costs are notoriously hard to quantify…), that result at least suggests they are small and possibly negative.

  • While the fuel savings are decent sized and statistically significant, they amount to small tweaks to the way an industry does business. But, climate scientists suggest that we need to reduce GHG emissions by around 80% to avoid dramatic disruptions, which we can’t do with only small tweaks. Every little bit certainly helps, but we can’t stop looking for more fundamental changes within sectors. (The Wall Street Journal had a recent piece on electric and hybrid planes, which still sound a bit futuristic, but you never know.)

As the researchers point out, providing feedback to the pilots involved large amounts of data (on over 40,000 flights), which were collected, quickly analyzed and sent to the pilots. At the most fundamental level, I see this experiment as an example of how using lots of personalized, high frequency data can give valuable feedback to decision-makers. Let’s hope more companies take similar opportunities to find savings this way – it’s in their best interest, as well as the climate’s.


Time to Unleash the Carbon Market?

What’s a ton of carbon (dioxide equivalent) worth? Not much if you ask the world’s carbon markets. The graph below summarizes prices and quantities covered by existing carbon emissions trading programs (green) and carbon taxes (blue).  Nearly all carbon market prices are below $13/ton.


          Source: State and trends of carbon pricing 2015

These low carbon prices have been making headlines, particularly in the context of the two largest carbon trading programs: Europe’s Emissions Trading Scheme (ETS) and California’s GHG emissions trading program. In California last month, the carbon allowance auction price hit the floor of $12.73, with only 11 percent of the 77.75 million allowances up for sale finding a willing buyer. European allowance prices have averaged around €6  in the first half of 2016, far below the €30 or more needed to encourage a shift away from coal-fired electricity generation.

The European lawmaker in charge of overhauling the EU ETS post-2020 has compared a carbon market without a price to “a car without an engine”. I would spin this automotive analogy a little differently. As far as I can tell, these carbon markets have a working engine. We’re just not allowing them enough room to drive.

Here in California, the GHG emissions trading program (covering 85 percent of the state’s GHG emissions) has been cast in a supporting role. The updated scoping plan projects that over 70 percent of emissions abatement required under the 2020 target will be driven by “complementary measures” (e.g. mandated investments in low carbon technologies) rather than the permit price.  Once you factor in offsets and the potential for emissions leakage and reshuffling, there’s not much work left for the carbon market to do.

In Europe the story is a little different. Because the ETS is less comprehensive (covering approximately 45 percent of emissions), many complementary measures are designed to tap abatement potential that lies outside the reach of the carbon market. But there are also important prescriptive measures mandating emissions reductions that fall within the scope of the EU ETS. To put this into some perspective, the value of interventions (i.e. subsidies, feed-in tariffs, etc.) designed to accelerate investments in renewable energy has significantly exceeded the market value of emissions allowances in recent years (thanks to Carolyn Fischer for highlighting this fact).

Prescriptive policies come at a cost

This preference for using prescriptive policies – rather than market mechanisms – to coordinate abatement helps explain why carbon prices are so low. Some simple graphs summarize the basics behind this cause and effect.

In the cartoon graph below, each colored block represents a different abatement activity (e.g. coal-to-gas fuel switching, renewable energy investments,  energy  efficiency improvements, etc.). Think Sesame Street meets the McKinsey curve. The width of the block measures achievable emissions reductions. The height of the blocks measures the cost per ton of emissions reduced.


In this cartoon cap-and-trade story, suppose baseline emissions are 200 and policy makers are seeking a 25% reduction. If we rely entirely on a permit market to get us there, we’d allocate 150 permits and let the market figure out where the 50 units of abatement will come from. An efficient market would drive investment in the lowest-cost options: A + B + 1/2 C. The total abatement cost incurred to meet the target would be (20 x $10) + (20 x $20) + (10 x $50) = $1100. The market clearing price (and the marginal abatement cost per ton) would be $50.

Now imagine that, in addition to the permit market, complementary measures are introduced to mandate deployment of options D and E. These mandates take us 80% of the way towards meeting the emissions target. The role of the carbon market has been seriously diminished – we need only 10 more units of abatement to hit the target.


Under this scenario, the carbon market will incentivize investment in 10 units of A. The permit price drops to $10. The total cost of meeting the emissions target rises to (10 x $10) + (20 x $100) + (20 x $150) = $5100. And we wring our hands about low carbon prices and broken carbon markets.
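For the arithmetically inclined, the two cartoon scenarios can be checked in a few lines of code. These are just the toy block widths and costs from the figures above, not a real abatement model:

```python
# Cartoon abatement options from the text: name -> (tons available, $/ton).
blocks = {"A": (20, 10), "B": (20, 20), "C": (20, 50),
          "D": (20, 100), "E": (20, 150)}

def least_cost(options, target):
    """Fill the abatement target from the cheapest options first."""
    total, remaining = 0, target
    for name, (tons, cost) in sorted(options.items(), key=lambda kv: kv[1][1]):
        used = min(tons, remaining)
        total += used * cost
        remaining -= used
        if remaining == 0:
            break
    return total

# Market-only scenario: 50 tons of abatement, cheapest first.
print(least_cost(blocks, 50))        # (20*$10) + (20*$20) + (10*$50) = 1100

# Mandate scenario: D and E are required (40 expensive tons), leaving the
# market to supply the last 10 tons from A.
mandated = 20 * 100 + 20 * 150       # $5000
market = least_cost({"A": blocks["A"]}, 10)  # 10 * $10 = $100
print(mandated + market)             # 5100
```

Same target, nearly five times the cost: the mandates crowd the cheap options out of the market.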

Of course, this cartoon picture omits lots of real-world complexities (see this important EI paper for a more detailed analysis of California’s abatement supply and allowance demand). But it illustrates two real-world considerations. First, when complementary measures mandate relatively expensive abatement options, the carbon price we observe in the market will not reflect the marginal cost of reducing emissions. Second, a reliance on complementary measures to reduce emissions can significantly drive up the costs of hitting a given emissions target.

In California and in Europe, there is growing evidence that low allowance prices in the carbon market belie much higher abatement costs associated with complementary policies. For example, this paper estimates that the California Solar Initiative delivered emissions reductions at a cost of $130 – $196 per metric ton of CO2. California’s LCFS credit price (which reflects the marginal incentive to reduce a metric ton of CO2e) is currently averaging around $120 per metric ton of CO2. In Europe, researchers estimate that the implicit costs of renewable energy targets per metric ton of CO2 are on the order of hundreds of euros for solar (and for wind in some locations).

Time to unleash the carbon markets?

Looking out past 2020, more ambitious targets are being set and the process of charting a course to meet these targets is now underway.  This could be a turning point for carbon markets. How heavily are we going to lean on prescriptive policies versus carbon markets to meet these future emissions abatement goals?  If the increased stringency of future emissions targets is met with increasingly aggressive mandates and measures, we may be signing up for another round of low carbon prices.

I’m not suggesting we should leave *all* of the driving to the carbon markets. There are good reasons for complementing carbon markets with some truly complementary policies and mandates (some of which are fleshed out here). But there are also costs associated with keeping the carbon market mechanism on a tight leash while chasing emissions reductions with prescriptive mandates and programs (see, for example, Jim’s recent post here). Right now, carbon markets are hamstrung by a growing cacophony of policies that drive allowance prices down. If we want to see carbon markets really work, we need to give them more work to do.




Is Electricity Pricing Different from “Real Markets”? Should It Be?

“No company in a real market would ever price that way.”  If you’ve discussed electricity pricing much, you’ve surely heard this said by a person opposed to one retail tariff or another.  In almost every instance, however, the claim is both incorrect and irrelevant.

Incorrect, because firms in unregulated markets are constantly experimenting with pricing. Whether it’s fixed charges, increasing-block pricing, decreasing-block pricing, demand charges, or even exit fees, there is something analogous in the unregulated economy.

Irrelevant, because the structure of providing grid services – a monopolist grid operator that has to assure second-by-second network-wide balancing across all transactions — has no analog in the unregulated sectors. We’ll get back to relevance.

But first how about a fun game of Name That Market Pricing Practice?

I give you the electricity price structure and you come up with the unregulated market that has a similar pricing model.  But don’t peek at the line below each structure where my suggested answers are.

We’ll start with an easy one.

Fixed Charges: 

The view that a consumer should have to pay only for the bits s/he uses is common. But so is pricing that violates it. There are the print and web-based media companies that charge a fixed subscription fee to read as much or as little as you like. Amazon Prime shipping (and other services bundled with it) carries a single fixed annual fee. Rental car rates are generally a fixed daily charge with some free mileage, and usually a charge for additional miles beyond that. The Zipcar model is a fixed annual fee plus a per-hour charge. Gyms charge for membership that covers some basic activities, but then charge extra for certain classes, training or other add-ons.

Easy and fun, huh? Ok, how about a slightly more challenging one?

Exit fees:

Cell phone contracts were the obvious example, but those contracts are changing.  Markets evolve.  But not always in the same direction.  Try paying off your mortgage early and you are likely to be hit with a pre-payment penalty, that is, an exit fee.  Cable television, internet service, and home security services all have exit fees.  Many students in business or law school have some part of their tuition paid by their employer, but if they don’t return to work for that company for X years they have to pay back the tuition subsidy when they exit.


Now for something tougher.

Increasing-Block Pricing (the price for additional units of a good rises as you buy more):

The fare to fly San Francisco to Boston may be $600 if you want 31 inches of legroom, but if you want 34 inches, about 10% more legroom (and no extra pretzels or luggage), that will be an extra $200. The practice is simple price discrimination; the people who most value the extra legroom have a higher willingness to pay overall. Sign up for Dropbox and they will give you 2 GB of storage for free. If you want more, you’ll have to pay. That additional charge for rental car mileage beyond the bundled miles is increasing-block pricing.
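Increasing-block pricing is easy to express as a little bill calculator. The tiers below are hypothetical, loosely inspired by the Dropbox example (a free block, then a per-unit charge):

```python
def tiered_bill(units, tiers):
    """Increasing-block bill. `tiers` is a list of (block_size, price) pairs,
    cheapest block first; a block_size of None means 'all remaining units'."""
    total = 0
    for size, price in tiers:
        block = units if size is None else min(units, size)
        total += block * price
        units -= block
        if units <= 0:
            break
    return total

# Hypothetical tiers: the first 2 units free, every unit after that 10 cents.
print(tiered_bill(5, [(2, 0), (None, 10)]))  # 30 (cents)
```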


In case that wore you out, here are a few softballs.

Decreasing-Block Pricing (the price for additional units of a good declines as you buy more):  

Too many examples to list here. Any quantity discount qualifies.

Minimum Bills:

Call a plumber or an electrician and the first hour is likely included in the $100+ charge for showing up. If they can fix your problem in 20 minutes, you still pay the minimum bill.  Many restaurants have a minimum charge per person.

Time-Varying Pricing:  

Ski resorts (cost more on weekends), Uber (surge pricing), strawberries (by season), theater tickets (cost more on weekends), baseball tickets (many teams charge more for big games), restaurants (lunch vs. dinner, and day of week at some).   It’s hard to get through a day without paying a price that varies with time.  And to the person who said “those aren’t necessities, like electricity”, take a look at housing in a college town, where rents drop in May and rise in August.


Have you caught your breath?  Ready to stretch your brain?

Demand Charges (a fee based on the customer’s highest rate of usage during a period):

For the most part, demand charges are just highly imperfect approximations to time-varying pricing. This has become clearer with the many recent proposals for “demand charges” that apply only to specific time blocks. They may be simpler than true dynamic pricing, though I’ve argued they probably aren’t in most cases, but they are usually attempting to price the same variation. So, many of the answers to time-varying pricing apply here. But there is at least one interesting example of something close to a classic demand charge, really intended to price customer-specific peak usage: cloud computing charges, such as Amazon’s server pricing, where the charge increases to account for a period of heavy demand on a company’s server.
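To make the “imperfect approximation” point concrete, here is a toy comparison of a demand-charge bill and a time-varying bill over the same load profile. All the rates and loads are invented for illustration:

```python
load = [2, 3, 10, 4]                   # kW drawn in each one-hour period (invented)

# Demand charge: a flat energy rate plus a fee on the single highest kW.
energy_rate, demand_rate = 0.10, 5.00  # $/kWh and $/kW of peak demand (invented)
demand_bill = sum(load) * energy_rate + max(load) * demand_rate

# Time-varying pricing prices each period's energy separately instead.
tou_rates = [0.08, 0.10, 0.40, 0.12]   # $/kWh by period (invented)
tou_bill = sum(kw * rate for kw, rate in zip(load, tou_rates))

# The demand charge punishes the customer's peak no matter when it occurs;
# the time-varying rate only punishes usage in the expensive period.
print(demand_bill, tou_bill)
```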

In fact, buried in the many server payment options Amazon offers are examples of practically every type of pricing you can imagine.

And to finish up, how about the ultimate challenge?

Net metering – (a customer delivering electricity to the grid is credited at the same rate they are charged when they take electricity from the grid):  

OK, on this one, I’m pretty stumped. Some colleagues and I spent part of a long car ride last week trying to think of a market in which a seller of a good buys units of that same good from small retail customers and pays them the retail price.  The closest we could come up with is a customer buying items from store A and then returning them to store B for full retail price by claiming they were bought at store B.   Hmmm…not a great model.


You may have noticed that many of these real market pricing policies are very unpopular with customers.  In real markets firms occasionally exercise market power, charge more to a customer who really needs the product, take advantage of consumer misinformation or myopia, or just make a lot of money by selling something that has become very scarce.

Nearly everyone hates the exit fees on cable contracts and the exorbitant charges for a little more legroom on a long flight.  Many people bristle at having to pay for all the cable channels when they only watch a few of them, or paying a monthly gym membership fee, at least once they’ve discovered they really aren’t going to be there every morning at 6am.  And resistance to the rents in Bay Area and other housing markets is spurring new policies to make these less like real markets.  So, while there are real market analogs to nearly all electricity pricing models, that is hardly a justification for using them in a regulated setting.

Likewise, the absence of a close market analogy isn’t an argument against an approach.   Delivering electricity is not like services that are sold in real markets.  The transmission and distribution grids are natural monopolies, where it is more efficient to have one system used by all, rather than every seller building their own set of wires to deliver their own electricity.  And customers want the reliability value of that pooled network, which enables one generating source to instantaneously fill in for another if a gas plant suddenly shuts down, or a cloud passes over solar panels, or the wind stops blowing, or a tree falls on a transmission line.

But what makes a natural monopoly natural is that the cost of adding one more customer is lower than the overall average cost per customer.  That means that the attractive notion of cost causality – that Joe Bob Customer is responsible only for the costs that are caused by adding him to the grid – won’t generate enough total revenue to pay for the whole system.   Somebody has to pay more to cover the costs.  The array of prices that policy makers, utilities, and other interested parties have cooked up are an attempt to cover costs, follow cost causality, be fair to customers, help lower-income households, and be environmentally friendly, among other goals.
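The revenue-shortfall logic is simple enough to put in a few lines. The numbers here are invented; the point is just that pure cost-causality pricing recovers only marginal costs and leaves the shared fixed cost unpaid:

```python
# A toy grid illustrating why strict cost causality underfunds a natural monopoly.
fixed_cost = 1_000_000  # $ to build the shared wires, paid once (invented)
marginal_cost = 50      # $ incremental cost of hooking up one more customer (invented)
customers = 10_000

# Charge each customer only the cost he "causes" (his marginal cost):
revenue = customers * marginal_cost
total_cost = fixed_cost + customers * marginal_cost

shortfall = total_cost - revenue
print(shortfall)        # exactly the fixed cost: somebody has to pay more
```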

In real markets, companies cook up pricing to maximize profits and… that’s it. There are many things done by the government, or under government regulation, that wouldn’t be financed the same way, or possibly done at all, in the private sector: national defense, local policing, disease control, environmental protection, free K-12 education, and consumer protection, to name just a few. Some private-sector ideas can be very valuably applied in these areas, but almost no one would say that the fundamental organization of these activities should be driven by a private-sector model.

So, let’s continue debating the pros and cons of the pricing alternatives in the rapidly-changing electricity world, but let’s do it without pretending that “real companies don’t price that way” is a useful contribution to the discussion.  Whatever the model, there is likely some real company that does price that way, but who cares.

Tweet me your “real market” analogs of electricity pricing @BorensteinS


Do Energy Efficiency Investments Deliver During Crunch Time?

(Today’s post is co-authored with Judson Boomhower, who recently received his Ph.D. at Berkeley where he was a graduate student researcher at the Energy Institute and is now a post-doc at Stanford)

Along with everyone else in Berkeley, we’ve enjoyed watching the home-team Golden State Warriors pull out comeback after miraculous comeback on their way to the NBA Finals. Has anyone else watched Steph Curry and Klay Thompson catch fire at just the right time and thought, “this team could really teach us something about energy efficiency policy?”

Photos (1 and 2) by Keith Allison, Creative Commons License BY-SA 2.0

Crunch time in electricity markets comes in those few highest-demand hours each year when generation is operating at full capacity. During these ultra-peak hours there is little ability to further increase supply, so demand reductions are extremely valuable.

This feature of electricity markets is well known, yet most analyses of energy-efficiency policies completely ignore timing. For example, when the Department of Energy considers new energy-efficiency standards, it focuses on total energy savings without regard to when those savings occur. With a few notable exceptions, mostly from here in California, there is surprisingly little attention, from both policymakers and the academic literature, to how the value of energy efficiency varies over time.

We take on this issue in a new Energy Institute working paper, available here. Our evidence comes from Southern California Edison’s residential air conditioner program. We use anonymized hourly smart-meter data from 9,700 rebate recipients to estimate how electricity savings vary across months-of-the-year and hours-of-the-day. As the figure below shows, electricity savings tend to occur between June and September, and between about 3pm and 9pm.

Electricity Savings

As a side note to duck chart aficionados, this savings profile differs somewhat from engineering models, which predict more savings earlier in the afternoon and in non-summer months. As more solar generation comes online, there is growing concern about meeting the steep evening ramp. Our estimates suggest that air conditioning investments deliver more savings than expected during these evening hours, and thus could become more valuable as renewables penetration increases.

These savings are highly correlated with the value of electricity. The figure below shows the value of electricity by hour-of-day in California for February and August, in dollars per megawatt-hour. We include wholesale electricity prices and the “resource adequacy” payments that generators receive to make sure they will be available when demand is high. The different data series in each panel show different methods for allocating resource adequacy contract prices to high load hours. For example, with “Top Hour” we assign the entire capacity value to the highest load hour-of-day in each month.
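As a sketch of the “Top Hour” method described above, the month’s entire capacity value gets stacked onto the single highest-load hour-of-day. The loads, prices, and capacity value below are invented for illustration:

```python
load = [20, 25, 40, 35]               # MW by hour-of-day (toy profile)
wholesale = [30.0, 35.0, 60.0, 50.0]  # $/MWh wholesale price by hour (invented)
capacity_value = 200.0                # $/MWh of resource adequacy value (invented)

# "Top Hour" allocation: all capacity value goes to the peak-load hour.
top_hour = load.index(max(load))
value = [p + (capacity_value if h == top_hour else 0.0)
         for h, p in enumerate(wholesale)]
print(value)                          # [30.0, 35.0, 260.0, 50.0]
```

Other allocation rules would spread the capacity value across several high-load hours instead; the figures above show how little the big picture changes.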

Wholesale Electricity Prices and Capacity Values

Regardless of exactly how we allocate resource adequacy payments, these figures make clear that summer afternoons are crunch time in California electricity markets. Unlike natural gas, electricity cannot be cost-effectively stored even for short periods, so during these ultra-peak periods there is nothing preventing wholesale prices and capacity values from rising sky high.

And this is exactly when air conditioning investments yield their largest electricity savings. Efficient air conditioners don’t save electricity in the middle of the night or during the winter, but electricity is less valuable at these times anyway. Overall, we estimate that accounting for timing increases the value of air conditioner investments by 50% relative to a naive calculation that ignores timing.
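The timing premium itself is just value-weighted savings compared with savings valued at the average price. Here is a toy version with invented hourly profiles (the 50% figure in the paper comes from the real data, not from these numbers):

```python
savings = [0.0, 1.0, 4.0, 1.0]      # MWh saved in each hour block (invented)
price = [20.0, 40.0, 120.0, 60.0]   # $/MWh in each hour block (invented)

# Value the savings hour by hour, then naively at the average price.
timed_value = sum(s * p for s, p in zip(savings, price))
naive_value = sum(savings) * (sum(price) / len(price))

premium = timed_value / naive_value - 1
print(f"{premium:.0%}")             # positive: savings land in the pricey hours
```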

How does this compare to other energy-efficiency investments?  So glad you asked. We next brought in engineering-based savings profiles from the E3 calculator for a whole variety of energy-efficiency investments and calculated the timing premium for California and for several other major U.S. markets. The table below shows the results.

Timing Premiums for Energy-Efficiency Investments


Overall, there is a remarkably wide range of value across investments. Residential air conditioning has a 35%+ average premium across markets. The premium is similar whether we use our econometric estimates (first row), or the engineering estimates (second row), reflecting the fact that, despite some interesting differences, both sets of estimates indicate large savings during high-value summer peak hours.

Other investments also gain value when timing is considered. Non-residential heating and cooling investments enjoy a 20-30% timing premium, reflecting the relatively high value of electricity during the day when these investments yield savings. This is particularly true in CAISO and ERCOT, but also true in NYISO.

Refrigerator and freezer investments have the lowest timing premium. This makes sense because savings from these investments are only weakly correlated with system load. Lighting also does surprisingly poorly, reflecting that LEDs save electricity mostly during the winter and at night, when electricity tends to be less valuable.

We hope our paper will help move the energy efficiency discussion away from total savings and toward total value. Doing so will require more rigorous ex post analyses of energy savings based on real market data. It will also require integrating these savings estimates with prices from wholesale and capacity markets, so that the energy efficiency portfolio can be rebalanced toward investments that save energy in more valuable hours.

Of course, these premiums are not everything.  In evaluating energy-efficiency policies it is still important to evaluate all the costs and benefits. The numbers above don’t say anything about how much these different types of programs cost, or about how large ex post savings are relative to ex ante estimates, or about how many participants are inframarginal (i.e., “free-riders” in the energy efficiency literature). We’ve discussed these issues in previous blog posts here, here, and here. But our paper makes a strong case that, when calculating benefits, it is important to account for timing.

More generally, our paper highlights the power of smart-meter data. The econometric analysis we performed for residential air conditioning would have been impossible just a few years ago, but today more than 40% of U.S. residential electricity customers have smart meters, up from less than 2% in 2007. We are just scratching the surface of what this flood of new data makes possible: smarter, more evidence-based energy-efficiency policies that are better integrated with market priorities.
