Is Electricity Pricing Different from “Real Markets”? Should It Be?

“No company in a real market would ever price that way.”  If you’ve discussed electricity pricing much, you’ve surely heard this said by a person opposed to one retail tariff or another.  In almost every instance, however, the claim is both incorrect and irrelevant.

Incorrect, because firms in unregulated markets are constantly experimenting with their pricing.  Whether it’s fixed charges, increasing-block pricing, decreasing-block pricing, demand charges, or even exit fees, there is something analogous in the unregulated economy.

Irrelevant, because the structure of providing grid services – a monopolist grid operator that has to ensure second-by-second network-wide balancing across all transactions – has no analog in the unregulated sectors. We’ll get back to relevance.

But first, how about a fun game of Name That Market Pricing Practice?

I give you the electricity price structure and you come up with the unregulated market that has a similar pricing model.  But don’t peek at the line below each structure where my suggested answers are.

We’ll start with an easy one.

Fixed Charges: 

The view that a consumer should have to pay only for the bits s/he uses is common.  But so is pricing that violates it.  There are the print and web-based media companies that charge a fixed subscription fee to read as much or as little as you like.  Amazon Prime shipping (and other services bundled with it) carries a single fixed annual fee.  Rental car rates are generally a fixed daily charge with some free mileage, and usually a charge for additional miles beyond that.  The Zipcar model is a fixed annual fee plus a per-hour charge.  Gyms charge for membership that covers some basic activities, but then charge extra for certain classes, training, or other add-ons.

Easy and fun, huh? Ok, how about a slightly more challenging one?

Exit fees:

Cell phone contracts were the obvious example, but those contracts are changing.  Markets evolve.  But not always in the same direction.  Try paying off your mortgage early and you are likely to be hit with a pre-payment penalty, that is, an exit fee.  Cable television, internet service, and home security services all have exit fees.  Many students in business or law school have some part of their tuition paid by their employer, but if they don’t return to work for that company for X years they have to pay back the tuition subsidy when they exit.


Now for something tougher.

Increasing-Block Pricing (the price for additional units of a good rises as you buy more):

The fare to fly San Francisco to Boston may be $600 if you want 31 inches of legroom, but if you want 34 inches, about 10% more legroom (and no extra pretzels or luggage), that will be an extra $200.  The practice is simple price discrimination; the people who most value the extra legroom have a higher willingness to pay overall.  Sign up for Dropbox and they will give you 2 GB of storage for free.  If you want more, you’ll have to pay.  That additional charge for rental car mileage beyond the bundled miles is increasing-block pricing.


In case that wore you out, here are a few softballs.

Decreasing-Block Pricing (the price for additional units of a good declines as you buy more):  

Too many examples to list here.  Any quantity discount qualifies.

Minimum Bills:

Call a plumber or an electrician and the first hour is likely included in the $100+ charge for showing up. If they can fix your problem in 20 minutes, you still pay the minimum bill.  Many restaurants have a minimum charge per person.

Time-Varying Pricing:  

Ski resorts (cost more on weekends), Uber (surge pricing), strawberries (by season), theater tickets (cost more on weekends), baseball tickets (many teams charge more for big games), restaurants (lunch vs. dinner, and day of week at some).   It’s hard to get through a day without paying a price that varies with time.  And to the person who said “those aren’t necessities, like electricity”, take a look at housing in a college town, where rents drop in May and rise in August.

 

Have you caught your breath?  Ready to stretch your brain?

Demand Charges (a fee based on the customer’s highest rate of usage during a period):

For the most part, demand charges are just highly imperfect approximations to time-varying pricing.  This has become clearer with the many recent proposals for “demand charges” that apply only to specific time blocks.  They may be simpler than true dynamic pricing, though I’ve argued they probably aren’t in most cases, but they are usually attempting to price the same variation.  So, many of the answers for time-varying pricing apply here.  But there is at least one interesting example of something close to a classic demand charge, really intended to price customer-specific peak usage: cloud computing charges, such as Amazon’s server pricing, where the charge increases to account for a period of heavy demand on a company’s server.
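
For readers who want the mechanics, here is a minimal sketch of how a classic demand charge enters a bill (the rates and usage numbers are made up for illustration):

```python
# Minimal demand-charge bill calculation; all rates and usage are made up.
energy_rate = 0.10   # $/kWh (hypothetical)
demand_rate = 15.00  # $/kW of peak demand per billing period (hypothetical)

hourly_kw = [2.0, 1.5, 6.8, 3.2, 1.1]  # customer's average kW in each hour

energy_charge = energy_rate * sum(hourly_kw)  # each hour's average kW equals kWh used
demand_charge = demand_rate * max(hourly_kw)  # billed on the single highest hour

print(f"Energy: ${energy_charge:.2f}, Demand: ${demand_charge:.2f}")
```

Because the demand charge keys off the maximum, shaving one peak hour cuts the bill far more than saving the same kilowatt-hours off-peak, which is the crude sense in which it approximates time-varying prices.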

In fact, buried in the many server payment options Amazon offers are examples of practically every type of pricing you can imagine.

And to finish up, how about the ultimate challenge?

Net Metering (a customer delivering electricity to the grid is credited at the same rate they are charged when they take electricity from the grid):

OK, on this one, I’m pretty stumped. Some colleagues and I spent part of a long car ride last week trying to think of a market in which a seller of a good buys units of that same good from small retail customers and pays them the retail price.  The closest we could come up with is a customer buying items from store A and then returning them to store B for full retail price by claiming they were bought at store B.   Hmmm…not a great model.

 

You may have noticed that many of these real market pricing policies are very unpopular with customers.  In real markets firms occasionally exercise market power, charge more to a customer who really needs the product, take advantage of consumer misinformation or myopia, or just make a lot of money by selling something that has become very scarce.

Nearly everyone hates the exit fees on cable contracts and the exorbitant charges for a little more legroom on a long flight.  Many people bristle at having to pay for all the cable channels when they only watch a few of them, or at paying a monthly gym membership fee, at least once they’ve discovered they really aren’t going to be there every morning at 6am.  And resistance to the rents in the Bay Area and other housing markets is spurring new policies to make these less like real markets.  So, while there are real market analogs to nearly all electricity pricing models, that is hardly a justification for using them in a regulated setting.

Likewise, the absence of a close market analogy isn’t an argument against an approach.   Delivering electricity is not like services that are sold in real markets.  The transmission and distribution grids are natural monopolies, where it is more efficient to have one system used by all, rather than every seller building their own set of wires to deliver their own electricity.  And customers want the reliability value of that pooled network, which enables one generating source to instantaneously fill in for another if a gas plant suddenly shuts down, or a cloud passes over solar panels, or the wind stops blowing, or a tree falls on a transmission line.

But what makes a natural monopoly natural is that the cost of adding one more customer is lower than the overall average cost per customer.  That means that the attractive notion of cost causality – that Joe Bob Customer is responsible only for the costs that are caused by adding him to the grid – won’t generate enough total revenue to pay for the whole system.   Somebody has to pay more to cover the costs.  The array of prices that policy makers, utilities, and other interested parties have cooked up are an attempt to cover costs, follow cost causality, be fair to customers, help lower-income households, and be environmentally friendly, among other goals.

In real markets, companies cook up pricing to maximize profits and…that’s it.  There are many things done by the government, or under government regulation, that wouldn’t be financed the same way, or possibly done at all, in the private sector: national defense, local policing, disease control, environmental protection, free K-12 education, and consumer protection, to name just a few.  Some private sector ideas can be very valuably applied in these areas, but almost no one would say that the fundamental organization of these activities should be driven by a private-sector model.

So, let’s continue debating the pros and cons of the pricing alternatives in the rapidly-changing electricity world, but let’s do it without pretending that “real companies don’t price that way” is a useful contribution to the discussion.  Whatever the model, there is likely some real company that does price that way, but who cares.

Tweet me your “real market” analogs of electricity pricing @BorensteinS


Do Energy Efficiency Investments Deliver During Crunch Time?

(Today’s post is co-authored with Judson Boomhower, who recently received his Ph.D. at Berkeley, where he was a graduate student researcher at the Energy Institute, and is now a post-doc at Stanford.)

Along with everyone else in Berkeley, we’ve enjoyed watching the home-team Golden State Warriors pull out comeback after miraculous comeback on their way to the NBA Finals. Has anyone else watched Steph Curry and Klay Thompson catch fire at just the right time and thought, “this team could really teach us something about energy efficiency policy?”

Photos (1 and 2) by Keith Allison, Creative Commons License BY-SA 2.0

Crunch time in electricity markets comes during those few highest-demand hours each year when generation is operating at full capacity. During these ultra-peak hours there is little ability to further increase supply, so demand reductions are extremely valuable.

This feature of electricity markets is well known, yet most analyses of energy-efficiency policies completely ignore timing. For example, when the Department of Energy considers new energy-efficiency standards, it focuses on total energy savings without regard to when those savings occur. With a few notable exceptions, mostly from here in California, there is surprisingly little attention, from policymakers or in the academic literature, to how the value of energy efficiency varies over time.

We take on this issue in a new Energy Institute working paper, available here. Our evidence comes from Southern California Edison’s residential air conditioner program. We use anonymized hourly smart-meter data from 9,700 rebate recipients to estimate how electricity savings vary across months-of-the-year and hours-of-the-day. As the figure below shows, electricity savings tend to occur between June and September, and between about 3pm and 9pm.

Electricity Savings (Econometric Estimates)

As a side note to duck chart aficionados, this savings profile differs somewhat from engineering models, which predict more savings earlier in the afternoon and in non-summer months. As more solar generation comes online, there is growing concern about meeting the steep evening ramp. Our estimates suggest that air conditioning investments deliver more savings than expected during these evening hours, and thus could become more valuable as renewables penetration increases.

These savings are highly correlated with the value of electricity. The figure below shows the value of electricity by hour-of-day in California for February and August, in dollars per megawatt-hour. We include wholesale electricity prices and the “resource adequacy” payments that generators receive to make sure they will be available when demand is high. The different data series in each panel show different methods for allocating resource adequacy contract prices to high load hours. For example, with “Top Hour” we assign the entire capacity value to the highest load hour-of-day in each month.
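
To make the “Top Hour” method concrete, here is a minimal sketch of the allocation logic (the load profile and capacity payment are invented; this is not the code behind our figures):

```python
# Invented numbers; illustrates the "Top Hour" allocation logic only.
ra_value = 5000.0  # resource adequacy payment, $/MW-month (hypothetical)
days_in_month = 30

# Average system load (MW) by hour-of-day for one month (stylized profile)
avg_load = {h: 20000 + 8000 * max(0, 1 - abs(h - 17) / 6) for h in range(24)}

top_hour = max(avg_load, key=avg_load.get)  # hour-of-day with highest load

# Assign the entire month's capacity value to that one hour-of-day,
# spread across its daily occurrences, yielding a $/MWh adder.
adder = {h: 0.0 for h in range(24)}
adder[top_hour] = ra_value / days_in_month

print(f"Hour {top_hour}: capacity adder ${adder[top_hour]:.2f}/MWh")
```

The other allocation methods spread the same capacity value over more high-load hours, which flattens, but does not eliminate, the summer-afternoon spike, consistent with the pattern in the figures below.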

Wholesale Electricity Prices and Capacity Values (February and August, CAISO)

Regardless of exactly how we allocate resource adequacy payments, these figures make clear that summer afternoons are crunch time in California electricity markets. Unlike natural gas, electricity cannot be cost-effectively stored even for short periods, so during these ultra-peak hours there is nothing preventing wholesale prices and capacity values from rising sky high.

And this is exactly when air conditioning investments yield their largest electricity savings. Efficient air conditioners don’t save electricity in the middle of the night or during the winter, but electricity is less valuable at these times anyway. Overall, we estimate that accounting for timing increases the value of air conditioner investments by 50% relative to a naive calculation that ignores timing.
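
The 50% figure comes from comparing value-weighted savings to savings valued at the average price. A stripped-down version of that calculation, with invented numbers rather than our actual estimates:

```python
# Invented numbers; see the working paper for the real estimates.
hourly_savings = [0.0, 0.1, 0.9, 1.2, 0.3]    # kWh saved in each hour
hourly_value   = [30., 35., 120., 250., 60.]  # $/MWh (energy + capacity value)

value_weighted = sum(s * v for s, v in zip(hourly_savings, hourly_value))
avg_price = sum(hourly_value) / len(hourly_value)
naive = sum(hourly_savings) * avg_price  # ignores when savings occur

premium = value_weighted / naive - 1  # units cancel in the ratio
print(f"Timing premium: {premium:.0%}")  # positive when savings land in pricey hours
```

An investment whose savings cluster in high-value hours earns a positive premium; one that saves mostly at night would earn a negative one.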

How does this compare to other energy-efficiency investments?  So glad you asked. We next brought in engineering-based savings profiles from the E3 calculator for a whole variety of energy-efficiency investments and calculated the timing premium for California and for several other major U.S. markets. The table below shows the results.

Timing Premiums for Energy-Efficiency Investments


Overall, there is a remarkably wide range of value across investments. Residential air conditioning has a 35%+ average premium across markets. The premium is similar whether we use our econometric estimates (first row), or the engineering estimates (second row), reflecting the fact that, despite some interesting differences, both sets of estimates indicate large savings during high-value summer peak hours.

Other investments also gain value when timing is considered. Non-residential heating and cooling investments enjoy a 20-30% timing premium, reflecting the relatively high value of electricity during the day when these investments yield savings. This is particularly true in CAISO and ERCOT, but also true in NYISO.

Refrigerator and freezer investments have the lowest timing premium. This makes sense because savings from these investments are only weakly correlated with system load. Lighting also does surprisingly poorly, reflecting that LEDs save electricity mostly during the winter and at night, when electricity tends to be less valuable.

We hope our paper will help move the energy efficiency discussion away from total savings and toward total value. To do this will require more rigorous ex post analyses of energy savings based on real market data. It will also require integrating these savings estimates with prices from wholesale and capacity markets, rebalancing the energy efficiency portfolio toward investments that save energy in more valuable hours.

Of course, these premiums are not everything.  In evaluating energy-efficiency policies it is still important to evaluate all the costs and benefits. The numbers above don’t say anything about how much these different types of programs cost, or about how large ex post savings are relative to ex ante estimates, or about how many participants are inframarginal (i.e., “free-riders” in the energy efficiency literature). We’ve discussed these issues in previous blog posts here, here, and here. But our paper makes a strong case that, when calculating benefits, it is important to account for timing.

More generally, our paper highlights the power of smart-meter data. The econometric analysis we performed for residential air conditioning would have been impossible just a few years ago, but today more than 40% of U.S. residential electricity customers have smart meters, up from less than 2% in 2007. We are just scratching the surface of what is now possible using this flood of new data and its potential to facilitate smarter, more evidence-based energy-efficiency policies that are better integrated with market priorities.


Is US Climate Policy Killing Nuclear Power?

These are strange times for competitive power markets in the United States.  Baseload power plants, many of them nuclear, are reportedly struggling to stay out of the red. About 10 years ago, plants like these were thriving with high wholesale prices set by (then) high natural gas prices. These nukes were such blue chip assets that many consumer groups cried foul about the process of deregulating them. Now, in places like Illinois and Ohio, it is the owners of baseload plants who are, in effect, seeking to return to the warm bosom of cost-based pricing by seeking regulatory approval for long-term contracts at what many see as above-market prices.

Normally, someone with a market-oriented perspective on all of this would simply roll their eyes and point out that it is just one more example of trying to use regulatory restructuring to arbitrage the difference between average (i.e. regulated) and marginal (i.e. market) costs. It’s a game that has been played by both customers and suppliers for much of the last 20 years. Beyond the normal political arguments about maintaining local jobs, that certainly seems to be a large part of what is going on with nukes today.  However, some of these machinations also reveal another trend that illustrates how climate policy is interacting with restructured markets in ways that are counter-productive for both the markets, and for the climate.

Most of the authors on this blog would agree that putting a price on CO2, while not perfect, is a better way to approach climate policy than many of the alternatives currently being tried around the US. Certainly a non-trivial carbon price would be a boost to a deregulated nuclear power station. Whatever your position on the other attributes of nuclear power, it is a zero carbon source of electricity.  A $20/ton CO2 price would translate to about an $8-10/MWh increase in power prices, assuming the marginal plant were a natural gas station emitting roughly 1/2 a ton of CO2 per MWh. For a 1000 MW nuclear plant with a 90% capacity factor, that would translate into almost $80 million in extra annual revenue.
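
The arithmetic behind those figures, as a quick back-of-envelope check:

```python
# Back-of-envelope check of the figures in the paragraph above.
co2_price  = 20.0  # $/ton CO2
gas_rate   = 0.5   # tons CO2 per MWh for the marginal gas plant
price_bump = co2_price * gas_rate  # ~$10/MWh increase in power prices

annual_mwh = 1000 * 0.90 * 8760  # 1000 MW at a 90% capacity factor

print(f"${annual_mwh * price_bump / 1e6:.0f} million per year")  # ~$79 million
```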

However, as almost everyone who follows this blog must know, we don’t have a carbon price in most parts of the country, and even where power plant emissions are capped, prices have been soft, leading some, apparently, to conclude that carbon pricing has failed. California’s price has been supported by a floor set by a reserve price in California’s quarterly allowance auctions. If there aren’t enough buyers at the reserve price, California simply sells fewer allowances. This floor only binds if someone needs to buy allowances in the auction, and that’s looking somewhat shaky at the moment.  If we get to the point where there are already enough allowances in circulation so that the auction is not necessary, the auction floor price will be irrelevant.

Outside of California, therefore, meaningful climate policy will in the near term largely play out through the U.S. EPA’s Clean Power Plan, and various programs that support the development of renewable electricity. Neither of these policies may be good news for supporters of nuclear power.

Let’s start with the Clean Power Plan. One key aspect of this regulation is the flexibility given to states over how to comply.  The two market-based options are the implementation of cap-and-trade (called a mass-based approach by EPA) or a system that would focus on the average emissions rate from power plants within a state.  This latter approach (called a rate-based approach by EPA) looks a lot like other intensity standards, such as the CAFE standards on vehicle fuel efficiency and Low Carbon Fuel Standards for transportation fuels.

Like carbon caps, intensity standards make “dirty” sources look more expensive than cleaner sources of electricity, but unlike caps, intensity standards do this by effectively subsidizing the sale of energy that is better than the standard.  A state can meet its EPA targets by either reducing its output from dirty sources or by increasing output from cleaner sources.  One of the effects of intensity standards that concerns environmental economists is that they do not pass through the cost of pollution to consumer goods (e.g. electricity or gasoline).
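
The incentive difference is easiest to see in effective marginal costs. A minimal sketch using textbook logic and invented numbers (not the model from the paper discussed below):

```python
# Invented numbers; textbook logic, not the paper's actual model.
# Cap-and-trade: every ton pays the permit price tau.
# Rate standard: a plant pays (or is paid) tau only on its distance from
# the benchmark rate sigma, so cleaner-than-sigma plants get a subsidy.

tau   = 20.0  # $/ton implicit carbon price
sigma = 0.6   # benchmark emissions rate, tons CO2/MWh

plants = {"coal": (30.0, 1.0), "gas": (40.0, 0.5), "wind": (0.0, 0.0)}
#         name:   ($/MWh non-carbon cost, tons CO2/MWh)

for name, (cost, rate) in plants.items():
    mc_none = cost                         # no regulation
    mc_cap  = cost + tau * rate            # cap-and-trade
    mc_rate = cost + tau * (rate - sigma)  # rate-based standard
    print(f"{name:4s}: none {mc_none:5.1f}  cap {mc_cap:5.1f}  rate-std {mc_rate:5.1f}")
```

In this toy example, both gas and wind end up with effective marginal costs below their no-regulation levels under the rate standard, which is exactly the mechanism behind the supply curves in the figure below.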

Under the Clean Power Plan, the price effect of intensity standards can be even more dramatic.  According to recent work I’ve done with Stephen Holland, Jon Hughes, and Chris Knittel, if states adopt the rate-based approach, wholesale power prices may not just be lower than they would be under cap and trade; they could easily be lower than they would be under no regulation at all.  This is because the CPP, in addition to increasing the cost of coal plants, will provide additional incentives for states to increase renewable energy production.  The figure below describes three different supply curves for the Western US under cap-and-trade, a rate-based standard, and business as usual.

The dashed green line is the rate standard and the red line assumes no regulation.  The left side of the green line shows a large amount of new renewable investment, which the rate standard subsidizes to the point of giving it a negative marginal cost.  From the blue line you can see that about the same amount of renewables gets built under a cap, but there it is financed by higher electricity prices rather than through the internal subsidy provided by a rate standard.  It’s not just renewables that would get subsidized under a rate-based standard, however; many natural gas plants would also receive some level of implicit subsidy (to make them look cheaper than coal).  This is why the dashed green line is below the red (no regulation) line for most ranges of output; the plants on the right-hand side are mostly natural gas.

Retail prices probably would not come down, as any above-market cost of renewables would have to be paid for through other charges on retail rates, but wholesale prices would show the effect of an influx of zero-marginal-cost renewable energy and subsidized gas output.  Renewables largely come out the same under either caps or rate standards; the difference is in wholesale prices.  The big losers (other than coal plants) would be existing hydro and nuclear facilities that sell power at these lower wholesale prices but for the most part would not be eligible for the implicit subsidies provided by a rate-based standard under the Clean Power Plan.

The details are a bit complicated, but under a rate-based standard “new” sources of zero carbon power can be part of the emissions rate average used for the standard.  Existing sources (including renewables built before 2012) would not count toward the emissions rate averages.  This means generating zero carbon power from existing sources would be substantially rewarded under a mass-based standard but receive no credit under a rate-based standard.

I assume that these provisions were intended to avoid giving additional rewards to plants like nuclear and large hydro for production they were expected to provide anyway.  This is why new nuclear facilities can qualify for benefits under a rate-based approach, but not existing facilities.  This perspective, however, assumes that these plants will in fact be around to produce.  Perversely, the regulations targeting carbon emissions could make that less likely.

This kind of effect is not confined to rate-based standards under the Clean Power Plan, however. We are also starting to see the cumulative effects of the combinations of tax credits and portfolio standards that have been contributing to the rapid expansion of grid-scale renewables in the US. Across the country, renewable portfolio standards are increasingly adding new energy and capacity to systems that are already fully resourced. One effect is a growing glut of capacity and energy in some markets that is depressing wholesale prices. Not coincidentally, renewable mandates and other policies are also likely depressing carbon prices in places like California and Europe. Unlike a carbon price, these renewable policies, through lower wholesale prices, threaten all incumbent generation no matter how clean or dirty that existing generation may be.

And now there are more and more stories about the tenuous position of nuclear power in today’s power markets.  Although fuel costs are low, ongoing fixed costs can be quite high, making early retirement a serious economic option.  If this were simply a story about low natural gas costs, such a trend could just be the market responding exactly as it should, closing expensive plants to make way for newer, more efficient ones.  Natural gas is not the only story, however, when carbon emissions are also considered.  Large-scale retirements of nuclear generation stations would make compliance with the CPP much more difficult.

This is where the choice of tools for combating CO2 emissions makes a big difference.  Not only does the lack of carbon pricing fail to reward nuclear plants; by combating CO2 emissions through renewable energy mandates and intensity standards, we are also actively accelerating the demise of nukes, another zero-carbon resource. For some who object to nuclear for other reasons, this may be the intention, but for climate policy it’s two steps forward, one step back.


Giving Up on Carbon Markets in Favor of a Giant Vacuum in the Sky?

I sat next to a distinguished climate scientist at a recent dinner, who told me point blank that “carbon markets have failed, which means one should give up on market based approaches to reducing emissions”. After the ecologist on my other side had heimliched a poached organic beet from my windpipe, I launched a vicious full frontal attack on said climate scientist’s blue-eyed dreams of a world that only gets 1.5 degrees warmer. We all have our fantasies. In my fantasy world, we aggressively tighten caps or raise carbon taxes to reduce emissions. In his world, we use lots of public funds to develop a vacuum, which sucks all additional carbon out of the atmosphere some time mid-century.

The chorus pronouncing cap and trade as a failure has become louder and is often echoed by my friends in the physical climate community. It’s not a song I enjoy. What’s the supporting evidence? It’s always the same: Prices are too low and emissions reductions have been too small. Well, duh. If you set a loose cap, prices will be low. Then there is grouchy mumbling of manipulation of markets and whining about offsets. Where there’s a will (for more emissions reductions), there’s a higher price. So let’s talk about more emissions reductions. The type of emissions reductions needed for limiting warming to 1.5 degrees Celsius.


My buddies over at Carbon Brief analyzed how many years of current emissions it would take before we blow through the carbon budgets consistent with limiting warming to below 1.5 degrees Celsius. To make a long and super nerdy story short, we will exhaust the budget that gives us a 66% “chance” of staying below the 1.5 degree target within the next 5 years. If you are willing to lower that “chance” to 33%, you might have 16.5 years. This means we need to get to zero emissions as soon as 2021. Yes, fellow economists, I can hear you laughing. This is about as likely as Donald Trump picking Bill McKibben as his VP. We have 2 billion people without access to electricity, explosive growth in energy-consuming durables across the rapidly developing world, and some of the lowest fossil fuel prices in recent memory. Even if you double these timelines and give us 30 years, this economist is highly skeptical that we can get there.
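
The arithmetic is just division. A sketch with round numbers (the budget figures are my rough reconstruction, not Carbon Brief’s exact values):

```python
# Rough reconstruction with round numbers; see Carbon Brief for the real analysis.
annual_emissions = 40.0  # GtCO2 per year, roughly current global emissions

budgets_gt = {  # remaining carbon budgets, GtCO2 (approximate)
    "66% chance of staying below 1.5C": 200.0,
    "33% chance of staying below 1.5C": 660.0,
}

for scenario, budget in budgets_gt.items():
    print(f"{scenario}: {budget / annual_emissions:.1f} years at current emissions")
# ~5 years and ~16.5 years, the numbers quoted above
```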

This is where the vacuum in the sky comes in. The argument by my dining companion was that we should mobilize massive amounts of public (and possibly private) capital to focus most innovation in this space on developing a technology that removes CO2 from the atmosphere. This would prevent us from having to suffer the higher prices from those pesky carbon markets and we can get our 1.5 degree world. This vacuum won’t be free to install or operate of course. You will still want to engage in all mitigation efforts with lower marginal cost than the magical Miele. In fact, the way you would incentivize a carbon vacuum is probably by marrying it to a cap and trade system. To this rational economist and his even more rational friend Jim Sallee (who pointed this out), the vacuum and pricing should be complements, not substitutes.

To be fair, there are some ambitious efforts under way to develop such technologies, and they might just come about. But I am not willing to bet my planet on it. In fact, I think this line of argumentation is just plain reckless. It is the equivalent of arguing that an obese person should continue eating Monte Cristo sandwiches for breakfast, lunch and dinner, since surely weight-loss science will provide a pill that will prevent reasonably bad long-run health consequences. And if you aren’t a fan of geo-engineering, this should make you worry too. Betting the farm on direct capture makes geo-engineering a Plan B (which, you know, is the second letter of the alphabet).

So what do we do? We economists will swallow our pride and admit that we live in a world that will, in most places, not go for pure price-based approaches to reducing emissions. We will put a price on the heads of as many carbon molecules as we can and kill the rest by (gulp) using the least offensive versions of command-and-control policies. We should then get serious about R&D in all sorts of things, including that giant vacuum in the sky. We should do this with or without revenue from a carbon tax or cap and trade. I seriously hope that vacuum works. Because if it does, I am going out and getting myself one of those 1965 Shelby GT350s. That will probably be on my 130th birthday though. What a wonderful world that would be.


The Future of (Not) Driving

We have a momentous event coming up in my household: my son will turn 16 at the end of the month and will – if the DMV gods are agreeable – get his driver’s license. This has sparked a lot of debate in my family about what driving will look like over the next 10-20 years.

My son hopes to strike this pose soon

In short, my son HATES the idea of driverless cars. Imagine – the club he’s been pining to join – drivers – is now threatened with extinction. Perhaps with wishful thinking, he has come up with a lot of theories about why self-driving cars will never take off.

I disagree with him, though I may be indulging in a bit of wishful thinking myself. I find few things more stressful than sitting in the passenger seat with my son at the wheel. His behind-the-wheel instructor says he’s a good driver (I wish she wouldn’t tell him that…), but I have never been quite so focused on everything that could possibly go wrong, and I would rather trust a computer to make the right decision if something does.

Also, I’ve spent enough time in Bay Area traffic jams – where one distracted driver who brakes a little too hard can slow down a whole lane of traffic – to relish the idea of smoothly flowing computer-driven cars. Research seems to back me up: simulations suggest that automated vehicles will likely reduce fuel consumption, and part of that reduction will come from fewer slowdowns due to accidents.

Here’s my son’s theory, which draws on network economics even if he doesn’t use that phrase: as long as there are enough people like him on the road, who actually want to be behind the wheel, driverless cars won’t do much to improve congestion. In the extreme, a mixture of robot-driven and person-driven cars could be worse for congestion than all person-driven. Imagine if Silicon Valley technocrats could send for their favorite Los Angeles sushi and have it delivered by a driverless, and passenger-less, car, thereby adding cars that wouldn’t have been there. Then put those vehicles on the road with the remaining 16-year-old boy drivers, and others with an inner 16-year-old boy, some of whom get a kick out of messing with the automated cars’ sensors to make them brake quickly.

His theory was borne out by the story of the Google car getting stuck at the four-way stop as it waited for other cars to come to a complete stop. But, that doesn’t seem like an unsolvable problem to me – someone just needs to update the algorithm and stress test it versus thrill-seeking drivers.

My son also points out that his online driver’s ed course warned that no one leaves the house thinking they will get in a car accident. So, he thinks people won’t be drawn to driverless cars to protect their own safety. Consistent with this, surveys suggest that most of us live in a Lake Wobegon world and think we’re better than the average driver. This could mean that we all want other people – particularly the drunks, texters and overly aggressive lane-changers – to be in driverless cars, but want control over our own on-road destiny. Given that we buy cars for ourselves and not others, this doesn’t lead to many autonomous car sales.

I try to explain to my son (without using the phrases “opportunity cost” or “consumer surplus”…) that driverless cars will both give us more time and make driving a lot cheaper, so teenagers will eventually find another way to mark the transition to adulthood.

On the “more time” point, think of all the things we can do instead of sitting behind the wheel of the car. With more of us able to be productive remotely, time in the car could be quite valuable.

In terms of the cost of driving, it’s hugely inefficient to have so many of us own a $20,000-plus piece of capital that we use on average 46 minutes per day. The capital depreciates even when we don’t use it because technological change makes newer cars more desirable.

If you could order up an autonomous car only when you needed it, the cost of the capital would be spread over many more people and rides, driving down the cost per ride. So, I explain to my son, you’ll have to really, really like driving to pass up the much cheaper alternative of renting one from the next incarnation of Uber or Lyft. In fact, GM and Lyft recently announced that they will begin testing self-driving taxis on actual roads within a year.
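
A stylized version of that capital-spreading argument (every number invented for illustration):

```python
# Every number here is invented for illustration.
car_price  = 20000.0
life_years = 10
annual_capital_cost = car_price / life_years  # straight-line, ignoring interest

owned_hours  = (46 / 60) * 365  # owned car: 46 minutes/day, ~280 hours/year
shared_hours = 10 * 365         # shared autonomous car: assume 10 hours/day in service

print(f"Owned:  ${annual_capital_cost / owned_hours:.2f} per hour of use")   # ~$7.15
print(f"Shared: ${annual_capital_cost / shared_hours:.2f} per hour of use")  # ~$0.55
# Same capital, spread over roughly 13x more hours of use.
```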

Cars themselves are also likely to get cheaper if they’re automated, leaving aside the cost of the automation itself. In economics, cars are the canonical empirical example of a differentiated product. Remember back to basic microeconomics, where the perfectly competitive market model works for a purely homogeneous good and market forces drive prices to marginal costs? The converse of this is that the more differentiated products are, the higher the markups above marginal cost are likely to be (which roughly means higher company profits). In fact, economists have written dozens of papers trying to model consumer demand for cars, accounting for our demand for brands, horsepower, leather seats, etc.
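
A rough way to formalize the link between differentiation and markups is the textbook inverse-elasticity (Lerner) rule: more differentiated products face less elastic demand, and the profit-maximizing markup is inversely related to that elasticity. A sketch with invented numbers:

```python
# Textbook inverse-elasticity (Lerner) rule with invented numbers.
# (p - mc) / p = 1 / |e|  =>  p = mc / (1 - 1/|e|)

mc = 15000.0  # marginal cost of producing a car, $ (hypothetical)

for label, e in [("highly differentiated", 2.0),
                 ("mildly differentiated", 5.0),
                 ("near-commodity", 20.0)]:
    price = mc / (1 - 1 / e)
    print(f"{label:21s}: price ${price:8,.0f}, markup {(price - mc) / price:.0%}")
```

As demand gets more elastic, price falls toward marginal cost, which is why commoditized driverless rides could squeeze automakers’ margins.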

My guess is that with driverless cars, consumer demand for differentiation will be much lower. Who even knows what the brand of the last bus you rode was? And, as long as my Uber driver’s car is clean and gets me where I’m going, I don’t really care what he’s driving – no self-identity there.

In a rejoinder that warms his economist mother’s heart – the boy understands incentives! – my son points out that this is another reason why driverless cars are doomed. The auto companies will figure out that they spell lower profits for them, and will use their (considerable) economic and political power to derail them.

We will see. In a battle between Google and Ford – Silicon Valley and Detroit – I might put my money on Google. At least I hope I’m right….

What do you think? For those of you with 6-year-olds, will the drivers test be the same rite of passage in another 10 years?


The Distribution Grid Has Room for More Solar

There is evidence that bigger isn’t necessarily better when it comes to solar energy projects.

Economies of scale suggest large projects would be more cost-effective than small ones. But recently, Lawrence Berkeley National Lab (LBNL) did an analysis of solar projects that came on-line in 2014. Their study collected information about ground-mounted, utility-scale projects (though notably not rooftop solar).

The chart below from the report groups together projects based on their size. The height of the bars reflects the capacity-weighted installed price, denominated in dollars per watt.

LBNL chart

SOURCE: Bolinger, Mark and Joachim Seel. Utility Scale Solar 2014, Lawrence Berkeley National Laboratory, 2015.

The LBNL researchers found that the smaller utility-scale projects had a LOWER cost per watt than the largest projects. (Note: the smaller projects in the report are still more than 1,000 times larger than the average residential rooftop system.)

Why would this be? The report’s authors hypothesize that the larger projects face regulatory and interconnection complexities that drive up costs. Smaller projects (around 25-50 acres) have an easier time clearing these hurdles.

The full cost of the biggest projects may even be higher than the graph shows. This is because the prices collected by LBNL do not include all of the infrastructure costs associated with the projects. Key among these is the cost of building out the transmission grid to reach them and increasing the overall capacity of the grid.

It can be hard to tie specific transmission system upgrades to particular power plants because the grid is so networked. The transmission grid is similar to our road networks. Building a large residential development on the outskirts of a city, far from workplaces, doesn’t just require building roads to the development itself. The new residents will also cause more traffic on roads throughout the metropolitan area and require the freeway system to be expanded.

Similarly, the development of large-scale renewable energy projects in remote locations in California has spurred a significant expansion of the state’s transmission grid. In fact, transmission expenditures have grown more rapidly than any other major utility expense category.

For Southern California Edison (SCE), transmission costs grew at an average annual rate of 9.5% between 2005 and 2015. For customers this showed up in retail prices. For example, SCE’s large commercial and industrial customers experienced a tripling of transmission rates over this time period. The graph below, drawn from an annual review of utility costs performed by the California Public Utilities Commission, shows this trend in total transmission costs.

CPUC graph

SOURCE: California Public Utilities Commission, 2016 Gas & Electric Utility Cost Report, April 2016.

Meanwhile the smaller projects, in the 1 to 3 Megawatt range (just slightly smaller than those covered by the LBNL study), can be connected directly to the distribution grid. The distribution grid includes all the power lines, poles, transformers and other equipment that carries electricity from substations to homes and businesses.

It may be possible to tie these smaller projects into the grid without triggering large infrastructure investments. Using the housing analogy, if housing is built close to workplaces then a significant number of residents could have short commutes on the existing roads without creating traffic on the surrounding freeway system.

I recently visited a test facility in Lubbock, Texas where Group NIRE has connected 3 Megawatt wind turbines directly to the distribution grid. Notably, each wind turbine has to be tied into a different substation so that the power generation doesn’t overwhelm demand.


A large wind turbine connected directly to the distribution grid at Group NIRE. Group NIRE was formed by Texas Tech University in 2010.

Do these smaller utility-scale solar projects trigger other infrastructure costs? Answering this question requires a better understanding of the distribution grid.

Regulators and utilities in California and Hawaii are carefully analyzing how solar energy can integrate into the distribution grid. The studies are worth looking at to understand the best-case scenarios for connecting solar.

In California, regulators are requiring utilities to go circuit-by-circuit and estimate the capacity for the grid to accommodate more solar without triggering upgrades over the next ten years. In these cases the cost of adding solar is zero, and hopefully there’s even a benefit. The available capacity is referred to as integration capacity or hosting capacity.

This analysis will be very important to understand the impact of smaller utility-scale projects on the grid. Here’s a quick overview of what they’re doing.

It’s a big, engineering-driven modeling exercise. The utilities have a combined 8,800 circuits to study. Each circuit is being broken down into multiple segments. The figure below from San Diego Gas & Electric’s Distribution Resource Plan shows how they break a typical circuit into three sections.

SDGE Circuit

SOURCE: San Diego Gas & Electric.

The utility needs to worry about several technical constraints on each circuit:

  • Circuit voltage needs to stay within a prescribed band so that connected equipment is not damaged. Solar can potentially cause unwanted voltage changes.
  • The temperature of circuit equipment, such as transformers, needs to stay within manufacturer ratings so that it does not fail or cause fires. Solar energy could potentially subject equipment to more than typical flows of electricity, and flowing electricity creates heat.
  • The utility needs to be confident that the circuit breakers and fuses that protect equipment and public safety in the face of short circuits continue to operate as intended. Solar energy could potentially keep fuses from operating as intended.
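
Conceptually, a circuit segment’s hosting capacity is the largest solar addition that violates none of these limits. A heavily simplified sketch (hypothetical limits, not any utility’s actual screening model):

```python
# Hypothetical limits; not any utility's actual screening model.
# Hosting capacity = largest solar MW that violates no constraint,
# i.e., the minimum of the MW levels at which each constraint binds.

def hosting_capacity(voltage_mw, thermal_mw, protection_mw):
    return min(voltage_mw, thermal_mw, protection_mw)

# Three segments of one made-up circuit (MW at which each constraint binds)
segments = [
    {"voltage": 2.5, "thermal": 4.0, "protection": 3.0},
    {"voltage": 1.2, "thermal": 2.0, "protection": 5.0},
    {"voltage": 3.0, "thermal": 1.8, "protection": 2.2},
]

for i, s in enumerate(segments, 1):
    cap = hosting_capacity(s["voltage"], s["thermal"], s["protection"])
    print(f"Segment {i}: hosting capacity {cap} MW")
# Different segments bind on different constraints, which is why the
# utilities study each circuit segment by segment.
```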

The analyses are still underway, but San Diego Gas & Electric (SDG&E) has estimated that their grid can accommodate about 1,000 Megawatts of distributed generation. That’s equal to around 20% of the utility’s peak demand.

SDG&E’s distribution grid may, or may not, be similar to other utilities’ grids. But if every utility’s distribution grid has hosting capacity equal to 20% of peak demand, then the six sunny states in the southwest US (CA, NV, AZ, CO, UT, NM) could accommodate nearly 24,000 Megawatts of solar without triggering distribution-level investments (20% of 118,000 Megawatts of summer peak demand). That amount of new solar capacity would nearly triple the amount of solar photovoltaics in those states.

Increasing hosting capacity further would require only modest investments in many cases. A 2015 Energy Institute at Haas working paper, described here, performed a detailed analysis of Pacific Gas & Electric’s distribution grid and concluded that solar penetration equal to 100% of capacity on all circuits could be accommodated at only a small cost.

Each utility has produced circuit-by-circuit maps that show hosting capacity. If you enjoy poking around maps like I do you can find them here, under the section “Integration Capacity Analysis (ICA) Maps”.

The utilities in Hawaii and some public utilities in California have also been undertaking hosting capacity analyses.

Smaller utility-scale solar projects could grow as a very important part of the renewable electricity mix. Policymakers should make sure they understand how to bring these projects onto the grid at the lowest possible cost. A good place to start is to pick up the analytical approaches being developed in California and Hawaii and do similar analysis in other sunny regions.


The Duck has Landed

May has arrived and days are getting longer and warmer. This is good news for baseball fans, barbecue enthusiasts, and grid operators concerned about integrating unprecedented levels of solar energy onto the California grid.


Source: Solar panels at Busch Baseball Stadium

Plugging lots of solar into the power system creates challenges, particularly on days when electricity demand is relatively low and renewable generation is high. Here in California, this happens in March and April when solar intensity is up (relative to the winter months), but air conditioning demand has yet to kick in.

Back in 2013, some California energy analysts with an eye for aesthetics were looking at how projected increases in renewable energy generation might affect power system operations. They plotted actual and projected hourly net load profiles (i.e. electricity demand minus renewable generation) over the years 2012 to 2020, focusing on late March when integration concerns loom large. The result was remarkably duck-like.

The CAISO duck chart

The California ISO “duck chart” made a big splash for a number of reasons. For one, a graph that looks like a duck makes an otherwise dry, technical issue more fun to talk about.  Conversations about renewable integration become more engaging when sprinkled with fowl word plays.

Perhaps more importantly, the graph highlights two related integration challenges. First, the long duck neck represents the steep evening ramp when the sun sets just as Californians are coming home and turning on their lights and appliances. Accommodating this ramp requires maintaining a fleet of relatively expensive generation resources with high levels of flexibility. Second, the duck’s growing belly highlights the near-term potential for “over-generation”. As solar penetration increases, net load starts to bump up against the minimum generation levels of other grid-connected generators, such as the state’s remaining nuclear power plant. At some point, system operators have to start curtailing solar to balance the grid.
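
For readers new to the chart, the “net load” being plotted is a simple subtraction; a minimal sketch with invented hourly numbers:

```python
# Invented hourly numbers, purely to show the shape of the calculation.
demand = [26, 24, 23, 25, 28, 30, 29, 27]  # system load by hour block (GW)
solar  = [0,   2,  8, 10,  6,  1,  0,  0]  # solar generation (GW)
wind   = [2,   2,  1,  1,  1,  2,  3,  3]  # wind generation (GW)

net_load = [d - s - w for d, s, w in zip(demand, solar, wind)]
print(net_load)  # [24, 20, 14, 14, 21, 27, 26, 24]
# The mid-day dip is the duck's belly; the climb as solar fades into the
# evening peak is the neck, i.e., the steep ramp described above.
```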

How’s the duck shaping up?

The CAISO duck chart predicts that we should see increasingly duck-like net load profiles in March and April. So I’ve been keeping an eye on the great data that CAISO makes readily accessible. This year, the duck showed up. The graph below plots average net load profiles for late March/early April since 2013 (I averaged across seven days around March 31 to smooth out the variation that comes with random weather, weekdays versus weekends, etc.).

Note: All data taken from CAISO website. Graph summarizes hourly data, March 28-April 3, 2013-2016.

In the 2016 duck season, we saw mid-day net loads at or around predicted levels. Increased solar penetration on both sides of the meter (utility-scale and distributed) has been driving net loads down when the sun is up. Fortunately, the ramp from 5-8 pm has not been quite as steep as projected because electricity demand in the evening hours has been lower than projected. Perhaps this is due to unanticipated demand-side energy efficiency improvements. I could not easily find hourly curtailment data. The data I could find on plant outages indicate that March 2016 saw the highest forced solar plant outages on record, but these outages could be due to factors other than curtailment.

My after-the-fact duck chart suggests that renewables integration challenges are showing up more or less on schedule (although ramping requirements are somewhat less than projected). So far, these challenges are quite manageable without major changes to grid operations. But the duck of the future – especially given California’s new target of 50% renewables by 2030 –  will present a more formidable challenge.

Renewables integration strengthens the case for regional coordination

California is not alone in creating and confronting unprecedented renewable integration complications. Take Hawaii, for example, where a 100% renewables target makes California’s 50% look timid. Our colleagues at University of Hawai’i, Michael Roberts and Mathias Fripp, have been thinking hard about how Hawaii can pull this off at least cost. The charts below illustrate a hypothetical 100% day in Oahu in April (no more duck when all load is served by renewable energy!):

fripp

Source: Fripp (2016)

The broken line in the right graph represents the “traditional”, business-as-usual demand profile. To hit the 100% target, wind and solar generation increases to nearly double the traditional peak.  Differences between the timing of renewable energy production and traditional demand are reconciled primarily by EV charging and other demand-side response programs (although batteries and pumped storage also play a role).

When you’re an island in the middle of the ocean, you’re pretty much on your own when it comes to tackling these grid integration challenges. Thus, Hawaii is preparing to demonstrate how significant renewable energy integration can be achieved with demand response, grid management, and storage. In contrast, California has more options to leverage.

Although California fancies itself a different world, it is physically connected to (but not perfectly integrated with) a larger western power system.  From an economic perspective, expansion of the energy imbalance market and improved coordination of the western grid look like an obvious and important piece of California’s renewable integration puzzle.  A regionally coordinated western grid would integrate mandated renewables across a larger area, thus reducing the likelihood of over-generation. Coordination across balancing areas should also provide increased flexibility.

In the past, economists have documented the efficiency gains of improved regional coordination and bemoaned the inefficiencies of the balkanization that persists.  Looming renewable integration challenges could provide the needed additional impetus for grid integration.  To be sure, there are some important details that need to be better understood. But if done right, a fully coordinated regional grid could help clip the duck’s wings.

 


Is Distributed Generation the Answer to Regulatory Dysfunction?

One delightful aspect of teaching an MBA course in energy and environmental markets is getting together with my former students as they pursue careers in the industries I study.  I learn so much about the latest trends and ideas in these markets, and they frequently challenge the way I have been seeing the world.

This happened recently when I had coffee with a former student whom I will refer to as “Pat”.  Pat has worked for a successful alternative energy company and done well, but s/he is ready to think about new paths.  Like many cleantech mavens, Pat is excited about distributed generation (DG), particularly with improving storage technologies.  Pat explained to me a potential business model s/he has been exploring with rooftop solar photovoltaic (PV) panels and on-site storage.

As I’ve written in a previous blog, I’m skeptical that rooftop solar is the most cost effective way to utilize the fabulous breakthroughs in PV technology.  I proceeded to lay out my argument, addressing each of the claims for distributed generation, even though I know Pat is a regular reader of the Energy Institute blog and had surely heard my views before.

But Pat was a star student and continues to be one of the most insightful people I know in the business.  So I was not surprised, but still unsettled, when Pat put on the table an argument for DG that I hadn’t heard before, or maybe Pat just presented it much more clearly so that I finally actually got it.

Here’s my dramatic (if you are an energy geek) re-creation of what Pat said: “Yes, Severin, in theory grid-scale generation and delivery of renewable electricity is probably more cost-effective.  And, yes, there are some fixed costs of distribution systems that utilities are recovering through volumetric charges, which drives up the retail price and gives an inefficient incentive to install DG.  And, yes, California’s extreme increasing-block residential price schedules mean many households are paying more than 30 cents per kWh for much of their consumption, way above cost.”

“But,” Pat continued with growing enthusiasm, “California’s investor-owned utilities currently charge average residential rates in the 21 to 24 cent range – more than 50% above the national average – and the utilities themselves are forecasting those numbers will rise in the coming years.  [Actually those are average rates among customers who aren’t on the low-income tariff.  More on that below. –SB]  I don’t know if rates are so high because of utility incompetence, a dysfunctional regulatory process, or some other reason, but it’s not my job to figure it out.  In any other industry, if a company’s prices are too high we rely on pressure from competition to rein them in.  Why should electricity be any different?”

Pat concluded with, “Severin, ever since I took your class many years ago you’ve been saying that California has high electricity rates in part to pay for the mistakes of the past.  But those ‘mistakes’ keep happening and keep driving up our rates.  At some point, aren’t those ongoing mistakes just part of a broken regulatory process? DG is the competition that will either force repairs in the process or will replace it.”

Pat’s argument isn’t entirely general; there are plenty of states — and even some municipal utilities in California — with rates that rooftop solar can’t touch.  And there’s not much evidence, nationally or internationally, that competition introduced by deregulating retail electricity markets has significantly lowered rates.  Plus, it’s worth remembering that most residential customers don’t have a single-family home with a south-facing roof and no shading to put solar panels on, so most of us have to get all our electricity from the grid.

Nonetheless, Pat raises an important point.  Before proponents of high fixed charges and special fees for solar customers get too far down that road, they need to confront the fact that average residential electricity rates in California (and New York, and some other locations where DG is gaining the most traction) are out of line with the rest of the country.

I’ve been asking around about the high, and rising, average residential rates in California, and have been surprised at the lack of clarity about the reasons. This seems like a central question of rooftop solar policy (as opposed to rooftop solar politics).  If the rates really reflect high costs of providing electricity, Pat and other DG supporters have a more compelling case that they are providing efficient competition.  On the other hand, if the rates are driven by other regulatory or legislative policy objectives, then we have to recognize that funding those objectives in this way may encourage inefficient DG installation.

Put differently, is DG the answer to regulatory dysfunction, or is it just regulatory arbitrage? By regulatory arbitrage, I mean taking advantage of the structure of pricing or other utility obligations by pursuing strategies that reap private rewards through cost shifts to other ratepayers.

The simplest cause of regulatory arbitrage is the fact that electricity prices are well above the marginal cost of delivering a kilowatt-hour to the customer in California and many other states. In California, this is in part because of the regulator’s longtime resistance to fixed monthly charges, and in part because of the increasing-block price structure that leaves many customers today paying over 30 cents for their incremental kilowatt-hour.

In addition, the many programs that policymakers have decided to finance through electricity charges also invite regulatory arbitrage. For instance, significant parts of electricity bills in California and many other states pay for energy efficiency programs, early investments in renewable technologies, and — especially large in California — reduced electricity rates for low-income customers. Among the three large investor-owned utilities in California about 30% of all residential customers are on low-income rates.  And, of course, for more than a decade, part of electricity rates in California have paid to subsidize rooftop solar, both directly through the California Solar Initiative (from 2007 to 2013) and indirectly through net metering policies.

If all of these programs were eliminated, would average residential rates among California’s IOUs still be well above the national average?  Of course, there are other factors that a cost analysis has to account for, such as the mix of generation, the density of residential consumers, and the average consumption per customer.

I think that answering this question is critical to making good energy policy in California.  But after asking a number of regulators, utilities and other policy analysts in the state, I have not turned up any studies that put together all the numbers one needs.

That wouldn’t be the complete answer to Pat’s argument. It would have to be paired with a credible analysis of the value and costs DG brings to the grid. But next time I see Pat, I’m hoping to have a better response than “Good question. I should write a blog about that.”

I’m still tweeting energy news and research articles @BorensteinS


Cartels Work Unless They Don’t

I spend a lot of time describing unicorns in my undergraduate classroom. And by unicorns, I mean perfectly competitive markets and their features. If you’re a little rusty on this stuff, it goes like this: no single consumer or firm can affect the market price. This requires perfect information, no externalities, free entry and exit, blah, blah, blah.

Most markets are not perfectly competitive, and there can be huge returns for firms that try to raise prices above competitive levels. There are several ways to do this, but one of the most popular is to collude with your frenemies in a so-called cartel. A cartel restricts output, which reduces total supply and leads to higher market prices. Consumers suffer; cartel members (and most other producers) make out like bandits!

When everyone in the cartel sticks to the plan, this can work beautifully. So beautifully that in the US we have antitrust laws prohibiting firms from colluding to set prices artificially high (if you are in need of an excellent and entertaining summer read, read this). On the international stage, though, one of the best-known cartels is OPEC. These oil-producing nations get together and set production targets that serve their collective interest (usually higher prices). For OPEC to function, its members need to stick to the agreed targets. Problems arise when members cannot agree on targets, or when individual countries do what is optimal for themselves rather than for OPEC as a whole.
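To see why sticking to the targets is so hard, consider a toy Cournot duopoly (all demand and cost numbers below are invented, purely for illustration): even when the quota maximizes the cartel’s joint profit, each member earns more by quietly overproducing.

```python
# A toy Cournot duopoly (made-up demand and cost numbers, not a model of
# OPEC) showing why each cartel member is tempted to break its quota.

A, B = 100.0, 1.0   # inverse demand: price = A - B * total_quantity
C = 20.0            # constant marginal cost per unit

def profit(q_own: float, q_other: float) -> float:
    price = A - B * (q_own + q_other)
    return (price - C) * q_own

# Cartel plan: jointly produce the monopoly quantity, split 50/50.
q_monopoly = (A - C) / (2 * B)
quota = q_monopoly / 2

# One member's best response if it assumes the other honors the quota.
q_cheat = (A - C - B * quota) / (2 * B)

print(f"Profit if both honor the quota:    {profit(quota, quota):.0f}")
print(f"Profit from unilaterally cheating: {profit(q_cheat, quota):.0f}")
```

With these numbers, honoring the quota earns each member 800, while cheating earns the cheater 900. The agreement unravels unless members can monitor and punish defection, which is exactly what OPEC struggles to do.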

And this is what appears to be happening in Qatar right now. Sixteen oil-producing nations (essentially the OPEC members plus Russia), which jointly produce a significant share (though less than 50%) of global output, are engaged in talks about restricting output in order to prop up prices. Observers are suggesting that no meaningful restrictions will emerge from the talks, and the markets agree: oil prices fell on Friday, and early-morning trading in Asia pointed to a further significant drop once major markets in the Western Hemisphere opened, which is exactly what happened.

What does this mean for the average US consumer? If you are planning a road trip to the national parks this summer in your RV, which gets a glorious 3 mpg, you should rejoice. The failure of oil producers to collude will lead to lower prices during driving season.

What does this mean for the atmosphere? Despite massive and unprecedented policy efforts to reduce emissions from transportation fuels, this lack of collusion leads to even lower prices and more miles driven. People in the market for a new car are already buying less fuel-efficient cars than they would if prices were higher, which is bad news for the environment.

What I am saying may sound crazy on the surface, but if you are the global environment, successful collusion here might be a good thing! In unregulated markets with externalities, prices are too low and production (and emissions) too high. Collusion drives up prices and drives down consumption, which can be a net gain for society.
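Here is that argument as a toy welfare calculation (again, every number is invented): if the external cost per unit is large enough, the cartel’s output restriction lands closer to the social optimum than competition does.

```python
# A toy welfare calculation (all numbers invented) for the claim that, with
# a big enough externality, cartel pricing can beat competitive pricing.

A, B = 100.0, 1.0    # inverse demand: price = A - B * quantity
C = 20.0             # private marginal cost per unit
E = 30.0             # external (e.g., climate) cost per unit

def welfare(q: float) -> float:
    """Consumer value minus private and external costs at quantity q."""
    gross_value = A * q - B * q**2 / 2   # area under the demand curve
    return gross_value - (C + E) * q

q_competitive = (A - C) / B        # competition drives price to private cost
q_cartel = (A - C) / (2 * B)       # monopoly-style output restriction
q_optimal = (A - C - E) / B        # price equal to full social cost

for label, q in [("competitive", q_competitive),
                 ("cartel", q_cartel),
                 ("social optimum", q_optimal)]:
    print(f"{label:>15}: quantity {q:5.1f}, welfare {welfare(q):7.1f}")
```

With these numbers the cartel comes out ahead of competition on total welfare (1200 vs. 800, against 1250 at the optimum). But none of that external cost is captured as domestic revenue, which brings us to the catch.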

Of course, the higher prices generate no domestic tax revenues to redistribute; all the rents go to a bunch of oil-rich countries. That means no dollars to redistribute at home, or to invest in the development and deployment of more renewable energy in the countries where the majority of consumption takes place. So in a perfect world, where I am the king of carbon, I would want not cartels but a carbon tax. Since I am missing that title, I am going to stick to Severin’s proposal for a domestic gas price floor. Yes. It’s time for higher gas prices.


Automakers Complain, but CAFE Loopholes Make Standards Easier to Meet

With gasoline prices averaging $2 per gallon, Americans are flocking to gas-guzzling vehicles. Last year was the biggest year ever for the U.S. auto industry, with 17.5 million total vehicle sales nationwide. Trucks, SUVs, and crossovers led the charge with a 13% increase over 2014.


The one problem with selling all these gas guzzlers is that it makes it harder to meet fuel economy standards. U.S. Corporate Average Fuel Economy (CAFE) standards have been around for a long time, but the new “super-size” version introduced in 2012 mandates a steep climb in fuel economy each year until 2025.

Back in 2012 when the Obama Administration announced the new standards, gasoline prices were $4 per gallon and Americans were buying smaller, more fuel-efficient vehicles.  Sales were increasing rapidly for the Chevrolet Volt, Tesla Model S, and other electric vehicles, and there was great optimism about reducing the carbon-intensity of the U.S. transportation sector.

Fast forward to 2016, and the automakers can’t believe they ever agreed to this. The new CAFE rules are scheduled to be reviewed this summer, and automakers are pushing back hard, seeking adjustments that would weaken the standards to reflect this new reality of cheap gasoline.

In pleading their case, one of the automakers’ favorite approaches is to try to shift the focus to consumers. “One of the areas that needs to be addressed is consumer demand,” Gloria Bergquist, spokeswoman for the Alliance of Automobile Manufacturers, recently argued. “Automakers can build models that are extremely fuel-efficient, but they can’t control sales.”

But, of course, automakers can control sales. In the short-run, automakers can adjust prices. And in the long-run, automakers can design new fuel-efficient vehicles that Americans want to buy. Nobody expected this to happen by itself. The whole rationale behind CAFE is that there are externalities associated with gasoline consumption. If we thought consumers were going to perfectly internalize these externalities, then we wouldn’t need CAFE in the first place.

What Ms. Bergquist probably meant to say instead is that $2 gasoline makes it harder to get consumers to switch. This is certainly true. Cheap gasoline provides huge benefits to U.S. consumers, but it also leads drivers to prefer larger, more powerful vehicles.

Fortunately for the automakers – though not for the environment – there is a built-in mechanism that relaxes the standard when consumers choose larger vehicles. The new standards are “footprint”-based, so the fuel economy target for each vehicle depends on its overall size; larger vehicles face less stringent targets.
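As a rough sketch of how a footprint-based target works (the coefficients below are invented; the actual CAFE formulas are published schedules that vary by model year and by car versus truck), the target is set in gallons per mile and rises with footprint, so bigger vehicles face lower mpg targets:

```python
# A minimal sketch of a footprint-based fuel-economy target. The numbers
# are invented for illustration; the real CAFE targets use published
# piecewise-linear formulas that differ by model year and car vs. truck.

def mpg_target(footprint_sqft: float) -> float:
    """Hypothetical target: gallons-per-mile rises linearly with footprint,
    clamped between a small-vehicle floor and a large-vehicle ceiling."""
    MAX_MPG, MIN_MPG = 42.0, 25.0        # assumed bounds on the target
    SLOPE, INTERCEPT = 0.0005, 0.005     # assumed gallons/mile-per-sqft line
    gpm = min(max(SLOPE * footprint_sqft + INTERCEPT, 1 / MAX_MPG), 1 / MIN_MPG)
    return 1 / gpm

for footprint in (41, 50, 68):  # compact car, midsize SUV, full-size pickup
    print(f"{footprint} sq ft -> target {mpg_target(footprint):.1f} mpg")
```

With these made-up coefficients, the compact car faces a 39 mpg target while the full-size pickup faces about 26 mpg, so a shift in the sales mix toward big vehicles automatically loosens the fleet-wide requirement.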


The standards are also more generous for trucks than for cars. Most of the best-selling vehicles are “trucks” from a CAFE perspective, including, of course, pickup trucks, but also SUVs, crossovers, and minivans. And as Americans switch from “cars” to “trucks,” compliance gets easier for automakers.

The real but more subtle challenge for manufacturers is that cheap gasoline makes consumers prefer more powerful engines (for a given footprint) and makes them less willing to buy EVs and hybrids. Automakers can adjust their prices to sell lower-horsepower engines and more EVs and hybrids, but doing so reduces profits.

There is one more loophole, however, to help soften the blow. And it is a big one. My colleague Jim Sallee and former student Soren Anderson worked on this topic several years ago (here), but until I looked at it again, I had no idea how large this loophole was, nor did I realize it would last so long after being introduced in 1993.

I’m talking about flex-fuel vehicles. Over two million flex-fuel vehicles are sold each year in the United States. These vehicles can run on E85 (a blend of 85% ethanol and 15% gasoline), but in practice most end up running on gasoline, and many flex-fuel vehicles are sold in parts of the country with limited E85 availability.


Under CAFE, however, these vehicles have a near-magical property. They are assumed to run 50% on E85 and 50% on gasoline — a very optimistic assumption. Even more optimistic, each gallon of E85 is assumed to have the carbon content of only 0.15 gallons of gasoline. That is, the ethanol component of E85 is counted as zero-carbon. It is notoriously difficult to quantify the lifecycle carbon impacts of biofuels, but most studies find that, at best, ethanol is only marginally less carbon-intensive than gasoline. As a result of these overly generous assumptions, flex-fuel vehicles like the GMC Terrain end up being treated by CAFE as if they were extremely fuel-efficient.
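Here’s a stylized version of that arithmetic (the mpg figures are made up, and the regulatory calculation has additional wrinkles): counting each gallon of E85 as 0.15 gallons turns an ordinary vehicle into a CAFE star.

```python
# A stylized version of the flex-fuel CAFE arithmetic (illustrative mpg
# figures; the actual regulatory formula has additional details). Each
# gallon of E85 counts as only 0.15 gallons of fuel, and the vehicle is
# assumed to run on E85 half the time.

MPG_GASOLINE = 25.0   # assumed rating on gasoline
MPG_E85 = 18.0        # assumed rating on E85 (lower energy content)

# Counting a gallon of E85 as 0.15 gallons inflates the E85 rating:
mpg_e85_credited = MPG_E85 / 0.15          # 120 mpg "equivalent"

# Fuel economies average harmonically (i.e., average gallons per mile):
rated = 1 / (0.5 / MPG_GASOLINE + 0.5 / mpg_e85_credited)

print(f"Credited CAFE rating: {rated:.1f} mpg")   # about 41 mpg
print(f"Rating if driven only on gasoline: {MPG_GASOLINE:.0f} mpg")
```

A 25 mpg vehicle gets credited at roughly 41 mpg, even if its owner never pumps a drop of E85.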


Not surprisingly, manufacturers have been producing flex-fuel vehicles like crazy.  There are today more than 100 different models of flex-fuel vehicles for sale in the United States (who knew?).  And while you used to always see a “flex fuel” sticker on the back, many flex-fuel vehicles today aren’t even identified. You might be driving one and not even know it.

Thankfully, the flex-fuel loophole ended with model year 2015. These credits were so lucrative, however, that many manufacturers are now sitting on large stores of surplus credits. Under CAFE rules these credits can be “banked” until 2021, ensuring that the legacy of this loophole will live on, allowing manufacturers to produce lower-MPG vehicles for years to come.


So let’s not feel too sorry for the automakers. Yes, the CAFE screws are beginning to tighten, but the automakers’ situation is not nearly as dire as they would have us believe.
