The Future of (Not) Driving

We have a momentous event coming up in my household: my son will turn 16 at the end of the month and will – if the DMV gods are agreeable – get his driver's license. This has sparked a lot of debate in my family about what driving will look like over the next 10-20 years.

My son hopes to strike this pose soon

In short, my son HATES the idea of driverless cars. Imagine – the club he’s been pining to join – drivers – is now threatened with extinction. Perhaps with wishful thinking, he has come up with a lot of theories about why self-driving cars will never take off.

I disagree with him, though I may be indulging in a bit of wishful thinking myself. I find few things more stressful than sitting in the passenger seat with my son at the wheel. His behind-the-wheel instructor says he’s a good driver (I wish she wouldn’t tell him that…), but I have never been quite so focused on everything that could possibly go wrong, and I would rather trust a computer to make the right decision if something does.

Also, I’ve spent enough time in Bay Area traffic jams – where one distracted driver who brakes a little too hard can slow down a whole lane of traffic – to relish the idea of smoothly flowing computer-driven cars. Research seems to back me up – simulations suggest that automated vehicles will likely reduce fuel consumption, and part of that reduction will come from fewer slowdowns due to accidents.

Here’s my son’s theory, which draws on network economics even if he doesn’t use that phrase: as long as there are enough people like him on the road, who actually want to be behind the wheel, driverless cars won’t do much to improve congestion. In the extreme, a mixture of robot-driven and person-driven cars could be worse for congestion than all person-driven. Imagine if Silicon Valley technocrats could send for their favorite Los Angeles sushi and have it delivered by a driverless, and passenger-less, car, thereby adding cars that wouldn’t have been there. Then put those vehicles on the road with the remaining 16-year-old boy drivers, and others with an inner 16-year-old boy, some of whom get a kick out of messing with the automated cars’ sensors to make them brake quickly.

His theory was borne out by the story of the Google car getting stuck at the four-way stop as it waited for other cars to come to a complete stop. But that doesn’t seem like an unsolvable problem to me – someone just needs to update the algorithm and stress-test it against thrill-seeking drivers.

My son also points out that his online driver’s ed course warned that no one leaves the house thinking they will get in a car accident. So, he thinks people won’t be drawn to driverless cars to protect their own safety. Consistent with this, surveys suggest that most of us live in a Lake Wobegon world and think we’re better than the average driver. This could mean that we all want other people – particularly the drunks, texters and overly aggressive lane-changers – to be in driverless cars, but want control over our own on-road destiny. Given that we buy cars for ourselves and not others, this doesn’t lead to many autonomous car sales.

I try to explain to my son (without using the phrases “opportunity cost” or “consumer surplus”…) that driverless cars will both give us more time and make driving a lot cheaper, so teenagers will eventually find another way to mark the transition to adulthood.

On the “more time” point, think of all the things we can do instead of sitting behind the wheel of the car. With more of us able to be productive remotely, time in the car could be quite valuable.

In terms of the cost of driving, it’s hugely inefficient to have so many of us own a $20,000-plus piece of capital that we use on average 46 minutes per day. The capital depreciates even when we don’t use it because technological change makes newer cars more desirable.

If you could order up an autonomous car only when you needed it, the cost of the capital would be spread over many more people and rides, driving down the cost per ride. So, I explain to my son, you’ll have to really, really like driving to pass up the much cheaper alternative of renting one from the next incarnation of Uber or Lyft. In fact, GM and Lyft recently announced that they will begin testing self-driving taxis on actual roads within a year.
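To see how much the sharing matters, here is a back-of-the-envelope sketch of the capital cost per ride. The purchase price echoes the $20,000 figure above, but the lifetime and rides-per-day numbers are purely illustrative assumptions, not data from the post.

```python
# Back-of-the-envelope: capital (depreciation) cost per ride for a privately
# owned car versus a heavily used shared autonomous car.
# All numbers are illustrative assumptions.

CAR_PRICE = 20_000          # dollars, "a $20,000-plus piece of capital"
LIFETIME_YEARS = 10         # assumed service life
RIDES_PER_DAY_OWNER = 2     # e.g., a daily round-trip commute
RIDES_PER_DAY_SHARED = 30   # assumed utilization of a shared autonomous car

capital_cost_per_year = CAR_PRICE / LIFETIME_YEARS

def cost_per_ride(rides_per_day: float) -> float:
    """Capital cost spread over each ride, ignoring fuel, insurance, and maintenance."""
    return capital_cost_per_year / (rides_per_day * 365)

print(f"Owned car:  ${cost_per_ride(RIDES_PER_DAY_OWNER):.2f} per ride")
print(f"Shared car: ${cost_per_ride(RIDES_PER_DAY_SHARED):.2f} per ride")
# With these assumptions the owned car carries roughly $2.74 of capital cost
# per ride, the shared car roughly $0.18 -- about a 15x difference.
```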

Cars themselves are also likely to get cheaper if they’re automated, leaving aside the cost of the automation itself. In economics, cars are the canonical empirical example of a differentiated product. Remember back to basic microeconomics, where the perfectly competitive market model works for a purely homogeneous good and market forces drive prices to marginal costs? The flip side is that the more differentiated products are, the higher the markups above marginal cost are likely to be (which roughly means higher company profits). In fact, economists have written dozens of papers trying to model consumer demand for cars, accounting for our demand for brands, horsepower, leather seats, etc.
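The textbook logic behind that markup claim fits in one line. This is the standard Lerner-index relationship from introductory industrial organization, not something worked out in the post: a firm's markup over marginal cost depends on the price elasticity of the residual demand it faces, and differentiation is what makes that residual demand less elastic.

```latex
% Lerner index: markup determined by the elasticity of the firm's residual demand
\frac{P - MC}{P} = \frac{1}{\left|\varepsilon_d\right|}
```

If driverless fleets make cars more interchangeable, residual demand becomes more elastic and the markup shrinks, which is the profit squeeze described below.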

My guess is that with driverless cars, consumer demand for differentiation will be much lower. Who even knows what the brand of the last bus you rode was? And, as long as my Uber driver’s car is clean and gets me where I’m going, I don’t really care what he’s driving – no self-identity there.

In a rejoinder that warms his economist mother’s heart – the boy understands incentives! – my son points out that this is another reason why driverless cars are doomed. The auto companies will figure out that they spell lower profits for them, and will use their (considerable) economic and political power to derail them.

We will see. In a battle between Google and Ford – Silicon Valley and Detroit – I might put my money on Google. At least I hope I’m right….

What do you think? For those of you with 6-year-olds, will the driver's test be the same rite of passage in another 10 years?


The Distribution Grid Has Room for More Solar

There is evidence that bigger isn’t necessarily better when it comes to solar energy projects.

Economies of scale suggest large projects would be more cost-effective than small ones. But recently, Lawrence Berkeley National Lab (LBNL) did an analysis of solar projects that came on-line in 2014. Their study collected information about ground-mounted, utility-scale projects (though notably not rooftop solar).

The chart below from the report groups together projects based on their size. The height of the bars reflects the capacity-weighted installed price, denominated in dollars per watt.

LBNL chart

SOURCE: Bolinger, Mark and Joachim Seel. Utility Scale Solar 2014, Lawrence Berkeley National Laboratory, 2015.

The LBNL researchers found the smaller utility-scale projects had a LOWER cost per watt than the largest projects. (Note: the smaller projects in the report are still more than 1,000 times larger than the average residential rooftop system.)

Why would this be? The report’s authors hypothesize that the larger projects face regulatory and interconnection complexities that drive up costs. Smaller projects (around 25-50 acres) have an easier time clearing these hurdles.

The full cost of the biggest projects may even be higher than the graph shows. This is because the prices collected by LBNL do not include all of the infrastructure costs associated with the projects. Key among these is the cost of building out the transmission grid to reach them and increasing the overall capacity of the grid.

It can be hard to tie specific transmission system upgrades to particular power plants because the grid is so networked. The transmission grid is similar to our road networks. Building a large residential development on the outskirts of a city, far from workplaces, requires more than building roads to the development itself. The new residents will also cause more traffic on roads throughout the metropolitan area and require the freeway system to be expanded.

Similarly, the development of large-scale renewable energy projects in remote locations in California has spurred a significant expansion of the state’s transmission grid. In fact, transmission expenditures have grown more rapidly than any other major utility expense category.

For Southern California Edison (SCE), transmission costs grew at an average annual rate of 9.5% between 2005 and 2015. For customers this showed up in retail prices. For example, SCE’s large commercial and industrial customers experienced a tripling of transmission rates over this time period. The graph below, drawn from an annual review of utility costs performed by the California Public Utilities Commission, shows this trend in total transmission costs.

CPUC graph

SOURCE: California Public Utilities Commission, 2016 Gas & Electric Utility Cost Report, April 2016.

Meanwhile the smaller projects, in the 1 to 3 Megawatt range (just slightly smaller than those covered by the LBNL study), can be connected directly to the distribution grid. The distribution grid includes all the power lines, poles, transformers and other equipment that carries electricity from substations to homes and businesses.

It may be possible to tie these smaller projects into the grid without triggering large infrastructure investments. Using the housing analogy, if housing is built close to workplaces then a significant number of residents could have short commutes on the existing roads without creating traffic on the surrounding freeway system.

I recently visited a test facility in Lubbock, Texas where Group NIRE has connected 3 Megawatt wind turbines directly to the distribution grid. Notably, each wind turbine has to be tied into a different substation so that the power generation doesn’t overwhelm demand.

A large wind turbine connected directly to the distribution grid at Group NIRE. Group NIRE was formed by Texas Tech University in 2010.

Are there other infrastructure costs the smaller utility-scale solar projects require? Answering this question requires a better understanding of the distribution grid.

Regulators and utilities in California and Hawaii are carefully analyzing how solar energy can integrate into the distribution grid. The studies are worth looking at to understand the best-case scenarios for connecting solar.

In California, regulators are requiring utilities to go circuit-by-circuit and estimate the capacity for the grid to accommodate more solar without triggering upgrades over the next ten years. In these cases the cost of adding solar is zero, and hopefully there’s even a benefit. The available capacity is referred to as integration capacity or hosting capacity.

This analysis will be very important to understand the impact of smaller utility-scale projects on the grid. Here’s a quick overview of what they’re doing.

It’s a big, engineering-driven modeling exercise. The utilities have a combined 8,800 circuits to study. Each circuit is being broken down into multiple segments. The figure below from San Diego Gas & Electric’s Distribution Resource Plan shows how they break a typical circuit into three sections.

SDGE Circuit

SOURCE: San Diego Gas & Electric.

The utility needs to worry about several technical constraints on each circuit:

  • Circuit voltage needs to stay within a prescribed band so that connected equipment is not damaged. Solar can potentially cause unwanted voltage changes.
  • The temperature of circuit equipment, such as transformers, needs to stay within manufacturer ratings so that it does not fail or cause fires. Solar energy could potentially subject equipment to more than typical flows of electricity, and flowing electricity creates heat.
  • The utility needs to be confident that the circuit breakers and fuses that protect equipment and public safety in the face of short circuits continue to operate as intended. Solar energy could potentially keep fuses from operating as intended.
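To make those three checks concrete, here is a minimal sketch of how a per-segment screen might be organized. The limits, the proportional voltage rule, and all the numbers are purely illustrative assumptions; the utilities' actual integration capacity analyses are detailed power-flow studies, not a one-line minimum.

```python
from dataclasses import dataclass

@dataclass
class SegmentLimits:
    """Illustrative per-segment limits (not actual utility values)."""
    v_max_pu: float = 1.05          # upper voltage bound, per unit
    thermal_kw: float = 4000.0      # equipment thermal rating
    protection_kw: float = 5000.0   # reverse flow above which protection must be re-studied

def hosting_capacity_kw(limits: SegmentLimits,
                        min_daytime_load_kw: float,
                        kw_per_volt_pu: float = 20_000.0) -> float:
    """Very rough screen: the solar a segment can host is the smallest headroom
    among the voltage, thermal, and protection constraints."""
    # Reverse flow only appears once solar exceeds the minimum daytime load.
    thermal_headroom = min_daytime_load_kw + limits.thermal_kw
    protection_headroom = min_daytime_load_kw + limits.protection_kw
    # Crude voltage screen: assume voltage rise is proportional to injected kW.
    voltage_headroom = (limits.v_max_pu - 1.0) * kw_per_volt_pu
    return min(thermal_headroom, protection_headroom, voltage_headroom)

# Example: a segment with 1.5 MW of minimum daytime load.
print(hosting_capacity_kw(SegmentLimits(), min_daytime_load_kw=1500.0))  # 1000.0 kW
```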

The analyses are still underway, but San Diego Gas & Electric (SDG&E) has estimated that their grid can accommodate about 1,000 Megawatts of distributed generation. That’s equal to around 20% of the utility’s peak demand.

SDG&E’s distribution grid may, or may not, be similar to other utilities’ grids. But if every utility’s distribution grid has hosting capacity equal to 20% of peak demand, then the six sunny states in the southwest US (CA, NV, AZ, CO, UT, NM) could accommodate nearly 24,000 Megawatts of solar without triggering distribution-level investments (20% of the states’ combined 118,000 Megawatt summer peak demand). That amount of new solar capacity would nearly triple the amount of solar photovoltaics in those states.
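The arithmetic behind that estimate is easy to reproduce. The 20% hosting share and the 118,000 Megawatt combined summer peak are the figures quoted above; the existing-solar number is simply backed out of the "nearly triple" statement, so treat it as an inference rather than reported data.

```python
# Back-of-the-envelope from the post: if every distribution grid can host solar
# equal to 20% of peak demand, how much fits in the six southwestern states?

HOSTING_SHARE = 0.20
SUMMER_PEAK_MW = 118_000      # CA, NV, AZ, CO, UT, NM combined (figure from the post)

hosting_capacity_mw = HOSTING_SHARE * SUMMER_PEAK_MW
print(f"Hosting capacity: {hosting_capacity_mw:,.0f} MW")          # ~23,600 MW

# "Nearly triple" existing solar PV implies existing capacity of roughly half
# the hosting capacity in those states -- an inference, not reported data.
implied_existing_solar_mw = hosting_capacity_mw / 2
print(f"Implied existing solar PV: ~{implied_existing_solar_mw:,.0f} MW")
```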

Increasing hosting capacity further only requires modest investments in many cases. A 2015 Energy Institute at Haas working paper, described here, performed a detailed analysis of Pacific Gas & Electric’s distribution grid and concluded that solar penetration equal to 100% of capacity on all circuits could be accommodated at only a small cost.

Each utility has produced circuit-by-circuit maps that show hosting capacity. If you enjoy poking around maps like I do you can find them here, under the section “Integration Capacity Analysis (ICA) Maps”.

The utilities in Hawaii and some public utilities in California have also been undertaking hosting capacity analyses.

Smaller utility-scale solar projects could grow into a very important part of the renewable electricity mix. Policymakers should make sure they understand how to bring these projects onto the grid at the lowest possible cost. A good place to start is to pick up the analytical approaches being developed in California and Hawaii and apply similar analysis in other sunny regions.


The Duck has Landed

May has arrived and days are getting longer and warmer. This is good news for baseball fans, barbecue enthusiasts, and grid operators concerned about integrating unprecedented levels of solar energy onto the California grid.

Source: Solar panels at Busch Baseball Stadium

Plugging lots of solar into the power system creates challenges, particularly on days when electricity demand is relatively low and renewable generation is high. Here in California, this happens in March and April when solar intensity is up (relative to the winter months), but air conditioning demand has yet to kick in.

Back in 2013, some California energy analysts with an eye for aesthetics were looking at how projected increases in renewable energy generation might affect power system operations. They plotted actual and projected hourly net load profiles (i.e. electricity demand minus renewable generation) over the years 2012 to 2020, focusing on late March when integration concerns loom large. The result was remarkably duck-like.

CAISO duck chart

The California ISO “duck chart” made a big splash for a number of reasons. For one, a graph that looks like a duck makes an otherwise dry, technical issue more fun to talk about.  Conversations about renewable integration become more engaging when sprinkled with fowl word plays.

Perhaps more importantly, the graph highlights two related integration challenges. First, the long duck neck represents the steep evening ramp when the sun sets just as Californians are coming home and turning on their lights and appliances. Accommodating this ramp requires maintaining a fleet of relatively expensive generation resources with high levels of flexibility. Second, the duck’s growing belly highlights the near-term potential for “over-generation”. As solar penetration increases, net load starts to bump up against the minimum generation levels of other grid-connected generators, such as the state’s remaining nuclear power plant. At some point, system operators have to start curtailing solar to balance the grid.

How’s the duck shaping up?

The CAISO duck chart predicts that we should see increasingly duck-like net load profiles in March and April. So I’ve been keeping an eye on the great data that CAISO makes readily accessible. This year, the duck showed up. The graph below plots average net load profiles for late March/early April since 2013 (I averaged across seven days around March 31 to smooth out the variation that comes with random weather, weekdays versus weekends, etc.).

Note: All data taken from CAISO website. Graph summarizes hourly data, March 28-April 3, 2013-2016.
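For anyone who wants to build a similar after-the-fact duck chart, here is a minimal sketch of the averaging step. It assumes you have already downloaded CAISO hourly demand and renewable generation into a CSV; the file name and column names are placeholders, not CAISO's actual field names.

```python
import pandas as pd

# Assumed CSV layout: one row per hour with columns
# 'timestamp', 'demand_mw', 'solar_mw', 'wind_mw' (placeholder names).
df = pd.read_csv("caiso_hourly.csv", parse_dates=["timestamp"])

# Net load = electricity demand minus renewable generation.
df["net_load_mw"] = df["demand_mw"] - df["solar_mw"] - df["wind_mw"]

# Keep the seven days around March 31, then average by hour of day within each
# year to smooth over weather and weekday/weekend variation.
day = df["timestamp"].dt.strftime("%m-%d")
window = df[day.between("03-28", "04-03")].copy()
window["year"] = window["timestamp"].dt.year
window["hour"] = window["timestamp"].dt.hour

profiles = window.groupby(["year", "hour"])["net_load_mw"].mean().unstack("year")
print(profiles.round(0))   # one averaged 24-hour net load profile per year
```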

In the 2016 duck season, we saw mid-day net loads at or around predicted levels. Increased solar penetration on both sides of the meter (utility scale and distributed) has been driving net loads down when the sun is up. Fortunately, the ramp from 5-8 pm has not been quite as steep as projected because electricity demand in the evening hours has come in lower than expected. Perhaps this is due to unanticipated demand-side energy efficiency improvements. I could not easily find hourly curtailment data. The data I could find on plant outages indicate that March 2016 saw the highest forced solar plant outages on record, but these outages could be due to factors other than curtailment.

My after-the-fact duck chart suggests that renewables integration challenges are showing up more or less on schedule (although ramping requirements are somewhat less than projected). So far, these challenges are quite manageable without major changes to grid operations. But the duck of the future – especially given California’s new target of 50% renewables by 2030 –  will present a more formidable challenge.

Renewables integration strengthens the case for regional coordination

California is not alone in creating and confronting unprecedented renewable integration complications. Take Hawaii, for example, where a 100% renewables target makes California’s 50% look timid. Our colleagues at University of Hawai’i, Michael Roberts and Mathias Fripp, have been thinking hard about how Hawaii can pull this off at least cost. The charts below illustrate a hypothetical 100%-renewable day on Oahu in April (no more duck when all load is served by renewable energy!):

Source: Fripp (2016)

The broken line in the right graph represents the “traditional”, business-as-usual demand profile. To hit the 100% target, wind and solar generation increases to nearly double the level of the traditional peak. Differences between the timing of renewable energy production and traditional demand are reconciled primarily by EV charging and other demand-side response programs (although batteries and pumped storage also play a role).

When you’re an island in the middle of the ocean, you’re pretty much on your own when it comes to tackling these grid integration challenges. Thus, Hawaii is preparing to demonstrate how significant renewable energy integration can be achieved with demand response, grid management, and storage. In contrast, California has more options to leverage.

Although California fancies itself a different world, it is physically connected to (but not perfectly integrated with) a larger western power system. From an economic perspective, expansion of the energy imbalance market and improved coordination of the western grid look like an obvious and important piece of California’s renewable integration puzzle. A regionally coordinated western grid would integrate mandated renewables across a larger area, thus reducing the likelihood of over-generation. Coordination across balancing areas should also provide increased flexibility.

In the past, economists have documented the efficiency gains of improved regional coordination and bemoaned the inefficiencies of the balkanization that persists.  Looming renewable integration challenges could provide the needed additional impetus for grid integration.  To be sure, there are some important details that need to be better understood. But if done right, a fully coordinated regional grid could help clip the duck’s wings.

 


Is Distributed Generation the Answer to Regulatory Dysfunction?

One delightful aspect of teaching an MBA course in energy and environmental markets is getting together with my former students as they pursue careers in the industries I study.  I learn so much about the latest trends and ideas in these markets, and they frequently challenge the way I have been seeing the world.

This happened recently when I had coffee with a former student whom I will refer to as “Pat”.  Pat has worked for a successful alternative energy company and done well, but s/he is ready to think about new paths.  Like many cleantech mavens, Pat is excited about distributed generation (DG), particularly with improving storage technologies.  Pat explained to me a potential business model s/he has been exploring with rooftop solar photovoltaic (PV) panels and on-site storage.

As I’ve written in a previous blog, I’m skeptical that rooftop solar is the most cost effective way to utilize the fabulous breakthroughs in PV technology.  I proceeded to lay out my argument, addressing each of the claims for distributed generation, even though I know Pat is a regular reader of the Energy Institute blog and had surely heard my views before.

But Pat was a star student and continues to be one of the most insightful people I know in the business.  So I was not surprised, but still unsettled, when Pat put on the table an argument for DG that I hadn’t heard before, or maybe Pat just presented it much more clearly so that I finally actually got it.

Here’s my dramatic (if you are an energy geek) re-creation of what Pat said: “Yes, Severin, in theory grid-scale generation and delivery of renewable electricity is probably more cost-effective. And, yes, there are some fixed costs of distribution systems that utilities are recovering through volumetric charges, which drives up the retail price and gives an inefficient incentive to install DG. And, yes, California’s extreme increasing-block residential price schedules mean many households are paying more than 30 cents per kWh for much of their consumption, way above cost.”

“But,” Pat continued with growing enthusiasm, “California’s investor-owned utilities currently charge average residential rates in the 21 to 24 cent range – more than 50% above the national average – and the utilities themselves are forecasting those numbers will rise in the coming years. [Actually those are average rates among customers who aren’t on the low-income tariff. More on that below. –SB] I don’t know if rates are so high because of utility incompetence, a dysfunctional regulatory process, or some other reason, but it’s not my job to figure it out. In any other industry, if a company’s prices are too high we rely on pressure from competition to rein them in. Why should electricity be any different?”

Pat concluded with, “Severin, ever since I took your class many years ago you’ve been saying that California has high electricity rates in part to pay for the mistakes of the past.  But those ‘mistakes’ keep happening and keep driving up our rates.  At some point, aren’t those ongoing mistakes just part of a broken regulatory process? DG is the competition that will either force repairs in the process or will replace it.”

Pat’s argument isn’t entirely general; there are plenty of states — and even some municipal utilities in California — with rates that rooftop solar can’t touch. And, there’s not much evidence nationally or internationally that competition introduced by deregulating retail electricity markets has significantly lowered rates. Plus, it’s worth remembering that most residential customers don’t have a single-family home with a south-facing roof and no shading to put solar panels on, so most of us have to get all our electricity from the grid.

Nonetheless, Pat raises an important point.  Before proponents of high fixed charges and special fees for solar customers get too far down that road, they need to confront the fact that average residential electricity rates in California (and New York, and some other locations where DG is gaining the most traction) are out of line with the rest of the country.

I’ve been asking around about the high, and rising, average residential rates in California, and been surprised at the lack of clarity about the reasons. This seems like a central question of rooftop solar policy (as opposed to rooftop solar politics). If the rates really reflect high costs of providing electricity, Pat and other DG supporters have a more compelling case that they are providing efficient competition. On the other hand, if the high rates are driven by other regulatory or legislative policy objectives, then we have to recognize that funding them in this way may encourage inefficient DG installation.

Put differently, is DG the answer to regulatory dysfunction, or is it just regulatory arbitrage? By regulatory arbitrage, I mean taking advantage of the structure of pricing or other utility obligations by pursuing strategies that reap private rewards through cost shifts to other ratepayers.

The simplest cause of regulatory arbitrage is the fact that electricity prices are well above the marginal cost of delivering a kilowatt-hour to the customer in California and many other states. In California, this is in part because of the regulator’s longtime resistance to fixed monthly charges, and in part because of the increasing-block price structure that leaves many customers today paying over 30 cents for their incremental kilowatt-hour.

In addition, the many programs that policymakers have decided to finance through electricity charges also invite regulatory arbitrage. For instance, significant parts of electricity bills in California and many other states pay for energy efficiency programs, early investments in renewable technologies, and — especially large in California — reduced electricity rates for low-income customers. Among the three large investor-owned utilities in California about 30% of all residential customers are on low-income rates.  And, of course, for more than a decade, part of electricity rates in California have paid to subsidize rooftop solar, both directly through the California Solar Initiative (from 2007 to 2013) and indirectly through net metering policies.

If all of these programs were eliminated, would average residential rates among California’s IOUs still be well above the national average? Of course, there are other factors that a cost analysis has to account for, such as the mix of generation, the density of residential consumers and the average consumption per customer.

I think that answering this question is critical to making good energy policy in California.  But after asking a number of regulators, utilities and other policy analysts in the state, I have not turned up any studies that put together all the numbers one needs.

That wouldn’t be the complete answer to Pat’s argument. It has to be paired with a credible analysis of the value and costs DG brings to the grid. But next time I see Pat, I’m hoping to have a better response than “good question. I should write a blog about that.”

 

I’m still tweeting energy news and research articles @BorensteinS


Cartels Work Unless They Don’t

I spend a lot of time describing unicorns in my undergraduate classroom. And by unicorns, I mean perfectly competitive markets and their features. If you’re a little rusty on this stuff, it goes like this: no single consumer or firm can affect the market price. This requires perfect information, no externalities, free entry and exit, blah, blah, blah.

Most markets are not perfectly competitive. One reason: there can be huge returns for firms that manage to raise prices above competitive levels. There are several ways to do this, but one of the most popular is to collude with your frenemies in a so-called cartel. Cartels can restrict output, which reduces total supply and leads to higher market prices. Consumers suffer, cartels (and most other producers) make out like bandits!

When everyone in the cartel sticks to the plan, this can work beautifully. So beautifully that in the US we have antitrust laws that prevent firms from colluding and setting prices artificially high (if you are in need of an excellent and entertaining summer read, read this). But on the international stage, one of the most well-known cartels is OPEC. These oil producing nations get together and set production targets that serve their interests (usually higher prices). In order for OPEC to function, its members need to stick to the agreed targets. A problem arises when the members of a cartel cannot agree on targets and instead do what is optimal for each individual country rather than for OPEC as a whole.

And this is what appears to be happening in Qatar right now. Sixteen oil-producing nations (essentially the OPEC nations and Russia), who jointly produce a significant share (yet less than 50%) of global output, are engaged in talks about restricting output in order to prop up prices. Observers are suggesting that no meaningful restrictions will emerge from the talks. The markets agree: oil prices fell on Friday, and early-morning trading in Asia raised fears of a further significant drop once major markets in the Western Hemisphere opened, which is exactly what happened.

What does this mean for the average US consumer? If you are planning a road trip in your RV, which gets a glorious 3 mpg, to the national parks this summer, you should rejoice. The failure of oil producers to collude will lead to lower prices during driving season.

What does this mean for the atmosphere? Despite massive and unprecedented policy efforts to reduce emissions from transportation fuels, this lack of collusion leads to even lower prices and more miles driven. People in the market for a new car are already buying less fuel efficient cars than they would have if prices were high, which is bad news for the environment.

What I am saying may sound crazy on the surface, but if you are the global environment, successful collusion here might be a good thing! In unregulated markets with externalities, prices are too low and production/emissions are too high. Collusion would drive up prices and drive down consumption, which is a net gain for society.

Of course, there will be no domestic tax revenues that can be redistributed – all the revenues will go to a bunch of oil-rich countries. This means no dollars to redistribute or to invest in the development and deployment of more renewable energy in the countries where the majority of consumption takes place. So in a perfect world, where I am the king of carbon, I would want not cartels, but a carbon tax. But since I am missing that title, I am going to stick to Severin’s proposal for a gas price floor domestically. Yes. It’s time for higher gas prices.


Automakers Complain, but CAFE Loopholes Make Standards Easier to Meet

With gasoline prices averaging $2 per gallon, Americans are flocking to gas-guzzling vehicles. Last year was the biggest year ever for the U.S. auto industry with 17.5 million total vehicle sales nationwide. Trucks, SUVs, and crossovers led the charge with a 13% increase compared to 2014.

The one problem with selling all these gas guzzlers is that it makes it harder to meet fuel economy standards. U.S. Corporate Average Fuel Economy (CAFE) standards have been around for a long time, but the new “super-size” version introduced in 2012 mandates a steep climb in fuel economy each year until 2025.

Back in 2012 when the Obama Administration announced the new standards, gasoline prices were $4 per gallon and Americans were buying smaller, more fuel-efficient vehicles.  Sales were increasing rapidly for the Chevrolet Volt, Tesla Model S, and other electric vehicles, and there was great optimism about reducing the carbon-intensity of the U.S. transportation sector.

Fast forward to 2016, and the automakers can’t believe they ever agreed to this. The new CAFE rules are scheduled to be reviewed this summer, and automakers are pushing back hard, seeking adjustments that would weaken the standards to reflect this new reality of cheap gasoline.

In pleading their case, one of the automakers’ favorite approaches is to try to shift the focus to consumers. “One of the areas that needs to be addressed is consumer demand,” recently argued Gloria Bergquist, spokeswoman for the Alliance of Automobile Manufacturers. “Automakers can build models that are extremely fuel-efficient, but they can’t control sales.”

But, of course, automakers can control sales. In the short-run, automakers can adjust prices. And in the long-run, automakers can design new fuel-efficient vehicles that Americans want to buy. Nobody expected this to happen by itself. The whole rationale behind CAFE is that there are externalities associated with gasoline consumption. If we thought consumers were going to perfectly internalize these externalities, then we wouldn’t need CAFE in the first place.

What Ms. Bergquist probably meant to say instead is that $2 gasoline makes it harder to get consumers to switch. This is certainly true. Cheap gasoline provides huge benefits to U.S. consumers, but it also leads drivers to prefer larger, more powerful vehicles.

Fortunately for the automakers – though not for the environment – there is a built-in mechanism that relaxes the standard when consumers choose larger vehicles. The new standards are “footprint” based so that the fuel economy target for each vehicle depends on its overall size.  Larger vehicles have less stringent targets.

The standards are also more generous for trucks than cars. Most of the best-selling vehicles are “trucks” from a CAFE perspective including, of course, pickup trucks, but also SUVs, crossovers, and minivans. And as Americans switch from “cars” to “trucks” this makes it easier for automakers to comply with CAFE.

The real but more subtle challenge for manufacturers is that cheap gasoline makes consumers prefer more powerful engines (for a given footprint) and makes them less willing to buy EVs and hybrids. The automakers can adjust their prices to sell lower-horsepower engines and more EVs and hybrids, but this reduces profits.

There is one more loophole, however, to help soften the blow. And it is a big one. My colleague Jim Sallee and former student Soren Anderson worked on this topic several years ago (here), but until I looked at it again, I had no idea how large this loophole was, nor did I know that it would last so long after being introduced in 1993.

I’m talking about flex-fuel vehicles. Over two million flex-fuel vehicles are sold each year in the United States. These vehicles can run on E85 (a blend of 85% ethanol and 15% gasoline), but in practice, most end up running on gasoline and many sales of flex-fuel vehicles occur in parts of the country where there is limited E85 availability.

Under CAFE, however, these vehicles have a near-magical property. They are assumed to be operated 50% using E85 and 50% using gasoline — a very optimistic assumption. Even more optimistically, each gallon of E85 is assumed to have the carbon content of only 0.15 gallons of gasoline. That is, the ethanol component of E85 is assumed to be zero-carbon. It is notoriously difficult to quantify the lifetime carbon impacts of biofuels, but most studies find that, at best, ethanol is only marginally less carbon-intensive than gasoline. As a result of these overly generous assumptions, flex-fuel vehicles like the GMC Terrain end up being treated by CAFE as if they were extremely fuel-efficient.
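Here is a rough sketch of what that accounting implies for a single flex-fuel model, using the 50/50 operating assumption and the 0.15-gallon treatment of E85 described above. The mpg inputs are made-up illustrative values, and the real regulatory calculation uses test-cycle figures and further adjustments.

```python
def cafe_rating_flex_fuel(mpg_gasoline: float, mpg_e85: float,
                          e85_share: float = 0.5,
                          e85_gallon_equivalent: float = 0.15) -> float:
    """Harmonic-mean fuel economy with each gallon of E85 counted as only
    0.15 gallons of fuel, per the flex-fuel treatment described in the post."""
    mpg_e85_credited = mpg_e85 / e85_gallon_equivalent
    return 1.0 / ((1 - e85_share) / mpg_gasoline + e85_share / mpg_e85_credited)

# Illustrative vehicle: 25 mpg on gasoline, 18 mpg on E85 (made-up numbers).
print(f"{cafe_rating_flex_fuel(25, 18):.1f} mpg for CAFE purposes")   # ~41 mpg
```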

Not surprisingly, manufacturers have been producing flex-fuel vehicles like crazy.  There are today more than 100 different models of flex-fuel vehicles for sale in the United States (who knew?).  And while you used to always see a “flex fuel” sticker on the back, many flex-fuel vehicles today aren’t even identified. You might be driving one and not even know it.

Thankfully, the flex-fuel loophole ended with model year 2015. These credits were so lucrative, however, that many manufacturers are now sitting on large stores of surplus credits. Under CAFE rules these credits can be “banked” until 2021, ensuring that the legacy of this loophole will live on, allowing manufacturers to produce lower-MPG vehicles for years to come.

So let’s not feel too sorry for the automakers. Yes, the CAFE screws are beginning to tighten, but the automakers’ situation is not nearly as dire as they would have us believe.


Why Does the Media Ignore Grid-Scale Solar?

Last month, I went to a talk by someone I was surprised I hadn’t heard of before. Yosef Abramowitz is an entrepreneur whose company, Gigawatt Global, just constructed and commissioned the largest solar power plant in East Africa. The 8.5 MW solar PV plant is 60 kilometers east of Kigali, Rwanda. It came online in February 2015 in record time – just one year after the power purchase agreement was finalized – and on its $24 million budget.

Gigawatt Global’s Rwanda Project

Yosef is a fascinating, driven entrepreneur. But, through my usual perusing of the energy trade press, I hadn’t come across him. I had heard vague references to a solar plant in East Africa from conversations, but quick Internet searches hadn’t turned anything up.

I just did a slightly more systematic search and confirmed that Abramowitz’s story hasn’t been widely covered.

For example, if I search on “Rwanda solar” on Greentech Media – my go-to site for industry news – I turn up three stories about off-grid solar. One focuses on Ignite Power and two on Off-Grid Electric (here and here). Greentech Media (GTM to insiders) has only an oblique mention of Gigawatt Global’s project with a link to a story in the Guardian.

When I searched on “Rwanda solar” at the Wall Street Journal I was told, “Sorry, no results found.” The New York Times has two hits since Gigawatt Global’s installation came online, but one describes solar dryers and pumps for farmers and the other discusses solar lamps. No mention of Gigawatt Global.

Studying by a solar lamp

In general, my read of the energy press is that it’s disproportionately focused on the off-grid sector in the developing world. Why aren’t projects like Gigawatt Global’s getting more coverage?

Here are some possible explanations:

  1. It’s only 8.5 MW. True, this is a pretty small plant relative to other grid-scale solar projects. For example, South Africa has a 175 MW plant, and the US has 17 grid-scale solar PV plants over 100 MW.

But, I don’t think that explanation works for two reasons:

  • Gigawatt Global is delivering orders of magnitude more solar power compared to the off-grid solar companies. For example, Ignite Power, which netted an entire article from GTM, provided 1,000 households with solar systems in 2015. I could not find any discussion of how big these systems are, but they’re described as powering “some lights, a radio and a television, and cell phones.” Generously, let’s assume this is a 100W system. This means that Ignite has installed 0.1 MW, roughly 1/85th of Gigawatt Global’s plant.

Powerhive, a company that installs solar mini-grid systems in rural Kenya, and has been in three GTM articles in the past three months, currently has installations in four villages amounting to 80 kW. That’s 1/100th the size of Gigawatt Global, and Powerhive has been around for several years.

These comparisons are based on capacity, not energy. I’m guessing that central stations deliver more energy per watt, since they don’t rely on individuals keeping the panels in good repair or putting them out when the sun is shining. A former Berkeley PhD student has found that some solar home systems aren’t outside in the middle of the day because farmers don’t want them stolen while they’re in the fields.

Sure, these companies are projected to grow, but Gigawatt Global should as well.

  • 8.5 MW is a huge plant for Rwanda. Total installed generating capacity in Rwanda was less than 150 MW in 2015, and Gigawatt Global’s installation increased it by more than five percent. This is like increasing the US’s solar capacity by a factor of 13.
  2. It’s in Rwanda. This might explain why the Wall Street Journal isn’t covering the sector in general, but the other outlets reported on the off-grid sectors there.
  3. It’s not a Silicon Valley company. Abramowitz is Israeli and his company is based in the Netherlands. It may simply be easier for reporters to bump into people who work at local companies, so this might explain a US-centric focus. If this is true, grid-scale solar in Sub-Saharan Africa will get more attention as US-based companies expand in the region.
  4. It’s grid-scale solar, not distributed. I think this is the most likely answer, but it’s useful to reflect on why this preference might exist. I can think of two reasons:
  • It’s more exciting to report on a new kind of electricity system.

I could have asked why Kenya’s proposed Lamu coal power plant, which is poised to nearly double the country’s existing generating capacity, hasn’t been covered. But fossil fuel plants have been built for decades, and, if we’re serious about addressing climate change, we can’t continue building them in the same way.

But, the leap from a fossil fuel driven grid to off-grid solar may be too far.

Projections suggest that only 10 percent of the growth in residential electricity consumption in Sub-Saharan Africa over the coming decades will be driven by off-grid consumers. The majority of new demand will come from existing users in grid-connected areas, as well as from migration to those areas and grid extensions. If we bring in commercial and industrial demand, the grid-connected share goes up considerably.

  • The poor, rural consumers targeted by off-grid solutions are seen as more deserving than the beneficiaries of the grid.

This is misleading for several reasons, which I’ve written about before (here, here and here). For one, we are likely wrong if we think that the only way to use electricity to help the people who currently don’t have it in their homes is by putting a solar panel on their roof. The rural poor need a lot of things, like good jobs, good health care and good education for their kids. Electricity is an important input into many of these things and doesn’t necessarily have to be at someone’s home to provide those benefits. As I argue here, things like solar lanterns and solar home systems don’t currently provide even the services households seem to want, let alone support a robust commercial and industrial sector.

There are certainly examples of the press covering the benefits of the grid. For example, The Economist had a recent piece that was largely about grid electricity. But, the coverage is disproportionately of the off-grid sector.

I’m not against solar home systems or solar lanterns. My concern is that those technologies are getting a disproportionate share of the media coverage relative to the potential benefits they can provide. If policymakers follow the media’s lead and emphasize off-grid solutions, we’re overlooking much higher impact on-grid solutions. And, if the ambitious entrepreneurs and funding follow the media, we’re ignoring the most important part of the picture.

To my mind, this is a huge omission. I hope we see more coverage of companies working on grid-scale solutions in the months to come.


Canada’s Got a Good Thing Going

It’s tax season and this makes many Americans pretty grumpy. According to a recent poll/parody, 27% of those surveyed indicate they would rather get an IRS tattoo than pay their taxes.

Given the deep-seated ire that taxation can inspire in U.S. taxpayers, it’s not altogether surprising that calls for an economy-wide carbon tax do not find broad support.

Things are different in my native land, Oh Canada, where tax is not a four-letter word. Public support for judicious taxation and public spending is, in my mind, among the shared values that define the Canadian identity (up there with health care, hockey, and Neil Young). Recent surveys suggest Canadian support for taxation extends to carbon. According to this comparative study, a majority of Canadians support a carbon tax. Responding to the same survey, less than a quarter of Americans share this view.

Given this cultural bent, it’s not surprising that the highest carbon price in North America is found in Canada.  In 2008, the Canadian province of British Columbia implemented an economy-wide, revenue-neutral carbon tax.  A tax of $30/ton of CO2e  (or approximately $23 USD) applies to all fossil fuels consumed in the province. Carbon tax revenues, which account for approximately 6 percent of provincial tax revenues, offset other taxes (e.g., income and corporate taxes) or are directly transferred to households.

Sources of British Columbia Tax Revenue (Source)
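To get a feel for what $30 per tonne means at the pump, here is a quick conversion. The emissions factor for gasoline is an approximate, commonly cited value I am assuming, not a number from the post.

```python
# Rough conversion of a $30/tonne CO2e carbon tax into a per-litre gasoline charge.
CARBON_TAX_CAD_PER_TONNE = 30.0
KG_CO2E_PER_LITRE_GASOLINE = 2.3     # approximate emissions factor (assumption)

tax_per_litre = CARBON_TAX_CAD_PER_TONNE * KG_CO2E_PER_LITRE_GASOLINE / 1000
print(f"~{tax_per_litre * 100:.1f} cents CAD per litre")   # ~6.9 cents per litre
```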

This carbon tax has won international acclaim and support at home. Last year, even the Business Council of British Columbia recommended keeping the BC carbon tax in place. A recent poll shows six in 10 support their home-grown BC carbon tax.

The new Canadian Prime Minister is hoping to leverage this important carbon tax foothold. During election season, many swooned over Justin Trudeau’s legendary perfect hair. His hair is perfect. But what made my heart skip a beat was his election promise to pursue a national carbon price that would apply across the country.

Earlier this month, Trudeau convened a ministers’ meeting to accelerate action on this important promise. But alas, even in my tax-tolerant Canada, a nation-wide carbon price is meeting with formidable resistance.  Although all parties at the meeting ultimately signed on to endorse “some form” of carbon pricing, this compromise language hinges on a very loose interpretation of carbon pricing.

First Ministers Meeting in Vancouver, B.C., Thursday, March 3, 2016. SOURCE

Perhaps the most creative interpretation comes from Saskatchewan Premier Brad Wall. Pointing to the CCS project in his province that captures carbon dioxide from a coal-fired power plant and sells it to oil companies for use in extracting crude, he maintains that this could fall under the umbrella of carbon pricing, very broadly defined.  Others point to government regulations mandating renewable energy and clean technology development, noting that these programs put a hidden price on carbon, paid for by industry, taxpayers, and electricity consumers.

We have seen similar debates play out here in the U.S. where renewable energy mandates, tax incentives, and clean technology programs are the preferred policy response. Across the U.S., a patchwork of these prescriptive policies have been implemented. The good news is that many of these programs are delivering real emissions reductions.  The bad news is that many of these emissions reductions come at higher-than-necessary cost.

Pursuing a GHG emissions reduction target without a carbon price amounts to tackling climate change with one (invisible) hand tied behind your back.  Mandating levels of investment in specific technologies or mitigation options – versus using a strong carbon price signal to coordinate actions taken by households and firms –  can significantly increase the cost of meeting emissions reduction targets.  Here in California, we see significant differences in marginal abatement costs across disconnected climate change policies and programs. This tells us that we could be achieving the same carbon emissions reductions at less cost if we relied more heavily on harmonized market-based mechanisms.

Another key cost consideration for any Canadians preparing to jump the carbon tax ship is that a carbon tax or cap-and-trade program – unlike mandates, subsidies, or tax breaks – generates government revenues. These revenues can be used to finance reductions in the marginal rates of existing distortionary taxes (see British Columbia for proof of concept). Alternatively, tax revenues can be used to fund other climate policy initiatives (such as investments in clean technology development) that can expand future opportunities for climate change mitigation while meeting other social objectives.

The upshot is not that carbon pricing is the silver bullet. Multiple market failures and distortions contribute to the global climate change problem. Complementary measures such as clean technology subsidies and mandates have a role to play in moving the climate change mitigation ball forward. But carbon pricing is the essential catalyst for coordinating today’s most cost-effective abatement and supporting tomorrow’s most promising abatement options.

Canada has a good thing going in British Columbia. Some other Canadian provinces are preparing to follow suit. With global enthusiasm for action on climate change picking up post-Paris, the value of demonstrating well-designed climate change policy is high. Here’s to hoping that Canada’s good thing keeps on going.


Driving Taxes for the 21st Century

Both Max and Lucas have recently written on this blog about the need to price gasoline appropriately. I agree with them…mostly.  I mean, how could I disagree with them? I’m the one driving the gray Prius with the license plate “TAX GAS”.  But, as I and the others who have advocated for higher gas (and diesel) taxes have recognized all along, it is an imperfect way to price the externalities of driving.  And it is likely to get worse.

A good idea, but not the whole solution

For more than a decade, the students in my MBA course on Energy and Environmental Markets have listed the externalities from driving and then discussed how well taxing gasoline prices those externalities. The list usually looks something like this:

  1. Greenhouse gas emissions
  2. NOx, particulates, and other local pollution emissions
  3. Energy security
  4. Congestion
  5. Accidents

The students generally get the emissions externalities right away, and the energy security externality pretty quickly. Congestion externalities — my decision to get on the freeway slows down all the other cars on the road — sometimes take a bit longer. Accident externalities — my decision to drive increases the chances that another car will hit or be hit by me — are almost always the last to be pointed out. Most students are surprised to learn that congestion and accidents are the largest externalities from driving.

Slowing down other cars is often the largest negative externality from driving (limitstogrowth.org)

Then we get to the livelier part of the discussion: is taxing gas an effective way to have drivers internalize the externalities that they create?

Before I discuss the answers, let’s recognize that no public policy perfectly targets the problem it is meant to address. Every tax break is utilized by someone it was not intended to benefit and goes unnoticed by someone whose behavior it would have changed in exactly the hoped-for way.  Subsidizing the purchase of an energy efficient refrigerator sometimes causes a household to go from having one refrigerator to two, keeping the old one running in the garage or basement.  Fuel economy standards get people to buy more efficient cars, but don’t encourage them to drive any less.

Still, even if perfection is unreachable, we need to understand the policy imperfections and work to improve them.

The discussion of gas taxes and greenhouse gas emissions is always very satisfying, because it turns out that the correlation between burning gasoline and emitting GHGs is nearly perfect. Every gallon results in about 20 pounds of CO2 emissions.  So, if you want to put a price on GHGs, taxing gasoline is pretty much the same thing when it comes to emissions from gasoline-powered cars.  One smiley face for gas taxes. 
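Because the correlation is so tight, a gas tax can be read directly as a carbon price. Here is the conversion using the 20-pounds-per-gallon figure above; the $50-per-ton carbon price is just an illustrative choice.

```python
# Convert a carbon price into an equivalent gasoline tax, using the post's
# figure of about 20 pounds of CO2 per gallon of gasoline burned.
LBS_CO2_PER_GALLON = 20.0
LBS_PER_METRIC_TON = 2204.6
CARBON_PRICE_PER_TON = 50.0        # illustrative carbon price, $/metric ton CO2

tax_per_gallon = CARBON_PRICE_PER_TON * LBS_CO2_PER_GALLON / LBS_PER_METRIC_TON
print(f"${tax_per_gallon:.2f} per gallon")   # roughly $0.45 per gallon at $50/ton
```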

Much of the problem comes from a small number of old smokies (Oak Ridge National Laboratory)

But the students also start to see red flags as they apply that logic to the other categories. The high correlation with GHG emissions evaporates when it comes to NOx and other local pollutants. These emissions, which contribute to ozone and other health-damaging pollutants, have a very low correlation with the amount of gasoline the car uses. Old cars are massive polluters compared to new cars, thanks to great improvements in pollution control technology. And even within the same year and model, there is huge variation in the quantity of these emissions, as our MIT colleague Chris Knittel has shown in work with Ryan Sandler. Taxing gasoline is not an effective way to go after the small share of cars that put out most of the local pollution. A frowny face for gas taxes.

Energy security is always a bit hard to explain, but it generally means some combination of greater risk to our economy when we import a lot of oil, and greater security risk when oil sales enrich the autocratic leaders of oil exporting countries.  As US oil production rises and world prices fall, it’s less clear that this is a big externality, but it is clearly still highly correlated with the amount of gasoline one uses.  Another smiley face, though probably a less important one.

By now the students see where this is going, despite the fact that I have told them of my license plate at the beginning. Congestion is also likely to be poorly correlated with the amount of gasoline a car burns. Some people drive on crowded freeways at rush hour, while others drive on uncongested roads or at off-peak times. I’m not aware of any good studies on the variation in congestion externalities across drivers, though someone at Waze/Google should be able to tell us a lot on the subject. That one almost certainly gets a big frowny face.

Accidents are more complicated because some of this externality is internalized through your insurance rates. But work that Max has done with Michael Anderson points out that insurance does a poor job of internalizing the accident-risk externality, because of low insurance requirements and limited cases of liability.  Max and Michael find that a gas tax does a pretty good job of representing the fact that heavier cars are more likely to hurt other people, but it still doesn’t capture the variation in where and when people drive, or much of the variation in how carefully they drive.  Hard to know for sure, but gas taxes probably aren’t great.

Perhaps the most interesting part of this debate is not how well taxing gas captures externalities today, but how that will change in the next decade or two. Gas taxes will almost certainly remain an excellent way to price greenhouse gas emissions and, to the extent they are relevant, energy security externalities.

Technology, however, is increasingly giving us much better ways to address the other externalities, though not without their own issues. Onboard computers will be able to inexpensively monitor tailpipe emissions so we can know exactly how much pollution a car has put out in the last year (though tampering with the equipment may still be a problem – see the VW debacle).  GPS will be able to report to that computer how many of the miles were driven on roads that Google was coloring yellow or red at the time and, with some sophisticated algorithms, even calculate how many delay minutes you imposed on the drivers around you.

GPS could easily tell your onboard computer when you’ve been driving on congested roads (LA freeways at 7:13am today)

At the cost of a modicum (OK, a whole lot) of privacy, we could price pollution and congestion externalities to an extent that perhaps only an economist could love.  The difficult conversations we have been hearing lately about the trade-off between privacy and social responsibility will come to vehicle transportation.

And the possibilities for pricing accident-risk externalities are even more exciting/disturbing.  That onboard computer will know how close you came to hitting the other car or tree or pedestrian, as well as every time you accelerated too quickly or braked too hard.
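Here is a sketch of what a measured-externality driving bill might look like once an onboard computer can report these quantities. Every price and every recorded quantity below is hypothetical; the point is only the structure of charging for each externality separately rather than through a single fuel tax.

```python
# Hypothetical monthly "driving externality bill". All prices and all recorded
# quantities are made up for illustration.
PRICES = {
    "co2_per_lb": 0.023,            # roughly a $50/ton carbon price
    "nox_per_gram": 0.02,           # local pollution; would vary by region and season
    "delay_minute_imposed": 0.10,   # congestion: minutes of delay caused to other drivers
    "risky_event": 0.50,            # hard braking / near-miss proxy for accident risk
}

month = {                           # what the onboard computer might report
    "co2_lbs": 600,                 # roughly 30 gallons of gasoline burned
    "nox_grams": 150,
    "delay_minutes_imposed": 420,
    "risky_events": 12,
}

bill = (month["co2_lbs"] * PRICES["co2_per_lb"]
        + month["nox_grams"] * PRICES["nox_per_gram"]
        + month["delay_minutes_imposed"] * PRICES["delay_minute_imposed"]
        + month["risky_events"] * PRICES["risky_event"])
print(f"Monthly externality bill: ${bill:.2f}")   # $64.80 with these made-up numbers
```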

By now, you may be thinking, “wait, when onboard computers are monitoring that much information, they will also be driving the car.” Maybe so. But given the blowback I’ve heard from drivers who seem to think that the right to drive old polluters is also protected by the Second Amendment, I don’t see everyone giving up their vehicular autonomy any time soon.

And sometime in the next decade we will have to face up to the fact that electric-, hydrogen- and biofuel-powered vehicles have the same congestion and accident effects as the ones powered by hydrocarbons.  As they become a larger share of the fleet, gas taxes will become even less effective for these major externalities, though still a fine way to capture GHG emissions.

In 20 years, if vanity license plates aren’t obsolete, I will have to get a new one: MEASURE AND TAX ALL DRIVING EXTERNALITIES.  Hmmmm. That may have to go on my LED bumper sticker.

ADDENDUM: As commenters have pointed out, wear and tear on roads is also an externality, because we don't pay for the damage our vehicles do to the roads.  That is clearly correct.  There seems to be some disagreement about how much road damage increases with weight (though it is recognized to be more than proportional), and about how much damage occurs due to weather apart from vehicle use.


Our Newest Energy Consumer

We recently added a new member to our family. Since I have a tendency to look at the world through an energy lens, I've been wondering: what is the likely energy and climate change impact of our family expanding the global population by one? And more broadly, what is the current thinking about how global population trends will affect greenhouse gas emissions?

Within our household the impact is apparent. We’ve been running the heater more than usual to make sure the baby isn’t cold at night. We’re doing more laundry, thus using more natural gas and electricity. We’re also consuming more—diapers and baby toys. In other words, we’re directly and indirectly, through our consumption of goods and services, using more electricity, oil and gas. The little squirt is racking up a significant greenhouse gas deficit already!

Now let's assume our child is going to be an average American. How much greenhouse gas does the average American account for? Yikes! According to World Bank statistics, US carbon dioxide emissions were 17.0 metric tons per capita in 2011. That's more than three times the world average!

World Bank Group. 2016. Global Monitoring Report 2015/2016: Development Goals in an Era of Demographic Change. Washington, DC: World Bank. DOI: 10.1596/978-1-4648-0669-8. License: Creative Commons Attribution CC BY 3.0 IGO

Ah, but our daughter is not an average American, she's a Californian. According to the US Energy Information Administration's (EIA) latest state-level analysis, California's per capita greenhouse gas emissions are 45% below the national average. If she had been born in Texas, where I grew up, the statistics suggest her contribution would have been 45% above the national average.
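
For what it's worth, the back-of-the-envelope arithmetic behind those comparisons looks like this. Mixing the World Bank national figure with the EIA's state-level adjustments is purely illustrative, not an official statistic.

```python
# Back-of-the-envelope arithmetic for the figures cited above. 17.0 t/person
# is the World Bank 2011 US number; the +/-45% adjustments follow the EIA
# state comparison mentioned in the text. Combining the two sources is only
# illustrative.
us_per_capita = 17.0                       # metric tons CO2 per person, 2011

california = us_per_capita * (1 - 0.45)    # roughly 9.4 tons per person
texas      = us_per_capita * (1 + 0.45)    # roughly 24.7 tons per person

print(f"California: {california:.1f} t/person, Texas: {texas:.1f} t/person")
```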

Is it appropriate to look to averages like these to determine the environmental impact of expanding the population?

Dr. Paul Ehrlich thought so in the late 1960s, when his book The Population Bomb popularized the idea that population growth would cause widespread environmental damage. His analysis multiplied the population by the per capita environmental impact to predict total environmental damage.

The Population Bomb

If you take this analysis at face value, policymakers wanting to address climate change should not only promote policies that reduce the amount of greenhouse gas generated by energy, but also push policies that reduce population and economic activity. That sort of narrow logic, however, ignores all the other ways in which growing populations and economies improve human welfare, and has, fortunately, fallen out of favor. (For an excellent history of the debate between Paul Ehrlich and his critics check out Paul Sabin’s 2014 book The Bet.)

In the early 1990s, an update of this analysis by Dr. John P. Holdren, President Obama's current Director of the White House Office of Science and Technology Policy, used a similar, simple model to conclude that, globally, population growth from 1850 to 1990 was responsible for 52% of energy growth, with the remainder attributed to growth in per capita energy use.
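
The logic behind both the Ehrlich-style multiplication and the Holdren-style split is easy to sketch: total energy use is population times per-capita use, so the (log) growth in the total can be divided between the two factors. The numbers below are made up for illustration and are not Holdren's data.

```python
import math

# Total energy use E = population P x per-capita use e, so growth in E can be
# split into a population contribution and a per-capita contribution by
# comparing log growth rates. The numbers below are made up, NOT Holdren's data.

P0, P1 = 1.2e9, 5.3e9   # hypothetical start/end world population
e0, e1 = 20.0, 70.0     # hypothetical per-capita energy use (GJ per person-year)

E0, E1 = P0 * e0, P1 * e1

pop_share    = math.log(P1 / P0) / math.log(E1 / E0)
percap_share = math.log(e1 / e0) / math.log(E1 / E0)

print(f"Population contribution:  {pop_share:.0%}")    # ~54% with these numbers
print(f"Per-capita contribution:  {percap_share:.0%}")  # ~46%
```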

More recently several studies (for example, here and here) have taken a fresh look at relationships between greenhouse gas emissions and population. The papers try to model relationships between population growth, economic growth, aging, urbanization and other demographic factors.

As far as I can tell, unraveling what’s causing what among all these factors is extremely difficult. In some cases these papers imply causal relationships, but I’m skeptical that we really understand these interactions yet. I hope to see more vigorous research in this subject area because I believe that policymakers addressing climate change should try to understand demographic trends.

Two types of trends deserve special attention. First, policymakers should consider overall projections of population growth by region to help set energy priorities. Second, policymakers should look beyond the headline numbers and consider how the age profiles of populations are changing in different ways in different regions.

First, the overall projections. The United Nations Population Division develops a set of widely used population projections. A supplemental probabilistic analysis published in Science projects that global population will grow from 7.2 billion people in 2014 to between 9 billion and 13 billion in 2100, with a 95% probability.

The difference in projections between continents is especially remarkable. Asia, the most populous continent, could see its population peak mid-century, but Africa's population is projected to triple or even quintuple. So, while today's per capita emissions in Africa are lower than anywhere else in the world, aggregate emissions from Africa could grow dramatically over the century, even more so if per capita emissions converge toward those of higher-income countries.

SOURCE: Gerland et al. (2014), "World population stabilization unlikely this century," Science 346(6206):234-237.

One takeaway is that policies and technologies that are effective in Africa will have a tremendous impact over the course of the century. Catherine, for one, is exploring important issues related to energy use in Africa (here, here and here). Also, as Lucas explored last week, getting energy prices right is important, especially before countries get too far down the path of investing in inefficient automobiles and other capital stock.

Second, policymakers should consider how the characteristics of the global population are changing, and how these characteristics vary between countries. The World Bank tackled these trends in its latest Global Monitoring Report. The report describes how children have represented a shrinking share of the population since the late 1960s and working age adults’ share of the population peaked in 2012. Adults aged 65+, on the other hand, represent a growing share.

However, Africa diverges from the overall trend. Children and working age adults still represent a growing share of the population. Africa may eventually converge towards global aging trends, but it isn’t there yet.

In higher income countries with aging populations, policymakers will need to pay more attention to 65+ energy consumers and how they may differ from the average consumer. For example, income-tested subsidy programs that disregard overall wealth tend to enroll disproportionate numbers of older adults. Some of these programs encourage inefficient energy use by lowering energy prices rather than providing transfers that cover a basic level of energy use. Programs aimed at the poor should be better targeted to those they're intended to help, and should also weigh any negative impacts on the environment. Improving the energy efficiency of low-income senior housing programs could also be an important long-term use of resources.

Also, energy use within sectors like healthcare could become more significant and is ripe for technological innovation that focuses on energy conservation. Policy and technology should turn attention to these kinds of problems and opportunities.

On the flip side, in lower income countries, especially those in Africa, policymakers should keep in mind that populations are younger and will remain that way for quite some time. Prioritizing access to the latest energy innovations for young people there will have long-lasting effects. For example, energy efficiency within rapidly expanding mobile phone networks will be important.

Of course our new daughter will be hearing a lot about the importance of being a thoughtful energy user. We’ll also have to get her some carbon offsets for her first birthday to make her feel better.
