Carbon dioxide has received the bulk of policymakers’ attention as the villain of climate change. Now its henchman, methane, is facing scrutiny. Methane is an attractive target. It is much nastier than carbon dioxide in the atmosphere: over a 100-year horizon, a kilogram of methane has 28 times the global warming impact of a kilogram of carbon dioxide.
Unfortunately, designing practical policies to cut methane emissions is tough. Unlike carbon dioxide, which can be pinned on burning fossil fuels in power plants and vehicles, methane comes from millions of diverse sources: enteric fermentation (the technical term for cow burps), manure (cows again), landfills, and water treatment plants.
Nonetheless, the Environmental Protection Agency (EPA) is going ahead with regulations and has announced plans to go after the largest source of methane emissions—the oil and gas industry. In 2013, methane leaking from natural gas systems was about 2.8% of total energy-related greenhouse gas emissions. The prevalence of leaks may mean that natural gas generation is worse than coal-powered generation for climate change. Meredith explored this in a prior blog.
To me it’s a big puzzle why there’s any methane leaking. Why is the oil and gas industry allowing one of its major products, natural gas, to float away into the atmosphere?
One explanation is that oil and gas producers plug some leaks but not all, because repairs are expensive. At some point the cost of a repair exceeds the market value of the gas saved, so for private industry it is not cost justified. Policy could push leak rates lower by making producers face the full social cost of leaks, including the climate change impacts.
However, the part of the natural gas system that most worries me is the transportation network.
For the most part, the owners and operators of the transportation networks don’t lose money when gas leaks from their infrastructure, and they don’t benefit when they stop leaks. If the amount of gas delivered by a pipeline is less than the gas entering the pipeline, then the shipper, in the case of interstate pipelines, or the end-use customer, in the case of local distribution companies, picks up the tab.
This occurs due to cost pass-through mechanisms. The rates charged by pipelines and distribution companies explicitly assume that some gas will be lost. If leaks increase or decrease, rates are adjusted so that shippers or customers continue to bear the cost.
This is a common arrangement in the world of utility regulation. Retail electric and gas utilities have fuel adjustment clauses that pass through changing fuel costs and decoupling mechanisms that pass through capital costs.
The Federal Energy Regulatory Commission (FERC), which sets rates for the country’s interstate natural gas pipelines, launched a new docket last November. FERC proposes to allow pipelines to recover, outside of the normal rate-setting process, capital expenditures made to enhance reliability, improve safety, and meet environmental objectives.
In January and February, FERC heard from interested parties. The pipeline owners love the idea of being able to collect the cost of repairs from customers. What utility wouldn’t? The environmental groups want leaks reduced, but fret that the utilities will favor expensive capital fixes over low-cost operational solutions. The shippers are not happy at all. They doubt the investments will be cost-effective and fear pipelines will spend money with abandon.
There’s no obvious solution. The principal-agent problem persists with or without FERC’s proposed policy. A pipeline’s interests don’t align with its shippers’.
What I find most jarring, however, is the lack of good empirical leak data. Regulators are developing policy in a data vacuum.
It turns out the EPA depends on a 1996 study that is based on a very small number of leak measurements. Using the study, the EPA calculates “per mile” emissions factors for cast iron pipes, unprotected steel pipes, plastic pipes, etc. Then the EPA estimates total US emissions by multiplying the factors by the miles of each pipe type in service across the country. This recent report from the EPA’s Office of Inspector General provides a critique of the EPA’s emissions factors.
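To make the method concrete, here is a minimal sketch of the emissions-factor arithmetic described above. The per-mile factors and the mileage figures are placeholder values for illustration only, not the EPA’s actual numbers:

```python
# Sketch of the EPA's emissions-factor approach: per-mile leak factors
# by pipe type, multiplied by miles of each type in service nationwide.
# All numbers below are hypothetical placeholders, not EPA figures.

# hypothetical emissions factors: Mcf of methane leaked per mile per year
EMISSION_FACTORS = {
    "cast_iron": 239.0,
    "unprotected_steel": 110.0,
    "protected_steel": 3.1,
    "plastic": 10.0,
}

# hypothetical miles of each pipe type in service across the country
MILES_IN_SERVICE = {
    "cast_iron": 30_000,
    "unprotected_steel": 60_000,
    "protected_steel": 480_000,
    "plastic": 650_000,
}

# total emissions = sum over pipe types of (factor x miles)
total_mcf = sum(
    EMISSION_FACTORS[pipe] * MILES_IN_SERVICE[pipe]
    for pipe in EMISSION_FACTORS
)
print(f"Estimated national emissions: {total_mcf:,.0f} Mcf/year")
```

The weakness the Inspector General flags is plain in the sketch: every national estimate is only as good as those per-mile factors, and they rest on a handful of 1996 measurements.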
The 1996 study may have been the best available in the past, but times have changed.
The rapidly falling cost of networked sensors and cloud computing is enabling real-time measurement that was cost prohibitive in the past. This trend is called the “Internet of Things” or, in the industrial context, “Industry 4.0.” It is now feasible to monitor natural gas pipelines and compressors at many locations in real time.
The value of lost gas is substantial. The DOE estimates that 110 Bcf of natural gas is lost each year from transmission infrastructure alone. That equates to over $300 million per year at current natural gas futures prices. Applying a social cost of carbon of $37 per metric ton of carbon dioxide, the climate cost exceeds $2 billion per year. Investments in sensors are easy to justify with so much value at stake.
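The back-of-envelope arithmetic behind those figures can be reproduced in a few lines. The 110 Bcf loss, the 28x warming potential, and the $37/ton social cost of carbon come from the post; the gas price, heating value, and methane density are my own assumed round numbers:

```python
# Back-of-envelope value of gas lost from transmission infrastructure.
# From the post: 110 Bcf/year lost (DOE), 100-year GWP of 28, SCC of $37/ton.
# Assumptions (not from the post): gas price ~$2.80/MMBtu, ~1.037 MMBtu
# per Mcf of pipeline gas, methane density ~0.0192 kg per cubic foot.

LOST_BCF_PER_YEAR = 110          # DOE estimate cited in the post
GAS_PRICE_USD_PER_MMBTU = 2.80   # assumed futures price
MMBTU_PER_MCF = 1.037            # assumed heating value of pipeline gas
CH4_KG_PER_CF = 0.0192           # assumed density at standard conditions
GWP_100YR = 28                   # 100-year global warming potential
SCC_USD_PER_TONNE = 37           # social cost of carbon, $/metric ton CO2e

lost_cf = LOST_BCF_PER_YEAR * 1e9      # cubic feet lost per year
lost_mcf = lost_cf / 1e3               # thousand cubic feet lost per year

# market value of the lost gas
market_value = lost_mcf * MMBTU_PER_MCF * GAS_PRICE_USD_PER_MMBTU

# climate cost: mass of methane, converted to CO2-equivalent tonnes
co2e_tonnes = lost_cf * CH4_KG_PER_CF * GWP_100YR / 1e3
climate_cost = co2e_tonnes * SCC_USD_PER_TONNE

print(f"Market value: ${market_value / 1e6:.0f} million/year")
print(f"Climate cost: ${climate_cost / 1e9:.1f} billion/year")
```

Under these assumptions the market value comes out a bit above $300 million per year and the climate cost a bit above $2 billion, consistent with the figures in the text. Note the seven-fold gap between the two: it is why private incentives alone won’t plug the leaks.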
The Environmental Defense Fund and Google have launched an initiative that demonstrates one new approach to leak monitoring. In city after city they are conducting drive-by leak surveys using car-mounted measurement devices. Street View meets leak detection. In the sample maps below, each circle signifies a leak, with darker colors representing bigger leaks. The incidence of leaks varies significantly between and within cities.
Here at the Energy Institute we will soon be initiating a new project that will take advantage of new monitoring technologies in the industrial sector, to find energy saving opportunities.
Better leak detection could enable entirely new policy options. The EPA could even pursue market-based approaches that charge utilities directly for the social cost of the leaked methane.
It’s time for natural gas utilities and their regulators to join the sensor revolution. Improving measurement of natural gas leaks is a great place for federal and state regulators to start.
Andrew Campbell is the Executive Director of the Energy Institute at Haas. Andy has worked in the energy industry for his entire professional career. Prior to coming to the University of California, Andy worked for the energy efficiency and demand response company Tendril and the grid management technology provider Sentient Energy. He helped both companies navigate the complex energy regulatory environment and tailor their sales and marketing approaches to meet the utility industry’s needs. Previously, he was Senior Energy Advisor to Commissioner Rachelle Chong and Commissioner Nancy Ryan at the California Public Utilities Commission (CPUC). While at the CPUC, Andy was the lead advisor in areas including demand response, rate design, grid modernization, and electric vehicles. Andy led successful efforts to develop and adopt policies on Smart Grid investment and data access, regulatory authority over electric vehicle charging, demand response, dynamic pricing for utilities, and natural gas quality standards for liquefied natural gas. Andy has also worked in Citigroup’s Global Energy Group and as a reservoir engineer with ExxonMobil. Andy earned a Master in Public Policy from the Kennedy School of Government at Harvard University and bachelor’s degrees in chemical engineering and economics from Rice University.