How Should We Regulate Safety for Autonomous Vehicles?
A recent experience with Uber/Lyft brought home the limits of market incentives.
Max has been dreaming about a clean, autonomous future. Meanwhile, Uber and Lyft are both talking about initial public offerings, and a large part of their valuation is based on their positioning to run that future.
I had an experience with one of those transportation network companies (TNCs) recently that highlighted the limits of technology and the importance of economic incentives in providing safe products. I haven’t thought about the National Highway Traffic Safety Administration (NHTSA) in a while, but I’m now hoping that its budget will expand greatly so it can hire a lot of capable people to design regulations for autonomous vehicles.
In regulatory economics classes, we distinguish between “economic regulation,” which happens when agencies such as the California Public Utilities Commission (CPUC) influence prices, market entry and other economic variables, and “social regulation,” which includes regulation of health, the environment and safety. The Federal Aviation Administration (FAA), in the news recently when it decided to ground the Boeing 737 Max jets in the US, provided an example of safety regulation.
The basic idea behind product safety regulation is that consumers may not be well informed about the risks associated with certain products, so the government steps in to do things like mandate air bags for passenger vehicles or ground planes. With vehicles, there is an additional market failure, because a malfunctioning vehicle could put other drivers, pedestrians, and bicyclists at risk. Since the consumer who buys and/or operates the vehicle may not fully internalize the risks to others, this creates a negative externality.
A couple of weeks ago, I had a really bad TNC driver on a ride home from the airport. (I’ve decided not to call out one particular company.) This got me thinking about these companies’ incentives to provide safe trips, today with human drivers but very soon with computers at the wheel.
My driver was swerving between lanes on the freeway, had trouble staying on the correct side of the yellow line on Berkeley streets and generally seemed out of it. My son shared the ride with me. He is generally less alarmist than I am (which is probably true for most 18-year-old boys relative to their mothers!), but he agreed that the driver was really dicey and possibly drunk.
I reported the driver to the company right away – both on their app and by email. At first, I imagined that the technology behind their platform would help rectify a bad situation. I figured the words “possibly drunk” would trigger an alarm in their system, then a human being at the company using GPS would pinpoint where the driver was and notify the police, who could apprehend him to give him a breathalyzer test.
In truth, I have no idea what happened. The company did not provide any details when I called and emailed the next day. But, I don’t think that the technology-assisted instantaneous response I imagined is what played out.
Based on other people’s reported experiences online, and driver discussions on Reddit, it sounds like the company may not typically notify the police. The online driver discussions suggest something like a two-strikes policy, where the first time a rider reports a possibly drunk driver, the driver is suspended for 24 hours and the second time, the driver is dropped from the platform. I suppose the 24-hour suspension helps get the person off the road, but it’s definitely not as effective as a police stop.
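For what it’s worth, the escalation policy those forum posts describe is simple enough to sketch. The snippet below is purely illustrative, assuming the two-strikes rule reported by drivers; the function name and data structure are my own inventions, not any company’s actual system.

```python
# Illustrative sketch only: a hypothetical two-strikes escalation for
# "possibly drunk" rider reports, as described in driver forums.
# Names and thresholds are assumptions, not any company's real policy.

SUSPENSION_HOURS = 24

def handle_impairment_report(driver_record: dict) -> str:
    """Return the action taken for a new rider report of possible impairment."""
    driver_record["impairment_reports"] = driver_record.get("impairment_reports", 0) + 1

    if driver_record["impairment_reports"] == 1:
        # First strike: temporary suspension from the platform.
        return f"suspend driver for {SUSPENSION_HOURS} hours"
    # Second strike (or more): permanent removal from the platform.
    return "deactivate driver account"

# Example: the same driver reported twice.
record = {}
print(handle_impairment_report(record))  # suspend driver for 24 hours
print(handle_impairment_report(record))  # deactivate driver account
```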
Both companies describe policies of “zero tolerance” for drug or alcohol use while driving, but they do not provide any details on how this is enforced.
Ah, you may be thinking, that’s exactly where robots come in. But, while it’s true that robots do not get drunk, computer software crashes, sensors malfunction and systems go awry. My guess is that TNCs’ incentives to make sure their autonomous vehicles are working will be similar to their incentives to get drunk drivers off the road. In short, they’re profit-maximizing companies that will weigh the costs and benefits in ways that might not align with society’s best interests.
So, what are those incentives? Can liability laws deliver the right amount of attention to safety? Do we really need more regulations if the TNCs face the risk of multimillion-dollar lawsuits in the event that a driver who was reported drunk gets into a major accident on a ride dispatched ten minutes after the report? I’m not a lawyer, but their Terms of Service (Uber’s and Lyft’s) seem to have anticipated this scenario. Lyft’s terms, for example, insist that everything is settled by binding arbitration and claim that every ride is an individual contract, that they are not a transportation provider, and that they “have no control over the quality or safety of the transportation that occurs as a result of [a ride].” And, there are a lot of other products subject to extensive safety regulations even though we have liability laws. The Consumer Product Safety Commission, for example, is basically 100% focused on safety.
How about market incentives? Uber is reportedly contemplating using potential riders’ phones to detect whether they’re drunk as they try to line up a ride. Could it provide something similar for drivers? Or, could a TNC require breathalyzers installed in drivers’ cars? At this point, I would certainly go to the company that did that, even if it involved paying a premium. The fact that the companies aren’t using technology to monitor drivers who might be under the influence suggests either that I’m in the minority, that the costs of breathalyzers are too high, or that the requirement would discourage people from driving for the company.
I did see one report suggesting that regulators are paying attention to how well the TNCs are monitoring drivers who are reported as drunk, but it was depressing. The CPUC, which regulates TNCs in California, investigated 154 complaints of intoxicated drivers and found that Uber did not follow up on 133 of them. In light of these infractions, the CPUC assessed a fine of $750,000, which the Los Angeles Times pointed out amounted to just 0.0075% of the company’s annual revenue.
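To put that percentage in perspective, here is the back-of-the-envelope arithmetic. The roughly $10 billion in annual revenue is simply what the quoted 0.0075% implies, not an independently sourced figure.

```python
# Back-of-the-envelope check of the fine-to-revenue comparison.
# The implied ~$10 billion in annual revenue is derived from the quoted
# 0.0075% figure, not an independently sourced number.
fine = 750_000                    # CPUC fine, in dollars
share_of_revenue = 0.0075 / 100   # 0.0075 percent, expressed as a fraction

implied_annual_revenue = fine / share_of_revenue
print(f"Implied annual revenue: ${implied_annual_revenue:,.0f}")
# Implied annual revenue: $10,000,000,000
```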
Automobile accidents accounted for over 40,000 deaths in the US in 2016 and more than half of those deaths were people under age 50. Automation will bring radical changes in the way our automobiles are operated. Technology, liability laws and economic incentives will help provide some level of safety, but regulation should also be a significant part of the solution.
Suggested citation: Wolfram, Catherine. “How Should We Regulate Safety for Autonomous Vehicles?” Energy Institute Blog, UC Berkeley, March 18, 2019, https://energyathaas.wordpress.com/2019/03/18/how-should-we-regulate-safety-for-autonomous-vehicles/
Catherine Wolfram
Catherine Wolfram is Associate Dean for Academic Affairs and the Cora Jane Flood Professor of Business Administration at the Haas School of Business, University of California, Berkeley. She is the Program Director of the National Bureau of Economic Research's Environment and Energy Economics Program, Faculty Director of The E2e Project, a research organization focused on energy efficiency, and a research affiliate at the Energy Institute at Haas. She is also an affiliated faculty member in the Agriculture and Resource Economics department and the Energy and Resources Group at Berkeley.
Wolfram has published extensively on the economics of energy markets. Her work has analyzed rural electrification programs in the developing world, energy efficiency programs in the US, the effects of environmental regulation on energy markets and the impact of privatization and restructuring in the US and UK. She is currently implementing several randomized controlled trials to evaluate energy programs in the U.S., Ghana, and Kenya.
She received a PhD in Economics from MIT in 1996 and an AB from Harvard in 1989. Before joining the faculty at UC Berkeley, she was an Assistant Professor of Economics at Harvard.
One way to ensure the best safety protocols would be a federal law making the five highest-compensated individuals at the owning company, the operating company and the producer (OEM) personally, jointly and severally liable, with a minimum penalty in the millions for each death.
Catherine, I don’t believe autonomous vehicles will ever be widely accepted in this country (or anywhere), nor should they be. It’s becoming evident responsibility is a human characteristic which can never be replaced.
Responsibility means a driver who has skin in the game – not only keeping the passenger safe, but him- or herself as well. Responsibility means making value judgments in situations of infinite variety, the reactions to which can’t possibly be programmed in advance. Autonomous vehicles won’t be able to account for subtle differences in driving environments to which we adapt every day.
Liability will be a nightmare. Was it coding? Was it hardware? Was it the pedestrian who tried to cross the street at the last minute, knowing the approaching car would be able to stop in time? Was it the autonomous driver which couldn’t detect the skateboarder had earbuds on? For every answer, there are 100 new questions.
Autonomous drivers are unneeded; jobs for skilled drivers are not. If the purpose is to save money, they’re unjustifiable. Some propose, without evidence, that automated drivers will be statistically safer. I propose they explain to the grieving mother of the child lying in the street that she shouldn’t cry – her daughter was just a victim of statistics.
After witnessing programmers overestimate the possibilities of artificial intelligence for over fifty years, I realize they always will. Though AI will remain a convenience aid in special situations, I think many passengers will have to die before we realize having a living, breathing, self-interested human behind the wheel, even with its own risks, is irreplaceable.
Unfortunately, under the current Administration, NHTSA has decided that they are not interested in implementing any regulations regarding the safety of highly automated vehicles. The policy document that they released last fall asks companies developing automation systems to submit “voluntary safety self-assessment reports,” with the emphasis on voluntary. They go so far as to say that nobody should use the absence of such a report, or deficiencies in the submitted reports, to sanction the companies. The last time I checked, ten such reports were available on the NHTSA website, and they vary greatly in quality and level of detail. Some are nothing more than marketing brochures, while others look like proposals for how to design safe systems, but none of them provides any substantive data to support their claims of safety.
Are the automobiles in the photo going the wrong way on a single lane road?
Hilarious, Robert – yes, they are. Apparently programmers are still working out some kinks in the software.