
Uber automation fatality is a bad decision

This article was first published on 20th March 2018. 

 “Was a matter of time. Can we agree this is a stupid idea? #Uber @Potus” 

…cried the tweet from Bill (and doubtless the thousands like him who soon followed). It didn’t take long for the indignation to show up: the “I told you so” attitudes that had been waiting for just this moment.

Around 10 pm on Sunday 18th March 2018 in Tempe, Arizona, in the Phoenix metropolitan area, Elaine Herzberg was pushing her bicycle along the roadside and then stepped out into the road. In terms of physics, she was no match for the large Volvo XC90 SUV travelling at 40 mph (64 km/h). As soon as she committed to crossing the road, her death was assured. The software and sensors driving the autonomous vehicle (AV) never saw her, detected her change of direction, or calculated her pathway. This was the failure that companies like Uber, Waymo, Lyft and all the traditional auto-manufacturers had been dreading: the first pedestrian death caused by this new technology.

Little other information about the incident is available yet, so we won’t speculate about it. Instead, let’s consider the decision Uber subsequently made: to suspend all AV testing in Tempe and at its other test locations in Pittsburgh, San Francisco and Toronto. As a precautionary reflex in the light of sudden failure, as a public and social acknowledgement of the incident’s ramifications, and as a sign of care and respect for all pedestrians, it is, of course, the correct move to make.

Removing such technology experiments from the roads feels like it is sensible risk aversion, but it is a fallacy.

Risk in the numbers

But this reflex can’t be allowed to solidify in the minds of regulators or of the campaigning public. Removing such technology experiments from the roads feels like sensible risk aversion, but that feeling is a fallacy. On a purely rational level, Uber has so far completed over 3 million miles of AV testing. By comparison, the US National Safety Council reports a 2016 rate of 1.25 deaths per 100 million vehicle miles, which seems to underline the grievance against Uber: on those figures, ordinary driving looks roughly 26 times safer. However, we must be careful with numbers. In 2016, the death rate for motorcyclists (not even counting the other people they affect) was 21 times worse than the all-vehicle baseline, so Uber’s AVs are already in the ball-park of a societally accepted level of risk.
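To make the arithmetic explicit, here is a minimal sketch using only the figures quoted above. Note that treating one recorded death over roughly 3 million test miles as a rate is itself a strong assumption, since a single event gives a very noisy estimate.

```python
# Back-of-envelope comparison of fatality rates per 100 million vehicle miles.
# All figures are those quoted in this article; the one-death numerator is a
# strong assumption, since a single event yields a very noisy rate estimate.

UBER_TEST_MILES = 3_000_000       # Uber AV test miles to date (per article)
UBER_DEATHS = 1                   # the Tempe fatality
BASELINE_RATE = 1.25              # US deaths per 100M vehicle miles (NSC, 2016)
MOTORCYCLE_MULTIPLIER = 21        # motorcyclist rate vs all-vehicle baseline

PER_100M_MILES = 100_000_000      # normalisation constant

uber_rate = UBER_DEATHS / UBER_TEST_MILES * PER_100M_MILES   # ~33.3
motorcycle_rate = BASELINE_RATE * MOTORCYCLE_MULTIPLIER      # ~26.25

print(f"Implied Uber AV rate:  {uber_rate:.1f} deaths per 100M miles")
print(f"All-vehicle baseline:  {BASELINE_RATE:.2f} per 100M miles "
      f"({uber_rate / BASELINE_RATE:.1f}x safer than the implied AV rate)")
print(f"Motorcyclist rate:     {motorcycle_rate:.1f} per 100M miles "
      f"-- the same ball-park as the implied AV rate")
```

On these figures, the implied AV rate sits close to the motorcyclist rate, a level of risk society already tolerates, which is exactly the point being made above.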

The key reason why any decision to permanently suspend AV testing would be a mistake has to do with learning. As Matthew Syed sets out in his excellent book, “Black Box Thinking”, progress is underwritten by continual exposure to failure. He points out that we can travel safely in an aeroplane because tens of thousands of people died in air incidents before us, and because the aviation industry has a structured, systematic dedication to learning from its mistakes and weaknesses. Black boxes are fitted to aircraft to ensure there is a reliable source of evidence for analysing and understanding how errors occurred. Black boxes don’t prevent tragedies, but they do ensure that the deaths they record become significant and valuable.

Killing one risk can make another one worse

We can be certain that Uber’s AV was full of recording equipment, constantly monitoring both the multiple sensors looking out into the external operating environment and the software interactions within. The first great risk arising from the tragic death of Elaine Herzberg is that a knee-jerk reaction takes place and we end up with a portfolio of risks far greater than if we had persisted with this learning experiment.

Human beings are frequently terrible drivers: we get distracted, intoxicated, aggressive and tired. We make selfish or stupid decisions, and we often have poor observation and lousy machine-handling skills. We kill when we drive. Not only that, but we drive in a way that pollutes, through harsh use of the throttle and poor anticipation of other road users’ actions. AVs are likely to produce far safer, more consistent and more reliable driving habits, reducing accidents, congestion and atmospheric pollutants. We must argue against the fallacy that a single death caused by an AV means we should stop attempting to reduce the much bigger risk of all vehicle-related deaths.

The second big risk goes beyond Arizona.

The second great risk is that regulators stand back and engage in scapegoating the technology companies. The State of Arizona consciously invited Uber to test there, selling itself as a lighter regulatory environment than neighbouring California, with ideal weather conditions (little rain, no snow) for AVs to perform in. Regulators must strongly defend the taking of the first risk, of doing this testing. But they must also ensure that the risk is worthwhile, and that deep, genuine, systematic learning takes place. Learning points must not be absorbed into one company’s intellectual property; they should be documented and shared for the benefit of the whole AV industry and wider society. Regulators must work to guarantee that the learning outcomes are not distorted or ignored, and that they do not carry such enormous reputational costs for the tech companies that an open and transparent learning culture cannot thrive.

This is not a time for knee-jerk reactions, but for ‘black box thinking’. There is a duty to make Elaine Herzberg’s death count, so that we all become smarter and safer.

October 2020 Update

On November 19, 2019, the US National Transportation Safety Board (NTSB) issued its report on the probable cause of the 2018 Tempe, Arizona fatality described above. Beyond the immediate cause of the accident, the NTSB reported that an “inadequate safety culture” at Uber and deficiencies in state and federal regulation contributed to the circumstances that led to the fatal crash. Among the findings were the following:

  • Uber’s internal safety risk-assessment procedures and oversight of the operator were inadequate, and its disabling of the vehicle’s forward-collision warning and automatic emergency braking systems increased risks.
  • The Arizona Department of Transportation provided insufficient oversight of autonomous vehicle testing in the state.
  • NHTSA (the US National Highway Traffic Safety Administration) provides insufficient guidance to developers and manufacturers on how they should achieve safety goals, has not established a process for evaluating developers’ safety self-assessment reports, and does not require such reports to be submitted; filing remains voluntary.

We don’t consider these findings surprising in themselves; we’d call them typical failure types. At the same time, none of them suggests that testing and proving autonomous vehicles involves risks that cannot be acceptably controlled. Innovation and experimentation will always carry an element of risk, and we remain convinced that the environmental benefits of optimally controlled vehicles, along with an aggregate reduction in road-safety incidents, will be a strong net gain for society and sustainability.

KPMG produces an interesting annual report, the Autonomous Vehicle Readiness Index, showing how different nations are faring in preparing the legal and technical infrastructure needed to support AVs. Whilst some countries continue to make incremental improvements with each passing year, the USA, the UK and Germany (all major car-manufacturing nations) are the only countries in the top 15 to have slipped back.

Whilst it is clear that societies have a low tolerance for accidents caused by technology, the World Health Organization estimates that conventional road traffic already causes 1.35 million deaths and 50 million injuries annually. If ever there was a time for a clear presentation of the benefits of quantifiable risk assessment, this is the case on which to make it.
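As a minimal, purely illustrative sketch of what such a quantified presentation might look like: the WHO figure below is the one quoted above, while the adoption share and the relative risk reduction are hypothetical parameters chosen for illustration, not estimates from this article or from the WHO.

```python
# Illustrative only: expected annual road deaths avoided if AVs cut fatality
# risk on the share of driving they take over. Both parameters passed to the
# function below are hypothetical, chosen purely for illustration.

WHO_ANNUAL_ROAD_DEATHS = 1_350_000  # WHO global estimate quoted above

def deaths_avoided(adoption_share: float, risk_reduction: float) -> float:
    """Expected annual deaths avoided, assuming AVs drive a given share of
    total mileage at a proportionally lower fatality rate."""
    return WHO_ANNUAL_ROAD_DEATHS * adoption_share * risk_reduction

# e.g. AVs covering 10% of driving at half the human fatality rate:
print(f"{deaths_avoided(adoption_share=0.10, risk_reduction=0.5):,.0f} "
      "lives per year, under these assumed parameters")
```

Even under deliberately modest assumptions like these, the aggregate stakes dwarf the individual incidents that dominate the headlines.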

Partially automated driver-assistance mechanisms such as lane assistance can be perceived as a half-way house towards full vehicle autonomy and a sensible way to limit new risks. However, evidence suggests that these solutions are the worst of both worlds. Passive drivers do not become alert quickly enough to take decisive action, with appropriate situational awareness, when the technology becomes overwhelmed. They are half-asleep, playing with their smartphones, even reading newspapers. In our opinion, such systems are more dangerous than those designed to take full control of a vehicle in all circumstances. We know that people regularly fail at safety-critical tasks even when they are supposed to be fully engaged, so making systems available that permit disengagement except when a critical situation arises seems to be the worst approach. We recommend that regulators keep faith in the technology, knowing that near-perfection takes time.
