
Who's responsible for autonomous car crashes?

Posted by Ditto on Nov 27, 2018 10:50:00 AM

2018 has seen a number of high-profile crashes involving autonomous or semi-autonomous vehicles. None was more widely reported than the first known fatal accident between a self-driving car and a pedestrian, which took place in Arizona earlier in the year.

These accidents have, understandably, called the AI behind autonomous vehicles into question, with some demanding a halt to all testing. Who is really accountable for these accidents, though, and what role could truly explainable AI play in mediating the leap into the future of transport?

Digital drivers, human accidents

Self-driving cars have been touted as life-saving alternatives to human-controlled vehicles. While the way this is measured has been disputed, the existing evidence does suggest that autonomous vehicles are safer on the whole.

When it comes to accidents involving AI-controlled cars, the picture is fairly clear. Despite rising distrust of self-driving vehicles in recent months, a study by Axios revealed that the vast majority of accidents they are involved in are caused by humans, not the technology itself.

Of 38 accidents in California involving a moving automated vehicle, just one was deemed to be the fault of the technology. This doesn’t mean self-driving cars are a magic bullet, though. What the evidence shows instead is that the AI still has room for improvement when it comes to reacting to human error – and until all driving is automated, this will remain a factor. Humans are still better at responding to the mistakes of other humans.

The reaction

‘People climb mountains and expose themselves voluntarily to all kinds of risks, but they don’t like risk inflicted on them that they don’t understand or have control over…’

- Dr. Roger E. Kasperson, risk researcher at Clark University.

Three of the cases in the Axios study involved humans physically attacking automated vehicles, with one man assaulting a car ‘with his entire body’, according to California’s DMV. People are having visceral reactions to technology they aren’t accustomed to, and to the idea of handing control to AI and a team of anonymous developers.

Dr. Kasperson’s quote highlights one of the major issues these accidents give rise to – a lack of trust and control. There were over 40,000 vehicle deaths in the USA last year, all involving people, not robots. Yet no matter how many studies show that the fault lies with human error, the inability of automated vehicles to explain themselves in a human way will remain a barrier to their success. Many will fear and distrust the AI behind self-driving cars until they can understand it.

The future of explainable, automated driving

With artificial intelligence researchers calling for an end to ‘black box’ AI – AI that cannot explain its reasoning – the need for an accountable and transparent alternative is growing. The reaction to driverless car crashes is just one example of this.

The technology itself may not be responsible, but it should be able to explain itself to combat trust issues with its users. This doesn’t just have the potential to make it seem safer to the public – it can help developers to better understand their own technology, and where they need to improve it. If even the developers struggle to make sense of their AI’s decision-making, end users will find it very hard to trust it.
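To make the contrast concrete, here is a minimal sketch in Python of what an inherently explainable model can offer, using scikit-learn’s decision trees. The braking scenario, feature names, and data below are hypothetical illustrations invented for this example – not a description of any production driving system or of Ditto’s product.

```python
# A minimal sketch of explainable decision-making, assuming scikit-learn.
# All features, thresholds, and data here are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy sensor readings: [pedestrian_distance_m, vehicle_speed_kmh]
X = [[2, 30], [15, 50], [5, 20], [30, 60], [1, 40], [25, 45]]
y = [1, 0, 1, 0, 1, 0]  # 1 = emergency brake, 0 = continue

# A shallow decision tree is a classic "white box" model: every
# prediction can be traced back to explicit, human-readable rules.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned rules, so a developer (or regulator)
# can see exactly why the model decides to brake in a given situation.
print(export_text(model, feature_names=["pedestrian_distance_m",
                                        "vehicle_speed_kmh"]))
```

Running this prints the learned if-then rules behind every decision – exactly the kind of audit trail that a black-box model cannot offer out of the box.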

Explainable AI is becoming increasingly valuable, and it is getting attention for exactly these reasons. With voices from MIT to Scientific American and AI Now calling for AI products that can explain their thought processes, aligning yourself with developers who have made this a reality is likely to pay dividends.

The success of AI depends on its widespread acceptance by the public and their willingness to make it a part of their lives. The problem of self-driving cars is a microcosm of the far broader trust issues that users have with living alongside the technology. Transparency and accountability are the missing ingredients that could win over the masses for AI.


Topics: XAI, Accountability, Trust