by Eithne Dodd
They’re already here, but they have yet to go mainstream. Self-driving cars are currently undergoing trials in Coventry, Milton Keynes and Greenwich. As automated vehicles become the norm, the insurance market needs to adapt.
“It’s very much a state of flux. The insurance industry is working very hard to figure out what its next move should be,” said Kevin Pratt, Consumer Affairs Spokesperson at Money Super Market.
Driverless Vehicles Could Be Restricted To Short Journeys And Group Travel
We’re between three and five years away from driverless vehicles on UK roads, and potentially longer if unforeseen safety issues emerge. Pratt predicts that driverless vehicles might initially be restricted to short-haul transport services between fixed locations, “such as railway stations and shopping centres,” he said.
“Another early adopter might be road haulage ‘caravans’ on motorways, with a number of driverless trucks travelling in a convoy,” Pratt added.
The vast majority of car insurance claims are triggered by accidents and collisions. 94% of car accidents in the United States are attributed to human error.
“So, if you can eliminate the biggest source of claims, you should be able to reduce the size of premiums paid,” said Pratt.
Insurance Premiums Are Likely To Go Down
Although insurance companies are uncertain about some aspects of driverless cars, consumers are likely to benefit: premiums are expected to fall, fewer accidents are predicted to happen, and if your car is stolen there is a better chance of it being recovered.
Right now, one of the first things a car thief does is disable the car’s GPS system, if it has one. That will be very difficult to do on an automated car without ruining the vehicle.
“With a fully automated driverless car, the whole vehicle will be sending out signals all the time, so it will be very difficult for a thief to mask the location,” Pratt said.
Who Will Be Liable In Case Of A Malfunction?
However, there are still issues that insurance companies need to work out, particularly in terms of where to attach liability for an accident. If an accident occurs as a result of a computer system malfunction, is that the fault of the software engineer or the car manufacturer? To what extent is a human occupant liable if there is a suggestion they should have taken control of the vehicle in order to prevent an accident?
What Will Happen If A Driverless Car Is Hacked?
Another complication for insurance companies is the uncertainty around the potential for automated vehicles to be hacked. Insurers have yet to decide whether hacking will be an issue for them. Pratt believes that if hacking does become a cause of accidents, insurance companies will likely write an exclusion clause into their contracts so that it isn’t covered.
However, Pratt stressed that this is still at the speculation stage.
“They can’t make concrete plans until driverless vehicles are on the road in private use,” Pratt said of insurance firms.
Pratt added: “As with so many things with driverless cars we’re venturing into unknown territory so people are reacting to events as they unfold. They’re trying to anticipate trends rather than knowing for certain what those trends will be.”
In Focus: Who Does The Driverless Car Kill?
You’re in your self-driving car, which is taking you along a street. Suddenly a child runs out onto the road. The car cannot stop in time, but it can choose to turn away from the child. So the car has three options:
1. Keep going and hit the child.
2. Swerve to the left, which means crashing into a wall and hurting you.
3. Swerve to the right onto a footpath and hit an elderly man.
What should the car do?
If the car is programmed to prioritise your safety, then it will choose option 1 or 3. If it is programmed to minimise danger to others, it will pick option 2. If it is programmed to continue on its path and not deviate onto places not meant for cars, the child will be hit.
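To see why the programmed priority, not the moment of crisis, determines the outcome, the mapping above can be sketched in a few lines of code. This is purely an illustrative assumption for this article's scenario; the policy names and option numbers are hypothetical and do not reflect any manufacturer's actual software.

```python
# Hypothetical sketch: how a pre-programmed priority setting would map to one
# of the three options in the scenario above. Entirely illustrative.

OPTIONS = {
    1: "continue on course and hit the child",
    2: "swerve left into the wall (occupant hurt)",
    3: "swerve right onto the footpath (pedestrian hit)",
}

def choose_option(policy: str) -> int:
    """Return the option number the car would pick under a given policy."""
    if policy == "protect_occupant":
        # Avoid the wall: staying on course or swerving right both spare
        # the occupant; assume staying on course is preferred.
        return 1
    if policy == "minimise_harm_to_others":
        # Accept harm to the occupant rather than to pedestrians.
        return 2
    if policy == "stay_on_road":
        # Never deviate onto places not meant for cars.
        return 1
    raise ValueError(f"unknown policy: {policy}")
```

The unsettling point the sketch makes concrete is that the "decision" is fixed long before any child steps into the road: whoever sets the policy string has already chosen the outcome.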
This is a real-life enactment of the famous Trolley Problem, a philosophical conundrum many will be familiar with. A trolley is hurtling along train tracks towards five oblivious rail workers. You’re standing by a lever: pull it and the trolley moves onto a new track, saving the five workers but killing one worker on the other track. The scenario gets people to explore the difference between actively doing harm and allowing harm to happen.
Although this hypothetical situation seems far-fetched, it will become a very real problem for automated vehicles. As the opportunities for drivers to make gut decisions decrease, the dependency on computer software with a moral compass increases.
Even philosophers don’t agree on the right thing to do in the above situation. Some would argue that the car, acting on behalf of the driver, has a moral duty to protect its occupant from harm, while others argue it is more important to prevent harm coming to others.
By removing many human errors from the equation, self-driving cars are predicted to reduce the number of road deaths dramatically. 94% of car accidents are attributed to human error. Driverless vehicles will mean that accidents due to driving while texting or while under the influence of alcohol or drugs should decrease.
That leads to another moral dilemma: if we know that the automated vehicle technology we have now will save many lives, does it matter if we can’t figure out how to programme the cars to act in a trolley situation?
As long as the car isn’t programmed to hit people intentionally, some would consider it ethical for the car to make whatever decision is needed in the moment to avoid hitting something. If that stance is taken, does that make it the same as the gut decision a driver would make? Or could it be seen as gross negligence, or even premeditated murder?
Occasions when an algorithm has to choose who to kill will be rare. But the point is that, for a computer, it will always be a choice, not a gut reaction.