Update: Risk Lurks at Autonomous Driving’s Fuzzy Edge

Car makers are looking for ways to manage the potential liability of autonomous driving features.

In-brief: autonomous vehicle makers are looking for ways to manage liability by getting drivers to “OK” autonomous actions. But is that the best approach? 

A recent article caught my attention for raising what I think will be one of the key challenges for connected vehicles: liability.

It goes without saying that technological advances in autonomous driving are moving well ahead of our society's ability to both embrace and control them. While states like California and Nevada have passed laws that regulate the testing and operation of autonomous vehicles, no other state has made a concerted effort to update its laws to address the behavior and actions of self-driving cars. At the federal level, the National Highway Traffic Safety Administration has released some initial guidance on autonomous vehicles, but rule making is a slow process. That's significant, because there are already semi-autonomous vehicles on the road, with many more coming.

And it’s not as if the risks posed by autonomous vehicles are hypothetical. Earlier this month, video captured a horrifying scene in which a Volvo sedan plowed into a group of reporters observing the car’s “self parking” feature. Volvo’s response to the incident was even more worrying: pedestrian detection, the company claimed, wasn’t part of the option package that the car’s owner had purchased.

[Read “Building an unhackable autonomous car.”]

Who is to blame for such accidents? The car’s owner (who was not behind the wheel)? The automaker? The maker of the auto-parking software (assuming that is not also the manufacturer)? Nobody?

Absent a clear legal framework within which to operate, car makers are finding their own ways to manage the risks that come with new (and desirable) features like hands-free driving. As this article at IEEE Spectrum notes, Tesla Motors is requiring drivers to hit the turn signal in order to activate an automatic lane changing feature that the company hopes to push to its cars via a software update.

Though that may take some of the spice out of the automated lane changing feature, it is a way for Tesla to cover itself should the automated lane change result in an accident. By using the signal, the driver indicates a clear desire to change lanes – and would have a harder time claiming she had nothing to do with the movement of the vehicle.
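To make the idea concrete, here is a minimal sketch, in Python, of what confirmation-gated automation looks like in principle. This is not Tesla's software; every name here (TurnSignal, attempt_lane_change, lane_is_clear) is hypothetical and only illustrates the pattern of requiring an explicit driver input before the vehicle acts on its own.

```python
# Hypothetical illustration of a confirmation-gated autonomous maneuver.
# The explicit driver signal is the "OK" a manufacturer could point to
# if liability for the maneuver were later disputed.

from dataclasses import dataclass
from enum import Enum


class TurnSignal(Enum):
    OFF = "off"
    LEFT = "left"
    RIGHT = "right"


@dataclass
class DriverInput:
    turn_signal: TurnSignal


def attempt_lane_change(driver: DriverInput, lane_is_clear: bool) -> str:
    """Perform the automated lane change only when the driver has explicitly signaled."""
    if driver.turn_signal is TurnSignal.OFF:
        return "no action: driver has not confirmed the maneuver"
    if not lane_is_clear:
        return "no action: target lane is not clear"
    return f"changing lanes to the {driver.turn_signal.value}"


if __name__ == "__main__":
    print(attempt_lane_change(DriverInput(TurnSignal.OFF), lane_is_clear=True))
    print(attempt_lane_change(DriverInput(TurnSignal.LEFT), lane_is_clear=True))
```

The design choice at issue is exactly the one the article describes: the system never initiates the maneuver on its own, so responsibility for the decision stays, at least nominally, with the person in the driver's seat.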

As IEEE notes, this is akin to the use of legally verbose EULAs and disclaimers that we all willingly click past to get to the cool software we want to use. But in an age of increasingly autonomous technology, technology and device firms will need a way to show that their products have not gone “rogue,” resulting in the loss of life and limb.

Still, it is almost certain that liability lawsuits related to automated driving features will be a big area of concern for vehicle makers. As Forbes notes today, automakers are already facing class action suits based on little more than allegations of weaknesses in connected vehicle software, without any provable damage. Just imagine what will happen once computers are behind the wheel and life and limb are at stake.

Of course, having individual automakers decide how to manage liability by way of noodling “check box” features may not be the best outcome in terms of consumer protection or customer satisfaction.

I had the chance to speak with Paul Rosenzweig of Red Branch Consulting, a recognized expert on Internet of Things and product liability, back in September.

“I think that the time is coming for a liability revolution in cyber space where the manufacturers of devices and the code writers for devices will have to think about whether they’ve taken reasonable steps to secure their products,” said Rosenzweig at the time.

“The first time one of these autonomous vehicles crashes because a cyber system is hacked by someone who is malevolent through a known vulnerability or seizes up in the middle of a turn because of some coding error, we’re going to have tort lawyers coming out of the woodwork and making some money out of the deal.”

Check out the video podcast we did below.

One alternative to that, however, is for regulators to get out in front of these features, providing guidelines for their development and implementation that might constitute some kind of “safe harbor” for manufacturers. But it’s doubtful that government bureaucrats will be able to move fast enough to stay ahead of what everyone realizes is a blistering pace of technology innovation.

Speaking at the MassTLC Internet of Things Conference in Waltham, Massachusetts on Wednesday, Michael Munsey, a Director of Product Management and Strategy at Dassault Systemes, said that one concern is that a lack of regulatory clarity may hold back the adoption of potentially safety-enhancing and life-saving automation technologies.

IoT companies may, themselves, be in the best position to build new requirements around autonomous and connected technologies in automobiles and other systems. “The requirements for new Internet of Things technology may come from test devices that are already in the field and data that is in the cloud already,” Munsey said. “Data analytics based on how products are used in the field could become the platform for requirements generation in the future.”

