Tuesday, 07 February 2012 12:35

Navigating the Legality of Autonomous Vehicles


“The law in California is silent; it doesn’t address it,” Google’s Anthony Levandowski told me. “The key thing is staying within the law — there’s always a person behind the wheel, the person in the seat is still the driver, they set the speed, they’re ready to take over if anything goes wrong.”

Ryan Calo, who studies, among other things, the legal aspects of robotics at Stanford University’s Center for Internet and Society, notes, “generally speaking, something is lawful unless it is unlawful — that’s the whole idea of having a system of so-called ‘negative liberties.’”

He has parried on this issue with economist Tyler Cowen, who counters with one local driving code, which states “No person shall operate a motor vehicle upon the streets of the city without giving full time and attention to the operation of the vehicle.” And yet by this definition alone there was nothing illegal about what the Google engineers, sitting up front and busily monitoring the Prius’ various operations, were doing. They were within both the spirit and letter of the law.

In fact, you could argue they were paying more attention than any of the drivers around us.

One reason Google’s autonomous Prius was not unlawful is that autonomous vehicles have not been on society’s radar — or roads. As with drivers talking on cellphones (or any number of Internet issues), legislation tends to follow the adoption of new technology. Google, of course, isn’t taking chances and sent representatives to Nevada, which has a history of autonomous vehicle testing, to lobby in favor of Assembly Bill 511. The law “requires the Department of Motor Vehicles to adopt regulations authorizing the operation of autonomous vehicles on highways within the State of Nevada.” It defines an autonomous vehicle as “a motor vehicle that uses artificial intelligence, sensors and global positioning system coordinates to drive itself without the active intervention of a human operator.”

As at least one commentator has noted, although the bill defines artificial intelligence as “the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings,” many modern cars already do this, in any number of ways: Adaptive cruise control, anti-lock braking, lane-departure warning systems, self-parking, even adaptive headlights.


One of Google's autonomous Toyota Prius hybrids struts its stuff. Photo: jurvetson/Flickr

So is Nevada simply reaffirming what’s on the road, or raising the specter that existing in-car technologies would be subject to the requirements laid out in the draft regulations? For example, would the requirement that “prior to testing, each person must be trained to operate the autonomous technology, and must be instructed on the autonomous technology’s capabilities and limitations” apply to someone test-driving a new Mercedes-Benz?

Calo doesn’t think so.

“It sets out a definition of autonomous technology that is based in part on the statutory definition of autonomous vehicle that the legislature gave to the DMV, and then it very explicitly excludes essentially all of the individual technologies that are commercially available today,” he says. “It’s an open question if it would exclude them in their combination or in their particular uses but it reflects the DMV’s effort to say no, no, no, we’re really not talking about things you can buy today, we’re really not talking about the driver assistive technologies, we’re talking about autonomous in its extreme sense.”

Welcome to the brave new world of autonomous vehicle law. When the rules become official, Nevada will become the first state to broach the subject (largely thanks to Google’s influence). But as increasingly autonomous technologies enter our cars, the legal questions, liability in particular, have grown more relevant. As a report by the RAND Corporation notes, “As these technologies increasingly perform complex driving functions, they also shift responsibility for driving from the driver to the vehicle itself…. [W]ho will be responsible when the inevitable crash occurs, and to what extent? How should standards and regulations handle these systems?”

But the legal picture is still incredibly murky. For one, standards for many of these systems are still evolving, and there’s a certain level of mystery regarding their performance and their performance failures. Consider, for example, that it took the National Highway Traffic Safety Administration and NASA investigators months of involved study to determine that the notorious Toyota unintended acceleration incidents were less likely an electronic malfunction and “most likely the result of pedal entrapment by a floor mat [holding] the accelerator pedal in an open throttle position.”

“In the slipstream of this uncertainty,” Dutch researchers Rob van der Heijden and Kiliaan van Wees note in an article in the European Journal of Transport and Infrastructure Research, “another source of uncertainty concerns doubts on whether legal regimes are adequate to cope with ADAS [advanced driver assist systems] or that they might create problems with regard to their development and implementation.” The fear of product liability always looms as an obstacle to innovation in the auto industry (and liability is a particularly American issue; one study noted that in 1992, Ford was hit with more than 1,000 product liability suits in the United States and exactly one in Europe). As RAND notes, automakers initially opposed air bags because they worried about shifting responsibility from drivers to themselves; Calo points out that people have sued when a car in a crash does not have an air bag “because someone said at this price point you should have an airbag.”

It’s not hard to spin complicated crash scenarios involving autonomous vehicles and the tangled webs of post-event liability. Take a scenario envisioned by RAND: “Suppose that most cars brake automatically when they sense a pedestrian in their path. As more cars with this feature come to be on the road, pedestrians may expect that cars will stop, in the same way that people stick their limbs in elevator doors confident that the door will automatically reopen.”

But what if some cars don’t have this feature (and given the average age spread of the U.S. car fleet, it’s not hard to imagine a gulf in capabilities), and one of them strikes a pedestrian who wasn’t aware the car lacked this technology? If a judgment is found in favor of the pedestrian, are we encouraging people to be less careful? Or simply hastening the onset of universal pedestrian-crash avoidance features in cars? As Calo asks, “do we force the vehicles to adapt to our legal regime, to our motor vehicle code that sets out what is reasonably prudent or what a vehicle must do?” Or, he asks, “do we make some changes to these legal and social infrastructures to incentivize or encourage or speed up the adaptation of autonomous technology.”

Imagine another scenario: What if a driver in a car that uses lane markings to maintain its position goes off the road on a section where the markings have worn away? Is the local department of transportation at fault? Or the manufacturer of the road striping paint whose product didn’t last as long as promised? Or the automaker for not having a more robust backup system? Or the driver for failing to maintain the necessary vigilance? As van der Heijden and van Wees write, “under fault-liability regimes drivers and vehicle owners will not be liable if they acted as a careful person.” But what’s the definition of a careful person in an autonomous vehicle? How could we prove ex post facto they were monitoring the car’s performance and not simply daydreaming?

Or what if the autonomous or semi-autonomous vehicle is a Mercedes-Benz using a hypothetical Google geolocation product and it crashes into a barrier while headed for an off-ramp because it misjudged its location? Is fault attributed to Mercedes (acting on the information), or Google (providing the information), or the driver for not correcting for the error?

An interesting and related area of inquiry here is product liability in the case of crashes that occurred as drivers were given incorrect coordinates by navigation systems. As legal scholar John Woodward notes in an article in the Dayton Law Review, finding fault in that case requires not only locating the source of the malfunction (software, hardware, or a triangulation snafu involving a wayward satellite in the heavens), but ascertaining whether the navigation instructions doled out by GPS constitute a product or a service — which would render product liability claims moot.

Observers like RAND are optimistic that it all will be sorted out, and that liability concerns will not hold up the adoption of safety-oriented autonomous technologies.

“On the contrary,” they argue, “the decrease in the number of crashes and the associated lower insurance costs that these technologies are expected to bring about will encourage drivers and automobile-insurance companies to adopt this technology.”

Photo: Nevada Gov. Brian Sandoval took a spin in one of Google’s autonomous cars on July 20, 2011 in Carson City. He called the experience “amazing.” Sandra Chereb/Associated Press
