Driven to Autonomy

Driverless.

From wireless to paperless, “less” now means much, much more. In today’s transportation industry, driverless or autonomous cars (technically not the same thing) seemingly hold the promise of a Jetsons-esque reality: more efficiency, more reliability, and more safety, all with less actual driving. Autonomous systems may finally allow us to text and not drive, but not all systems can do all things. Although birthed from utopian ideals, autonomous cars are not perfect cars. And as long as humans are a factor, they do not operate in a perfect world.

Who is to blame, then, for an accident involving an autonomous car? The sleeping driver, who dozed off long ago, confident in her vehicle’s self-driving capabilities? The manufacturer, whose countless real-world trials somehow missed this outlier scenario? The programmer, whose predictive algorithm had guided the same car through countless grocery runs and stop signs before? Or someone else? These issues of liability run both wide and deep, for good reason. Let’s dive in.

What are Autonomous Cars?

To understand any potential liability issues surrounding autonomous cars, we must first understand what an autonomous car actually is. In 2014, SAE International (the Society of Automotive Engineers) defined six levels of driving automation in its J3016 standard, summarized here:

L0 – No automation – The human driver is in complete control and fully responsible.

L1 – Driver assistance – Some individual vehicle controls, like cruise control or electronic stability control, can be automated; humans otherwise fully responsible.

L2 – Partial automation – System can steer, accelerate, and brake in certain situations, keeping the car in its lane and avoiding other vehicles; humans must monitor and remain otherwise fully responsible. Tesla’s Autopilot system is a well-known example of this level.

L3 – Conditional automation – Nearly complete automation; human intervention is still needed under certain circumstances, such as sensor blockages or road abnormalities. Although no cars at this level are on the road yet, L3 automation is only a few years away.

L4 – High automation – System is in complete control (within a geographically limited area, like a city center); human drivers are optional.

L5 – Full automation – System can drive anywhere, in any conditions. With the steering wheel and pedals optional, L5 automation does everything a human driver can do. Companies like Google and Tesla are currently aiming for this level.

Now adopted by the National Highway Traffic Safety Administration, these levels of automation form the standard by which self-driving vehicles are deemed truly “self-driving.”
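For the programmers in the room, here is a minimal sketch of how this taxonomy might be modeled in code. It is purely illustrative: the enum names and the “must monitor” cutoff are this post’s shorthand for who watches the road at each level, not part of the SAE standard itself.

from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 levels of driving automation, as summarized above.

    Illustrative only: names are this post's shorthand, not SAE's
    official terminology, and nothing here is legal advice.
    """
    NO_AUTOMATION = 0           # L0: human controls everything
    DRIVER_ASSISTANCE = 1       # L1: e.g., cruise control
    PARTIAL_AUTOMATION = 2      # L2: e.g., Tesla Autopilot
    CONDITIONAL_AUTOMATION = 3  # L3: human must take over on request
    HIGH_AUTOMATION = 4         # L4: no human needed in limited areas
    FULL_AUTOMATION = 5         # L5: no human needed anywhere

def driver_must_monitor(level: SAELevel) -> bool:
    """At L0-L2 the human must continuously monitor the road;
    from L3 up, the system handles monitoring within its domain."""
    return level <= SAELevel.PARTIAL_AUTOMATION

if __name__ == "__main__":
    for level in SAELevel:
        print(f"L{level.value} ({level.name}): "
              f"driver must monitor = {driver_must_monitor(level)}")

Note how the interesting boundary for liability purposes sits between L2 and L3: below it, the human is expected to be watching; above it, the system is.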

The Current State of Liability and Autonomous Cars

Since Level 5 automation is still many years away, current issues of liability will focus on Levels 1-4. As with any question of liability, causation (whodunit?) will be fact-specific to each accident. The following potential defendants offer a glimpse into where liability may fall in the future:

Vehicle owners/drivers: At the very least, courts and juries may find a vehicle owner or driver to bear some liability if he or she failed to maintain the system (didn’t install necessary software updates, for example), negligently overrode the vehicle’s automation systems, or otherwise used the vehicle improperly.

Manufacturers (and software developers): This is where it gets tricky. For car manufacturers like Tesla that develop their automated-driving software in-house, the chain of liability begins and ends with the manufacturer. For manufacturers that work with outside software vendors such as Google, however, potential liability may be split between the two.

As with most software disputes, the EULA (End User License Agreement, that long wall of text you see when installing software on your computer that only lawyers read) typically spells out the restrictions and license rights granted by the developer. In practical terms, an autonomous car manufacturer’s EULA will state whether the consumer owns (and thus may be liable for) the entire vehicle, hardware and software alike, or just the physical car. In the latter case, consumers are merely granted a (restricted) license to use the software. Tesla owners, for example, are prohibited from using their vehicles’ self-driving systems to generate revenue through Uber or Lyft. Since software/EULA issues fall within the realm of copyright law, autonomous vehicles will present novel ownership issues in the years to come.

Data providers: Independent, third-party data providers might share partial liability if their data is a contributing factor in a crash. For example, if Google Maps incorrectly navigates a Tesla down a one-way street, resulting in a crash, Google may be held partially liable.

To further complicate things, traditional product liability claims are also set to receive an autonomous-vehicle-fueled overhaul across the board. This Pandora’s box of potential claims may involve everything from negligence (could a self-driving car manufacturer have “reasonably foreseen” the accident?) to failure to warn (what types of injuries are foreseeable in an autonomous system? Are written warnings sufficient? See Glorvigen v. Cirrus Design Corp. for a failure-to-warn case involving an aircraft’s autopilot system), and everything in between.

With the multitude of legal issues on the horizon, we may find it tempting to prematurely tackle the complexities of each. In a time when the future seems ever-present, perhaps taking a step back and addressing the simpler problems of the here and now would be more prudent.
