NEW YORK (Thomson Reuters Regulatory Intelligence) - Regulators frequently apply a not-so-invisible hand to manage the advent of new technology. They weigh the perceived social benefits of the innovation against the perceived dangers (physical, financial and moral). Examples include credit default swaps and other risk-spreading financial instruments, blockchain, bioengineering, fintech and medical advancements. This dynamic is also front-and-center in transportation safety, previously with seat belts, anti-lock brakes and air bags, and now in the field of automated driving systems (ADS).
Thomson Reuters Regulatory Intelligence reached out to Mark Raffman, a partner in the Washington, D.C. office of Goodwin Procter LLP. Raffman concentrates his practice on complex product liability and consumer products litigation and advice. He advises on regulatory compliance audits and legislation as well as transactions posing product liability risks.
Raffman’s responses, edited for length, reflect his own views and not those of his law firm or any firm clients:
Q: In an effort to bolster the development of ADS, the U.S. Department of Transportation (DOT) announced (October 2018) its new policy that Federal Motor Carrier Safety Administration (FMCSA) regulations (FMCSR) “will no longer assume that a commercial motor vehicle (CMV) driver is always a human or that a human is necessarily present onboard a commercial vehicle during its operation.” The DOT emphasized the preemptive authority of the FMCSR over conflicting state and local regulations, thus paving the way for uniform minimum safety standards, at least for long-haul trucks and buses. How have stakeholders (for example, state and local governments, manufacturers, the transportation industry and labor interests) generally responded to this development?
Raffman: The October 2018 policy (“AV 3.0”) is itself an attempt to respond to stakeholders’ complaints that federal regulators are not keeping pace with technological developments. At the most general level, reaction from stakeholders seems to be that the federal government is proceeding in the right direction, but not fast enough or far enough. But different stakeholders have emphasized different aspects of the plan – for instance, industry has touted the technology-neutral stance of the new document as a positive development; state officials have expressed concerns regarding funding and flexibility; and labor has expressed concern about whether federal safety regulation will be sufficiently robust. Meanwhile, in the absence of federal action, interested stakeholders are liaising with state regulators to see if they can pave the way for sensible and uniform regulation across transportation corridors.
Q: What other ways has the federal government intervened to encourage the development and adoption of ADS technology and a uniform state regulatory environment, in the commercial vehicle sector and generally?
Raffman: No other federal efforts in the automotive space come immediately to mind – DOT’s publication has been the centerpiece and focus of federal efforts as far as autonomous vehicles go. That said, other agencies (e.g., the Federal Trade Commission and the Consumer Product Safety Commission) are considering other aspects of Internet of Things risks and regulation. If Congress were functioning as it should (i.e., not paralyzed by partisan gridlock), we would expect legislation to update the federal government’s regulatory relationship with emerging technologies such as ADS and the Internet of Things.
Q: About how many state governments have introduced or passed autonomous vehicle laws, and what are some of the key similarities and differences among them?
Raffman: Over forty states have considered legislation related to autonomous vehicles, and more than half the states have passed legislation or regulation in one form or another. They range from bare-bones to more involved. For example, most states have yet to address insurance requirements for autonomous vehicles, while a handful of states (New York, Nevada, Connecticut, Florida, Georgia and Nebraska) have implemented mandatory insurance requirements for both autonomous vehicle testing and road use. Some states are also coming to grips with what it means to have remotely-operated vehicles on their roadways, and in particular how to ensure that remote operators are properly trained and not operating under the influence of drugs or alcohol.
Q: The big assumption is that ADS will continue to develop to the point where eventually users have little input on road safety. This reallocates responsibility for most accidents to the ADS vehicle manufacturing supply chain. How might this shift affect the current patchwork of state tort and mandatory auto insurance laws? For example, will more states (beyond the current 12 or so) move from a traditional driver negligence system towards a no-fault system (in which crash victims receive loss compensation through their own insurance policies, and cannot sue other “drivers” unless their injuries reach the specified monetary or verbally descriptive threshold, such as death or permanent disfigurement)? What other systems might be appropriate, especially in the transition period where regular cars share the road with ADS vehicles?
Raffman: First, I’m skeptical that the no-fault model will take hold. Drivers have an inherent incentive to drive safely, so as not to be injured or killed on the roadways. That inherent incentive is what mitigates the “moral hazard” of a no-fault system. But in a no-fault model for autonomous vehicles, the incentives toward safety would be degraded given that manufacturers do not suffer the physical consequences of unsafe operation, as do drivers.
I think states will continue to require owners of automated vehicles to have liability insurance. There will always be opportunities for operators to affect the operation of the vehicle, even with fully autonomous capability. Or, put another way, injured plaintiffs can be expected to look for and identify ways in which a human owner/operator failed to maintain the vehicle, or to affect its operation, and that incentive means owner/operators will need insurance. (The same may not be true for the occasional “passenger” of a ride-sharing vehicle, but ride-sharing services can and will insure their fleets themselves). As long as the “human” component of vehicle ownership/operation requires liability insurance, we probably will not see a revolution in how vehicles are insured, or a difficult transition period while regular cars and autonomous cars operate together.
In addition, it’s likely that states will ultimately require insurance from manufacturers and/or operators as a condition for the sale or operation of autonomous vehicles. In my ideal world, all of the various participants in the supply chain for an autonomous vehicle would be insured under the same rubric, so that an accident victim with a legitimate claim against the “vehicle” can be compensated without complicated and drawn-out litigation among the various entities (software component developer, auto maker, etc.) over which of them might be responsible. Insurers, incentivized toward safety to reduce claims payouts, could assist in investigating root causes of accidents outside the litigation process.
Q: On the supply chain liability side, how might the product liability regime evolve to protect consumers while fostering continued investment and innovation in ADS? For example, what is the likelihood that over time regulators try to stop unlimited liability by adopting some of the risk allocation methods used in other industries, for example, tort immunity, statutory limitation of liability, or even a taxpayer-funded backstop?
Raffman: As noted, I’d like to see supply chain liability rules evolve so that the entire supply chain for a vehicle is insured under one insurance policy, so that legitimate claims can be settled quickly and victims compensated without multiparty “complex litigation” in the case of every accident. Anything regulators can do to foster that outcome would be beneficial.
The one area where the product liability regime needs to incorporate limits on liability, in my view, is in the case of algorithms that are designed to respond to an emergency. Today, if a driver swerves to avoid a school bus and hits another car, the driver is assumed to have reacted instinctively and, if the driver is at fault, is taxed with “negligence” but not intentional conduct. Everyone understands it’s an accident. But if an algorithm decides how the vehicle will react – if we know in advance what the vehicle will avoid, and what it will hit – that introduces an element of intentionality that may, in actual cases, serve to inflame a jury in a way that a normal “accident” might not. But autonomous vehicles need these algorithms to operate, and they promise overall better safety than human drivers. So there needs to be some sort of regulatory safe harbor for algorithms that are consistent with industry standards (or whatever ex ante standards regulators choose to adopt) so that this litigation risk can be mitigated. That doesn’t mean victims can’t receive compensation … just that autonomous vehicles are not judged more harshly for using vetted algorithms than humans are for using instinct, in the moment.
Q: Cyber risk is a big consideration; a cyberattack could cause high-speed pile-up accidents, or traffic jams that disrupt the supply chain and trigger large economic losses. To what extent is the current state-based insurance regulatory framework (for commercial general liability (CGL), business interruption, stand-alone cyber and cargo insurance, etc.) equipped to handle these situations?
Raffman: I would expect that CGL policies would pay claims based on property damage or personal injury caused by a product, though I have heard arguments about new exclusions that might come into play. For cases of purely economic loss (e.g., due to fleet downtime), CGL would not respond, but other policies like business interruption insurance might. Insurance companies and brokers are offering tailored “cyber-risk” policies to try to fill gaps in what more traditional policies offer. I think market developments, rather than regulatory developments, will lead here. The bottom line is that companies manufacturing products in this space need to look closely at what their policies cover, and especially at what they exclude.
Q: Are there any scenarios where owners or occupants of fully-autonomous vehicles might still be “contributorily” liable for human decisions, such as the failure to download ADS software updates or to bring their cars in for maintenance? For example, drivers frequently use their discretion to time the scheduling of safety-related maintenance, such as tire rotation and wheel alignment.
Raffman: As noted above, my stance is that the human who is responsible for ownership/operation of any autonomous vehicle will always bear a potential share of liability for property damage or personal injury caused by a vehicle accident. Only the casual passenger – i.e., the rider in an autonomous, ride-sharing vehicle – would likely not bear some potential liability.
Q: Vehicle manufacturers facing increased product liability risk may have the incentive to program their check engine lights more liberally (to induce owners to bring their vehicles in for service) or to aggressively sell lucrative extended warranties. This may raise consumer protection/deceptive practices issues. How do you think regulators will address this?
Raffman: I think the market will be more of a tonic than regulators or deceptive practices suits if manufacturers place unreasonable operating constraints on autonomous vehicle owners.
Q: The Boeing 737 MAX jet crashes put a public spotlight on how market segmentation of optional safety features can have tragic consequences. Do you think that regulators will allow ADS vehicle manufacturers to segment the market by charging more for optional ADS safety features, or enhanced ADS maintenance programs through extended warranties?
Raffman: My response is that ultimately, there’ll be market pressure for “optional” safety features to become standard, if they’re important. Backup cameras, once a luxury, are now ubiquitous. The movement toward safety will be a combination of market-driven (what people want and expect); insurance-driven (premiums, discounts); and potentially regulation-driven. But I doubt regulators get far out in front of the market.
(Jason Hsieh is a contributing writer for Regulatory Intelligence.)
This article was produced by Thomson Reuters Regulatory Intelligence - bit.ly/TR-RegIntel - and initially posted on Oct. 16. Regulatory Intelligence provides a single source for regulatory news, analysis, rules and developments, with global coverage of more than 400 regulators and exchanges. Follow Regulatory Intelligence compliance news on Twitter: @thomsonreuters