For those who haven’t been keeping track of the auto industry recently, it has become clear that autonomous vehicles will be a part of the future of mass transportation in this country. This view was greatly bolstered by the September release of the first United States government guidelines on autonomous vehicles, which broadly outline standards for their use on U.S. roads.
It is important to note that this document, officially named The Federal Automated Vehicles Policy, does not contain official regulations. Instead, it is a step which indicates that the federal government is preparing to regulate autonomous vehicles on a national scale in the future. It stands to reason that the U.S. government does not yet have the information needed to craft official laws and does not want to overregulate this developing industry for fear of hindering growth.
An encouraging sign for future autonomous vehicle safety is that the document includes a section which emphasizes concerns about an autonomous system’s “Ethical Considerations”. These ethical issues are at the center of research done by Professor Jeffrey Miller, an autonomous vehicle expert at the USC Department of Computer Science.
Professor Miller’s research addresses issues like the hypothetical Moose Problem. In this scenario, two autonomous cars are traveling fast, side by side, along a narrow road when one car’s sensors suddenly detect that a full-grown moose has jumped onto the road in front of it. There are only two possible outcomes: the car hits the moose, and the moose’s weight kills its passenger; or the car swerves into its neighboring car and pushes it off the road, killing those passengers instead. This scenario poses a key question: should an autonomous vehicle preserve the life of its passenger at any cost, even if that requires harming others? Or should it make certain concessions, even if that means the car’s passenger may get hurt?
“Driverless vehicles need to know ahead of time what decision to make since they can’t instantaneously revert control to the driver for input,” said Miller. “These ethical dilemmas need to be considered during development so the vehicle can be programmed to behave in that manner when a situation arises.” It’s clear that the groundwork for framing, studying, and addressing these important ethical considerations is being laid here at USC.
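To make Miller’s point concrete, the choice a vehicle faces in the Moose Problem can be thought of as a policy fixed at development time rather than a decision improvised on the road. The following is a minimal, purely illustrative sketch of that idea; the function names, the two maneuvers, and the simple weighted-risk model are assumptions for the sake of the example, not part of any real autonomous-driving system.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A possible evasive action with rough, assumed risk estimates (0 to 1)."""
    name: str
    risk_to_passenger: float
    risk_to_others: float

def choose_maneuver(options, others_weight):
    """Pick the maneuver with the lowest weighted total risk.

    others_weight encodes the ethical policy chosen during development:
    0.0 protects the passenger at any cost, while 1.0 weighs harm to
    others equally with harm to the passenger.
    """
    return min(
        options,
        key=lambda m: m.risk_to_passenger + others_weight * m.risk_to_others,
    )

# The Moose Problem reduced to its two outcomes (risk numbers are invented):
options = [
    Maneuver("brake and hit the moose", risk_to_passenger=0.9, risk_to_others=0.0),
    Maneuver("swerve into the neighboring car", risk_to_passenger=0.1, risk_to_others=0.9),
]

# A passenger-first policy swerves; an equal-weight policy brakes.
print(choose_maneuver(options, others_weight=0.0).name)
print(choose_maneuver(options, others_weight=1.0).name)
```

The point of the sketch is that the ethically charged part is not the arithmetic but the single `others_weight` parameter, which must be settled by developers and regulators long before any moose appears.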
While the government is now planning to enact laws that require autonomous systems to account for ethically challenging situations, the new policy document provides no specific recommendations for addressing, or even considering, these problems. It does mention that the algorithms tasked with resolving such conflicts should “be developed transparently using input from Federal and State regulators, drivers, passengers, and vulnerable road users,” which seems to imply that a wealth of data will soon be needed. In a move signaling progress at the state level, California’s DMV released guidelines supporting the federal document on September 30, and more states are sure to follow.
Research on developing ethically acceptable AI, like the kind conducted by Miller, may soon be at the center of a national debate on the future of our transportation system. Adding to the urgency, Uber has recently begun using autonomous vehicles to pick up actual customers around its research center in Pittsburgh, and it may soon launch the same program in San Francisco. It seems we’ve only begun to hear the debate on autonomous cars and how their systems will be designed.