Earlier this week, the first fatal accident involving an autonomous vehicle occurred in Tempe, Arizona. While it’s too early to comment on the specifics of this particular case, the incident has prompted conversations around the world on a range of issues, including the technology powering self-driving cars, liability when there is no human driver at the wheel and how to engage the public in policy decisions.

To discuss the safety of the technology in self-driving cars, ASU Now spoke with Ashraf Gaffar, assistant professor of Computing, Informatics and Decision Systems Engineering in the Ira Fulton Schools of Engineering. Gaffar is developing an artificial intelligence system that will augment both autonomous vehicles and traditional, human-driven cars.

To discuss the policy implications of the accident and what regulatory changes may be on the horizon for self-driving cars, ASU Now spoke with Andrew Maynard, professor in the School for the Future of Innovation in Society and director of the ASU Risk Innovation Lab.


Interview with Ashraf Gaffar, assistant professor of Computing, Informatics and Decision Systems Engineering, Ira Fulton Schools of Engineering

QUESTION: Are autonomous vehicle accidents preventable?

ANSWER: We do not have enough information to determine whether such accidents are preventable, but as accidents are investigated and simulations are developed, important information becomes available for technology adjustments.

Q: Can self-driving cars be tested in such a way that the risk to public safety is minimized or eliminated altogether?

A: Part of my research at ASU focuses on using artificial intelligence in new ways, including an intelligent co-driver to observe driver behavior and intervene, if warranted, before an accident happens. This co-driver AI system works for both human and autonomous drivers as a back-up, observe-and-take-action mechanism.
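To make the observe-and-take-action idea concrete, here is a minimal sketch of what such a monitoring loop could look like. It is an illustrative assumption only; the names, thresholds and risk logic below (Observation, time_to_hazard, co_driver_step, the 2-second reaction budget) are hypothetical and are not taken from Gaffar's system.

```python
# A minimal, hypothetical sketch of an observe-and-take-action co-driver loop.
# Nothing here is taken from Gaffar's actual system; the names, thresholds
# and risk logic are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Observation:
    speed_mps: float         # current vehicle speed, metres per second
    gap_to_hazard_m: float   # distance to the nearest detected hazard, metres
    driver_responding: bool  # is the driver (human or autonomous) reacting?


def time_to_hazard(obs: Observation) -> float:
    """Crude risk surrogate: seconds until the vehicle reaches the hazard."""
    if obs.speed_mps <= 0:
        return float("inf")
    return obs.gap_to_hazard_m / obs.speed_mps


def co_driver_step(obs: Observation, reaction_budget_s: float = 2.0) -> str:
    """Observe one frame and decide whether to stay passive, warn, or intervene."""
    tth = time_to_hazard(obs)
    if tth < reaction_budget_s and not obs.driver_responding:
        return "intervene"  # back-up action: brake or steer on the driver's behalf
    if tth < 2 * reaction_budget_s:
        return "warn"       # alert the driver while there is still time to react
    return "observe"


if __name__ == "__main__":
    frame = Observation(speed_mps=15.0, gap_to_hazard_m=20.0, driver_responding=False)
    print(co_driver_step(frame))  # "intervene": 20 m / 15 m/s is about 1.3 s, under the 2 s budget
```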

During the past seven years, we’ve designed and executed a dozen scientific experiments covering millions of complex driving scenarios in our car simulator lab at ASU. For the past year, we have been testing situations very much like the recent Uber accident. Specifically, drivers are on a simulated busy road and encounter pedestrians crossing in a variety of unexpected ways, including outside of a crossing zone in the middle of a road at night. Fortunately, the simulator allows us to collect accident-rate and driver-behavior data during the accident without putting anyone at risk. This data is analyzed and used to improve our artificial intelligence co-driver and will be valuable for dramatically reducing future accident risks.

Q: Does artificial intelligence offer the possibility of accident-free roadways?

A: The short answer is, not in the foreseeable future.

I work on the window of intervention, or WoI, which is the time between when an accident risk begins and the time the driver perceives and properly reacts to avoid that accident. All drivers do this safely thousands of times each day: when approaching an amber traffic light, or when the car ahead gradually or suddenly stops. Sometimes, the WoI is too short to react properly, and accidents happen.

Technology is not mature enough to match human intelligence on all fronts. That’s why we have human pilots on airplanes even when they have the most advanced autopilots.

With autonomous vehicles, the WoI is more challenging than in the air because roadways are much more crowded and chaotic than the skies. When the autopilot fails or gets confused, the human pilot often has enough time to take over. That’s not always the case with autonomous vehicles.

When an autonomous vehicle fails to handle a risk, the human driver needs to pay attention, perceive and assess the situation, physically take over vehicle controls and then decide what actions to take. Research shows that in some critical situations, even if the driver is fully ready in standby, it could take 10 to 20 seconds to take over. If the WoI is shorter than that, we enter a dangerous situation of negative WoI. Additionally, if drivers need to be in standby position at all times, it defeats the purpose of autonomous vehicles.
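The arithmetic behind a negative WoI can be illustrated with a brief sketch. Only the 10-to-20-second takeover figure comes from the interview; the window-of-intervention values and the woi_margin helper below are hypothetical, chosen purely for the example.

```python
# Illustrative arithmetic only. The 10-20 second takeover figure comes from the
# interview; the window-of-intervention values below are made up for the example.

def woi_margin(window_of_intervention_s: float, takeover_time_s: float) -> float:
    """Seconds left to act after the human has taken over.
    A negative result is the 'negative WoI' case: the takeover finishes only
    after the window to avoid the accident has already closed."""
    return window_of_intervention_s - takeover_time_s


if __name__ == "__main__":
    takeover_s = 15.0  # mid-range of the 10-20 s cited in the interview
    for woi_s in (5.0, 15.0, 30.0):
        margin = woi_margin(woi_s, takeover_s)
        status = "negative WoI" if margin < 0 else "recoverable"
        print(f"WoI = {woi_s:4.1f} s, takeover = {takeover_s:.1f} s "
              f"-> margin = {margin:+5.1f} s ({status})")
```

Running the sketch shows, for instance, that a 5-second window paired with a 15-second takeover leaves a 10-second deficit, which is exactly the negative-WoI situation described above.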


Interview with Andrew Maynard, professor, School for the Future of Innovation in Society and director, ASU Risk Innovation Lab

QUESTION: Do regulations need to be put in place governing autonomous vehicle testing? What can or should change from a public policy perspective to prevent accidents like this from occurring in the future?

ANSWER: It’s not yet clear whether the incident in Tempe between an Uber SUV and a pedestrian would have been prevented by stronger regulations for self-driving cars. However, it does highlight the need for stronger policies and regulations governing how self-driving cars are tested and used on public roads.

Arizona paved the way for testing self-driving cars on public roads in 2015 by allowing manufacturers to test them with minimal oversight. As Uber and Waymo ramped up their operations in the state, new policies were introduced at the beginning of March requiring vehicles to comply with specific safety requirements. However, the details of these requirements, and how they will be implemented, have yet to be worked out.

Self-driving cars are increasingly present on public roads; as such, there is an urgent need to develop regulations that ensure public safety while remaining sufficiently agile to evolve with the technology. At the moment, we know that many of the self-driving cars on the road operate safely under predictable driving conditions. But when driving conditions are poor and others on the road behave unpredictably, there is an increased risk of crashes resulting in injury or death.

Q: What other policy issues with autonomous driving technology need to be addressed as it becomes more integrated into our lives?

A: Self-driving car regulations need to go beyond basic questions of safety. There are also tough questions around who should be allowed to ride as a passenger in a self-driving vehicle, who is liable in the event of an incident and how first responders should handle incidents — especially when there is no passenger, or the passenger is incapacitated. There are further questions around how to regulate malicious interference that impairs a car’s ability to operate safely. What should the penalty be for interfering with a car’s sensors, intentionally fooling them, or even forcing a car to take evasive action by jumping in front of it?

Q: Uber has suspended its testing and some are calling for an end to autonomous vehicle technology altogether. What’s the right way to move forward? 

A: If developed responsibly, autonomous vehicles have tremendous potential to improve lives and transform communities. They could make a sizable dent in the nearly 40,000 vehicle-related fatalities that occur each year in the United States. They may substantially increase mobility for people who do not or cannot drive. And they could potentially open up new ways to develop smart and interconnected cities, where autonomous transportation is fully integrated into their design. As a society, we must decide what risks we are willing to accept to see the benefits of self-driving vehicles emerge.

Moving forward, it’s critical that everyone have a seat at the table when discussing the technology’s development, use and regulation. As we’ve seen with technologies from nuclear power to genetically modified foods, simply focusing on safety and effectiveness is not enough to ensure social trust and acceptance. Because self-driving cars will have a direct impact on people’s lives, there is an urgent need for developers and policymakers to start engaging in sophisticated ways with local communities to obtain their feedback and consent. This will help ensure regulations and technology implementations are developed in alignment with the community’s aspirations and values.