Newswise — The widespread push by car, truck, and drone makers toward increasingly automated vehicles has moved faster than the technology itself and faster than legislation. A special issue of the Journal of Cognitive Engineering and Decision Making examines how the research methods used to study human-automation interaction may be lagging behind these fast-moving developments. The papers propose ways to close that research gap and inform designers and engineers about what it will take to build safe and reliable systems.
 
The special issue opens with “Issues in Human-Automation Interaction Modeling: Presumptive Aspects of Frameworks of Types and Levels of Automation” by David Kaber of North Carolina State University. Kaber harks back to the groundbreaking work of Sheridan and Verplank that defined levels of automation (LOA), specifying what the human does versus what the automation does, and under what conditions. The lead paper catalogs several conceptual criticisms of the framework, for example: “some LOAs should not be used,” “LOAs do not identify responsibility for system outcomes,” and “LOAs don’t address the right design ‘question.’”
 
To illustrate his arguments, Kaber presents an example set of descriptive levels of automation that runs somewhat counter to the six levels (0–5) of driving automation defined by the Society of Automotive Engineers (SAE). For example, unlike the SAE definition, Kaber posits that Level 3 (conditional automation) should involve more human interaction, particularly in navigation, environmental monitoring, and system monitoring and intervention.
 
The rest of the issue contains commentaries from human factors experts on Kaber’s article. Some quotes from the commentaries reflect the range of reactions:
 
Thomas B. Sheridan (of Sheridan and Verplank): “I argue that making a science out of human-automation function allocation for system design may be an unachievable objective. …[O]verall design of large-scale human-automation systems … will continue to be a matter mostly of experience, art, and iterative trial and error.”
 
Greg Jamieson and Gyrd Skraaning, Jr.: “In our view, the LOA paradigm has lost its momentum and is approaching a crisis. New HAI [human-automation interaction] challenges are emerging along with continual technological development.”
 
John Lee: “The challenge of designing increasingly autonomous vehicles shows that considering multiple perspectives may be more effective than fine-tuning the single perspective of ‘who does what.’ … The LOA framework … may not be sufficient for many systems.”
 
Mary (Missy) Cummings: “…[U]ntil self-driving cars can master uncertainty in all conditions that require knowledge-based reasoning to at least the same degree as humans, we will have only partially capable, but potentially very dangerous, systems that cannot cope with uncertainty.”

Evan Byrne: “Efforts to better understand human behavior with automation will increase the strength of HAI models and empower system designers, assuring that their painstaking efforts withstand the perils of human unpredictability.”
 
View the entire table of contents. To obtain any of the articles for media-reporting purposes, please contact HFES Communications Director Lois Smith (310/394-1811).

 
The Human Factors and Ergonomics Society is the world’s largest scientific association for human factors/ergonomics professionals, with more than 4,500 members globally. HFES members include psychologists and other scientists, designers, and engineers, all of whom have a common interest in designing systems and equipment to be safe and effective for the people who operate and maintain them. The Society’s motto is “Human Factors and Ergonomics: People-Friendly Design Through Science and Engineering.”
