People are nervous about life-and-death decisions being made by algorithms rather than humans. Who determines the ethics of the algorithms?

Azim Shariff, assistant professor of psychology and social behavior at the University of California, Irvine, is part of the team behind the Moral Machine, an online survey platform that lets the public share their intuitions about which algorithmic decision they feel is most ethical for the vehicle to make.

What are people's moral priorities in situations where autonomous vehicles have to weigh the risks of harming different passengers, pedestrians, pets, or even the driver? Do people prefer entirely utilitarian algorithms, in which the car prioritizes the greatest good for the greatest number of people? Or do they think it's more ethical for a car to prioritize the lives of its passengers? Should young lives be valued over older lives?
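To make the first of those trade-offs concrete, a purely utilitarian rule can be sketched as nothing more than counting how many people each possible outcome would harm and choosing the outcome with the lowest count. The sketch below is a hypothetical illustration only, not code used by the Moral Machine or by any vehicle; the Outcome fields and the example numbers are invented for the purpose of the example.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action and the people it would harm (illustrative only)."""
    label: str
    passengers_harmed: int
    pedestrians_harmed: int

    @property
    def total_harmed(self) -> int:
        return self.passengers_harmed + self.pedestrians_harmed

def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    # "Greatest good for the greatest number": minimize total people harmed,
    # with no special weight given to passengers over pedestrians.
    return min(outcomes, key=lambda o: o.total_harmed)

choice = utilitarian_choice([
    Outcome("swerve into barrier", passengers_harmed=1, pedestrians_harmed=0),
    Outcome("stay on course", passengers_harmed=0, pedestrians_harmed=3),
])
print(choice.label)  # -> "swerve into barrier"
```

A passenger-protective rule would instead weight the occupants' safety more heavily, which is exactly the kind of disagreement the survey is designed to surface.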