The prevailing thought is to sacrifice the few to save the many, but what if that lone person on the rerouted track is your son? What would you do then?
Now, what if we ask a robot to make the same decisions—what will it choose?
It’s a question that’s been cropping up more frequently as machines begin to tackle elaborate human tasks like driving. “It’s the most complex daily thing the average person does,” says Professor Raj Rajkumar of Carnegie Mellon University’s robotics department. According to Rajkumar, self-driving vehicles will be a multi-trillion-dollar industry by the end of this decade—that’s a lot of robot trolleys zooming toward those five people.
But the answer to the AI version of the trolley quandary—like all good ethics questions—is a trick: A robot would ideally never find itself in this situation, or so I learned while completing my mission specialist coursework with Uber’s Advanced Technologies Group to pilot their autonomous vehicles.