
It's an interesting thought experiment, but I think a real situation with such a stark binary choice is so improbable that it's barely worth considering.

Cars are not on trolley tracks and computers can brake or swerve harder and faster than any human could.



How about an animal jumping in front of your car - a very common situation - when there's no way to avoid it? The computer could assume it's a human and do absolutely anything to avoid it, crashing into a tree and seriously injuring everyone in the car. I encounter so many strange situations on the road that I can't imagine a self-driving car being prepared for all of them.


Wow, that would be a bad robocar. I've chosen to hit animals rather than swerve and possibly lose control on numerous occasions. A robocar that would rather total itself is faulty. Then again, a robocar that would choose to kill a child is also faulty. Until they can reliably tell a child from a dog, robocars will not be ready for general use.


That's a high bar. A human driver with a split second to decide may not be able to do that consciously. Maybe robocars should only have to do as well as a human driver?


Manufacturers' exposure to lawsuits would seem to require such a high bar. Eventually the industry might be able to change the law such that the lives of children in the road are devalued (or perhaps such that animals in the road are valued more highly than car passengers?), but that will take time. Before that happens, a robocar that can't tell the difference just might have to drive real slow.


No matter how fast they brake or swerve, it's not improbable that they might be in a situation where the computer has a choice: hit someone standing on the side of the road, or don't hit them but potentially kill everyone on board the vehicle. What should it do? Would any of us want to be the programmer who writes that code? Because I know that I definitely wouldn't want to be in charge of that.


Well, the issue is that we need to program the cars so that we optimize the outcome when every car on the road behaves that way and is expected to behave that way.

Do we get the best outcome when everyone expects the computer to spare the pedestrian? When everyone expects the computer to drive "selfishly" or to try to minimize its own impact velocity?

I don't think it's a good idea to program the cars with a "utilitarian" policy of saving the most human lives possible: not only does that require quite a lot of inferential power to be packed into the car, it also doesn't set up any concrete expectations for the passengers and pedestrians. You would have to add a mechanism for the car to first, in bounded real time, infer how to save the most lives, and then signal to all passengers and bystanders exactly what course of action it has chosen, so that they can avoid acting randomly and getting more people killed.

This is why we always demand very regimented behavior in emergency procedures: trying to behave "smartly" in an emergency, in ways that don't coordinate well with others, actually gets more people killed. Following a policy may feel less "ethical" or "heroic", but it actually does save the most lives over the long run.
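
To make the contrast concrete, here's a minimal Python sketch of the two approaches. Everything in it (the Obstacle/Action types, the harm scores) is hypothetical and made up for illustration, not taken from any real autonomous-driving stack: the point is just that a fixed policy produces the same, anticipatable action every time, while a per-incident "utilitarian" policy depends on harm estimates computed under a real-time deadline and is unpredictable to bystanders.

    # Minimal sketch: fixed emergency policy vs. per-incident "utilitarian"
    # optimization. All names and numbers here are hypothetical.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Action(Enum):
        BRAKE_STRAIGHT = auto()   # predictable: stay in lane, brake hard
        SWERVE_LEFT = auto()
        SWERVE_RIGHT = auto()

    @dataclass
    class Obstacle:
        kind: str                 # e.g. "pedestrian", "animal", "unknown"
        in_lane: bool

    def fixed_policy(obstacles: list[Obstacle]) -> Action:
        # Regimented rule: always brake in-lane. Pedestrians and other
        # drivers can learn and anticipate this one behavior, which is
        # the coordination benefit argued for above.
        return Action.BRAKE_STRAIGHT

    def utilitarian_policy(estimated_harm: dict[Action, float]) -> Action:
        # Pick whichever action currently has the lowest estimated total
        # harm. This needs a harm estimate per candidate action under a
        # hard real-time deadline, and its output varies with the
        # estimates of the moment, so bystanders can't predict it.
        return min(estimated_harm, key=estimated_harm.get)

    scene = [Obstacle(kind="unknown", in_lane=True)]
    print(fixed_policy(scene))              # always BRAKE_STRAIGHT
    print(utilitarian_policy({
        Action.BRAKE_STRAIGHT: 0.7,         # made-up harm scores
        Action.SWERVE_LEFT: 0.4,
        Action.SWERVE_RIGHT: 0.9,
    }))                                     # SWERVE_LEFT, this time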


> it's not improbable that they might be in a situation where the computer has a choice: hit someone standing on the side of the road, or don't hit them but potentially kill everyone on board the vehicle.

What makes you think this situation is probable? How often does it currently occur for human drivers? Ever? I don't have statistics at hand, but what exactly is the incidence of pedestrians who are not in the roadway getting hit by cars? And how many of those were caused by driver error that wouldn't exist for a self-driving car rather than a forced choice?


A discipline for hard choices in the realm of human mortality is what separates professions from mere occupations. It's what gave birth to the term 'software engineer' on the Apollo team. It's the price of lunch for the self-driving car team. Ignoring the implicit decision doesn't relieve anyone of responsibility. Engineering means accepting the fallibility of one's decisions, and the potential consequences of failure include people dying.



