Ethical Machines

Swami Gulagulaananda said:
"Teaching ethics to a human being is hard. I wonder if machines are easier."

There is an old psychology question that we used to ask as kids:
There is a railway track on which trains normally run, and a side track that is not supposed to be used. A sign indicates that walking on the main track is dangerous; walking on the side track is not a problem because trains are not expected to pass on it. A group of ten young boys is playing on the main track while a lone boy is playing on the side track. You notice a train approaching rapidly, and you are standing beside a lever that controls whether the train continues on the main track or switches to the alternate track. Assume the side track poses no risk to the train, and that you cannot shout to shoo the kids off the tracks (they are too far away) or do anything else. Given only the two following choices, which would you go for?
- Let the train continue on the main track and allow the ten kids to die?
- Or send the train on the alternate track and let only one kid die?

This question is interesting because it forces a choice not just between many lives and one life, but between those who broke the rules and the one who followed them. You can save many lives, but only by making the rule follower pay the penalty for following the rules. Or you can let many people die… Which would you go for?
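
To make the machine's side of this concrete, here is a toy sketch (my own, not from any real system) of the two obvious scoring rules. Counting bodies always says "switch"; add a penalty for sacrificing the rule follower and the answer flips, depending entirely on a weight that nobody knows how to justify.

```python
# Toy trolley chooser: the "ethics" lives entirely in the weights.

def choose_track(deaths_main, deaths_side, rule_penalty=0.0):
    """Return 'main' or 'side', whichever has the lower total cost.

    rule_penalty is the extra cost of killing someone who followed
    the rules -- an arbitrary number, which is exactly the problem.
    """
    cost_main = deaths_main                      # ten rule-breakers die
    cost_side = deaths_side + rule_penalty       # one rule-follower dies
    return "main" if cost_main < cost_side else "side"

print(choose_track(10, 1))                   # 'side': pure body count
print(choose_track(10, 1, rule_penalty=20))  # 'main': rules outweigh lives
```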

Now imagine that a program had to answer this question. I remember seeing a similar dilemma on Twitter long back, though I don’t remember the source, about self-driving cars. Imagine a self-driving car in which you are seated. It’s driving rather fast down an empty street when suddenly a boy comes running across. It’s too late to stop the car. The car can do one of three things:
- Continue going straight and run over the kid
- Swerve left into a group of five boys
- Swerve right into a pole that may kill you

What should the car do?

Whose life is more valuable? How do you measure the value of a life? Are all lives equally valuable? What if the choice is between an old man and a child? Can we say the child should live, since the old man has already lived most of his life? These are very hard problems to solve.
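
If you tried to write the car's decision down, it might look something like the hypothetical sketch below (every name and number here is my assumption, not anyone's real system). The code is trivial; the value table is the entire unsolved problem. With all lives weighted equally, the tie between the kid and the passenger gets broken arbitrarily.

```python
# Hypothetical cost table for the car's three options.
# Every value here is an assumption -- that is the hard part.

LIFE_VALUE = {"child": 1.0, "passenger": 1.0}   # are these really equal?

OPTIONS = {
    "straight": [("child", 1)],        # run over the kid
    "left":     [("child", 5)],        # hit the group of five boys
    "right":    [("passenger", 1)],    # hit the pole, killing you
}

def choose(options):
    """Pick the option with the lowest summed life-value cost."""
    def cost(victims):
        return sum(LIFE_VALUE[kind] * count for kind, count in victims)
    return min(options, key=lambda name: cost(options[name]))

print(choose(OPTIONS))  # 'straight': tied with 'right', broken by dict order
```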

Person of Interest is a wonderful TV series, a fast-paced, action-filled show with machine learning at its core. At one point, the machine (the central ML-driven computer is called ‘The Machine’ in the show) decides that a key politician has to be eliminated for the sake of peace. The machine's creator ponders over this decision: is a machine equipped to make decisions that humans find hard to make? If the death of a single person can bring peace, should that person be killed? The answer may seem simple - yes, kill Hitler, save millions of Jews… But can a machine reach that level of human thinking? As the creator continues, “What if the machine indicates that a large number of people have to be killed in order to reduce world hunger?” Of course, if there are no people, then there cannot be hungry people - simple logic for the machine.
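
That last step is what ML people call a mis-specified objective, and it is easy to reproduce in a toy example (mine, with made-up numbers): ask an optimizer to minimize the count of hungry people and, unless you state the constraint you thought was too obvious to write down, "eliminate everyone" scores perfectly.

```python
# Toy objective mis-specification: "reduce world hunger" taken literally.
# All figures are made up for illustration.

POPULATION = 8_000_000_000
HUNGRY = 700_000_000

ACTIONS = {
    "grow_more_food":   {"population": POPULATION, "hungry": HUNGRY // 2},
    "eliminate_people": {"population": 0, "hungry": 0},
}

def naive_objective(state):
    return state["hungry"]                # literally "fewer hungry people"

def saner_objective(state):
    # The constraint the human never wrote down: fed, living people
    # are good; dead people do not count as "not hungry".
    return state["hungry"] - (state["population"] - state["hungry"])

print(min(ACTIONS, key=lambda a: naive_objective(ACTIONS[a])))  # eliminate_people
print(min(ACTIONS, key=lambda a: saner_objective(ACTIONS[a])))  # grow_more_food
```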

The idea of teaching ethics to a machine seems a fascinating one to pursue. I wonder whether it can be done. Perhaps then we may not have to worry about Skynet… #GoAsimov
