
How do you program ethics into robots? Can you trust machines with moral decisions? How do you codify compassion or mercy?

In hospitals, APACHE medical systems help determine the best treatments for patients in intensive care units, often patients on the edge of death. While the doctor may appear to have autonomy, in certain situations it can be difficult to go against the machine, particularly in a litigious society. Is the doctor really free to make an independent decision? The result can be a situation where the machine is the de facto decision-maker (Source: Wendell Wallach, Yale’s Interdisciplinary Center for Bioethics, author of “A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control”).

Humans decide on a specific ethical law, and then write code for robots and artificial intelligence to deploy. Yet what is the appropriate ethical rule? Every moral law has exceptions and counterexamples. For example, if your ethical law were to maximize happiness, should a robot harvest the organs of one man to save five?
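A minimal sketch of what "code the ethical law, then deploy it" can look like, using "maximize happiness" as the rule. Every name here is illustrative, not any real system; the point is only that a faithfully implemented rule can endorse the organ-harvesting counterexample:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    lives_saved: int
    lives_taken: int

def happiness_score(action: Action) -> int:
    # Naive utilitarian proxy: net lives preserved stands in for happiness.
    return action.lives_saved - action.lives_taken

def choose(actions: list[Action]) -> Action:
    # The machine simply picks whichever action scores highest.
    return max(actions, key=happiness_score)

options = [
    Action("do nothing", lives_saved=0, lives_taken=0),
    Action("harvest one patient's organs to save five", lives_saved=5, lives_taken=1),
]

print(choose(options).name)
# -> "harvest one patient's organs to save five"
# The rule runs exactly as written, yet the outcome is morally unacceptable.
```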

The system may also crash, or simply have no valid answer, when it encounters one of the many conceivable paradoxes or an unresolvable conflict between its rules.
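A minimal sketch, with invented rules and action names, of how two individually plausible hard-coded rules can become jointly unsatisfiable and leave the program with nothing to return:

```python
def never_harm(action: str) -> bool:
    # Rule 1: forbid any action that harms the patient.
    return action != "withdraw life support"

def respect_autonomy(action: str) -> bool:
    # Rule 2: honor the patient's documented wish to stop treatment.
    return action == "withdraw life support"

def decide(candidates: list[str]) -> str:
    permitted = [a for a in candidates if never_harm(a) and respect_autonomy(a)]
    if not permitted:
        # The rules cannot be satisfied together: the conflict is unresolvable,
        # and the program fails rather than exercising judgment.
        raise RuntimeError("No action satisfies all ethical rules")
    return permitted[0]

try:
    decide(["continue treatment", "withdraw life support"])
except RuntimeError as err:
    print(err)  # -> No action satisfies all ethical rules
```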

How are ethical rules effectively implemented in an age of automation, robots and artificial intelligence?