A major challenge to the creation of machines based on artificial intelligence (AI) involves morality, a philosophical question that is itself difficult to pin down.
When people create intelligent machines, the design should be grounded in human-centered AI. This type of AI has built into it the ethical imperative of being human-centered, based on fundamental human rights. The machine’s applications should be designed to improve the conditions of human living, including human dignity, rights, freedom, autonomy, and purpose in life. This would require AI designers to clarify a range of preferences, including philosophical, ethical, and legal ones.
But this raises difficult questions concerning ethics, notably on exactly whose ethics should be programmed into AI. The challenge emerges when one considers that Western moral philosophy contains three main pillars or ethical theories: Aristotelian virtue ethics, Kantian deontology, and consequentialism.
These theories are distinct and cannot be fully reconciled. Stemming from Aristotelian philosophy, virtue ethics holds that the ultimate goal for the individual consists in the constant improvement of oneself as well as of the people and environments surrounding oneself. Immanuel Kant’s deontological ethics is based on the notions of duty, rationality, and universal rules, whereas utilitarian ethics, or consequentialism, judges an action by its consequences and whether these result in the greatest amount of happiness and the least amount of pain or unhappiness. The challenge of ethics is stated by theorists Andrea Gilli, Massimo Pellegrino, and Richard Kelly,
“These three [ethical] approaches cannot, fundamentally, be reconciled when it comes to foundational issues. In simple terms, each person decides to behave as he/she prefers in every day interaction. Such individual preferences and choices are to be respected. However, when intelligent machines start making decisions with real-life consequences on other human beings, their ethical preferences acquire new salience” (1).
This raises the pressing question,
“[G]ranted all different ethical perspectives comply with the law, who, and on what basis, should decide what ethical stance should intelligent machines be based on? How often should such stances change and according to what parameters: elections, bureaucrats’ decisions, or other factors?” (2)
A hypothetical example of an imminent accident involving an autonomous vehicle illustrates the challenge of ethics,
“The onboard software has to decide whether to veer right, and face 100 percent chances of killing a young girl, or veer left and, for instance, having 50 percent chances of killing an elderly couple. The number of examples is infinite and could include a young promising professional and an unemployed person, or a wealthy sportsman and an underground artist and so forth. The key challenge remains: what should the autonomous car do? Its actions will be driven by the algorithm and the data, but this still poses a question: whether to base the decision on gender, age, probabilities, number of fatalities, their contribution to society or on any other consideration” (3).
Consider some of the moral complexities: should the autonomous vehicle collide with and kill certain pedestrians in an act that results in the least unhappiness for all those involved (utilitarianism)? Or how would the outcome differ if the vehicle had been programmed to operate according to a deontological ethical paradigm holding that killing pedestrians is always and universally wrong? How would the AI take probabilities into account?
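To make the probability question concrete, the following Python sketch is purely illustrative and is not drawn from the source or from any real vehicle’s software: the names `Action`, `utilitarian_choice`, and `deontological_choice` are hypothetical, and the numbers simply mirror the hypothetical accident above. It shows how a crude consequentialist rule (minimize expected fatalities) and a crude deontological filter (never choose an action certain to kill) can disagree, and that in this very scenario the expected-fatality calculus ties.

```python
# Illustrative sketch only: hypothetical names and numbers, not a real system.
from dataclasses import dataclass

@dataclass
class Action:
    name: str               # e.g. "veer_right"
    p_fatality: float       # probability the maneuver kills someone
    people_at_risk: int     # number of people killed if it does

scenario = [
    Action("veer_right", p_fatality=1.0, people_at_risk=1),  # the young girl
    Action("veer_left",  p_fatality=0.5, people_at_risk=2),  # the elderly couple
]

def utilitarian_choice(actions):
    """Minimize expected fatalities (a crude consequentialist calculus)."""
    return min(actions, key=lambda a: a.p_fatality * a.people_at_risk)

def deontological_choice(actions):
    """Forbid any action certain to kill; the rule itself ranks the
    remaining options no further, so take the first permissible one."""
    permissible = [a for a in actions if a.p_fatality < 1.0]
    return permissible[0] if permissible else None

# Both maneuvers have an expected toll of exactly 1.0 fatality, so the
# utilitarian calculus ties and min() falls back to list order -- the
# arithmetic alone cannot settle the choice.
print(utilitarian_choice(scenario).name)    # "veer_right" (tie, first in list)
print(deontological_choice(scenario).name)  # "veer_left"
```

The sketch makes the essay’s point visible in code: the answer depends entirely on which rule the coders chose to write, and the utilitarian arithmetic can even fail to discriminate between the options at all.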
Further, if one accepts that the young girl, or the elderly couple, is killed, who gets to decide this? Clearly, machines cannot themselves determine what is right or wrong; AI machines “do not decide to go to the beach on a workday or that they prefer raspberry to apple. What they do, in fact, strictly depends on what coders have written on the algorithm, intentionally or unintentionally, and on the data they have access to and thus what data, implicitly or explicitly, reflect” (4). AI’s human creators must agree on what to program into machines and then ensure that machines follow those instructions.
An additional issue is responsibility. In the case of the vehicle accident, who would be responsible for the fatality? Would it be the programmers who wrote the algorithm, the data scientists who provided the training data, or the car manufacturer? Answering this question is a puzzle the courts will have to solve.
The challenge of ethics calls for philosophers, especially specialists in ethics, to become involved in the discussion. The creation of AI requires not merely scientific and engineering specialists but also philosophers to debate whose ethics should be programmed into machines.
1. Gilli, Andrea, Massimo Pellegrino, and Richard Kelly. 2019. “Intelligent Machines and the Growing Importance of Ethics.” NATO Defense College, pp. 45–54, p. 45.
2. Ibid., p. 52.
3. Ibid., p. 48.
4. Ibid., p. 47.