The Ethics of AI – Technologies change, ethics stay the same

The concept of ethics has existed for about as long as humans have been humans. Although many people nowadays are disinclined to become entangled in discussions of ethics, whenever a new discipline or technology emerges, the question of what the ethics of that technology should be is inevitably asked. Such is the case with the ethics of AI [1].

The current consensus around the ethics of AI is that we can build on the four basic principles of bioethics: beneficence, non-maleficence, autonomy, and justice. There is also general agreement that we should augment that list with a fifth principle of ‘explicability’ [2]. Realistically, this was probably always an implicit component of the application of bioethics. Nevertheless, just as a new CEO feels obliged to change something about the company, so too when a new technology comes along we feel compelled to try to re-define the millennia-old discipline of ethics that has guided us thus far.

To be fair, the application of ethics in any new discipline is a topic which should be discussed and re-discussed. Humans have never agreed on a single, clear definition of what ‘ethics’ is - but we have still been generally pretty good at agreeing on what we definitely do not want to happen. Examples of what not to do in terms of ethics of AI include:

  • Disclosing sensitive information

  • Creating opaque applications whose behaviour neither their users nor their creators fully understand

  • Using AI to enhance activities we generally consider to be unethical, such as stealing

Nonetheless, the core problem of ethics has always been bridging the gap between ‘knowing’ and ‘doing’. We can agree on the above examples. Humans have an innate sense of what is right and wrong – even if many different words and concepts can be used to describe it.

The question is - How do we ensure that the innate ethical ideals we have been following for centuries are implemented in practice for AI?

  • The first thing is to ensure that discussions on ethics are held, even if it is difficult to agree on specific terminology.

  • The second thing is to use those discussions to ask pragmatic questions about the concrete applications of AI, rather than trying to put labels on lofty ideals.

Aristotle would want you to ask yourself – Is this action consistent with what I consider to be virtuous behaviour? Use whatever labels come to mind when you ask yourself that question (honest, just, showing integrity, whatever). Other versions of this question include – Would you still take this action if you had to explain it to your mother or your daughter tomorrow? Would you do it if it were on the front page of tomorrow’s newspaper?

Immanuel Kant, the philosopher who defined deontological (read: rules-based) ethics, would want you to ask – Is this action universalizable? That is, if everyone in the world decided to do this action tomorrow, would that be logically possible? Take an example from the finance world: momentum trading actually fails this test. Momentum trading lives off the assumption that other market participants have identified fundamental information and are trading on that basis. If everyone only conducted momentum trading, so that no one was actually conducting fundamental research, this assumption would not hold. Thus, momentum trading is unethical. Obviously, no one is physically harmed by momentum trading, but financial markets are more prone to bubbles (boom and bust cycles) because of it. There is a general consensus that bubbles are bad because they misdirect the productive resources of the real economy. This question about the universalizability of an action is likely to be helpful in many AI applications. People will often not be directly harmed by AI – but if you have a general feeling of unease about a certain use case, the universalizability test may reveal why.

Utilitarian ethicists would ask – Does the sum of the benefits of the action outweigh the sum of its negative consequences? This utilitarian idea underpins modern economic theory (yes, economic theory is a practical derivative of ethical theory), and it works quite well in an economic context: monetary gains and losses can be neatly summed and netted against each other. It becomes more challenging when the benefits and drawbacks leave the economic domain and enter, for example, the social and environmental domains. Still, the main thing is to discuss the list of pros and cons and take a decision you feel comfortable with, on balance.
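The utilitarian tally can be sketched as a toy calculation. The categories and figures below are invented purely for illustration – in practice, assigning comparable numbers to non-economic consequences is precisely the hard part:

```python
# A toy utilitarian tally for a hypothetical AI use case.
# All categories and scores are invented for illustration only.
benefits = {"time saved for users": 5, "cost reduction": 3}
costs = {"privacy risk": 4, "job displacement": 2}

# The simple utilitarian test: do summed benefits exceed summed costs?
net = sum(benefits.values()) - sum(costs.values())
print("Net benefit:", net)  # prints "Net benefit: 2"
```

The sketch only works because every entry shares a common unit; once social or environmental consequences enter the list, the summing itself becomes the point of contention.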

John Rawls subsequently revived the social contract tradition in ethics, which asks – Does conducting this action generate the greatest possible benefit for the person in society who is worst off? This idea recognises that justice can’t mean that everyone is entitled to exactly the same life circumstances. Instead, we have to somehow ensure that the people worst off in society would still consent to the social contract of that society. In terms of the classic question as to whether an autonomous car should run over the elderly person or the baby, we could say the person who dies is the worst off. Arguably, a person would prefer to be run over as an elderly person than as a baby. It’s a tough call to make – but ethics has always been about making decisions in tough situations.

Further reading & references:

[1] This website collates existing attempts to define ethics of AI:

[2] This publication summarizes the five principles of the ethics of AI and makes recommendations:

Blog by:

Dr. Christina Kleinau