ETHICS FOR MACHINES? A PROVOCATION

1. Good machines?

More and more machines take complex ‘decisions’. Future autonomous cars will even occasionally have to take decisions over life and death: “There is a person on the road, so should I brake hard and risk the lives of the occupants, or brake lightly and keep going straight?” What kind of preference should we build into the machine here? What is right and what is wrong? From these kinds of questions a new discipline has emerged: ethics for machines, or ‘machine ethics’.

I want to suggest that this is a misunderstanding: The idea of machine ethics rests on a confusion about what machines are. However, perhaps we can develop a new kind of ‘real’ machine ethics that is aimed at the human designers and users of machinery.

 

2. Obligation and responsibility

Let me explain: Generally, ethics helps us find out “what is the right thing to do”, and it involves obligation: “I should do this”. In order to feel obligation, you have to be responsible for your action. For some of your actions you can be praised or blamed because they are caused by your choices: you could have done otherwise, and you knew what you were doing. We do not hold small children or dogs fully responsible because they don’t fully know what they are doing. As for your car, you can kick it if it doesn’t run, but that’s really not a very smart thing to do. Also, it doesn’t hurt the car at all: you can’t praise or blame a machine.

Sometimes it looks like a machine makes ethical decisions, but appearances are deceptive. Let us say the machine follows this rule:

If H=1 AND M=0 Then
   Set B=1
Else
   (do nothing)
EndIf

There is no responsible choice in following this algorithm (the fancy term for a strict rule one can follow step by step). But what if I had put it in words? “If there is a human on the road and the human is not moving, then use the brake to avoid hitting them.” That’s the same rule (H = human, M = moving, B = brake), but when I put it in words, we have a strong tendency to interpret the purely mechanical procedure as stating an obligation. This would be a mistake, however. The above algorithm is implemented, for example, by your toaster: If the button is pressed (H=1) and the heat sensor is not over maximum heat (M=0), then turn on the heating coils (B=1); otherwise do nothing. Is your toaster trying to avoid burning your toast? Is it responsible for its actions, or obliged to do one thing rather than another? Not really. For your toaster a burnt piece of toast is just as good as a crispy brown one … it does not evaluate the environment at all. The same goes for an autonomous car.
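To see how little ‘evaluation’ is going on, here is the same rule as a small Python sketch (a minimal illustration; the function and variable names are my own, not taken from any real car or toaster controller). Read with H, M and B as above, it is the car’s braking rule; read with H as ‘button pressed’, M as ‘over maximum heat’ and B as ‘coils on’, it is the toaster. The code is identical either way.

def decide(h, m):
    # Exactly the rule above: if H = 1 and M = 0, set B = 1; otherwise do nothing.
    if h == 1 and m == 0:
        return 1
    return 0

# ‘Car’ reading: H = human on the road, M = human moving, B = brake
print(decide(h=1, m=0))   # 1 -> apply the brake

# ‘Toaster’ reading: H = button pressed, M = sensor over max heat, B = heating coils on
print(decide(h=1, m=0))   # 1 -> turn on the coils

Nothing in this conditional represents the person on the road, the burnt toast, or any obligation; it simply maps two input bits to an output bit.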

The trap into which people fall here is to reason: because a system behaves like me, it is like me. This is a fair strategy with humans (“if she screams after a brick fell on her foot, she is in pain”), but it is a fallacy for machines. People who fall into this behaviourist trap ignore internal mechanisms, and then extrapolate to current and future machines having properties like intelligence (Kurzweil, Bostrom), intention (Dennett) or ethics (Anderson & Anderson).

 

3. Resolution: Ethics without responsibility

So a real machine ethics would allow us to talk about ethics without falling into the trap of assuming that machines have real obligation. There are good traditions we can draw on.

Aristotle explained the word “good” in terms of function: A knife is made for a purpose, a good knife is one that fulfils its purpose well – and the properties that make it fulfil its purpose well are called its “virtues”. For a paring knife, being sharp, flexible and having a narrow blade are virtues. Aristotle thought that humans also have virtues that allow them to fulfil their purpose, namely to be fully human, to reach a state of happiness or fulfilment (eudaimonía). Human virtues are character traits like courage, friendliness, honesty, etc. So, in this ‘virtue ethics’ there is no principled difference between virtues for humans and virtues for things: We can very appropriately describe the virtues and vices of toasters or cars.

These are virtues of artefacts that we make for our human purposes, and they are measured by how well they contribute to these purposes. The ultimate purpose is happy humans (and other animals), so a good machine is one that contributes to this. That’s all there is to it.

So, what is left for ‘machine ethics’ is the obligation of the makers of machines to contribute to the overall good – and if the machine does not do that, the makers are responsible. We don’t need a special ethics for machines; we already have an ethics for making and using machines.

 

Vincent C. Müller is Professor in Ethics of Technology at the University of Eindhoven. He is also an academic fellow at the University of Leeds and the Alan Turing Institute. http://www.sophia.de