
The morality of the machine

Do intelligent machines have emotions? Can they act morally, and do they therefore have rights, too?
Teachtoday spoke to Dr. Oliver Bendel about intelligent machines’ capacity to act morally and the wide field of machine ethics.
Dr. Bendel, when was the last time you did something that was in your view immoral?

I often do immoral things, both in the view of others and in my own. One does not have to subordinate one's life to morality, not as an individual, not as a group, not as a society and not as a company. But of course there are behaviors that I find immoral and which I generally avoid. Even an ethicist does not have to act morally at all times. He needn't be good. A physician needn't be healthy either. One is simply occupied with a topic on a professional level; one explores a subject.

Can machines have morality?

I use the term machine morality much like I do artificial intelligence. Machines have no consciousness and no will, and they have just as little empathy. That is why machine morality simulates human morality. For example, it is about teaching autonomous machines and AI systems certain rules that are morally justified and that they should follow.

What exactly does machine ethics deal with?

Machine ethics explores and produces machine morality. It reflects on moral and immoral machines and tries to build them. It is really about exploring such machines and highlighting the opportunities and risks that come with their use. Machine ethics is therefore not about saying that all machines should be moralized; it simply explores the possibilities. Since 2013, we have been creating chatbots that recognize their users' problems and act and react adequately. For instance, we developed the “Goodbot,” which interacts with the user according to moral principles. In contrast, we also developed the “Liebot,” which lies systematically. The “Goodbot,” one might say, is the morally good one, while the “Liebot” is immoral or morally bad. Constant lying by people is bad, since it destroys trust in relationships, friendships, groups and societies. With the Liebot, we were able to show that a machine that lies systematically can be just as corrosive, be it on webpages or among service robots.

What is the difference between human morality and machine morality?

Robots have no awareness and no will, but they can follow rules very well. A machine has no reason to be moral, not of its own accord. In humans, morality at least makes collective living easier.

How does one teach AI morality?

Morality can be taught to autonomous and semi-autonomous systems. Among them are certain robots and AI systems, or robots that are connected to AI systems. Rules that are morally grounded can be implanted in them. In doing so, one can work with annotated decision trees, for example, like those I developed a few years ago. Equipped with sensors, these machines move through the world and work their way through question after question, such as how old, how big or how far away something is. In the end, the machine selects one of several predefined decisions. For each question, the decision tree is annotated to explain why the question is important or why it is being posed. This way the moral assumptions and justifications become quite explicit.
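An annotated decision tree can be pictured as an ordinary decision tree whose questions and leaves each carry an explicit moral justification. The following minimal Python sketch only illustrates that idea; the cleaning-robot scenario, the class names and the rules are illustrative assumptions, not Dr. Bendel's actual models.

```python
# Sketch of an annotated decision tree: every question and every decision
# carries a note explaining its moral relevance. Scenario and rules are
# hypothetical examples, not taken from the interview.

from dataclasses import dataclass
from typing import Callable, Union


@dataclass
class Decision:
    action: str
    annotation: str  # moral justification for choosing this action


@dataclass
class Question:
    text: str
    annotation: str                       # why this question is morally relevant
    test: Callable[[dict], bool]          # evaluates the sensor readings
    if_yes: Union["Question", "Decision"]
    if_no: Union["Question", "Decision"]


def decide(node: Union[Question, Decision], sensors: dict) -> Decision:
    """Walk the tree question by question until a decision leaf is reached."""
    while isinstance(node, Question):
        node = node.if_yes if node.test(sensors) else node.if_no
    return node


# Hypothetical tree: should a cleaning robot keep moving?
tree = Question(
    text="Is there a living creature in the robot's path?",
    annotation="Avoiding harm to living beings outweighs finishing the task.",
    test=lambda s: s.get("creature_detected", False),
    if_yes=Question(
        text="Is the creature closer than 0.5 m?",
        annotation="Distance decides whether pausing is enough or the robot must back off.",
        test=lambda s: s.get("distance_m", 10.0) < 0.5,
        if_yes=Decision("stop_and_retreat", "Too close to guarantee the creature's safety."),
        if_no=Decision("pause_and_wait", "Waiting lets the creature move away unharmed."),
    ),
    if_no=Decision("continue_cleaning", "No morally relevant obstacle detected."),
)

if __name__ == "__main__":
    readings = {"creature_detected": True, "distance_m": 0.3}
    result = decide(tree, readings)
    print(result.action, "-", result.annotation)
```

Because each node stores its own annotation, the machine's path through the tree doubles as a readable trace of the moral assumptions behind the final decision.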

Do AI systems, like language assistants, only have obligations or do they in some way have rights?

As a machine ethicist, I am careful with terms such as "obligations." I would rather speak of "liabilities," and maybe even that goes too far. But we may of course seek out metaphors in the hope that we will be able to understand one another. In any case, robots and AI systems have to do what we want them to. As an ethicist, I do not believe that robots and AI systems have moral rights, because they lack consciousness, the capacity for suffering, awareness and the will to live.

In medicine, nursing and health care, there is much talk about the future use of robots as caregivers. However, in these fields it is not only about optimizing workflows but just as much about empathy, emotions and affection. Can machines learn to respond to people's feelings?

They already do. There are robots on the market that can react emotionally. Robots like Pepper can use facial and voice recognition to infer something about a person's emotional state and adjust their behavior and their statements accordingly. Empathy, however, is not what I would call it. Robots that adapt their behavior to that of people can be found in care and treatment situations. One well-known example is Paro, a baby seal robot that is intended to help people with dementia.

So we do have robots that recognize emotions and display them, but do not have them. In my opinion, robots and AI systems will never have feelings; for that, a biochemical basis would be needed. One therefore should not leave patients or those in their care alone with robots in a nursing or treatment situation, both for safety reasons and because the presence of people, of feeling and compassionate beings, is especially important in such situations. Most of these robots are also designed to be used together with a specialist. Robear, a prototype from Japan, for example, is used in tandem with hospital personnel.

Can machines with artificial intelligence evaluate the consequences of their actions, like humans do?

No, not like humans. But there are machines that can assess the consequences of their actions; otherwise, there could be no automated driving. For that, consequences must constantly be foreseen, compared and evaluated. This is an important field for machine ethics. One can develop systems that follow specific rules, but not rigidly; rather, they also take the possible consequences of their decisions into account.
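One way to picture such a system is a planner that first applies hard, morally grounded rules and then weighs the predicted consequences of the remaining options. The short Python sketch below is only an illustration of that combination; the braking scenario, the rule and the harm and delay figures are assumptions made for the example, not taken from the interview.

```python
# Sketch of rule-following combined with consequence evaluation.
# Scenario, rule and numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    predicted_harm: float   # estimated severity of harm (0 = none)
    predicted_delay: float  # estimated inconvenience in seconds


RULES = [
    # Hard rule: never choose an action whose predicted harm exceeds the threshold.
    lambda a: a.predicted_harm < 0.8,
]


def choose(actions: list[Action]) -> Action:
    """Filter actions by the hard rules, then pick the one with the best consequences."""
    permitted = [a for a in actions if all(rule(a) for rule in RULES)]
    if not permitted:
        # If every option violates a rule, fall back to minimising harm.
        return min(actions, key=lambda a: a.predicted_harm)
    # Weigh consequences: harm dominates, delay only breaks ties.
    return min(permitted, key=lambda a: (a.predicted_harm, a.predicted_delay))


if __name__ == "__main__":
    options = [
        Action("emergency_brake", predicted_harm=0.0, predicted_delay=5.0),
        Action("continue", predicted_harm=0.9, predicted_delay=0.0),
        Action("slow_down", predicted_harm=0.1, predicted_delay=2.0),
    ]
    print(choose(options).name)  # prints "emergency_brake"
```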

How does a self-driving car decide in complex situations, to put it dramatically, in situations where human life and death are at stake?

It should not decide. I am opposed to having it quantify, that is, count potential accident victims, or qualify, that is, judge them by age, sex and health. Of course, automatic braking should occur when there is a person on the road, and I am in favor of integrating emergency braking assistants into as many cars and trucks as possible. But otherwise, I advise caution and restraint.

Where does it make sense to use self-driving cars in the future – and where does it perhaps not make sense?

Autonomous cars should drive on highways. Urban traffic is too difficult for them. There are many pedestrians and cyclists on the road, and every second there are thousands of things to assess. Driving in the city is communication: one waves, one winks, one smiles.

In Sion, in Switzerland, there is an autonomous shuttle, but it travels at low speed and on virtual tracks. This is not transferable to normal cars. On straight, open highways, where there are no pedestrians, many accidents can be avoided with automated driving. In some areas, machines can do much more than people. For example, they can see at night, or around corners when a higher-level system is available. Autonomous trucks could have a great future, in addition to autonomous buses and shuttles.

Interview: Martin Daßinnies

