"Machine morality reflects certain aspects of human morality."
Oliver Bendel, Professor at the FHNW School of Business in Windisch, talks about machine morality, the limits of artificial intelligence and the risks and opportunities it harbors.
Oliver Bendel was born in Ulm in 1968. Having studied Philosophy, German Language and Literature, and Information Science at Universität Konstanz and gathered initial professional experience, he did his doctorate in the area of Business Informatics at Universität St. Gallen. In April 2009, Oliver Bendel was appointed professor at the FHNW School of Business in Windisch. He is an expert in the fields of knowledge management, information ethics, and machine ethics.
kinofenster.de: Herr Dr. Bendel, what are the central differences between human and artificial intelligence?
Oliver Bendel: Human intelligence can apply itself to completely different tasks. Artificial intelligence as a discipline selects certain tasks and tries to perform them with the artificial intelligence it has created. This can only emulate certain aspects of human or animal intelligence. Hence, an AI system can learn to play Go, sort and interpret pictures, and recognize emotions. We call this type of discipline weak AI. Strong AI, by contrast, is the dream of a system that can solve every problem human beings can solve. We are light years away from that.
kinofenster.de: What is the difference between human morality and machine morality?
Oliver Bendel: Machine morality reflects certain aspects of human morality. Just as artificial intelligence as a discipline produces artificial intelligence as a thing, machine ethics produces machine morality, or moral machines, and examines them. Machine morality simulates human morality. The debate about whether machines are truly intelligent, truly moral, truly social, and so on, is utterly superfluous. These are metaphors being used and turned into technical terms. Machines are machines. Terms have completely different meanings in different disciplines. When theologians or philosophers speak, sometimes in a derogatory fashion, of autonomy in the context of technical systems, they fail to understand that the term has developed its own meaning in the engineering sciences. For one thing, it is about drawing a distinction between automatic and autonomous machines: one follows a defined plan, while the other constantly records data that enable it to make its own decisions. The latter may even learn additional skills in the process.
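To make the automatic/autonomous distinction concrete, here is a minimal sketch in Python. Every name in it is an illustrative assumption, not drawn from any real control system: one routine blindly executes a fixed plan, the other keeps reading sensor data and derives its own next action from what it observes.

```python
import random

def run_automatic(plan: list) -> None:
    """An automatic machine: follows a predefined plan, step by step,
    regardless of what is happening around it."""
    for step in plan:
        print(f"executing fixed step: {step}")

def run_autonomous(read_sensor, decide, steps: int) -> None:
    """An autonomous machine: constantly records data and makes
    its own decision at every step."""
    for _ in range(steps):
        observation = read_sensor()   # constantly recording data
        action = decide(observation)  # deriving its own decision
        print(f"observed {observation!r}, chose {action!r}")

# Usage: the automatic machine ignores the world; the autonomous one reacts to it.
run_automatic(["drill", "rotate", "drill"])
run_autonomous(
    read_sensor=lambda: random.choice(["clear", "obstacle"]),
    decide=lambda obs: "stop" if obs == "obstacle" else "advance",
    steps=3,
)
```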
kinofenster.de: In your opinion, how will our everyday lives change through the application of artificial intelligence, for example in traffic or production?
Oliver Bendel: Artificial intelligence can be used for automation and autonomization, for example in autonomous driving. I am not sure whether this will become ubiquitous, and that is also because of the limits of artificial intelligence. Artificial intelligence also plays a role in the digitalization of production in the course of Industry 4.0. But the coronavirus pandemic has shown that, while we may like to throw around buzzwords such as "flexibilization" or "individualization", we have difficulties putting them into practice. Ideally, a smart factory would have begun producing the much-needed protective masks within two or three days. Of course, this is not just a question of intelligent production control; it also requires the organization of production facilities and raw materials.
kinofenster.de: In what way can artificial intelligence be used to solve social and economic problems?
Oliver Bendel: I don’t like to speak of artificial intelligence as a general subject. There are thousands of different AI systems. We can also let AI cooperate with machine ethics and the field of artificial consciousness. With some AI systems, for example in the production of goods, we can automate in a targeted way very easily. Others can be used to recognize dangers, such as forest fires, and then deal with them by means of human or machine power, for example a drone with image and pattern recognition. Others still make it possible to develop scenarios and prognoses or to make diagnoses. In this way, approaches could be found, in theory and in practice, to overcome climate change or cancer.
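The forest-fire example can be sketched in a few lines, assuming some pretrained detection model stands behind `classify_frame`; every name and threshold here is a hypothetical placeholder, not a real drone API.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    location: tuple          # (latitude, longitude) of the captured image
    smoke_score: float       # 0.0 (clear) .. 1.0 (dense smoke), from a model

def classify_frame(frame: Frame) -> bool:
    """Stand-in for a trained image/pattern-recognition model;
    the 0.8 threshold is an illustrative assumption."""
    return frame.smoke_score > 0.8

def patrol(frames: list) -> list:
    """Return the locations where the system would raise an alarm,
    so that human or machine power can then deal with the danger."""
    return [f.location for f in frames if classify_frame(f)]

alerts = patrol([
    Frame((47.39, 8.05), smoke_score=0.10),
    Frame((47.40, 8.07), smoke_score=0.93),  # likely fire
])
print("dispatch response to:", alerts)
```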
kinofenster.de: In social professions, it is not just about optimized working processes but also about empathy, emotions, affection. Can machines learn to adapt to human emotions?
Oliver Bendel: We are building such machines. We are building machines that, on the one hand, can simulate empathy and emotions and, on the other, can recognize empathy and emotions in humans in order to respond to them with simulated empathy and emotions. Social robotics and machine ethics have made astounding progress in this field in the last ten years. Whether machines should be made to simulate empathy and emotions depends on the context. Sometimes it is a good idea, sometimes it is not. Imagine a teacher robot: if the child does something good and does not receive praise, then something is wrong.
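What "recognizing emotions and responding with simulated empathy" can mean in code is shown by this deliberately crude sketch: classify an utterance, then pick a templated empathic response. The keyword lists and templates are illustrative assumptions, far simpler than real affective-computing systems.

```python
# Templated responses: the machine simulates empathy, it does not feel it.
RESPONSES = {
    "sad": "I'm sorry to hear that. Do you want to talk about it?",
    "happy": "That's wonderful! Well done!",  # the praise the teacher robot owes the child
    "neutral": "I see. Tell me more.",
}

def detect_emotion(utterance: str) -> str:
    """Toy stand-in for a trained emotion-recognition model."""
    text = utterance.lower()
    if any(word in text for word in ("sad", "cry", "alone")):
        return "sad"
    if any(word in text for word in ("great", "solved", "won")):
        return "happy"
    return "neutral"

def respond(utterance: str) -> str:
    """Answer with a template keyed to the detected emotion."""
    return RESPONSES[detect_emotion(utterance)]

print(respond("I solved the puzzle!"))  # -> praise, as the teacher-robot example demands
```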
kinofenster.de: Which areas of use do you consider problematic?
Oliver Bendel: I personally have problems with the use of AI in the selection of job candidates. I do not want a machine to analyze me, and I also do not think that an AI system would select the best candidates at the moment. And even if it did, judgments of this kind made by machines are still problematic. My data are mine, especially when they are combined and analyzed, when an AI system wants to uncover anomalies and deviations.
kinofenster.de: Do you see other dangers?
Oliver Bendel: It is important to judge each aspect on its own merits. For example, facial recognition can be problematic if it is used to surveil public areas. It can be discriminatory because it tends to be trained on white faces, and Black people come under suspicion because the system doesn’t recognize certain differences. But I would not fundamentally ban scientific research on and development of facial recognition. It also harbors opportunities, especially for historical purposes, for example tracing a person, or for emotion research. I would just severely restrict its use.
kinofenster.de: How realistic do you consider the popular sci-fi scenario that human beings develop a superior AI that becomes independent and turns on humanity?
Oliver Bendel: Such an AI system would be superintelligent and hypermoral, for the human being is the greatest threat to the planet. But I am against the extinction of humanity, which is why I say "hypermoral" and not just "moral". In a contribution I recently made to a book, I reflect on the possibility of a super morality. But any such morality could be overly moral and hence, in a certain way, immoral. But to get back to the question: I do not think it is very realistic. Before an AI system destroys us, we will destroy ourselves.