Dr. Kate Darling is a research specialist at the Media Lab of the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, where she investigates how humans and machines interact on an emotional level. She focuses on the near-term effects of robotic technology, with particular interest in legal, social, and ethical issues.

kinofenster.de: Do you remember the first emotion you felt towards a robot?

Kate Darling: I don’t remember the first time I encountered a robot, but, like many people, my first emotion was probably fascination!

kinofenster.de: Do humans tend to humanize robots?

Kate Darling: Yes, we anthropomorphize robots, meaning we project humanlike traits, behaviors, and even emotions onto them. But it doesn’t need to be a humanoid robot. People will subconsciously ascribe life and agency to something as simple as a robotic vacuum cleaner, because it moves around in their physical space with purpose. In fact, sometimes robots that look too much like humans can break that illusion, because they don’t live up to our expectations for how they should move and behave. We call this effect the "uncanny valley".

kinofenster.de: What do you do to avoid the uncanny valley effect?

Kate Darling: Good designers draw on recognizable social cues and expressions of emotion and embed them in a form that doesn’t try to look like a realistic human. Cartoon animators have been doing this for over a century: think of Disney's or Pixar's non-human characters. A robot can look like R2-D2 from "Star Wars" ("Star Wars: Episode IV – A New Hope", George Lucas, USA 1977) and still have a lot of social and emotional depth.

kinofenster.de: At the Massachusetts Institute of Technology, you research the emotional level of interaction between humans and machines – which behaviors have you observed in your experiments?

Kate Darling: We did a study that explored whether empathic people will hesitate more to destroy a simple, lifelike robot. We found that people with high empathy did hesitate more, and they especially hesitated when we gave the robot a name and a personified backstory.

kinofenster.de: Which factors favor these emotional reactions?

Kate Darling: A lot of different factors can influence people's emotional responses: the way something is described to us, its design, and the way it communicates with us through words, sound, gesture, or other cues. And movement plays a big role in robotics specifically.

kinofenster.de: Can you give me a concrete example?

Kate Darling: I’ve tested a variety of virtual assistant devices in our home, and my toddler has almost no interest in the ones that are embedded in a static device or speaker. But he responds to a virtual assistant called Jibo. Jibo is shaped a little bit like the Pixar lamp. It can swivel its head toward whoever is speaking, using body language to show that it's listening.

kinofenster.de: Do humans perceive emotions simulated by robots as real?

Kate Darling: We know that they are not real, but we respond automatically to emotional and social cues given to us by others, whether that’s another person, an animal, or even a robot.

kinofenster.de: Where do you see the benefits in robots being perceived as human-like beings?

Kate Darling: I think there are some benefits to robots that are able to interact with us on a social and emotional level, like some animals. For example, we are already seeing some interesting use cases in health and education that aren’t possible with our previous tools. In these cases, the robots don’t replace people, they are a supplement to humans, similar to how we use animal therapy to supplement human care. But we also just enjoy robots as social devices, and I think that people may find some fun or fulfillment in it even outside of an educational or therapy context, just like we enjoy having pets.

kinofenster.de: What does this mean in concrete terms for the areas of application of robots? Can you give a concrete example of the use of social robotics?

Kate Darling: Health and education are the most promising areas, for example using robots as new tools in therapy with children on the autism spectrum, but we will likely also see a lot of this technology used for entertainment, or simply to enhance human-robot interaction in shared spaces more generally. Right now, as robots become more suitable for shared spaces, we are at the very beginning of an era of human-robot interaction. Robots will be everywhere: in our hospitals, transportation systems, workplaces, and homes. Some of them will be designed and treated as tools, some will be designed to interact with us on a social level.

kinofenster.de: Where do you see dangers in this context?

Kate Darling: I see some potential implications for consumer protection and privacy, as robots increasingly collect data in their environments. For example, because social robots are a very persuasive technology, artificial social agents could get people to reveal more personal information about themselves, or otherwise emotionally manipulate them in subtle ways for the benefit of companies or governments. Some robotic children’s toys have already been banned for this reason.

kinofenster.de: To what extent can or could robots actually replace humans as social beings?

Kate Darling: I don’t think that robots can or should replace humans on a social level. And they don’t have to! We are capable of so many different types of relationships, with the different people in our lives and with animals, so it’s possible we can add robots as a new type of relationship without blinking an eye.

kinofenster.de: Where are the current limits in social robotics?

Kate Darling: There are a lot of technological limits, but I think there are also some cultural limits. Even if robots looked and behaved exactly like humans (which we could only ever hope to achieve in the far future), we might value “real humans” more, the same way we value real diamonds more than ones that are created in a lab. But I also don’t think that the true potential of this technology is to recreate what we already have.

kinofenster.de: Humanoid robots may resemble humans externally or in their behavior – but they do not (at least not yet) possess consciousness. Should robots nevertheless be granted special rights?

Kate Darling: Right now, it doesn’t make sense to give robots rights. But there are a few different ways they could have rights in the medium-term future that don’t require consciousness. For example, if it’s desensitizing or offensive to people when lifelike robots are treated violently, that could be a reason to restrict overly violent or "cruel" behavior towards them.

kinofenster.de: Will robots be able to dream in the future?

Kate Darling: I know some roboticists who want to create robots that can dream. But they don’t know how to do it. It might end up being a different type of dreaming.
