Our Messy Robot Relationships with Kate Darling



Have you ever seen a robot and called it cute? Have you ever seen a drone and felt afraid? Have you ever apologized to Siri or yelled at your Roomba to get out of the way? Have you ever named your car?

Our relationships with robots are complex and messy. To explore this topic, we interview Kate Darling, a leading expert in robot ethics and a research specialist at the MIT Media Lab.

Kate Darling researches the near-term effects of robotic technology, with a particular interest in legal, social, and ethical issues.

Follow Kate Darling on Twitter @grok_

If you enjoy this episode, please subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript



Welcome to Radical AI, a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess.

And in this episode, we explore the topic of robot empathy.

Have you ever seen a robot and called it cute? Have you ever seen a drone and felt afraid? Have you ever apologized to Siri or yelled at your Roomba to get out of the way? Have you ever named your car? I know I certainly have. Our relationships with robots are complex and messy. So to explore this topic, we interview Kate Darling, a leading expert in robot ethics and a research specialist at the MIT Media Lab. Kate researches the near-term effects of robotic technology, with a particular interest in legal, social, and ethical issues.

And if you're listening to this episode right when it releases, happy Thanksgiving, and Thanksgiving week, I suppose, to our U.S. listeners. We have a lot to be grateful for at the Radical AI podcast, including this specific interview, which was a long time coming. Jess and I have both been big fans of Kate for, gosh, going on a year now, individually. So it was just one of those dream-come-true moments, being able to sit down with someone who has really, you know, defined this field of human-robot interaction and also robot ethics. We are so excited to share this interview with all of you today.

We are on the line today with Dr. Kate Darling. Kate, thank you so much for joining us today. Thanks for having me. Absolutely. And today we are talking about robots, and that's very exciting to me, because I have been going over to a friend's house recently, and they have this little Roomba. They have put these little eyes on the Roomba, and they're very amused every time the Roomba comes out and does its cleaning thing, and they have a whole name for this Roomba. And I'm just curious, from your perspective, what is going on there? Why do we call our robots by their names or create names for them? And is that wrong? Is that bad? Is that good? How do we make sense of that?

Oh, that's the big question. Is that bad? Is it good? Jumping right into it. So, yes, people love to name their Roombas, whether or not they have googly eyes on them. But I'm sure the magic of googly eyes really helps.

We were just talking about this grocery store robot that is in all of the Stop & Shop grocery stores on the East Coast of the US, called Marty.

And Marty was developed to be a very practical device, kind of like the Roomba. It'll just scan the floor for any spills or anything that's hit the floor, and it does its job pretty well. Just like the Roomba, it's a very single-task robot. But they slapped some googly eyes on it, and so now everyone calls it Marty and anthropomorphizes it and projects life onto it.

So googly eyes help. But robots are just really interesting artifacts in that they tend to tap into this biological response that we have to moving things in our environments.

We're very physical creatures, and robots move around in our physical space in a way that feels autonomous to us. And that's aside from the fact that we love to humanize anything, from cats to the pet rock craze that we had in the 70s, where people had pet rocks. And of course, put googly eyes on anything and people will anthropomorphize it.

But even beyond that, robots, with their physical movement, really tap into some deep biological tendency to perceive life in these artifacts. So there's some research behind that, and there are a lot of anecdotes. I know from iRobot that, I think, almost eighty-five percent of people name their Roomba. So, yeah, even the Roomba, even just a disk that roams around the floor to clean it. It doesn't have a face unless you make one. It doesn't act like an animal. It doesn't perceive you or know the difference between you and a chair. But people will empathize with it and name it and feel bad for it when it gets stuck, and clean up for it.

So maybe that's actually a good place to start, a slightly easier place than "what is the morality of robotics in the 21st century": where do you draw the line between what a robot is and what it isn't? Like, do you consider your cell phone a robot, or is there a threshold that a robot has to cross?

Well, there's no good definition of robot. There's no universal one; it kind of depends on for what purpose you're trying to define it and for what audience. Every community has its own definition. That's true for a lot of definitions, but it's particularly true for robots, where the definition has kind of changed over time. It used to be that a robot was anything kind of new that automated something that people weren't familiar with. And then, you know, after a while it just turns into a dishwasher or a thermostat or a vending machine or whatever the newest gadget is. And we still tend to do that a little bit.

But I tend to work with roboticists, so I like their definition. They tend to view a robot as anything physical that can sense its environment, can kind of think about and make a decision based on what it is sensing, and then act on its environment. Now, that gets really messy when you try to drill down on those terms, because

basically, a cell phone does all of that. A cell phone is a physical thing, it can sense its environment, it can vibrate or make light or whatever, and so technically act on its environment. But none of us in robotics would call a cell phone a robot. So it's not a perfect definition.

But, yeah, that's the best I've got for you.
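As a quick aside for readers of the transcript: the "sense, think, act" definition Kate describes is often illustrated as a simple control loop. Below is a minimal sketch in Python; everything in it is simulated and hypothetical, not any real robot's API.

```python
import random

class ToyRobot:
    """A toy illustration of the roboticist's definition Kate gives:
    something physical that senses its environment, decides, and acts.
    Everything here is simulated; no real hardware or library is implied."""

    def sense(self) -> bool:
        # Simulate a bump sensor: True means an obstacle was detected.
        return random.random() < 0.3

    def think(self, obstacle_detected: bool) -> str:
        # Decide what to do based on what was sensed.
        return "turn" if obstacle_detected else "move forward"

    def act(self, action: str) -> None:
        # A real robot would drive motors here; we just report the action.
        print(f"action: {action}")

robot = ToyRobot()
for _ in range(5):  # five passes through the sense-think-act loop
    robot.act(robot.think(robot.sense()))
```

As Kate notes right above, a cell phone technically passes this sense-think-act test too, which is exactly why the definition is imperfect.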

See, I'm a little bit biased, because I'm one of those people who likes to name my car and personify my succulents and apologize to Siri or Alexa. So I have a certain opinion and attitude towards bots, and of course towards physical robots, however we're defining that. And I'm curious, do you have a robot that you interact with, that you act a certain way towards? How do you act towards robots?

Oh, totally. Oh, yeah, I totally do that. I can't pretend to be an unbiased scientist. I totally anthropomorphize all of the robots in my home. My child does, and I very much encourage that. One of the first robots I bought that I really viewed as a robot was this baby dinosaur robot called the Pleo, which is super cute and makes all of these lifelike movements and sounds and kind of wants you to pet it. I have five of them at this point, and I treat them like pets. I've named them all, and I feel bad for them when something happens to them. So, yep, I do that too.

So why do you do that? This is some of the research that you've done, of course, but why do you think that you do that? You mean, because I know how the robot works, and I do it anyway?

I mean, this is well known in the robotics community, too: even roboticists who have built the entire thing themselves with their own hands and programmed it to, like, follow your gaze or whatever social cues the robot gives, they know exactly what it's doing, and they still respond when the robot swivels toward them. Because, like I said, we have this biological, seemingly evolutionary response that we can't really turn off. We respond to agents in this social way, especially agents that can communicate with us physically. We just respond to those social cues automatically. So, yeah, even the roboticists, even I, who know how the robots work, we just love to do this. We are such social creatures at heart.

So is that different? When I talk to my succulents, versus when we put googly eyes on a Roomba, is it a different kind of substance or thing, or is it all just kind of the same as the way I might talk to, like, a dog that I own?

Is that, like, the same thing that's happening, maybe even neurologically, when I talk to a robot?

That's a really good question. So anthropomorphism has a bunch of different theories for what causes it and what factors can enhance it. I kind of view it as a spectrum, where anthropomorphizing your succulent, a lot of people do that, right? We are such social creatures that we will talk to our plants and anthropomorphize our plants. But if you then add additional things, like if you put googly eyes on the succulent, or if the succulent is moving around in a way that isn't just blowing in the wind but actually appears to have agency, then I think that enhances it even more.

So there are a lot of different factors, and robots tend to be this perfect cocktail of all of those factors. They're more than just a stuffed animal, they're more than just a car. They can also, you know, move and respond to you and give these social cues kind of intentionally. That, as Sherry Turkle, the MIT psychologist, likes to say, really pushes our evolutionary buttons.

So one of the things that I at least see as a difference between succulents and robots is that one of them tends to look much more human than the other, in the way that we've chosen to design them, at least based on the succulents that I've seen. So I'm wondering, why is it that way? Why do we design our robots to look like humans when we could make them look like anything?

That is a good question.

Well, one reason that people design them to look like humans, or have some human attributes, is because we respond more strongly to cues that we recognize. But what people often don't understand is that the robot doesn't have to look just like a humanoid robot from Westworld or Blade Runner for us to respond that way. In fact, that's sometimes counterproductive, because if it looks too much like a human and it doesn't behave exactly like a human, then it kind of disappoints your expectations and will either creep you out or just make you feel disappointed by its performance. Like with Sophia, this humanoid robot that's gotten a lot of attention recently: it was basically just a walking puppet, but people are fascinated by it. But in social robotics, some of the design principles are more like what animators do. Animators can take anything, a teapot, a bunny rabbit, a succulent, and put enough human emotion or movement into it that you recognize the cues, but it doesn't have to look human. You can still see that it's a desk lamp or a succulent or whatever. In doing so, they've honed the art of creating something that's even better than humans in some ways, like much cuter. And that's where really successful social robot design lands. But I totally agree with you that recreating humans is a really boring goal. I'm always a little bit disappointed that there are so many humanoid robots, both being designed and in science fiction, when really we should see so many more different form factors. It just makes a lot more sense to me that we should be creating something new.

So what are some of the trends in, I guess, robot design that you're seeing? And then we can kind of bridge to maybe some of the implications of that. Are we seeing more humanoid robots coming on the market? Are we seeing more folks like Marty that are just kind of really tall and scary to some folks, myself included? What are we seeing out in the field right now?

Well, there's a variety. I think that in social robotics there are still a lot of humanoid form factors, but a lot of them don't look too much like a human. They look more like a robot as you would imagine it, or like an alien. There are a lot of humanoids for art and entertainment purposes, and then there are a bunch of people developing humanoid robots not for social purposes, but because we have a world that's built for humans, and they're like, well, if a robot needs to be able to function in this environment, it has to have a human shape in order to open doors and walk up and down stairs and go down corridors and whatever. I really disagree with that. I do think that we lean a little bit too heavily towards creating humanoid robots, which are very expensive and difficult to create. And it doesn't make a lot of sense, given that we can use wheels, we can make robots climb walls, and given that we should have infrastructure that accommodates things like wheelchairs and strollers anyway; you know, not every human walks on two legs either. So we should be thinking a little bit more outside of the box on that. There are a lot of different form factors out there, but I think we still lean a little bit too heavily in that science fiction humanoid direction.

And what does that design choice mean for us as humans interacting with these social agents, as you're calling them? Is it better that they're designed like humans and we have greater trust for them, or should we nix the human element and just treat them like machines? What are the implications on either side?

That's a great question. So the implications kind of depend a little bit on what you're using the robot for.

So I mentioned the practical implications of designing something that looks too human and disappointing people's expectations; from a practical standpoint, that just doesn't make a lot of sense. But if you're creating robots that people relate to and trust, let's say you're following a Pixar-esque design and you're creating robots that people find very appealing and empathize with, what are the implications of that? It depends. There are actually some really, really cool use cases for getting people to develop an emotional connection with a robot in health and education. We're seeing robots that are able to engage children with autism in ways that we haven't seen previously.

So they might become a new therapeutic tool that works differently from an animal or a human or another type of toy. It's just a new kind of interactive experience for kids that can help them with certain skills, and not just children with autism. There's a lot of educational stuff being explored, whether robots can be a tool for teachers, because they're so engaging to kids and can help engage them in learning in ways that might help a teacher with a broader curriculum. There are also therapy robots being used, like the Paro baby seal, which is kind of an animal therapy replacement in contexts where we can't use real animals. It's very soft and fluffy and responds to touch and gives you the sense of nurturing something. The first time I held one, I was like, can I take this home with me, please? They're very adorable. And some people are worried about this. They say, oh, we don't want to be replacing teachers and human caregivers with these robots. But I actually think that these robots are a really good supplemental tool, and so long as we're using them as such, there's huge potential there.

But where I see more of an issue is if we're using what is a very persuasive technology to manipulate people's behavior or their emotions in ways that don't benefit the person interacting with the robot, but benefit someone else: the people who designed the robot or, you know, corporate interests. If this is being used to advertise, or to collect more data than anyone would ever willingly enter into a database, or to compel people to make purchases or whatever, we have a long history of coercion through persuasive design, from casinos to modern apps. Social robots could be used as a tool for that. And that's something that I find much more concerning than the question of whether the baby seal robot is going to replace someone.

And at least in my experience, we're living in this world with a lot of narratives about robots. On one hand, we have this anthropomorphizing, which I can never say correctly. And on the other hand, we have almost this fear: robots are coming to take our jobs, this real deep feeling of replacement, or fear of replacement. So we have these competing narratives, and I think sometimes it's hard to separate the reality from either the hype or the dystopic view of everything. And I'm wondering, because it sounds like what you're saying is that there are also some real things that we should fear, or maybe design around or avoid in our design of robots, in terms of coercion. I wonder if you could talk a little bit more about what that reality is, or if there are any examples.

Yeah.

So one of my pet peeves about how we generally talk about robots is that we have a lot of assumptions, some of which are influenced by science fiction and pop culture, some of which are influenced by our anthropomorphism and our constant comparison of robots to ourselves. And so there is a lot of fear of replacement, like you said: the robots are coming to take our jobs, the robots are going to replace our sex partners or teachers. That seems to be the entire conversation. Or there are, you know, the public intellectuals who are like, not only are the robots coming to replace us, they're coming to destroy us all; artificial superintelligence is a grave danger to humanity. And I don't mean to sound too dismissive; I think it's great that people are worried about some things, because I do think we should be thinking critically about new technologies. But I do think some of these concerns are a little bit misplaced, and they tend to distract from some of the real things that we should be concerned about.

And this replacement fear is really, I think, driven by some moral panic rather than by the actual technology and the developments that are happening. I don't really see what's coming to replace us anytime soon. Artificial intelligence is very different from human intelligence; the skill sets are completely different. I'm not saying that labor markets won't get disrupted, or that that won't cause a lot of pain or require a lot of intervention. But, yeah, this whole replacement fear does seem to be a little bit sci-fi driven. So I would really prefer that we focus on some of the things that are happening or going to happen. I know that you all have explored a lot of issues on this podcast related to bias in AI and facial recognition, some of the problematic uses of AI that are popping up, which I think are of much, much greater concern than artificial superintelligence at this point in time.

And then there's also, yeah, just the privacy and data security and the emotional coercion aspect that I foresee becoming an issue in the near future. I don't see too many examples of it right now. Woodrow Hartzog has a paper about unfair and deceptive robots, where he gives some examples, for example, bots that will flirt with you on Tinder and then try to sell you something. And I think that that's just a very, very minor version of what is to come as social robots become more a part of people's lives and people start relating to them emotionally. You know, there's no doubt in my mind that companies are going to try to use that in any way that they can. And I think that there are some very blatant things that people will reject; people don't like to be manipulated, especially not by anything that's interacting with them socially. But I think that it could be done in a really subtle way that people aren't totally aware of, and that really worries me. And it also worries me that we're already seeing this.

So there was this robot dog called the Aibo that came out in the 90s and was super popular in Japan, and also somewhat in the US.

And Sony ended up discontinuing the Aibo, I think, in the early 2000s, and then they pulled tech support for the remaining Aibos a few years ago. And the people who still have these robot dogs as part of their families were really upset that their dogs were basically going to die because there was no more tech support, and there even started to be some funerals for them in Japan, some Buddhist ceremonies.

I believe there's a temple that still holds these Buddhist ceremonies for the Aibos that can't be repaired anymore. So it's interesting to see people develop such a strong connection to their robots that they mourn them like they would a pet. But then Sony just launched a new version of the Aibo. It's not cheap, and now it's linked to a cloud subscription service that, after three years, you need to pay a monthly or yearly fee for. But they haven't set the price for that yet.

And I'm not going to assume that Sony has any ill will to exploit people's emotional connections, but they're very well positioned to do it. If this Aibo turns out to be as popular as the last one, then they could just set the price according to people's emotional attachment to their robots, right? Instead of according to whatever costs Sony has to cover. So I think we're going to see some of that in the future.

Think of the movie Her. Did you see Her, the Spike Jonze movie? It's really about love, not AI or anything, but the short version is it's about a guy who falls in love with an AI assistant. This doesn't happen in the movie, but what if the company had suddenly issued a mandatory software upgrade, and now it costs ten thousand dollars if you want her to keep living and to continue talking to her? He would have paid that in an instant.

So we really sometimes put a little bit too much agency on the robots, or on the AI itself, as in, oh, it's coming to take our jobs, it's coming to do X, Y, Z. But really, we should be thinking about the incentives of the people who are creating it, and what power they will have in this new world.

See, it's interesting, because I feel like when I've heard about the fears of robots being turned off or deactivated, it comes from these Hollywood dystopic films that you were referencing, where the robot has gained consciousness, and to deactivate them would be to, quote unquote, kill them, and we have fears of doing that and being like the robot gods. And what I'm hearing from you is that actually that's not so much the fear; it's our connection with them. Even if we understand that they're just a pile of metal with a little bit of software in them, it's actually our connection with them that we are afraid to deactivate. So what do we do with that? How do we design robots in the future to help us with that issue? Because that has nothing to do with complex software, right? That has everything to do with our humanness.

Oh, totally. So I like to make the analogy to animals, because it shows that well. First of all, you're right: I think that we're going to face this question long before we develop robots that can have any sort of consciousness. People are already going to feel bad about turning off certain robots or treating them in a certain way. But looking at our historic relationship with animals also shows that even if we develop conscious robots, that might not mean much, depending on what robot it is. There are animals that are perfectly conscious and that feel pain, and we don't care; we turn them into chicken nuggets. But our pet dog we care a lot about. We wouldn't want our pet dog to be turned into nuggets, at least not in America.

And so it's very culturally driven; we're driven by our emotions. If you look at the history of animal rights, we've only protected the animals that we care about, or the ones that are cute. It's always been all about ourselves. So I think that's interesting. And then how do we deal with that? How do we develop robots that can help us with that? Well, I don't know, because you don't want to throw the baby out with the bathwater, right? Having these emotional connections to robots, just like having emotional connections to animals, could be really useful. It can help people with loneliness. It can help people therapeutically. There's no inherent reason that I think it's a bad thing. I even used to argue the opposite: we have some stories of soldiers becoming really emotionally attached to the bomb disposal units that they work with, and I used to say, oh, we need to figure out how to design these robots so that soldiers won't become emotionally attached to them, because we don't want them risking their lives to save the robot or doing anything inefficient on a battlefield. But then I read more about the history of animals in war and realized that animals were actually such a source of comfort to soldiers in a really stressful, traumatic situation. They would develop very strong bonds with the animals, and then it's kind of a "better to have loved and lost" situation. So, you know, you don't want to just discourage people from anthropomorphizing robots, because there is so much benefit that we get out of it as the social creatures that we are, and because I don't think we can prevent it. People name their Roombas; you're not going to be able to stop people from anthropomorphizing the robots that we have. And so I think we just need to be very aware that this is happening, be very aware of which situations can enhance it, and then kind of work with it, because it's not going away.

So let's stay on the topic of animals and the analogy to animals for a second, because we know that you have a book coming out soon about exactly that topic. And I was wondering if you could tell us a bit more about the thesis of the book, or what you're exploring in it.

Sure. Well, the book comes out in April 2021, I think April 20th. There's already an Amazon page, which I'm very excited about. I think only my mom has ordered it so far, though. The book is called The New Breed, and it looks at our history of using animals for work, weaponry, and companionship, and what we can learn from that history as we integrate robots in the future. Because, you know, we were just talking about animal rights and our emotional relationships to animals.

I think there are a lot of parallels there that show that this fear of robots replacing human relationships also existed to some extent when pets first started becoming a big thing, but didn't bear out. It turns out we are capable of many different types of social relationships, and it seems just more apt to compare robots to animals, because they don't perceive the world like humans, they don't have human intelligence, and their form factors aren't necessarily like humans'. But we've already dealt with this whole range of other non-humans and had social relationships with them. And it goes beyond just social relationships. I mean, we've used animals for all sorts of work in the past, and we've partnered with them not because they do what we do, but because their skill sets are so different from ours. And it's the same with robots. Robots have the ability to sense things that we can't see, or recognize patterns in data, or do kind of grunt work that we aren't able to do as quickly or with as much strength. So I just think the comparison makes so much sense.

So the book has all these really cool stories about how we used dolphins in the military as echolocation devices and how we're using underwater drones for the same tasks today, and how we used carrier pigeons for thousands of years and now we're starting to use drones to deliver medicine to remote areas. It's showing that instead of being this one-to-one replacement for human jobs, robots can actually supplement us in really cool ways, and trying to move away from this replacement aspect that we talked about before. Because I do think that lends itself to this technological determinism: oh, the robots are coming to take our jobs, the robots are coming to replace our social relationships. Then we don't even think about how to design or build or integrate them differently, because we're already assuming that they're going to take our jobs. So I really think that a different analogy opens us up to more possibilities, and that's what I hope the book does.

Yeah. I'm wondering, in your research about animals, was there a similar narrative that animals were coming to replace humans, or is this kind of a new thing with robots?

I didn't encounter anything in the work realm, although I'm sure someone's going to email me after they hear this podcast and be like, oh, well, actually, and I'll be like, dang, that should have gone in the book. Anyway. In the social relationship category, yeah: there were psychologists who were like, oh, you know, becoming emotionally attached to your dog, if you're a lonely person, could be pathological, because it's much easier to have a relationship with a dog than with humans, and it's going to take away from your human relationships. And, yeah, that pretty quickly stopped being a viable position now that every household has a dog.

And we're actually glad when our uncle, who is lonely, gets a dog. Like, obviously we want him to have more human contact, but at least he has a dog. We're not going to take the dog away.

So in that realm, I found a lot of stuff. In the work realm, I looked more at people's concerns about machines and automation in the past, like, for example, how the Luddites got a really bad rap. "Luddite" is now used as a derogatory term to describe people who are afraid of new technology.

But in fact, it was this movement of weavers led by Ned Ludd during the Industrial Revolution. They were protesting automated looms, and they set a bunch of equipment on fire.

And I think a bunch of them got arrested for it. But what they were protesting wasn't the machines; they weren't actually anti-technology. They were protesting the fact that the factory owners were using this technology as an excuse to gut worker rights. And I think that still applies today: if we focus too much on this idea that robots are taking the jobs, we don't focus enough on what decisions corporations are making in terms of how jobs are being removed or moved around. And I think we really need to be focusing on the corporations who are making the decisions, because there are many different ways to integrate technology into the labor market or into your labor processes.

And we should be criticizing the choices rather than criticizing technology, or integrating technology and robots, I guess, into our daily lives. I'm going to stick with this animal metaphor for just one more moment, so bear with me, because, I'm warning you, this is going to take a little bit of a dark spin. When I think about bringing animals and domestic animals into our lives, I think about what you were saying earlier, and there's an inequality there, right? We think so sincerely about our dogs and our cats, but then we have these slaughterhouses with certain animals that are for meat production, and those are just totally separate realms of thought in the way that we treat these animals. A lot of animals are abused and misused, and I think about the same thing happening with robots in some ways. Some people really do abuse and misuse robots, and there are certain things like, you know, sexual exploitation; most of what I've seen is blatant sexual abuse and misuse. And so I'm wondering, do you see these problems being an issue with the way that we adopt robots in the future? Is this a reason for us to be hesitant about bringing robots into our lives? What have you seen there?

Well, I do see some open questions. I'm hesitant to compare different rights movements to each other. So, like, I don't want to in any way say that robot rights are equivalent to the animal rights movement, given that animals are sentient and they can feel; that's just a very different history as well. Each rights movement has its own kind of history and context. And, oh God, it especially annoys me when robot rights philosophers compare it directly, as a direct equivalent, to slavery, because that's just not appropriate.

But I do think that people's behavior towards robots does raise some questions, and they're similar to the questions that were raised in the beginning around animal rights. Because in the early days, in the West at least, when some people were campaigning for animal rights, they realized that even though people were starting to empathize with animals, because pets were becoming more of a thing among the upper classes, it was considered ridiculous to pass any laws that would protect the animals from brutal treatment, because that would go too far. What kind of precedent would that set? They're just animals, after all. So they had to make a different argument, and the argument that they ended up making was that behaving in a violent way towards animals makes for cruel people. And that really caught on, especially back in the day, because it had a hint of classism as well: the lower classes in the cities are beating their donkeys, we need to teach them better behavior, and therefore we will pass animal protection legislation. So it was really, in the beginning, all about us. And I think that there's a similar question to be asked about robots: what does it say about people if they're willing to be very violent towards something that responds in a lifelike way? And there's some indication in research that there is a link between people's behavior towards lifelike robots and their tendencies for empathy, which tells us that maybe, at least, it's an indicator or a red flag, or might say something about a person. It doesn't tell us whether it's desensitizing, whether, as a child, if you beat up a lot of robots, you turn into a brutal adult.

That one's a little more complicated. And it also has parallels to the violence-in-video-games question, although now it's on a physical level, and we know that there's a difference between physical things and things on a screen. So it's kind of a new question. But also, looking at some of the research on animals and children, it's really hard to establish clear evidence for whether it changes people's behavior, rather than just telling us something about people's tendencies. But I do think it's a question that we need to ask, and we need to ask it sooner rather than later. And ideally, we would have some sort of evidence-based policy, because you already see, and you mentioned this, some sexual, quote unquote, misconduct. There are child-size sex dolls. There's the question of virtual child pornography, which different countries have come down on different sides of, on whether that's OK or not. And if we have people wanting to, you know, act out this deviant behavior with robots, is that a healthy outlet for the behavior, or is it something that just perpetuates it or normalizes it further? We just have no idea which direction it goes. But people are already calling to ban sex robots. So instead of succumbing to the moral panic, if we could just find some sort of evidence so that we can create evidence-based policy, that would be really, really good.

As a child of the 90s, I was thinking a lot about the violent video game comparison. And I still struggle with that. I think I actually struggle with it more as an adult, because as a child I was like, don't tell me what to do, I want to go play my video games, that kind of thing.

But there's something real about the media that we consume and how that impacts how we act out in the world. And then there's something different about something physical being in front of you. It's not just something on a screen; it's something embodied.

Is that what you've seen in your research, that level of embodiment? Whether it's in anthropomorphizing or in this violence or in any other examples, is the embodiment part of what makes this kind of a different thing?

Oh, it totally is.

Not in my research specifically, but in the field of human-robot interaction, there's by now a pretty large body of research showing how much embodiment matters. We empathize more with robots when they're embodied, we follow their directions, we respond more to them. It enhances the anthropomorphism, it enhances compliance, it enhances everything, because we're such social creatures. But also, as games get more physical with VR and AR, those lines are getting blurred, and we don't know what that means either. So I do think it's a new question. But then, also, we haven't resolved the violence-in-video-games question. I mean, like you said, it's a really tricky one, too, because every new media format, robots included, but also video games, comic books, everything, creates this moral panic. And you have parents and teachers, or you have the NRA, blaming video games for school shootings. So there's a lot of rhetoric that ends up influencing politics and legislation, and the research itself tends to be totally inconclusive. We still don't know the answer to the violence-in-video-games thing.

See, that's interesting, because video games have been out for a long time, and this issue of violence in video games has been around for a long time. And so when you're talking about creating evidence-based policy, it seems like that didn't really happen in that domain. So if this is something that we want to do for robots, how do you see us figuring that out? How can we fix this problem?

Yeah, I mean, well, when I say it's inconclusive, it's not that we haven't tried, right? People have tried, and there have been some studies that establish that video games can maybe influence certain types of behaviors a little bit. So I think that we have to try, and we have to see. It could be that we get a clearer answer because it's physical, because it's a new question.

And it could be that it's just too difficult, because it's really easy to link people's empathy to their behavior, but it's harder to do a longer-term study that really shows a change in behavior. There are so many different factors that can influence that, so it's really difficult to study.

But I would hope that we would try, instead of just assuming something; or at least that we try and can say it's inconclusive, rather than, again, assuming something based on moral panic.

This topic of robots in general is really interesting and also overwhelming to me, because, as I'm hearing you talk, there are just so many different contexts in which these robots are used. And even the concept or definition of robot is kind of at play right now. I think that makes it really hard when we think about the ethics of robotics, and especially the ethics of designing robots. But with all that said, with all those different contexts that we know are out there, if you were to give advice to people who are designing robots right now, thinking through an ethical lens, what advice would you give?

That's a big question. It's an excellent question. Well, first and foremost, it's really interesting to me to watch robots move into shared spaces and to see that some designers don't think at all about the fact that people will anthropomorphize the robot or treat it like a living thing, even though it's a machine. And so you see a lot of silly design decisions that could be avoided.

But on an ethical level, where to even start?

We kind of need to revamp the design processes from the ground up, because there are so many issues that get entangled in technology design, whether that's with robots specifically or social robots specifically. If people make the design look too humanoid, I mean, we talked a little bit about why that's not practical and why that's boring, but it can also reinforce a lot of biases and stereotypes, gender and racial stereotypes. And that's just not necessary in my mind and could be very easily avoided. But then, there's so much stuff if we're talking about robotics in general. You know, we've seen over and over again in the AI sphere these examples of search algorithms that reinforce gender or racial biases; you have AI issuing risk scores in courtrooms that are racially biased; you have hiring algorithms that disadvantage certain people. There has to be a different design process, one where technology isn't created by people of just one worldview and one demographic. So design processes need to be more inclusive, but they also need to be more ethically informed, and we need to be thinking more deeply about what we use the technology for. Because, you know, maybe we shouldn't be using AI for every effing thing in the world. Like, I know that it's the new hot thing to do, but no, AI shouldn't be issuing risk scores in courtrooms right now and shouldn't be making decisions about people's lives. And so, you know, I think designers need to be thinking about that as well, because I used to think, oh, we'll just let the technologists build the technology and then the legal people will sort it out later. That's not how it works. The design decisions get set really early on; everything becomes entrenched. Now that I work with the designers, I see it.

So I think we just need to revamp how technology gets built in general. Now, for those people who aren't roboticists or designers, but are just people who interact with technology in their daily lives, maybe it's just their high-tech gadgets, or they have their own robots:

What should they all do? Or, I guess, we all do. Should we continue to personify and fall in love with our robots and name them, or should we start to grow less attached to them and put a wall up, so that we don't fear their inevitable demise in the future?

So I'm not opposed to the anthropomorphism. You know, there was this period in animal science and animal research where anthropomorphism was pooh-poohed as unscientific, and even the animal research community has moved away from that. Because, first of all, you can't prevent it. People are going to do it. Even scientists are going to do it.

But second of all, it is part of who we are, and you can make just as many mistakes by ignoring that. So I think that anthropomorphism is not a bad thing, but I would like to see people become tech literate. At the very least, one thing that we all can do very easily is think a little bit more about our assumptions about robots and whether we're just thinking in a science fiction way. I do think the animal comparison really helps anyone be like, oh, wait, I'm comparing robots to humans; what happens if I use this different analogy? Does my concern still make the same sort of sense? But also, we can vote, at both the local and federal level, for people who care about worker rights, who care about consumer protection, who care about privacy issues, and who understand the technology. Because so much depends on how our political and economic systems set incentives for companies or anyone developing the technology, and I think that is something that we sometimes forget.

We can all influence that. For anyone who is looking to engage in these conversations further, looking to preorder your book on Amazon, or just to find and connect with you online, where is the best place for them to go?

Probably Twitter. I'm @grok_, G-R-O-K underscore, on Twitter. It's a science fiction reference from Heinlein's Stranger in a Strange Land, which is a terrible book, and he's a terrible, sexist author, but I have always liked the word. So don't read his stuff, but find me on Twitter; that's the easiest way to find me.

We will be sure to include that link and many more in the show notes. But for now, Kate, thank you so much for coming on the show and talking with us about all of this today.

Thank you. This is so much fun.

Again, we want to thank Kate Darling for coming on the show today for this wonderful conversation. And Dylan, let's start off with you. How are you feeling right now? I'm feeling great.

Oh, good. All right. Well, thanks for joining us. OK. See you next week.

No, it's a good week. It's a good week for me personally. You know, it's Thanksgiving, a lot of good food and family and all that. But in terms of this conversation, which is probably what you were asking about, I really enjoyed this conversation with Kate.

And it goes really well with the conversation we released a few weeks ago with Ryan Calo about robot law. This time we got a slightly different perspective on robot ethics and robot morality, which is actually, I haven't published a lot in my career, but this is one of the few topics that I have published on. And so it was just really great to be able to nerd out and get Kate's perspective on some of the goods and should-nots, and on how we as a society are increasingly being empathetic, or putting robots in this new social location for all of us. I just really appreciated her insight. Was there anything in particular that she brought up that stood out to you?

Yeah, I mean, I think I mentioned it, and you could probably see it in most of the questions that I asked, but I personify the heck out of everything in my life.

I'm one of those people, and this conversation just hit home in so many ways for me. It made me feel a lot more normal than I felt before, which is nice, and hopefully it did for some of the listeners, too, because, I mean, I can't be the only one out there who names my car or my succulents, which I know I said like a million times in this conversation.

And I think it's just crazy to me that in the future, probably the near future, there are going to be a lot of potentially social robots in our day-to-day lives. It kind of makes me think of the movie I, Robot a little bit, which, unfortunately, was a dystopic take on it. But I appreciate that Kate brings in this utopian angle about the future of robots: that they don't have to be something scary and foreboding, taking our jobs and our livelihoods, but that they can be something that actually supplements and adds to our lives and enriches the things that we as humans need, things that we can actually find in robots if we lean into that social need and that social emotion that we feel with them.

Do you name your succulents differently than you name, say, your car or your phone or something? Like, do you have different categories of nomenclature for each?

Yeah, definitely. I mean, a succulent would be something more like a Rob, but my car is, like, a Barry. Actually, my new car is called Larry. So there's a nomenclature there. Right? Well, they seem to both be just, like, standard male names.

So it seems like the same to me. But there might be, right? Yeah, that's true. I do.

I mean, I'm the kind of person who would name my dog Ben, though, you know. Really, these are all just generic male names. Yeah, OK. And I think a lot of us fall into that.

Right. Like, I got a dog recently, and, you know, I had to choose a name, and a lot of the names that are out there for dogs are generic male human names.

But I guess that's a good point, though: it's no longer human. You know, it doesn't need to be human anymore. Yeah, that's a powerful statement, Jess. Thank you.

Back to the conversation, though: one of the examples she gave was about that robot at Stop & Shop, the grocery chain that she referenced in the northeast of the United States, which is taller than I am and has these giant eyes that they put on it. Originally it was supposed to pick up messes and things like that, but everyone's just terrified of it, and there are like a million blog posts out there about how people hate this robot so much. And I think that's such a great example of the ways that the personification of robots, or the anthropomorphism that we put on robots, can impact our relationship with them, even before the intellect kicks in. Like, if we take a step back, we're like, OK, that thing isn't really thinking, there's no AI involved with it, it's not learning, it's just sitting there cleaning up messes. But the second you put eyes on it, it's like, oh man, I need to watch out for that thing. I can't let my dog around that. I can't let my kid around that. I'm terrified. So how we design these, I was going to say designing these creatures, right, even that is problematic, but how we design these robots has ramifications for how people interact with them. And it's not necessarily, we're talking about empathy, we're talking about perceived empathy that people put on these inanimate objects that we've called robots.

Yeah, well, I mean, it's interesting, Dylan, because you said that putting googly eyes on something might make people fear it or act differently, in this feeling of, I guess, fear, maybe distrust, of the robot.

And my immediate reaction was, oh, there are googly eyes on it; I'm going to look out for that robot now. I'm going to give it a name, and if it bumps into something, I'm going to feel bad for it. And I caught you a few times saying "the robot who did this," and I catch myself all the time. Even earlier today, I was talking about Siri and caught myself saying, oh, well, I was yelling at her. And that's something I find super interesting: even when we're talking about things in the context of talking about personifying them, we are still actively personifying them, even through just the language that we use to describe them.

Yeah. One thing I'm thinking about, which we didn't talk about at all in this conversation but it made me think of, is this concept of disability studies, and what disability studies would have to say about robot empathy. Because one of the critiques that disability studies brings to the creation of anything that's human-like is that we go to this one particular idea of what a human body looks like, right? So it's a particular thing: two eyes, two ears, et cetera, et cetera. But not every person's body looks like that. So you go to this typical thing, and then you design around it. And I wonder what a disability studies critique would be of some of these robots, of how we design eyes, how we design some of these human-like characteristics in these, mm, "typical" ways, quote unquote, making scare quotes here. But I think it's important for us to think about, especially for anyone out there who's doing the design work on this stuff, how that critique might be levied. So, you know, the pros and cons, I guess, of that robot empathy that we seem to have.

Yeah. It kind of reminds me of our conversation a while back with Dr. Miriam Sweeney, too, where we were talking about the personification of virtual assistants and how there's a little bit of a catch-22 there: if you give virtual assistants female voices, then we trust them more, but then we perpetuate stereotypes. And the same is definitely true for robots, too. If you make a robot look a certain way, you can probably easily perpetuate stereotypes or cause harm with that. But there also might be a lot of good that can come of it, especially when it comes to the way that we interact with these systems and these bots, the ways that we trust them. I mean, there's just a lot here. There's a lot of power in the language we use to describe them. There's a lot of power in the decisions that go into designing them. There's a lot of power in the ways in which we decide to interact with them, in useful or harmful or abusive ways. There's just a lot here.

There certainly is a lot here. And I think something we keep coming back to is that there are folks out there, including in the engineering space for these robots, who, to everything you just said, are like, no, you know, there aren't ethical considerations here. We're really just trying to make it so that children are going to play with this Roomba, right? We're going to design this thing so that people use it. These aren't ethical things. And I think what we're saying, and a little bit of what Kate might be saying, or said, in this conversation, is that, no, actually these design decisions have real emotional ramifications. The way that we do these things matters. And so I think that's, is that correct? Am I paraphrasing what you just said? Yes.

The way that we do these things matters.

And for more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcast app. We want to remind you all, especially those in the U.S. this week, to have a happy Thanksgiving. And if you're not celebrating Thanksgiving, for any number of reasons, we hope that you have a large amount of gratitude in your life and that you continue to seek it out. A reminder to catch our new episodes every week on Wednesdays, and to join our conversation on Twitter at @radicalaipod.

And as always, stay radical.

Also, I think that last sentence I said there could have summed up and wrapped up every single one of our episodes: the way that we do things matters, and the things that we do matter. That's why we make such a great podcasting team. That's right. Happy Thanksgiving, everyone.
