Episode 14: Emoji Design, White Accountability, and the Ethical Future of Chatbots with Miriam Sweeney
Miriam_mixdown.mp3 was automatically transcribed by Sonix. This transcript may contain errors.
Welcome to Radical A.I., a podcast about radical ideas, radical people, and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess.
In this episode, we interviewed Dr. Miriam Sweeney, a critical cultural digital media scholar who studies anthropomorphic design, virtual assistants, voice interfaces, and A.I. through the lenses of race, gender, and sexuality. Her current project, Facing Our Computers: Identity, Interfaces, and Intimate Data, explores the linkages between identity, design, and data valence in A.I., voice assistant, digital assistant, and chatbot interfaces. In this interview, we explore a lot of topics, some of which include: What are some of the ethical concerns we should have about chatbots and virtual assistants? How can these technologies perpetuate gender stereotypes? What is ethical anthropomorphic design? What are the ethics of emoji design, and why does it matter?
We are so grateful that Miriam was willing to come on the podcast, especially because she was one of our first supporters on Twitter that gave us words of encouragement and words of wisdom as we began this project. So it's very special for us to be able to interview her and now to share this interview with you.
We are so excited to have Miriam Sweeney here on the show today. How are you doing, Miriam?
I'm doing very well. Happy to be here.
Thank you so much for joining us. As we begin, I was wondering if you could tell us a little bit about what motivates you, a little bit as a researcher, but also just as a person. Sure.
So I'm motivated by people, right. And I've always been interested in sort of the social science, or the people aspects of things. And that still motivates me today. I'm also motivated by the things that are all around me, right, the technologies that I use and encounter and interface with all the time. So, kind of combining those two things, I've been very interested in researching technologies, interfaces, design, you know, digital media that pops up in my daily life. So, yeah. I don't know how much background you want at this moment, but we can keep exploring that.
Yeah. Well, do you have a favorite? This is a completely unfair question. Do you have a favorite technology or a favorite, like, technological interface? Because I know you do a lot of work on interfaces, and we'll get into the specifics. But do you have, like, a favorite, or maybe a most effective, that's another way to frame it, most effective interface?
Yeah. I mean, it's interesting. Like, I'm a Mac user, and so I do love the way all of my Mac products play together nicely, you know, creating sort of a smooth environment of interfacing for me. Also, in terms of, like, writing and doing research, I mean, I write in Word, which, you know, we're all familiar with, and which kind of sucks. But I love interfaces like Scrivener that allow you to do things like storyboard and, like, think differently, outside the box. And, you know, those can be clunky as well. But, yeah, I mean, I'm always interested in interfaces that are letting me kind of break up ideas or move across platforms smoothly. So that's what I like as a user.
Do you think that you grew up being a very technologically inclined person, like always surrounded by gadgets and getting the newest thing from Apple or Microsoft or whatever company it was? Or do you think that that just kind of happened naturally with your research as you started to grow into that in academia?
That's a really interesting question. So, no, I would not have described myself as technologically, like, inclined. That wouldn't have been an aspect of my identity growing up at all. I didn't have a computer at home growing up, so I had one when I first went to college. My first experience of getting a computer, gosh, I really did not think we would talk about this, but now I'm very excited. It was, like, a Gateway, you know, in those big, like, cow-patterned boxes or whatever. And I got a computer to go off to college. And I was like, what the hell is this? You know? Like, I had done a little bit of computing in school, typing and stuff like that, which makes it sound like I'm living in the Dark Ages. I'm, like, not that old, guys, OK? But, like, our household just didn't have that. So my own, like, self-narrative was never, like, oh, I'm going to go forward and, like, study technology. That wasn't a part of the thing. And where that actually kind of came into the picture was much later, when I was getting a master's degree. I was getting my Master's in Library Science at the University of Iowa, and I was offered an assistantship to work with information technology services.
And I thought, oh, you know, I am not qualified for this. Like, I am just, like, a very, like, just medium user of, you know, your main office products. And they were like, no, no, you'll do this, you'll figure it out, and you'll help with instructional design. And so that graduate school student assistantship is really where, again, I got the exposure and the confidence to think of myself as technological. So it's interesting, because I think that narrative is actually not uncommon. Like, I know a lot of other women particularly would say, oh, no, I'm not necessarily technological, even though we're interfacing with technology all the time. But in that role, I gained confidence to be able to say, no, like, I am interested in technology and I have technological skills. Right. And now, I mean, I would say, like, I'm a digital media scholar, I study technology. But that was not, like, a passion for building kits from elementary school kind of thing. So it came later.
Yeah. Yeah. For some of our listeners who might not know what digital media studies is, or even library science, could you just say a little bit, like, just kind of situate what the questions you're asking in general are in your research?
Yeah, sure. So library and information science, I think a lot of people have no idea what that is. Right. So maybe they often think, like, oh, like, librarians, I got it. You know, it's, like, books and, like, buns and cardigans. And, you know, that is represented, yes. But it's a broader, more interdisciplinary field that also takes into account sort of all the ways you might think about, you know, managing information and preserving information, accessing information, through a number of kinds of technological systems, and analog systems as well. Right. So it's not just the library institution, although that's part of it, but it's also database management. And it really bridges a lot of, like, humanistic and computational kinds of domains together, which is really exciting. I always tell students, like, whatever domain you're coming from, you have a home here, because there's always a way in, you know, to library and information studies. So my work also bridges those domains as well. So I take kind of a humanistic, rhetorical, cultural studies approach to studying digital media technologies and, you know, kind of bring those two perspectives together.
And one of the fields that's pretty well known, at least outside of the academic world, is HCI, human-computer interaction. And I knew about that before I knew about what information science was. Full disclosure, I'm a Ph.D. student in an information science program. I still don't know what information science really is.
I think actually a lot of folks in my program don't know what it is either.
Do you think you might be able to explain some of the differences between HCI and information science, or are they just kind of situated within each other? Like, what are the differences? What really are they?
That's so funny, because the other day I was actually having a conversation with a colleague about defining information science. And I was like, you know, I don't know if I can define it. I mean, I got a degree in it, I teach it, I was recently tenured. I'm like, I still don't know. But I think it's because it is such a, like, a little, like, you know, a hairball of different domains together. So, yeah, you're right. It's like, if we were mapping it, like, I definitely would say HCI is related. And I think what we're really talking about is just the little differences in how people view the kinds of questions they would prioritize. You know, in information science, I think about things like, you know, information behavior and, you know, metadata and linked data and the semantic web and how we sort of organize, and ontologies. You know, I'm just gonna give keywords. And then in HCI, I'm really thinking about, like, you know, user interfaces, user experience, you know, these different kinds of questions about access that have to do with the technological system. And those are obviously related, you know, and we often share words and terminology. But I find that, like, it's the orientation on the kind of questions that interest and motivate that are just slightly different. So, yeah, and I locate myself in there as well, but with the additional edge of, like, sort of the more, like, you know, cultural studies driven kinds of questions. So I'm always like, HCI, like, I kind of do that, but I don't do it like HCI folks describe it. And then, like, you know, so it's like we can all be in the same place, but the frameworks are a little different.
And in my experience, we can also get really lost in the jargon and sometimes lose some of where we're going and why it matters. And I'm wondering, like, maybe an example might help for our listeners as well. So I know you do some work and some research on digital assistants and chatbots, and I'm wondering if you could say a little bit about that research and then maybe even more about, like, why that's important and why that matters.
Yeah, sure. So, yeah. So chatbots and digital assistants, voice assistants, virtual assistants, these are all words that kind of mean a similar kind of set of technologies. And I find that people are familiar with these things if you just give examples of them. Right. So they might not know what a chatbot is. But if you're like, remember Microsoft Clippy, you know, then they come back, and they're like, yeah, that's right. And it's like, right, that's a chatbot. Or, you know, older things like Julia and ELIZA. These are chatbots that are kind of famous. And now we have virtual assistants and voice assistants like Siri and Alexa that are all around us all the time. So those are the kinds of technologies I'm interested in. All of these technologies are, you know, kind of social technologies. They're designed to interact with people, and the sort of smart virtual assistant technologies like Alexa and Siri and all of that also are, you know, A.I. driven. So they are smart technologies. They use machine learning, and they're doing all of that in a way that, like, the earlier chatbots weren't really doing. But they're related, you know, it's kind of a continuum of technologies. So why do these technologies matter? Well, you know, they're increasingly integrated into, like, everything we're doing. So, you know, I'm astounded, because I started studying these technologies at a time when I was like, no, guys, this is gonna be really important. And now I'm like, you don't even have to make that argument, because they're integrated, like, all across your house and, you know, in Internet of Things technologies. Like, voice interfacing is, you know, the killer app of this moment. And so we're seeing the integration of these, and they're moving from just, like, you know, personal use and domestic use to, like, you know, public use and e-government and, like, health care, and across all these different domains. So really, we're seeing kind of a huge, ubiquitous spread of these technologies.
You made me think for the first time in many years about Clippy, though, specifically remembering the image of Clippy in Microsoft Word.
And I always thought back to, like, you know, AOL, like, AIM, Instant Messenger, as, like, the first time where I interacted with chatbots. But you're right. It's like Clippy has a place in my heart, also partially in my nightmares. The eyes always kind of worried me a little bit, especially as a kid. But I'm wondering, do you see kind of the ethical questions that we might ask about a chatbot like Clippy, do you see them as kind of the same questions that we might ask about Alexa or some of the more modern chatbots? And what might some of those ethical questions be? Yeah.
Good question. Yeah. There is a continuity, right. Because I think with Clippy, you know, you mentioned those googly eyes that are watching you, and there are some questions about things like, you know, surveillance and standardization, and how the interface is sort of capturing our information or encouraging us maybe to engage in particular ways with the interface. And so certainly those kinds of questions are present with, like, Alexa. But, you know, even greater, in terms of the questions about data capture and, you know, transparency in the system. Right. Like, what data is being captured? Like, when is Alexa listening? And then what happens to that data? Right. So I think that, yeah, there are some similar kinds of ethical questions. Something that I'm interested in, as a way into those ethical questions, is also asking about, like, the design representation itself. You know, like, how we choose to represent those technologies, you know, in humanistic ways also conveys something. Right. The designer is making choices about particular identities that, you know, are seen as desirable, and why. You know, that's an interesting question. But for me, it's just the question that kind of allows you to start pulling the threads of the other kinds of questions that you're asking, you know, questions about, like, what's happening behind the interface with the data and stuff as well.
I'm a little bit curious what your viewpoint is on a specific part of the design representation that happens in a lot of virtual assistants. So I know there's a little bit of a tradeoff when it comes to female voices in virtual assistants, because there's been a lot of research done on how people tend to trust female voices more, which is why a lot of companies tend to make their virtual assistants default to female. But there's also been a lot of backlash that these female voices are kind of perpetuating the stereotype of the female secretary. And so I'm curious if you've delved into this dilemma in your research, and even if you haven't, just kind of what your opinion is about it?
Yeah, sure. Absolutely. It represents to me this kind of design logic, like a kind of chicken-and-egg situation that we get into. Right. Where, you know, but people like it, right? People respond well, like, we polled users. Right. So we have HCI folks who are like, OK, we've done lots of studies. We see that because of social norms, you know, users prefer female voices around particular kinds of subjects and male voices around other kinds of subjects. And it's related to ideas about gender and authority and domains. Right. So we would take more guidance from a female assistant with domestic matters, but a male assistant, perhaps, with something that is more computationally oriented, you know, that kind of thing. And then it becomes like, OK, so maybe then a good design practice is just to use that information and give people what they want. But then we're now locked in a feedback loop that's kind of tautological in a way. Right. So something I am interested in is thinking about how the kind of design best practices and design guidelines become instantiated and then kind of unquestionable, you know, like, well, we can't deviate from that, our user base doesn't want it, they won't react well to it. Right. Or such things. But then it becomes like, OK, well, we've really locked that gender norm in now, you know, like, it's kind of immovable. So I think there is a danger in that tautology, that it becomes unquestioned design logic that gains kind of like a universal status that is immovable, and then we can't design out of it, you know. So I think that is a problem. Right. Like, we should definitely be challenging ourselves to identify stereotypes and then, you know, not just reproduce them because it's easy to do so, right? Like, there's more to it than that.
So what do you think is the role of folks or companies who are designing these chatbots in questioning some of those gender stereotypes?
Like, I could imagine the argument being that we're gonna lose revenue if we don't use this, you know, instantiated, like, female voice, because it's the best practice of the industry, and we want to make money because we're a small startup, something like that. But do you think there's an ethical need, I guess, for these companies to change the status quo in that way?
Yeah, for sure. I mean, because what I just heard you say is that actually gender and capitalism share intertwined logics. Right. That it's profitable to, you know, rely on that stereotype. So we can't disrupt the gender logic because it would disrupt the capital logic, and in fact, they're the same. Right. So this is the thread that we need to pull on, actually, is to see that there's a lot of logics that line up, and they start to kind of bolster each other. It's interesting, like, my research now, I'm starting to kind of explore interventions into design that are trying to, what I'm calling, kind of like, remediate or re-face gender in the interface. So rather than just leaning on that, you know, that tried-and-true design tactic, they are trying to do something else. And there's kind of different tactics. Right. Like, I think tech companies are having kind of a moment of being called into accountability in different ways across the board. Right. Like, Facebook has been getting nailed since 2016 with, you know, hey, what is your role in providing good information, and things like that. And Amazon, certainly, you know, we're seeing a lot of news about how they treat workers, and workers are striking, and all of these kinds of accountabilities.
And one interesting thing is that, like, this question about gender and voice assistants is also kind of emerging as, like, an accountability piece. And so we're seeing, like, more options for integrating, like, male assistant voices, you know, as an option. Or there's also some that are, I forget which company it is who's doing the, like, celebrity voices. Right. Like John Legend, things like that. Like, have the celebrity voice. Right. More user customization. So that's kind of like one end of it. It's kind of a very normative approach. Right. That, like, OK, well, people are mad that, like, we only have, like, female voice assistants, so we'll give you more options, like, male voice assistants, it'll be fine. And then we see, on the other hand, you know, what I would say, like, more radical interventions, that's appropriate for this podcast, I think, that ask more questions about, like, well, what would it mean, though, to, like, try to disrupt gender in design a little bit. So I don't know if you saw Q, the genderless voice. I'll have to send a link after the podcast if you have notes and things for that.
But, you know, companies and nonprofits are kind of partnering to position, like, hey, what if we designed a voice assistant that you could talk to, but, you know, modulate that voice into, like, a really gender-neutral frequency, and use voice training off of non-binary, you know, trans people, and then take, like, you know, kind of an amalgam of those voices and put those together? You know, what would that give us? Could we design a genderless, you know, virtual assistant? So that's interesting. You know, it's a different approach. And then there have been other approaches, like, there's the feminist A.I. initiative. They have a cool website where they're trying to, like, specifically design, like, feminist A.I. Like, what does that look like in terms of the kind of scripts your A.I., like, actually says to you? How does it respond? And in ways that kind of break some of the gendered dialogue that is sometimes created. So there's some different approaches, you know, where people are trying to do just that. Like, let's interrupt, let's think outside, and not just repurpose that common-sense logic.
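To make the "gender-neutral frequency" idea a bit more concrete, here is a rough sketch of the underlying signal-processing concept, not the actual pipeline used by the genderless-voice project mentioned above: estimate a recording's median fundamental frequency and pitch-shift it toward the roughly 145-175 Hz band that such projects describe as gender-ambiguous. The file names, target value, and the librosa-based approach are illustrative assumptions only.

```python
# Rough, illustrative sketch (not any project's actual method): nudge a
# voice recording's pitch toward a commonly cited "gender-ambiguous" band.
import numpy as np
import librosa
import soundfile as sf

TARGET_F0 = 160.0  # Hz, middle of the ~145-175 Hz band (assumed target)

# Hypothetical input file; sr=None keeps the original sample rate.
y, sr = librosa.load("voice_sample.wav", sr=None)

# Estimate per-frame fundamental frequency and take the median as the
# speaker's overall pitch.
f0 = librosa.yin(y, fmin=65, fmax=300, sr=sr)
median_f0 = float(np.nanmedian(f0))

# Express the ratio between current and target pitch in semitones,
# then shift the whole recording by that amount.
n_steps = 12 * np.log2(TARGET_F0 / median_f0)
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=float(n_steps))

sf.write("voice_neutral.wav", shifted, sr)
print(f"median f0 {median_f0:.1f} Hz -> shifted {n_steps:+.1f} semitones")
```

A real system would of course go far beyond a global pitch shift (formants, prosody, and training data all shape perceived gender); this only illustrates the frequency-band idea Miriam describes.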
Yeah, it's really interesting with virtual assistants. I think that there's a lot of underlying issues that people don't really notice, like, you know, gender normativity and a lot of the things that you were just mentioning.
And one of the things that people tend to think about a lot when it comes to virtual assistants, especially in my experience, is this surveillance piece that you mentioned before, the first ethical dilemma you mentioned, and the fact that they're always listening to us, and this, like, fear of how much data they are collecting. And I have to ask you, as a researcher in this space, do you own an Amazon Alexa or a Google Home or a virtual assistant of some kind? If so, are you scared of it? And if not, why not?
That's such a good question. It's, like, harder and harder to not be in that landscape, right. So I have really tried very hard to not be in a voice assistant landscape. So I do not have any Alexa-enabled items, you know, and I have thought about getting, you know, an Echo device just for work purposes, but I'm really just not that comfortable with it, honestly. And I also just kind of want to not, you know, I'm like, I don't need to own it. There's lots of ways to research this thing. And that's a digital methods topic that we can talk about. But yeah, voice assistants have really been, like, the line. And I have, like, increasingly encountered other, like, Internet of Things kinds of technologies. Right. Like, recently I got a smart thermostat, but she's not, or it's not, Alexa enabled. Right. I know. I know. I caught myself. So yeah, I've really tried to resist it, but it's hard, because they're becoming integrated into, like, literally everything. So it's becoming an environment where you do have to opt out, because opt-in is just the norm, you know. So, yeah, I always tell my students, though, like, hey, who owns one of these? Just throw that in the trash. You know, like, kidding, not kidding. Seriously, because I do worry that we're linking our data through so many systems, which is obviously happening either way. And, you know, if you're carrying your cell phone around, it's not like you're immune to those, you know, your cell phone is also kind of listening to you. Right. And I have Siri things, but I have that turned off. So it's not that you're not in that surveillant environment, but there is something about the voice recording, that biometric feature, that I feel we need to, like facial recognition, we need to really watch, you know, how deep we're gonna let this biometric capture roll out.
I have to catch you on the anthropomorphizing of the, yeah, the thermostat or whatever you said it was, because this is something that happens a lot.
I do it with my own, with my roommate's Amazon Echo, and with, you know, different chatbots and virtual assistants that I've talked to. I think it's pretty common for people to use she or he and to really think about these devices as a human-like thing. And I know that you've done a lot of work with anthropomorphic design. So I was wondering if you could maybe first just define what that is and then also talk a little bit about the work that you've done in that space.
Yes, certainly. So anthropomorphism is, you know, giving human characteristics and traits to an inanimate device. And we do this with devices and things and objects and animals all the time. Right. So it's not just limited to technology. So people anthropomorphize their ships and cars as she, you know, and their rocks, I don't know, whatever, pet rocks, I don't know where those came from. So it's really just a familiarity thing. It's how humans socialize with each other, and we just apply it out to other stuff as well. So it's a feature of being a social creature. So that in and of itself is not really a problem. Anthropomorphization as a design strategy then kind of specifically takes that and cultivates it and leverages it within design, so that we can feel, or the user can feel, more comfortable with the technology and interact with it in ways that they already are socially familiar with. Right. So it's like, I know how to talk to you all, we're both humans, we can do that. OK, actually, talking to other humans is really challenging and full of a lot of sticky areas. So it's not that simple of a metaphor, really.
But the idea is that if your computer has similar features and characteristics, you will kind of be like, oh yes, this is familiar, I know what to do. So it's really just about familiarity for the user, an entryway into setting up a kind of expectation for interaction with the device. But it's drawing on social features, and again, social features are actually very tricky. And so that's what I think is fun to study about anthropomorphic design, because, you know, in the examples we've already talked about, you know, features like gender and race or sexuality and background and class all come into play in human sociality. Right. Like, when you meet another person, there's actually a lot of background information that is framing the kind of interaction you have. So even though it sounds easy, like, oh, well, we'll just design, you know, a friendly computer interface to talk to you as a person, those same factors are still there, influencing how we interpret things like trustworthiness. Right. Or friendliness. You know, those kinds of characteristics that user-facing design, you know, is trying to facilitate for us, those are really actually kind of heady categories that are framed by things like race and gender and class.
When you were talking just now, I was thinking about marketing as well, and how these different categories that we're familiar with can be used to make us more comfortable in ways that might be really beneficial in one way and possibly tap into some more, I don't want to say abusive territory, but I'll say abusive territory, in terms of how they're deployed. How can we determine some metrics for, like, when it's maybe appropriate to use some of this either gendered language or anthropomorphic design, and when it goes too far or it's dangerous?
Yeah, it's a good question. It kind of comes back to the question again about, you know, kind of creating these design standards, and the role of standards in all of these kinds of design situations is itself a sticky area. Right. Because a standard is usually taking a best-for-a-certain-population kind of approach. And that always means that what's best for a certain kind of population stands in then for best for all. Right. And so our questions about power can ask, well, then whose best practice, you know, gets to kind of blanket hegemonically over, you know, the best practice of others. And so that is, like, the sticky point of trying to say, like, OK, well, in these instances, like, it's OK to do X, or what have you. So I find that, like, I'm not able to offer kind of a universal guideline or metric for that. But what we can look at, or the approach, I should say, I choose to take, is to survey the landscape and look at design trends to see if we can identify, like, OK, what are the prevailing design trends, and then where do we see deviations from those trends, and why, like, what does that mean? So, for instance, you know, as we've been talking about, mostly virtual assistants are designed as women. A lot of times they're also culturally coded as white women by the vocal stylings they have, and sometimes even by the, like, you know, the naming or the embodied features they might have.
And if they're just voice, then it could be the kind of English script that they're using, but also the kind of uses and applications that they're being marketed for. These tell us a lot of things about the cultural codes around gender and race. So I'm always really interested in virtual assistants that deviate from that somehow, and then I wonder, why? What kinds of choices did the designers make about audience and use that dictated that in this case they wanted to use, you know, an assistant that was, you know, maybe racialized differently or gendered differently? So, yeah. So I don't have an answer about, like, what are the metrics that we should be establishing. But I do think that there is some work to be done around, you know, the kind of strategic use of different identities in different markets, and questions about, like, who's controlling that identity. Right. Is that dictated by the community who is the audience, or not? You know, like, those kinds of questions of power often sort of surface as you, you know, dig into the examples a bit more.
That's actually something that I'm curious about, because if we were to create virtual assistants that are a little bit more culturally diverse and aware of their surroundings, so let's say we have an Alexa that is launched in South America that is, like, much more Latinx than the Alexa that is launched in the US, does that mean, is that bad for a company that is US based to try to create or try to deploy something in a place where they might not necessarily capture all the cultural norms? Is it better that they try? Are there some harmful consequences of doing that? What is your take on that?
Yeah, no, it's a really good question. Right. For me, the question is that we have to link, like, the design of the interface with the deeper, you know, operations and applications of the technology itself, like, they're not actually separate. So if the question is, like, OK, so we designed, you know, a Latinx Alexa for Latin America or South America that, you know, uses colloquial speech and the correct dialect, and, you know, maybe looks, you know, Latinx and not white, perhaps. Right. Is that OK? I guess the question I still have is, do these technologies at the core, like, serve these communities, or are we just trying to get buy-in? So that, again, the companies are trying to create trust. But is that trust deserved? You know? And I think that those questions go together. You know, because the design is trying to create trust and community, but trust and community, for them to be authentic, need to actually be in service of the community. And I have some questions about whether that's being achieved across the board with some of these technologies. Does that make sense?
Yeah, it raises the question also of intent versus impact as well. So a company could have the intent to just try to make a bunch of money in an emerging market, and the impact might be to create a more inclusive space, based on certain definitions. But to what degree do we treat that as, you know, ethical, if that's the entire story of it? Because if the intention isn't there, then, you know, and that's where I get into, as more of a philosophy student, like, well, what models are we even using?
But I was wondering if we could take a same-same-but-different turn in the conversation to talk about emojis, because I know you've done some work about emojis, which is connected, especially in terms of how, you know, race and gender is reified in these technological spaces. But what's the deal with the emojis? And again, like, what is at stake with how we design our emojis and how we use them?
Yeah. So, you know, thinking about interfaces, but maybe from a little different example. Emojis, to me, still have a lot of, you know, interface effects. We're interfacing through emojis in interpersonal communication. To me, emojis became, you know, I wrote a paper about emoji because I was really interested when the skin tone modifiers rolled out, you know, in 2016. And I was just like, wow, this is an interesting approach to, you know, the emoji problem, which had been, when emojis came to the U.S. and North American markets, that they were all white to start with. And, you know, users, mostly users of color, you know, were like, what the hell? You know, this is not representative of me in any way, and there's obviously a problem here. So the modification from Unicode, or the response to all of this, was, OK, we're going to put these skin tone modifiers out there. And to me, it just really became kind of like an interesting, like, microcosm for thinking about, like, racial politics and the kind of representational politics that we're seeing play out across different kinds of media at the same time that was happening.
At the same time the call for diverse emojis was happening in the US, it was also, like, the moment Black Lives Matter really surfaced. You know, there was Trayvon Martin, it was, like, the same moment. And then right after that was, like, #OscarsSoWhite. So, you know, there were all of these issues about not just representation, but also police violence and, you know, racial inequality and oppression that persist, and, you know, are still ever present. And then emojis, you know, are part of it. So it's interesting to see the way that these technologies, again, are just sort of fitting into the social landscape, and then to kind of unpack their meaning, like, what does it mean? So I was particularly interested in the differences between, like, white folks trying to grapple with the skin tone modifiers and Black, Indigenous, and people of color, you know, using and analyzing these new tools of representation, and the sort of differences in comfort and critique and approach.
Yeah. And I don't want to assume your own ethnicity, but if you do identify as a white person, like, what is it like for you doing that work? And how do you, I guess, hold yourself accountable as a white person in that conversation?
Yeah. No, I'm your average Midwestern white lady. So, absolutely, that's an important question, to locate oneself in one's work. Yeah, I often start from thinking about whiteness. Like, whiteness is kind of a framework that I'm interested in as I approach a lot of different technologies, because whiteness has been presented to us through technological apparatuses as both a universal and invisible kind of framework that is often organizing technology. And I think it's important to make that visible and just to see it, right, for what it is, which is an ideological framework that is right in there. And so with emoji, the same thing for me was, you know, thinking about whiteness and the way that whiteness and other racial frameworks, right, other racial ideologies, shape user interpretation of how they're using technology, as well as the designers' understanding of what they're actually encoding, and then the actual encoding, like the code itself, right? What is actually written into the code? So thinking about those different layers, and ideologies of race, like, are part of all of that. So I was finding, for white users particularly, that they were like, whoa, skin tone modifiers are introducing a range of questions I've never had to ask before, you know, like, before I would just pop a thumbs up in there and we were good.
But now I have an existential question. I have to identify myself as white if I want to choose the white emoji, and white folks are not used to thinking about whiteness as a race, as a racial position. They're just used to thinking about it, and encouraged to think about it, as a universal position. Right. And so that's interesting, because there's a lot of alignment to me with how technology is designed. And so that was what drew me into the emoji conversation. It became kind of an interesting point of inquiry, because I found that, you know, users of color didn't have that same existential crisis. They were like, OK, great, finally some representation choices that match. They were not uncomfortable with identifying, you know, a racial positionality in the interface, because, you know, they're always being interpreted differently by the technology or outside of the technological framework. So that kind of tension is super interesting, and also just tells us a lot about our built environment. So it's not just emoji. We can think about the ways that those same dynamics maybe interplay through a lot of kinds of technologies.
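As a concrete illustration of what "written into the code" means for emoji skin tones: in Unicode, a skin tone is a separate Fitzpatrick modifier codepoint (U+1F3FB through U+1F3FF) appended after a base emoji, and the text renderer combines the pair into a single glyph, so the default yellow glyph is what remains when no modifier is sent. A minimal Python sketch of those codepoint pairs:

```python
# How Unicode skin tone modifiers work at the codepoint level: a Fitzpatrick
# modifier (U+1F3FB..U+1F3FF) follows a base emoji, and renderers combine the
# two codepoints into one glyph.
BASE_THUMBS_UP = "\U0001F44D"          # 👍 base emoji (default yellow)

FITZPATRICK_MODIFIERS = {
    "light":        "\U0001F3FB",      # type 1-2
    "medium-light": "\U0001F3FC",      # type 3
    "medium":       "\U0001F3FD",      # type 4
    "medium-dark":  "\U0001F3FE",      # type 5
    "dark":         "\U0001F3FF",      # type 6
}

for name, modifier in FITZPATRICK_MODIFIERS.items():
    emoji = BASE_THUMBS_UP + modifier  # two codepoints, one rendered glyph
    codepoints = " ".join(f"U+{ord(ch):04X}" for ch in emoji)
    print(f"{name:13s} {emoji}  ({codepoints})")
```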
And Miriam, you are on the Radical A.I. podcast.
So we would be remiss if we didn't ask you our radicality questions that we love to ask our guests. So I'll start off by asking you first how you define the word radical, and, based off of that definition, how you might situate yourself or your work in that definition, or if you do at all.
Sure. So, yeah. Wow. It's such a deep question. I'm glad that you didn't open with that question, because I needed a minute to warm up to it. But I really do think of radical as, like, you know, a change of system, like, an overhaul of a system, or changing the system outside of the system, outside of the framework. So we're not working within the rules, we're trying to break that and find something else. So for me, in that process, I think of the radical as an expression of potential, you know, like, there's a potential for more outside. And I find that very helpful, you know, like, the idea that outside of the system, maybe we can be better, you know, maybe we can find freedom, or, you know, folks talk about that in terms of, like, getting free, or liberation, or even just thriving, you know, but towards a system that represents our own interests. So I think that, maybe for my work, and I don't know that I'd characterize my work as radical particularly, but I do think that in my work, I am trying to see the systems of technology and society and culture as systems, so that we can understand the way power is exercised through those systems and dream something else. So the potential for something better is in that as well.
So as we're recording this right now, it's the end of May, and as we were talking about before the interview started, the semester just ended for you all. And I'm wondering, kind of in reflection on that, and in reflection on your students from this past semester, if you had, like, one thing you wanted them to have learned from this past semester, like, one ideal lesson for them to have taken home, taken to heart, and, like, to live forward with inside them, what would it be?
Well, I think it's just that it's OK to question stuff. You know, I had a student in one of my classes at the end of the semester, I actually asked them for reflections, like, what did you learn here? You know, that's helpful for all of us. And one student said, I guess it's okay to be critical, because being critical doesn't have to mean, like, you know, you're just negative. It can mean that you're asking questions. Right. And I thought, yes, that's exactly right. That we have to critique, especially the things that we love, you know, and use. So as a digital media scholar, I didn't study digital media because I hated it, you know, I studied digital media because I loved it. And I got involved with online communities and things, and that is where I started asking questions about, like, identity and community and the interface and all of that. But it's definitely incumbent on us to then hold ourselves accountable and hold, you know, those systems accountable, hold those companies accountable, hold our politicians accountable, in pursuit of systems that we cannot be exploited by, you know, systems that we can connect with others through without that layer of extraction happening. And that's what really, I think, motivates me. Yes. So, students, stay critical. It doesn't mean you have to give up the thing that you're criticizing. It just means that you're holding it to a new, accountable standard.
And for students or professors, academics, industry members, anyone who's listening to this, if they'd like to engage more deeply with you and/or your research, is there a best place for them to go to do that?
Yeah, certainly. I would love for anyone to reach out. My email address is mesweeney1@ua.edu. That's m-e-s-w-e-e-n-e-y, one, at u-a dot e-d-u.
And of course, like always, we'll make sure to include some of the topics that we discuss and also some of your research in the show notes for this episode. Miriam, thank you so much for joining us today. It's been a pleasure.
Thank you both for having me. This has been a lot of fun.
We want to thank Dr. Miriam Sweeney again for joining us today for this wonderful conversation. And as we always do, now is the time for a quick debrief of our very first initial reactions to the interview. Dylan and I recorded these right after we re-listened to the interview, and in this case, we interviewed Miriam quite a while back, so it was great getting to hear this conversation again. So, Dylan, after listening to our interview with Miriam again, what was your initial takeaway this time?
Yes. We're about, maybe three weeks out, like, three weeks ago is when we did the interview, and now we're recording this outro.
And I really enjoyed this conversation. I know, I always say that, Jess, I say that every single time, but it's because we have some great conversations with great people. But there was something really special about this interview for me.
As we mentioned in the intro, you know, Miriam was one of those people who really cares about this project that we're doing and was, I feel like, really willing to be vulnerable and authentic. And it felt like a very natural conversation, more than an interview.
So I really enjoyed it, but especially because we also got to talk about some fun questions, too, which we don't always get to do. So in this case, we talked a lot about emojis.
And I know, even before the interview, I remember saying to Miriam, like, Miriam, can we talk about emojis, please?
And we did, because we asked about it and she said yes.
And there's, I think, a lot more in this topic of emoji design, and in chatbot design, and in voice assistants and virtual assistants, than we might give it credit for. There's so many different design choices that are made across the product development cycle.
And this is my language: each of those choices is a political choice. It's like, well, who are you going to have representing? Like, what voice are you going to have representing your voice assistant?
You know, the amount of money, time, and design questions that went into designing things like Alexa is immense.
And at each of those decision points, maybe even each of those pain points of development, there were decisions being made, and things that were chosen and things that were not chosen. So Alexa could have a very different voice, and it would carry itself and represent very differently, I guess I'll say it would make a very different impact, and not different, even, necessarily in a particular way. But you get into these questions of, like, human-robot interaction and human-computer interaction, which we talked about a little bit in this interview. And those decisions are just so complex. And I think we as consumers don't always think about it like that.
You know, if I'm going to send you, like, a thumbs up emoji or something, I don't necessarily think about the political or gendered or racial implications of that emoji. I just want you to know that I accomplished a task or something like that, or that I agree with you. But it's really, like, the breakdown is about how we communicate with one another, and emojis have created this visual representation and symbolism. Jess, your thoughts about emojis?
It's interesting, because recently I've heard from Black colleagues and colleagues of color that, to them, actually using an emoji was a political decision. And it's really meaningful, because a lot of different organizations don't actually even provide the option for people to use the race identifiers or modifiers for emojis. And even the companies that do actually offer the option for a race modifier are kind of making the implication that to not be white, which is the default, is to be other. And that's a political statement in itself, too. And this is kind of also touching on something else, which was my biggest takeaway, at least initially, which was this idea of standardization. And you and I have talked at length about this. I know I'm a freak about standardization, but really, I think it's so important to talk about here, because as Miriam said in the interview, and she said it so well, the danger, or one of the biggest dangers, of standardization is that you're deciding that what is best for a certain population is going to be what's best for all. You're making that value judgment. And so even by choosing something as a default, for a virtual assistant, for an emoji, whatever it may be, you are deciding that that default is what is best for everyone, and you are automatically othering everything else, which is a political statement, and that's going to be harmful for some communities.
And so, just to clarify.
So I do, like, I personally do believe that it's a political statement, also a statement about power.
And as you're pointing to with standardization, what options are available means that we're making decisions about, you know, what's important, what we're signaling in certain ways and what we're not. And I think that sometimes,
and I don't want to use we here, because, I mean,
it's like, who's the we? But I can speak for myself, where it's like, I lose track of that, that it is a design decision. Like, when I do send you an emoji, right, I'm not necessarily thinking about it, although I probably should be. Right. Well, especially in terms of, like, intent versus impact. So I have a number of Black colleagues who are really, you know, excited that finally there is, like, well, the most recent example would be Band-Aids, released even in the past week of recording this.
Finally, the Band-Aid corporation is like, oh, you know, and people have been asking for different color Band-Aids for different skin tones for decades now. But finally, they're getting on board. And I have some Black colleagues who are, like, psyched about that. And also, it's like, why the hell would this take so long? Like, people have been asking about it, different groups have been asking about it.
So why is it taking so long to allow for greater representation of skin color in Band-Aids, especially Band-Aids that get branded as, like, skin tone Band-Aids, like, it will match your skin? You know, these are some of the ways that they're branded out there. And it's the same thing for emojis and chatbots, etc. It's, like, issues of representation. And these are issues of symbolism, which is something that I study a lot. It's like, what are the symbols that we're promoting and what are the symbols that we're not promoting?
One of our favorite scholars, and a previous person that we've interviewed, Dr. Ruha Benjamin, in, you know, Twitter conversations, et cetera, generally uses, like, the praying emoji with Black hands. And the first time that she did that in conversation with us, something, like, really clicked for me, like, oh my God, this is, like, when we do the opposite, it's the same thing that we do, or maybe not the exact same thing, but it made me think about, like, all the different images of, like, Jesus Christ or images of God. Right. The iconography where we're saying, oh, God was white, you know, oh, Jesus was white. And it's just so deeply entrenched, where obviously, historically, that's not the case, although I guess we can argue about what God means historically and everything.
A very different podcast.
Obviously, you can tell I'm passionate about this, because everything that we do in terms of, like, translating race in our religious symbolism, but also our social symbolism, like emojis,
it matters. And it's not just, like, the single time that we use that emoji.
It's, like, every one, over and over again.
It's, like, an iterative process where we're telling a particular narrative, again, about what you're saying, like, what's standard, what's not, who's the in-group and who's the out-group. And that is not only political, but it can be oppressive, it can be harmful. And it's so tied into that topic of colonization that we talk about a lot on this podcast.
Yes, and that was actually one of the reasons why I really loved Miriam's explanation and definition of radical as well, because, like you're saying, Dylan, we can choose a narrative that we want our technologies to embody in the future. And Miriam was explaining, well, when it comes to radical, she wants that to be an expression of the potential for changing the system and changing the status quo, fighting back against that. And so instead of using our technologies as a way to forward the narrative that's been happening for hundreds of years, that embodies racism and sexism and discrimination, we have the option to create the future that we want to create instead.
And for me, I guess, like, the final kind of takeaway from this conversation, and listening to it again, was that these decisions matter, like, they matter now and they matter going into the future, because it is that story that we're telling about how we're representing meaning. And this almost gets to, like, a philosophical place.
But I think it is a philosophical question.
It's, like, a question of what it means to be human, and then what it means to represent human communication out in the world, which, like, that shit matters.
So I guess that just really, really matters.
And it is this, like, iterative process of making, of normalizing.
And it's a question of, like, well, what do we want to normalize, or do we even have to? I don't know, too many questions to answer in too little time. And maybe we'll get to them in a future interview. But for now, for more information on today's show, please visit the episode page at radicalai.org.
Yes. I'm so jazzed about this topic. I want to keep talking about it. No doubt we have to. We have to. Now, let's try to keep it under one hour. I know. I know.
So if you did enjoy this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @RadicalAIPod. And as always, Jess, you got this.
Stay radical. I'm a professional at saying stay radical, you know. So can you get paid for it? Maybe, Jess. I don't know if people outside of California know what the shaka hand is. Oh, really? It's also called the call me hand. Is it really? Oh, that makes sense. Yeah, the call me symbol. Us too. Let's make a radical emoji. Maybe.