Episode 15: IBM, Microsoft, and Amazon Disavow Facial Recognition Technology: What Do You Need to Know? with Deb Raji


What does it mean that IBM, Microsoft, Amazon, and others have distanced themselves from developing facial recognition technology and providing facial recognition data to vendors? Should you be skeptical? Where is the hope? To answer these questions and more we welcome Deb Raji to the show. Deb is a tech fellow at the AI Now Institute, working on critical perspectives on evaluation practice in AI, conducting audits of deployed AI systems in facial recognition, and AI auditing policy. She has worked closely with the Algorithmic Justice League initiative and on several projects to highlight cases of bias in computer vision. Deb was named one of MIT Technology Review's 35 Innovators Under 35 for her research on the harms of racially biased data in facial recognition technologies.

You can follow Deb on Twitter @rajiinio.

Relevant links from the episode:

Algorithmic Justice League

Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing

Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing

MIT Technology Review 35 Innovators Under 35 - Inioluwa Deborah Raji

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.

The following transcript was generated automatically by Sonix and may contain errors.

Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess.

In this episode, we interviewed Deb Raji, a tech fellow at the A.I. Now Institute, working on critical perspectives on evaluation practice in A.I., conducting audits of deployed A.I. systems in facial recognition, and A.I. auditing policy. Deb has worked closely with the Algorithmic Justice League initiative and on several projects to highlight cases of bias in computer vision. Recently, Deb was named one of MIT Technology Review's 35 Innovators Under 35 for her research on the harms of racially biased data in facial recognition technologies.

We reached out to Deb to provide her insight on the recent breaking news from IBM, Microsoft, Amazon and others who have announced decisions to stop providing general purpose, facial recognition and analysis technology to law enforcement and other vendors who have used the technology for mass surveillance, racial profiling and violations of basic human rights and freedoms.

We're so grateful to Deb for coming on the show on such short notice and sharing with us her expertise and her vision for the future. We're so excited to share this interview with Deb with all of you.

Hi. Thank you so much for coming on the show today. Thanks for having me. It has been a crazy last few weeks for you, hasn't it? I've been seeing you everywhere.

Yeah, it's crazy. It's actually been like a roller coaster. Especially like, I feel like the world has sort of been up and down since March — just complete, like, chaotic energy, like the whole world is going through this chaotic period. And I was kind of just coming off of that, reflecting on a lot of just the chaos happening in the world. And suddenly the last couple weeks, it's just been like an insane kind of flurry of change in response to that chaos. So that's really how I've been digesting it. So, yeah, a lot of the actions of IBM, Microsoft and Amazon to sort of press pause on facial recognition have been sort of the culmination of, like, years of advocacy. And then just this moment of kind of reckoning over race relations in the States has sort of prompted all of that, you know, all of this response. And, you know, even just yesterday, the POST Act being passed in New York City — all of this is, I feel like, in response to this moment of reckoning. And I'm so grateful and so excited to sort of see the kind of future rebuilt, you know, coming out of this moment. So, yeah, I really am hopeful, after weeks of sort of being very despondent and very angry and very frustrated, to finally be feeling hopeful for the future and what kind of future we build.

Before we dive into some of these current events, I'm wondering if you could tell a little bit of your own story and your role in all of this advocacy, because I know you've been part of these conversations for a little while now. Could you just walk us through how you got involved with it?

Yeah. That's like the origin story. Yeah.

I feel like I kind of fell into this space, and I'm really grateful to be part of this journey that the machine learning community is going through in terms of just recognizing the reality of the impact of its work and, you know, the consideration and the thoughtfulness required in order to do a good job building this kind of technology. So I kind of came in thinking that I would sort of be a builder of machine learning systems, working on the applied machine learning team at a tech company, just thinking that I would be creating these systems and, you know, falling for the narrative of, like, oh yeah, my responsibility ends there, in terms of just making this work. And I thought for a really long time, if it works, that's fine. That's good enough.

And it wasn't until I actually got onto an engineering team and saw the whole process from beginning to end that I started freaking out, because I was like, none of this works.

And not only does none of this work — you know, a lot of the data sets didn't have faces that look like me, didn't include, you know, the face of a Black woman.

And I started kind of panicking a little bit, because I tried to have conversations with people.

I was also kind of involved in the research space at the time. So, like, when I was trying to have conversations with people in the machine learning research space around, like, hey, you know, this facial recognition dataset doesn't actually have a lot of people of color, or even, you know, we don't really represent this concept in a way that makes sense for people of color — you know, I was having a conversation with someone building a model to sort of differentiate between hairstyles and hair types, with only one category for, like, all of Black hair. I was, like, horrified. I was just realizing the amount of power machine learning engineers had to define the outcome of a model. It was staggering to me. You know, all these decisions that they were making — not recording, not documenting in any way — and just watching those decisions sort of influence the outcome and dictate the outcome in a way that, you know, left those that were impacted completely powerless to stop it. So I think watching that happen for about a year kind of prompted me to start investigating this for myself, trying to advocate and talk locally to people that I knew. And that kind of led me to Joy's work at the MIT Media Lab. And she was sort of ramping up the Algorithmic Justice League at the time.

And she was sort of one of the other people that cared as much about this topic. And it was kind of funny, too, because, like, once we found each other and we started talking, we're like, oh my gosh — like, who else really cares about this in the computer vision space? And that's really important. So we had that initial conversation and noticed that kind of synergy and that alignment. And that was when I started working with her.

And she, obviously, you know — she conducted the Gender Shades study and identified a lot of the biases in facial recognition. And I just thought that was such a powerful way of articulating this point of: not everyone is part of this conversation, not everyone is included in this, and that's affecting the people that are subject to these predictions. So, yeah. Following that whole situation, I kind of, on the other side of this, landed in a place of: we need greater accountability for the people building these systems. We need to capture more of these decisions as they happen, but also we need to completely reinvent the way that we evaluate and assess these systems and audit these systems. And this is sort of Gender Shades and what came afterwards — I'm very fascinated by these machine learning systems that we identify in the wild, that are already out there, already deployed, already affecting people. That's really sort of the bread and butter of what I am interested in investigating and auditing, because exposing the ways in which those systems fall short exposes the fact that, you know, our processes for greenlighting the deployment of these systems are really broken, and miss a lot of perspectives, and neglect entire communities. So, yes, that for me was really what prompted me to get involved in this space and to become super active and excited in terms of, you know, pushing things forward and in a new direction.

And from what I've seen, it seems like you are incredibly passionate in this space, and that you seem to have always been passionate, from when you started working at Clarifai and when you were back at the University of Toronto. And I think I remember Joy saying — in an event that you were both in last week, the screening of Coded Bias at the Human Rights Watch Film Festival — Joy mentioned something about you reaching out to her and asking her if you could work with her. Is that true? Is that what happened?

Yeah.

So — and it was because I had kind of... oh, I was actually passionate about, like, entrepreneurship and, like, startup life. Like, I was like, startups are so cool. I consider myself a very creative person. I really prioritize creativity and, like, building things — you know, taking an idea and actualizing it in the real world was something I was super into. And I thought, like, oh, code is such a great way to realize your ideas. So that's how I ended up at Clarifai, because I was like, oh, I want to just work at a startup and, like, understand what this is and how it works. And then once I started noticing, like I mentioned, you know, some of the

practices that are sort of so normalized in machine learning. You know, these are people building products that would reach millions of people. And also there's no structure around our practices.

You know, we struggle to sort of think about our role and our accountability and our level of responsibility. So me coming into that space and seeing all of that, and then sort of, you know, having that panic moment and trying to articulate it, but having people be like, no, this is the way it's done, you know, we can't really do anything —

That prompted me to, like, seek out anyone that cared about it. And Joy, actually, at that point, had already given a TED Talk, because what had prompted her to create the Algorithmic Justice League was that she had tried to use a facial recognition product and it couldn't identify her face. So she had to use this white mask, and she had this whole video that she had made around that experience. And she had given a TED Talk on that experience. And me listening to that TED Talk was sort of me hearing, for the first time, someone that was working on this topic. So, yeah, I was just like, oh, this is exciting. This is great.

So I sent her — I actually sent her

like, a little code experiment that I had done on my own. I was, like, super —

You know, this was something where I was like, I don't understand why no one is seeing this as a problem. And then when I finally met her, it was like, oh, there's someone else that actually cares about this. So, yeah — that email is, like, super embarrassing to pull up

now. It's like — I broke every rule that you get with respect to sending emails, like keep it short and sweet. It was just, like, a wall of text; it had, you know, code attached to it. I mean, I think she would have responded with, like, let's have a call two months from now or something. She was super busy.

But I'm really glad that I was able to catch her attention and we were able to work together in the end, because obviously that's been an incredibly important, formative experience with respect to learning how to actually take action on that intuition that things were not quite right.

It seems like for a long time people weren't taking action on any of this, including algorithmic bias. And now it seems like, at least in the past week or two, there have been a lot of people trying to take action, at the very least. And even — you know, I just saw on Twitter earlier today, Joy has this new book deal for a book called Justice Decoded, which is very exciting. I mean, that's a huge deal. And people are starting to pay attention to this more and more. And I'm wondering, for folks who are listening right now and have no idea what is happening, what is going on, especially around facial recognition software and surveillance — I know you mentioned the POST Act earlier.

If you could just give, like, as short as possible, I guess, like, a 101 of what is happening right now and what you're seeing on the ground?

Yeah — I just wanted to comment really briefly on that earlier thing, of, like, this topic is becoming mainstream in a way that's super fascinating. Joy's book, obviously, incredibly exciting. I had a call with her and we were like, oh, this is crazy.

And then also Coded Bias, which is a documentary that was shot, you know, over the last couple of years — that coming out literally last week as well. The John Oliver piece coming out — you know, people just tweeted about it, AOC tweeted about it. So it's kind of becoming this mainstream topic in a way that I don't think a lot of us expected. I think with respect to sort of what's happening right now, I really do think it's a reflection of the times. You know, police and law enforcement have always weaponized facial recognition, especially against minority groups, to target them, to track them. It's a technology that I kind of refer to as inherently toxic and inherently dangerous, just because of how centralized it is. It's kind of like the equivalent of having, you know, a database of people's faces. A lot of people don't necessarily realize that a face is, you know, an identifiable biometric, and it's as sensitive a piece of information as a fingerprint. So imagine if you uploaded your fingerprints to Facebook, to all of your social media. The sensitivity of the data involved is not something people always recognize. Now, imagine having that information for millions of people, and having that in a centralized database that's controlled by one authority figure. The potential to manipulate that, the potential to take advantage of that situation, is so high.

And law enforcement being given that power, in a context and in an environment that we're in right now where people are really beginning to doubt law enforcement's ability to use that power for good — I think it's sort of the perfect storm to begin a conversation of: why do we have this technology? Why is it here? Who's it actually for? Who does it actually benefit? And if all of this authority in terms of controlling the system is given to these institutions that we're beginning to doubt and we're beginning to question, how does that put us at risk as a population, as a general population? And I think we're also going through a period of accountability as well, where, you know, for a long time we were hesitant to question the use of facial recognition while this more nuanced conversation was happening. You know, there are a lot of points of concern. So, you know, our work has been centered around thinking about the racial discrepancies in the performance of facial recognition. And it was this important starting point of, like, wait — this technology doesn't actually work for the people of color that are being disproportionately targeted by this technology.

And it was a great starting point. But as you kind of see — you know, I love the story of IBM, because they're kind of the perfect textbook example of this journey.

They started off understanding, through Gender Shades, that their model was biased and not performing as well for darker-skinned women as it was for lighter-skinned men. There was about a 30 percent performance disparity, which is fully unacceptable. And people kind of initially attacked the technology for that reason. And then IBM, in their attempt to sort of diversify their data set, realized that, you know, you can't just take Flickr photos of people of color and add them to a data set and say that you solved the problem — that's a huge privacy violation. So we began asking questions around privacy, and the conversation for them shifted towards, OK, are we doing this in a way that respects privacy? And then, you know, later on, they had proposed this idea of precision regulation. So they were attempting to discuss this idea of: are there positive use cases for facial recognition that are, you know, worth the cost of these negative use cases? But it was sort of this resounding, like, no, not really. You know, a lot of the positive use cases don't necessarily justify the risk of harm that we see with the negative use cases. And the way that it's so easily manipulated, and the data so sensitive, makes it this inherently kind of dangerous technology.
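(As a concrete illustration of the kind of disparity Deb describes: a Gender Shades-style audit reports accuracy disaggregated by intersectional subgroup rather than as a single overall number. Below is a minimal Python sketch of that computation; the field names and toy counts are hypothetical stand-ins for illustration, not the actual Gender Shades data or pipeline.)

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Per-subgroup accuracy for a Gender Shades-style audit.

    Each record uses hypothetical field names:
      'skin_type' - binned skin type, e.g. 'lighter' / 'darker'
      'gender'    - the benchmark's ground-truth label
      'predicted' - the classifier's output
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        group = (r["skin_type"], r["gender"])
        totals[group] += 1
        correct[group] += int(r["predicted"] == r["gender"])
    return {g: correct[g] / totals[g] for g in totals}

# Invented toy data reproducing a ~30-point gap like the one discussed above.
records = (
    [{"skin_type": "lighter", "gender": "male", "predicted": "male"}] * 99
    + [{"skin_type": "lighter", "gender": "male", "predicted": "female"}] * 1
    + [{"skin_type": "darker", "gender": "female", "predicted": "female"}] * 69
    + [{"skin_type": "darker", "gender": "female", "predicted": "male"}] * 31
)
for group, acc in sorted(disaggregated_accuracy(records).items()):
    print(group, f"{acc:.0%}")
# ('darker', 'female') 69%
# ('lighter', 'male') 99%
```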

So IBM's decision to divest from facial recognition came after this whole journey that I think a lot of other companies and other groups are going through. And it's kind of precipitated by this moment of realization that, wait — you know, some of the institutions controlling this technology are also institutions we need to begin to question, and we need to revisit. So people are much more open to the idea of reform, but also even abolishment of the technology. And I think, yeah, it's kind of a combination of all these things. With respect to specific actions that have happened: you know, IBM has committed to no longer participating in the facial recognition market. Amazon has decided to take a moratorium of a year — to say that we're not going to sell facial recognition to police for a year. They had sort of created exceptions for certain groups, but they had committed to not selling it to police for a year. And then Microsoft, soon after that, had committed to not selling facial recognition until regulation was put in place in the US with respect to its use.

And since then, we've also seen, like I mentioned earlier, the POST Act in New York City — an act around the NYPD's use of facial recognition being fully disclosed, and all other surveillance tech also being fully disclosed to the public, accessible to the public, and recorded and reported to the NYC Council. So I think those measures of accountability and transparency with respect to its use are also kind of gaining steam as well. So that's been really great. And this also comes after, you know, earlier this year, NIST, which is the National Institute of Standards and Technology in the US, sort of for the first time evaluating the technology for performance on skin type and skin tone and ethnicity and gender and these different demographic factors — age as well. That was not something NIST did before. And they cited our paper, which is very exciting. So this shift, this change, this evolution in terms of how we talk about facial recognition and how we critically discuss, you know, how does it actually work — this lack of faith in, and this questioning of, the technology is this kind of emerging phenomenon. I'm really grateful to see that wave come in.

Yeah, the current events are definitely incredibly hopeful — seeing companies like IBM, Amazon, Microsoft really jumping on this bandwagon. And I'm curious if you see, obviously, the potential for this to continue for years into the future, but then also if there's anything underlying that we're kind of missing here, when I see Amazon banning police from using facial recognition technology for a year — or pausing for a year — and then Microsoft waiting until federal law regulates it.

So we have those, like, conditions — waiting until... like, what condition? Yeah.

Is it a good thing? Like, do you think that there's enough faith in federal policy and regulation in the future to actually implement these systems and fix these problems? Or do you think that this is kind of all just happening very, very fast, and everyone is just doing this as, like, a PR stunt? What do you see in this?

Yeah, I definitely don't think it's quite a stunt — mostly because... well, in some cases, or in the case of IBM, it likely was more connected to PR to make such a public statement. They had already removed facial recognition products from their publicly available developer toolkit in, like, September 2019. They had already kind of shifted away from the facial recognition market, and it wasn't as profitable for them to be there. So that decision they had made privately quite a while ago — to make it a public stance in this moment was definitely sort of convenient, you know, with respect to their image, and also connecting it with the racial discourse happening right now. You know, that was definitely convenient for them. But I do think that these stances being public and reemerging in the public discourse is an inherently positive thing. I think that these stances do reflect the successful efforts of advocates to highlight the concerns of the technology, and how that is sort of being accepted within these different companies. And connecting that conversation of facial recognition use to, you know, racial injustice, to abuse by police departments and abuse by different elements of law enforcement, is a concept that not everybody is actually aware of. So the fact that these companies make these statements — even with the PR angle — I think is beneficial to the conversation and to the advocacy work. With respect to the actual impact of the actions themselves, I literally expect nothing from these companies.

Like I will never rely on corporate self-regulation to lead us to the promised land of what we need with respect to protecting ourselves from facial recognition and other surveillance technologies.

I really don't think companies are ever going to go all the way with respect to what we need from them. I am much more excited, or more interested, in how regulation can kind of restrict use globally, right across the entire industry. I mean, this is something that's come up a lot in my research. So in our paper Actionable Auditing, we reassessed, you know, the companies that had been evaluated in Gender Shades for their racial bias. And we noticed that, you know, companies that were not included in that initial audit didn't make any improvements. And then in our following paper, Saving Face, we evaluate — sort of audit — companies, and we look at tasks other than gender classification. So we say, you know, for model objectives outside of the ones you were audited for, did you even improve with respect to that? And the answer is no. So there are a lot of interesting dynamics with respect to how auditing works. You have to kind of design an audit with such a specific target in mind, such a specific objective in mind — and if it's not designed in that way, then companies just don't react. So it kind of creates an implicit case for policymakers to take seriously their role in terms of creating effective restrictions. Right? So if a company, or a set of companies, won't make that decision to completely address their issues on their own, then they need to be called out. But we can only call out a limited number of companies. Then it's up to policymakers to use the results from those limited companies to create these widespread rules.
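(A rough sketch of the kind of follow-up comparison described here — did a vendor's worst-subgroup error improve after a public audit, and did any improvement carry over to tasks the audit didn't target? The vendor names, tasks, and numbers below are invented for illustration; this is not the papers' actual data or methodology.)

```python
# Hypothetical audit data: worst-served-subgroup error rates (%) per vendor
# and task, before and after a public audit targeting one specific task.
audits = {
    "vendor_a": {"gender_classification": (31.0, 4.1), "face_matching": (18.0, 17.6)},
    "vendor_b": {"gender_classification": (22.0, 21.5), "face_matching": (15.0, 15.2)},
}
AUDITED_TASK = "gender_classification"  # the task the public audit targeted

for vendor, tasks in audits.items():
    for task, (before, after) in tasks.items():
        label = "audited" if task == AUDITED_TASK else "not audited"
        # Positive delta = the subgroup error rate actually went down.
        print(f"{vendor:8s} {task:21s} ({label}): "
              f"{before:.1f}% -> {after:.1f}% ({before - after:+.1f} pts)")
```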

There are a lot of companies that are very difficult to audit. So NEC is a huge, influential company in the space that nobody's heard of. Clearview was, you know, in stealth for a very long time — people did not hear about it for a very long time. And there are other companies that are just not recognizable names like Amazon and Microsoft but are, you know, hugely influential — and other ones, too, that people do know in this space but, you know, if I talk to my family about it, they might not be familiar with. So some of these companies that are not necessarily mainstream household names, but are hugely influential in the facial recognition market, and do not have any kind of public interface for us to audit — those kinds of companies are the companies that I want to see regulated.

And they will sort of be the reason why I think, you know, regulation and policy is really the frontier we need to aim for in order to actually see the impact that we want to see and protect as many people as we want to protect. Yeah. So I do think that the last week is sort of an encouragement to keep going, but it's definitely limited with respect to addressing all of these issues. And the last thing I'll say — I know I've been talking for a while — but the last thing I'll say is that law enforcement is just one element, one strand of this. Facial recognition is used in immigration, for example — its use in the US is quite staggering.

And in the way that different public agencies interact with each other, there are a lot of loopholes. There are a lot of ways — like we've seen with the local bans in California, in Oakland — that police departments, through access to, you know, amalgamated and sort of joint databases with other counties where facial recognition use is legal, are still able to kind of get this proxy access to facial recognition, and they're able to work through these loopholes. So I think it will require sort of widespread, statewide or national, policy and regulation to truly restrict the use of the technology and truly kind of ensure the safety of the people currently affected. So, yeah, we have a long way to go. But I do think that the last week has still been this encouraging kind of step forward.

What do we do with facial recognition technology? Is it just this inherently unjust system, because it's, you know, embedding our biases into this very particular technology? Or is there —

Is there a silver lining to the technology itself? I guess I'm kind of playing devil's advocate here, because I have a very particular — oh, I know.

I was going to say — you know, whenever I'm asked to give, like, what are the positive use cases of facial recognition, I'm like, well, there are entire marketing departments at Microsoft and Amazon committed to this question.

Just to sell facial recognition. Right.

So I highly suggest checking that out. I'm very familiar with their marketing materials and their kind of pitch for what makes sense and how it's being used and, you know, what the benefits are. I will note that for a lot of the people selling this technology, their client is not, you know, the 14-year-old boy that gets misidentified walking down the street and ends up having to deal with, you know, getting falsely arrested and posting bail and all of that. Their client is the police department. So in their mind, they feel like they're providing a service for that police department, to be able to filter through images and do their job more efficiently — even though that technology might actually end up becoming a risk for these affected non-users down the line, down the stream of impact.

So, yeah, they would sort of argue that, you know, there are efficiency benefits for their clients, in terms of use cases that are hard to find fault with.

You know, people will say, you know, finding missing children, for example. Or they'll talk about sort of innocuous use cases, like, you know, filtering through and tagging your photo app images and putting them in the right folders with different people's identities — like, all the pictures of my mom going to folder X.

And I think that that's

easy to accept and to just sort of swallow — like, this is fine. But I do feel like there is something about facial recognition that is characteristically difficult and challenging to overcome, which is that there are so many axes of concern — there are so many things that are wrong with it. You know, there's the privacy issue, and there's the bias issue, which is clear as day because of the research and the work of a lot of great people. But there's also this foundational issue that I find, which is that — because it's such a collection, like a huge collection of sensitive data of many, many people, and it's this identifiable biometric — it can be so easily weaponized by whatever authority figure. The use of the technology is so under the control of this authority figure, to do whatever they please with it, in a way that is really alarming. I often try to cite this case that was mentioned in the Coded Bias documentary, of Atlantic Towers, and to kind of just give a summary of that situation: Atlantic Towers is a rent-controlled apartment building in Brooklyn, New York. And the tenants found that their landlord, you know, in his quest to vacate as many rent-controlled apartments as possible in order to raise the rent for the entire building, would kind of be harassing certain of the tenants. And most of the tenants were Black and brown people, and the landlord himself was white. And there was this tension that existed within the apartment complex.

And he had actually petitioned to install facial recognition, despite protests from the majority of the tenants. And they had suspected that it was because he was aiming to sort of monitor them in different ways, and continue to harass them, and effectively extend his authority over the environment using this tool. And it's such a great example of, you know — they did care about the fact that there was this biased performance, that the tool was not built for them or to work on them. They did care about the fact that their sensitive information was being collected, and they didn't know where it was going, and it wasn't encrypted in any way that protected them. But mostly they cared about the fact that this landlord now had this extension of control and authority over their lives and their sense of security and safety. And it was such a warped situation, you know, because the justification of the landlord was to promote safety and security — and that's very often the justification for the use of facial recognition: we want to make people feel safe. But the installation of this tool was making people feel not safe. So, yeah, that was an incredible case, and I would encourage people to look into it. It led to the No Biometric Barriers to Housing Act being proposed as a bill in Congress, to protect people in rent-controlled housing against the unfair installation of biometric surveillance technology. So I really do think it was a great example of how the tool is so easily weaponized that, even for these positive use cases like finding lost children, you know, we need to explore alternative measures for solving those problems that don't involve this very toxic and harmful and dangerous technology.

One of the reasons why I think facial recognition technology in particular is just such a good use case for A.I. ethics is because it's kind of like this catch-22, right? So you have this one side of things where, if the technology is broken and biased, there are so many issues. And then on the other side, if the technology works, there are also so many issues. And I'm wondering — in all of this, despite the fact that there are a few cases where this technology might be beneficial for some, especially marketing companies and marketing boards inside of companies — do you think that it's worth it?

Yeah, I definitely don't.

I don't think that tradeoff is worth it. I think the conversation around facial recognition sometimes — I like that you came to the fact that, you know, if it works, it's problematic, and if it doesn't work, it's also problematic. And sometimes we get stuck in the "but if it doesn't work, it's problematic" stage of things. So, you know, if it doesn't work for people of darker skin, then, you know, I'm more likely to get a false match than anyone else. And then, because of that, I am more at risk of being falsely identified or falsely suspected of a crime, for example, or falsely pulled into the immigration system, which happens quite often. And I think that case is terrifying enough for people to begin to question the functionality of the system and whether or not it works. And I feel like that is usually what we need in order to get people to, like, press pause and to be like: this technology doesn't actually quite work, and it's actually dangerous when it doesn't work. So we need to pull this off the market.

We need to stop selling this.

And I think that's where our research kind of lies, and it's a great start to the conversation. However, there's this more nuanced thing around how the goal is not actually getting it to work. Because even when it does work, there's so much that we can talk about with respect to what safety means, with respect to other risks such as privacy and transparency and disclosure. And that nuanced conversation, I feel like, is also kind of emerging. And like you said, it brings up the stuff that, like —

Actually, it's not worth it. Which is the calculation IBM ended up doing, where they started off with this question of, oh, let's just make it work and it'll be fine. And then they kind of landed in this place of: even if it does work, you know, it's just not worth the trouble.

It's not worth the trouble at all of building this technology that has such minimal benefit and is so immature in its current form, yet can become incredibly problematic and is so easily weaponized. Yeah, I totally think that calculation is being made by a lot of people right now. And we're seeing that the result is that, you know, it's just not worth it.

So, you've mentioned all these successes going on in fighting back against facial recognition technology.

They're not without context. There's a lot going on in our world right now — a lot of anger, a lot of sadness, especially the protests and the riots after the murder of George Floyd. And I'm wondering —

I guess I'm wondering how you see it all connected with what is happening right now in our world, or in our country, that is allowing these things to finally be pushed through.

Because you've been working on this for a long time. Joy's been working on this for a long time. And finally we're making some movement, and it seems like it's coming all at once. So, what's going on?

That's a great question. I've been reflecting on, like —

What happened? You know, the murder of George Floyd was such a wake-up call for everyone. I think everyone was just kind of — you couldn't look away. And, you know, it's so unfortunate that that had to happen in order for us to truly begin this conversation. But I'm so grateful for those that are leading the charge with respect to making sure that we continue to not look away, that we continue to pay attention, that we continue to challenge ourselves as a society to really confront these issues. I do think that, because it's this moment of racial reckoning, as I've been calling it, people are not shying away from the conversation in the way that they used to. Excuses that were sort of swept under the rug in the past are now exposed, and people are directly challenging them. So I think that's one important thing. The other thing, too, is — you know, we remember the whole Trayvon Martin situation that happened, and George Zimmerman. And, you know, the impact of that movement was sort of this reform-the-police ideology of, like, we need to reinvent this authority figure, this authority group.

We need to kind of rethink how they operate. And people are now — after George Floyd's murder, people are now like, wait, actually, we don't want reform. Reform led to increased investment in the police to get these body cameras. Reform led to, like, implicit bias training, which we can't even measure the effectiveness of — we don't even know how well it worked. And clearly there's still this historical kind of anchoring of the police in some of these problematic, you know, concepts and these problematic ideologies. So now the stance is more towards — like, you hear, like, defund the police, or, like, abolish the police — and it's this more radical stance of, you know, we need to completely, like, flip the table. Also, I feel like it's a little bit fueled by that anger and that sadness and the mourning of the Black community coming out of COVID. You know, when people talk about the George Floyd protests, they need to understand that the Black community was disproportionately affected by the economic downturn, disproportionately affected by COVID, already grieving, you know, so much from all of that — and then dealing with this kind of recurring instance of, you know, racial injustice. So it was sort of this perfect storm for people to just be like, we've had enough. You know, we need to completely stop whatever we've been doing before and completely reimagine a new future that is radically different. Oh — very on-brand with you guys.

Radically different from the way things were. And I've heard some great speakers speak to this — especially Ruha Benjamin, who has said a lot of incredible things around this idea of A.I. going through a similar dynamic: of people being so frustrated with experiencing the harms of these systems that they're just committed to reinventing and reimagining a future of this technology that doesn't look anything like what was there in the past.

So I think because of that, we see a lot of this surveillance policy being pushed forward, and a lot of the companies themselves reflecting on their own role, and reflecting on their own commitments and revisiting them, and actively connecting these decisions to the racial injustice that we see today. IBM's statement directly connects that dialogue — the letter to Congress that was written by the CEO. He directly says, you know, we understand that the police are misusing this in order to terrorize racial minorities. And this is not necessarily something that they were public about before. But now he can make that statement, and it's not a radical position anymore — like, we're all kind of radicalized to do something about this.

And it's just easier, I think, to come out with a stance against this harmful technology when you understand what's at stake — you understand that it's such an important element of protecting people that have been racially terrorized for centuries. Right? So I think that's really what I see as the huge kick: one, understanding the extent of the stakes. I think people underestimated the terror that the police had been sort of getting away with on certain minority groups and in certain minority neighborhoods for a long time. That's become much more visible in the last couple of weeks, where it's very difficult to ignore. And the second thing sort of just being that, you know, there are a lot more people on your side. There are a lot more people embracing a radical vision of what the future can look like. And, you know, there's no small part played by a lot of the advocates that have been asking for these changes for a long time and being dismissed as radical. And now everyone's kind of like, no — we do need broad and sweeping changes in order to reinvent society towards something that we actually want to live in.

Yeah, we're huge fans of the word radical, that is for sure.

Yeah — I said that and I was like, wait, no, that's very on-brand. Actually, you guys were radical before it was cool to be.

Speaking of this idea of a radical vision for the future: I would love to know what your vision is — and really, I guess, what your hope is for the future in all of this.

Yeah, that's a great, like, segue to — you know, one of the projects I'm working on right now is thinking through this idea of participation, this idea of empowering regular people — like, you know, people in my family that might not necessarily be on the engineering side or on the research side of A.I. — to be able to have a say in what A.I. does with respect to their life. And, you know, we've organized this workshop — we're organizing this workshop at ICML, upcoming on July 17th, if you want to apply.

It's this idea of — you know, the workshop is called Participatory Approaches to Machine Learning. But it really is kind of an invitation for other people, you know, from different disciplines — whoever you are, whatever kind of expertise you have to bring — it's this invitation to come in and say: how do we actually get everyone that's impacted by this, and everyone that wants to have a say in what the system objective is, what the A.I. is — how do we get them to sit at the table, and how do we get them to shape what the system actually does and what the system is? And at first it seems kind of disconnected from the audit work I do and the documentation work I do. But it really is connected, because I see, you know, audits as this great way to communicate to a broader audience about the limitations of a system. You know, for a very long time, people thought that facial recognition worked because they were told that it worked by a lot of these technology companies. So when you audit these systems and you communicate those results excellently well — which Joy is very great at doing, you know, through poetry, through art, through a research paper, through an op-ed — people will begin to understand the limitations of the technology in a way that actually empowers them to stand up to it: to, you know, attend that city hall meeting, or, in the case of Atlantic Towers, organize with the tenants association to fight against the use of facial recognition in their building.

It kind of empowers them within a broader system where this technology is really imposed on them in a way that is alarming. A lot of, I guess, the people affected by the technology also have no say in the taxonomy of the labels being used on them — no say in terms of if a prediction is false; they can't really contest it. So in all of these dynamics in A.I., you begin to realize that A.I. is really this centralized technology — this technology where a few people define almost everything about the system. And if you go back to sort of my earlier comment on what makes facial recognition toxic, that's a lot of what makes A.I. systems really problematic: you have so much data collected in one place, you have so many resources required to train a model, and it's all controlled by just a few people. And, you know, what would happen if there was a wider scope of participants in terms of defining what that technology is, and how that technology is used, and the context in which it's used, and if it's built at all? All of this decision making — what would it mean to be more inclusive with respect to that?

And what would it mean, even at minimum — which is a lot of the work I'm doing right now — you know, what would it mean, at minimum, to communicate more effectively to the broader population about how well the technology works, the limitations of the technology, the process through which it's evaluated, and the different design decisions made along the way? Like, what would that actually look like, and how would that change things for the way that this technology is being integrated into society?

So it's like a weird technical, policy, social science question of, like, multiple dimensions.

And, like, there are definitely different voices coming in to say different things about it. But I'm kind of excited about that direction of, like, this kind of collaborative, democratic version of this technology, where we all kind of can, like, stand up and participate in terms of defining what role it's supposed to play in our lives. Right now, we don't even know when A.I. is being used. We have no clue, you know, what went into that system. You know, a lot of the engineers at the company don't even understand where the data comes from. And certainly regulators and policymakers don't know that. And that opacity is also, like, part of what gives those few people that do know so much more power. So, yeah, there's so much I could say about this. But really, again, it's sort of like flipping the table — being like, you have to include us in this, in defining what this technology is, and how it's used, and where it's used, and if it's built at all.

That's kind of, like, my read. That's my radical vision for A.I. You should get everyone to say that at the end — it's like, what's your radical vision for A.I.?

So, I do want to follow up on that. But first — I appreciate this question that you've raised throughout this interview about, I guess, people saying, you know, facial recognition technology "works," and then not necessarily providing a definition for what it means for a technology to work. Like, for whom? What are your metrics of it working, and who isn't actually benefiting? But I do just want to follow up on the radical question: what exactly do you mean by radical? Like, do you have an understanding of what radical is in this space? And then, do you situate your own work within that definition?

Oh, that's lovely. Yeah. Oh, I hope so.

I don't know — it's you guys that will tell me. I don't know. I can't invite myself to the club. You have to, like, invite me. You could be like, oh yeah, that's radical enough, you can come in.

I hope to have the privilege of being sort of included in this conversation on radical A.I., of participating and joining in, and the, like, you know, exploration of what that looks like. I think, you know, the people that you've invited on this podcast in the past are sort of great examples of people that are thinking at that level. For facial recognition, and that question of, like, what does it actually take for this to work — I would like to see sort of a version of A.I. where, you know, from beginning to end, there's opportunity for inclusion of the perspectives of the people outside of kind of the traditional authority figures within a space. What would it actually look like to create a version of this technology where different people can kind of be involved — where, I guess, people outside of the traditional locus of control are kind of invited in, to get to participate and to define what that system actually does? I've been reflecting a lot, too, on just, you know, what it means for policymakers to actually play a role in terms of facilitating that. Like I said, there are different dimensions of the problem, and I can kind of list my understanding: one issue being disclosure. You know, we don't know where A.I. is being used today. And that itself is really problematic — that I don't understand all the A.I.

systems being used on me. And, you know, it requires years of advocacy to pass something like the POST Act, where we, you know, get the NYPD to just tell us where they're using surveillance tech. And then the second thing is sort of around assessment. I don't think we understand — or that it's properly communicated, to the public but also even amongst developers and researchers — you know, what the limitations of the technology actually are. We don't realistically discuss limitations, and we don't realistically assess and evaluate the technology that we build. We don't properly audit these systems; we don't properly evaluate them. And that's been a lot of the work around Gender Shades, but also the follow-up work to that. And then there's sort of this third question of restriction: like, it's not always appropriate to use A.I. There are several contexts in which it doesn't make any sense to do that. And, you know, how can we actually either design the system itself, or design, you know, our social systems, to really give us a say in terms of being able to adjudicate when it's appropriate for us, and what we can do to either contest a result or refuse to participate in the system? All of these things for me are sort of interesting directions.

And Deb, as we reach the end of this interview, I'm finding myself resonating a lot with your story about working at Clarifai and realizing that the system is broken, but not knowing exactly what to do about it. And I'm wondering if you can offer a piece of advice for anyone else who might be in that situation right now, and what they can do to try to make that radical change that we're hoping for.

Yeah. I would say my suggestion with respect to that is just to keep looking — just keep looking. That's sort of, I guess, the advice. Because I just kept, like, investigating, you know: if I feel like I'm not connected in the right way, or I don't have enough resources to sort of make this stance on my own, what can I do with what is available to me? What are sort of the investigations that I can begin to think about? And then the second thing that I really reflected on was, like, who else is trying to do this, and —

Can I kind of connect with that person, for the sake of accountability, but also just encouragement that, like, you're not the only one in the world that cares? You're very likely not the only one affected, and you're very likely not the only one that cares. So I would encourage them to just keep looking for those allies. And, you know, once they find those allies, it'll be so much easier for them to do that work. Yeah. But I definitely think the other advice as well is that, when you notice these kinds of problems — you know, even before I was taken seriously around it, I was very vocal. And I'm not sure that was, like, good career advice.

Like, I'm not sure how endearing that was for, like, you know, this small girl to just be constantly talking about this thing that people were like, is that important?

But I think if you really are noticing an issue and, you know, you're developing your thinking around it, it helps so much to be super openly, you know, communicative about it. Just talk about it with whoever cares — and also with people that don't care. It really ends up becoming important, because when you're talking about it, when you're so vocal about it, the people that also care about this issue will be able to identify you and find you.

But also, for people that are sort of learning about the issue and developing their thinking around the issue as well — it educates them that it's something that matters, and they can begin to understand it and identify you as a person that cares about the same thing.

And if folks care about the same thing and want to contact you to follow up about this interview or any of the wonderful work you're doing, where can folks find you and connect with you?

I am very active on Twitter, probably too active.

I get really mad on there. Your Twitter is one of our favorites — you know, we love you. That's great. Everyone should follow you on it. Yeah. Very vocal.

Yeah. I would say Twitter is probably the easiest way to access me. There's also email — like, deb at ainow dot org. I'm pretty responsive over email. Or even through A.I. Now, you can definitely still reach me. So, yeah, I think that's probably the easiest way to get in touch with me.

I guess LinkedIn is a thing that still exists, but I haven't checked it in a while. Yeah.

Feel free to reach out via email or Twitter. I think those are probably the best ways to get to me.

Great. Well, thank you so much again for coming on the show, Deb, and thank you for all of your amazing work that you're doing right now.

Yeah, for sure. No. Thanks for having me.

I'm hoping that maybe I was radical enough to earn the radical title — that I'm radical, too. This is great. You guys are doing such a great job with this.

And I really appreciate you guys like collecting all of these perspectives in one podcast. That's super awesome.

We again want to thank Deb so much for joining us on the show and for coming on on such short notice. We also want to thank the Algorithmic Justice League and so many others who are doing this vital work and vital advocacy to change these deeply racist systems that are embedded in so many elements of our technological systems, but especially in facial recognition technology and facial analysis technology.

We will leave our longer debrief for our next mini-sode, and in the meantime, we are going to let Deb's words speak for themselves. But one thought that we want to leave all of you listeners with, in regards to facial recognition technology and the current events, is that sometimes the least harmful way to create a technology is to not create it at all. For more information on today's show, please visit the episode page at radicalai.org.

And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @radicalaipod. And, as always, stay radical.

