Industry AI Ethics 101 with Kathy Baxter



What do you need to know about AI Ethics in the tech industry?

To explore this question we welcome Kathy Baxter to the show.

Kathy is an Architect of Ethical AI Practice at Salesforce, where she develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research.

Follow Kathy Baxter on Twitter @baxterkb

If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

This transcript was automatically generated and may contain errors.


Welcome to Radical AI, a podcast about technology, power, society, and what it means to be human in the age of information.

We are your hosts, Dylan and Jess. In this episode, we cover the 101 of A.I. ethics as seen and experienced in the tech industry.

Our guest today is someone who has been at the forefront of developing ethical A.I. platforms and strategies within industry for decades: Kathy Baxter. Kathy is an Architect of Ethical A.I. Practice at Salesforce, where she develops research-informed best practices to educate Salesforce employees, customers and the industry on the development of responsible A.I. Prior to Salesforce, she worked at Google, eBay and Oracle in User Experience Research.

Dylan and I have so much respect for the work that Kathy and her team are doing at Salesforce, especially as they were one of the groundbreaking ethics teams within industry, and so we have a lot to look up to in this field. We also learned so much from an insider's perspective on what ethics is actually like when it comes to practice instead of just theory. So we are so excited to share this conversation and everything that we learned from Kathy with all of you. We're on the line today with Kathy Baxter, the Architect of Ethical A.I. Practice at Salesforce. Kathy, welcome to the show.

Thank you. Thank you for having me.

Of course. And we would love to start off by knowing, before we get into who you are as a researcher and a practitioner in industry, who you are as a person. So could you tell us a bit about what motivates you to do the work that you do?

Yes. So I come from a little bitty town in Georgia, and I've been really fortunate to be able to spend my career in Silicon Valley and have influence on a lot of the technology that many people around the world use, from eBay, to tons of products at Google, and now at Salesforce, products that are used by 95 percent of Fortune 500 companies, even if you probably don't know they're using Salesforce.

But having that kind of reach is really a privilege to me.

My background and my experience is quite different from a lot of people that I work with in Silicon Valley. And so it's very important to me, when I am sitting at the table and having this kind of influence over technology that touches so many people in society, that I am able to represent them, their values, their context of use, and ensure that these are products or services that are inclusive of everyone. It's really, really important to me to think about who is impacted and how, and how do we ensure that we are creating a more equitable society. Too often, technology only increases those divides. How can we instead use it to decrease them?

So, Kathy, one of the reasons why we really wanted to interview you is because you have kind of seen it all, right? You've worked at these massive companies. You've seen it from the user side. You've seen it from the technical side. You've seen it from the management side. And I was just curious, if you were to point to some of the biggest topics in ethics right now, what would you say those are?

I think the biggest issue for me is thinking about A.I. that impacts human rights. A.I. has a huge potential to help us identify bias in decision making that we didn't even know existed in the first place. But it can also drastically magnify that bias. And so when we are using A.I. in medical or health care decisions: is this particular treatment going to be more effective than another treatment? Is this patient a better candidate for this treatment or not? Anything in the area of social justice: who gets bail or parole, predictive policing, who gets access to benefits, Medicaid, housing, food, all of those things. There is huge potential for A.I. to make much more fair decisions than humans might be making now. But unfortunately, it also has a huge potential to make much less fair decisions. So, for example, instead of thinking about how we can use A.I. to make the benefits claims process easier, less Byzantine, get those benefits back to you much faster, it can often be used instead to identify fraud. And we know from studies that there's very little fraud in benefits claims.

But if you use it as a stick, a way to catch and punish people instead of making the system better for all, then those biases, those inequities are only going to be magnified. So that's one of the areas of most focus for me: how do we identify the places where A.I. can help us make much more fair and equitable decisions, without at the same time allowing it to be used in a way that is much more punitive and harmful and magnifies the existing biases.

And I'm assuming that this is some of the work that you're doing at Salesforce right now. But I think the big elephant in the room, both for Dylan and me, and maybe for the listeners, is what is Salesforce, and why are they interested in ethical A.I. in the first place?

That is a great question. When I told my mother that I was going to leave Google and come to Salesforce, she literally cried.

She said, Well, what do I got to tell my friends?

Nobody's heard of Salesforce. And I've tried many times to explain to her, what is the cloud?

What is customer relationship management? But basically any customer call center you call into or DM, any salesperson that calls you, any marketer, they are most likely using Salesforce to power the work that they are doing. So we're a business-to-business software company creating productivity technology that helps people just do their jobs better. The reason why A.I. ethics is so important, and ethics in technology really, why it's so important at Salesforce, is because this is part of our DNA. When Marc Benioff and Parker Harris founded Salesforce, they identified what our core values were going to be: trust, customer success, innovation and equality. And that's been baked into the DNA of every decision that we make. And it has not changed since the day they started the company. So when we develop solutions, we really want to ensure that the technology is going to be trusted by our customers and society, that it is going to empower our customers to use that technology more responsibly, and that it is going to be inclusive of everyone that it impacts, not just ourselves, not just the immediate end users, but our customers' customers.

And so Salesforce hired one of the first ever Chief Ethical and Humane Use Officers in the tech industry, Paula Goldman. And so we have a team of folks that really think through how we are developing our technology, how our customers use it, and how it impacts the world at large.

I'm interested in this concept of trust in general, but also specifically around A.I. in our culture and our society right now. And you mentioned the word society and the word trust: customers' trust in A.I., but also society's trust in A.I. And I'm wondering if you could say more about that. Do you think that society in general trusts A.I.?

There are often these visions of A.I. as being the Terminator and the singularity, and it's coming to get you, AGI, this kind of general intelligence. We're a long way off from that. What is a much greater concern, and what society hasn't been as aware of until these last couple of years, are the smaller places that A.I. is embedded into our lives that you don't even realize: on your phone, the smart speaker on your desk. With Black Lives Matter, people have become much more aware of facial recognition technology and how it's being used in society. Many people don't realize that owners of public housing buildings have installed facial recognition technology in their surveillance, the same technology that is used in many very high end, expensive apartment buildings and hotels. But the difference is that in public housing units, those cameras are often turned inward. It's all about recording and surveilling the people that live in those buildings, as opposed to the affluent apartment complexes and hotels, where those cameras are pointed outward. And it's all about who do you think is the greater risk? Who is it that you want to surveil and record and track who is coming and going? And so we have to think about not just the training data, the historical information that trains our systems, which could be biased. We can't just think about the model, the factors that the A.I. uses to make a decision. We also have to think about how that A.I. is being applied, and to whom. Is it equal? Are some people benefiting and some people paying? So those are the conversations that I think don't happen as often in society broadly, but with the awareness of the Black Lives Matter movement, those conversations have started increasing. And so I'm really happy to see that this isn't just limited to the research and ethics community. More policymakers and society as a whole are becoming more involved in those conversations when we talk about discriminatory, biased or surveilling technologies.

There's a little bit of contention about who is responsible for the negative impacts of these technologies. Some people think that it's the large tech companies. Other people think that it's the consumer. Where do you fall in that debate?

For all of this technology, there's rarely ever one single person that's responsible. Obviously, the companies that are creating the technologies have a very large hand in deciding not just can we do this, but should we do this, and what guardrails do they put in place to ensure that it is used safely, to prevent abuse and harm. So that's the first place that responsibility lies. The second place is whoever is implementing that technology. So the creators of facial recognition technology, for example, aren't always the ones, or usually are not the ones, actually implementing it in these public spaces. So whoever is implementing and deciding how that technology is applied, and to whom, there's responsibility there. And then finally, society and policymakers have a role to play in this. So we're now starting to see cities and counties and entire states step forward and say, this is a red line: here are the places where you can't use facial recognition, or here are the things that you can't use it for. Illinois passed a law restricting the use of A.I. in hiring decisions. So more and more we are seeing a desire to put regulations in place, to put limits on what is acceptable and what is not acceptable. And I think that's very heartening. It's always better if people can do the right thing without having laws or regulations force them to do it. But having agreed-upon sets of regulations so that everyone is protected is, I think, a better solution.

Jess and I have experienced, in talking to folks in the responsible tech community, this new focus on especially human-centric design and technology, and then also this user-centered idea. And it seems like there are a lot of folks out there who are trying to figure out, OK, we have this goal, we have this thing that we know we want, this value, and now how do we actually do it? And one of your areas of expertise is in asking the user and trying to embed the user into some of those practices. And so I'm wondering, how do we do that, if we do? What is the role of the user in this? And then how can companies center the user to create more responsible tech?

Yeah, so my career for the last 20-plus years has been in user experience research. I co-authored a couple of editions of a book on user research methodologies, and the human-centered design or human-computer interaction field has really been focused on understanding individual humans.

And that is critical. We need to think about the contexts of use and people's values, and study how the technology is impacting individual people, really honing in on that rich qualitative data. But we also have to look more broadly at society. How is this technology impacting not just individuals? And this goes back to the point I made earlier, where some individuals may benefit greatly, but others may really be negatively impacted by what you are creating. And so understanding what the different segments of the population are, understanding who is the most vulnerable. They may be a small part of your overall user base, but we know that the intersection of biases is played out time and time again against the same people. So we can say, well, this is only two percent of the population that's going to see this negative impact. But it's that same two percent over and over and over again that pays the greatest cost of whatever it is that you're developing. And it's simply not fair.

It's not right that the same segment of the population is always having to pay the greatest price, always bearing the greatest risk. So we have to combine this user-centered approach with a broader society-centered approach, so that we understand at both levels where these impacts are being felt. And only then can we better understand the data that goes into training the A.I. Is there systemic or historical bias happening here? And if so, what kind is it? We're never going to be able to say an A.I. is 100 percent bias free. There's always going to be some type of bias. What you can say is: this is the type of bias we looked for, this is how we measured it, and this is what the measurement was, and we decided it was low enough that we're going to release this into the world. Then, also understanding through that user research and that society-based research, again, how is this applied? The mitigation strategies we put in place to help prevent harm, were they successful? Because we can have ideas of what might be good guardrails and safety mechanisms, but if we don't study what we've created in the world, we can never know if those mitigation strategies were successful or not.
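[To make the kind of measurement Kathy describes concrete, here is a minimal, purely hypothetical sketch, not Salesforce's tooling or any specific metric she names, of one common check: comparing a model's selection rates and accuracy across demographic groups before deciding whether the measured gap is low enough to ship. The groups, labels and predictions below are made up for illustration.]

```python
# Illustrative sketch only: disaggregating a model's behavior by group.
# Groups, labels, and predictions are hypothetical.
from collections import defaultdict

def rates_by_group(y_true, y_pred, groups):
    """Return selection rate and accuracy for each demographic group."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "correct": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        s = stats[group]
        s["n"] += 1
        s["selected"] += int(pred == 1)
        s["correct"] += int(pred == truth)
    return {g: {"selection_rate": s["selected"] / s["n"],
                "accuracy": s["correct"] / s["n"]}
            for g, s in stats.items()}

# Hypothetical outputs of a binary "approve / deny" model.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

per_group = rates_by_group(y_true, y_pred, groups)
for g, r in sorted(per_group.items()):
    print(f"group {g}: selection rate {r['selection_rate']:.2f}, accuracy {r['accuracy']:.2f}")

# One simple summary: the gap between the highest and lowest selection rates
# (a demographic parity difference). A team then decides what gap, if any,
# is acceptable before release.
rates = [r["selection_rate"] for r in per_group.values()]
print("selection rate gap:", round(max(rates) - min(rates), 2))
```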

There seems to be kind of an infamous example in machine learning, and A.I. especially, where, you know, there's a model that has 90 percent accuracy, and then you ask, well, 90 percent for whom? And they say, oh, 90 percent for white males. And so all of the inaccuracy of the model is targeted towards, you know, people of color and women. And then it's clear that there's a problem with that. But if you try to change and fix the model, then the lowered accuracy maybe lowers user retention on a platform and maybe lowers revenue for a company. And so since you're situated in industry, I'm wondering, how do we convince people, particularly the business people in the tech industry, that these things matter even if it lowers the bottom line?

I remember early on in the user experience, or usability, field, when I first started working at Oracle, our titles were usability engineers, and we would always have to do these calculations of ROI: what's the ROI if we do usability testing before a product is launched, rather than the launch-and-iterate, let's-fix-it-later, we've-got-to-get-this-out-to-market-immediately approach. And so there were all of these painstaking calculations that would show it's much more expensive to go back in and fix things. It's the same thing with ethical debt, and that is what we call it internally at Salesforce: this is ethical debt that you are accumulating. And too often the harm is not just lower productivity, it takes someone longer to use your product, or they're more prone to making an error, which is really some of what we talk about from a usability perspective. If we are talking about A.I. that is making decisions about who is creditworthy, who should get bail or parole, those are life-impacting decisions. And so that ethical debt has real impact on humans. And we have to find a way to understand what that is before it ever gets out the door. So we have to think about what those negative impacts are and address them earlier rather than later.

And so one example: one of the teams I was working with was developing a sentiment analysis model, and early on a certain type of bias was identified by an engineer. And so the team was able to go through and address that by doing a very small fix.

It was a very clever fix that I hadn't thought of. They actually decreased their accuracy by only two percent, but they were able to address this much more harmful bias in the process. And when I was talking to the PM, he said, two percent? Yeah, I'll happily trade that off. So when you have a culture at your company internally where protecting people and doing the right thing, so trust, customer success, equality, all of those things, are at the core of what drives your decision making, having to make a decision between two percent accuracy versus, you know, addressing this really harmful bias, those decisions become no-brainers.
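[The arithmetic behind that kind of trade-off can be sketched with made-up numbers. This is not the actual Salesforce model or its metrics, just an illustration of trading a small drop in overall accuracy for a large gain for the worst-served group, roughly the two percent trade described above.]

```python
# Hypothetical before/after comparison illustrating the trade-off described
# above: a small drop in overall accuracy in exchange for a large improvement
# for the group the original model served worst. All numbers are invented.

def overall_and_worst_group(acc_by_group, size_by_group):
    """Overall accuracy (weighted by group size) and worst single-group accuracy."""
    total = sum(size_by_group.values())
    overall = sum(acc_by_group[g] * size_by_group[g] for g in acc_by_group) / total
    return overall, min(acc_by_group.values())

sizes  = {"group_1": 800, "group_2": 200}    # hypothetical evaluation set
before = {"group_1": 0.96, "group_2": 0.70}  # original model
after  = {"group_1": 0.90, "group_2": 0.85}  # model with the bias fix applied

for name, accs in [("before fix", before), ("after fix", after)]:
    overall, worst = overall_and_worst_group(accs, sizes)
    print(f"{name}: overall accuracy {overall:.2f}, worst-group accuracy {worst:.2f}")

# Hypothetical result: overall accuracy drops by roughly two points,
# while the worst-served group's accuracy rises by fifteen.
```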

One of the things that we've seen in talking to folks is, again, this question of principles, of having principles and values and then making them actionable, and then also translating them to multiple stakeholders in the system, including users, but also actual investors and things like that. And my sense of Salesforce and your work is that you've done a really great job of embedding those values into the actual work, into the day-to-day. And I'm wondering if you have advice for folks out there who might be in, you know, multinational companies, who are trying to bring those value statements, or maybe even, like, you know, their seven principles or whatever, into the day-to-day actionable things?

Absolutely. I think it starts with whatever your core values or principles are; every company has them somewhere. Some companies are much more prominent in sharing them than others. But you can map whatever your company's core values are onto a set of ethical A.I. or ethical tech values and principles. There are a lot of principles and frameworks that are out there already; you don't have to recreate the wheel. You can find these various versions and see which ones map well. So start by being able to talk to your company about, hey, working this way just reemphasizes what our core values are as a company. The next thing I would say is, with any kind of education you're trying to do throughout your company, you want to make it context specific. When I first started giving talks throughout the company, it was very general. I was showing some of the most awful examples of A.I., and I could see the people in the audience gasping and looking absolutely horrified.

And afterwards they would say, that was such an amazing presentation, I had no idea. And then they would walk away, and nothing would happen. And I would follow up with different teams and say, hey, I haven't heard from you. Like, what are we going to do to build ethics into this particular product or feature? And they're like, oh, well, this has nothing to do with me.

We're not creating parole recommendation software. We're not doing predictive policing. As a company, we don't do facial recognition; we've never allowed it in our acceptable use policy. So we don't have to worry about that. We're just doing sales software. We're just doing customer service software. What could possibly be risky there? So make any of your education and conversations very relevant to the particular product or feature that the team is developing, and then specific to the role: what would you want a content writer to do versus an engineer versus a UX designer? So all of the education needs to be very context dependent. It's going to be a rare individual or team that you encounter that's going to say, ethics, schmethics, I just want the money. So people will be on board. But you have to tell them how to come on board. If they don't know how to implement what you are saying, it's just going to be a really nice, pleasant conversation.

So who is the one that's doing all this? Is this, like, the project manager, the product owner, the architect of the ethics team? Is this the CEO of the company, the individual engineers? Who's actually responsible for making sure that this happens in order to be successful?

It really takes a village. Honestly, we have a pretty small team, but we are able to engage with people across the entire company.

The only way you're going to have impact is if the engineer and the PM and the researcher and designer and legal and your sales and distribution and all of these individuals come together and agree that this is a priority. So we work in agile product development, and so we have ethical questions during our release readiness planning, and we do consequence scanning workshops; those were created by Doteveryone. It's a mechanism for identifying intended and unintended consequences of the features that you're developing, and then you come up with mitigation strategies for any consequences that might be harmful. We bring all the roles of the team in on those workshops, so everybody is coming together to identify those consequences and mitigation strategies. And from there you can say, OK, this is a content issue, we need to make sure that we put some in-app guidance here to help the user understand what they need to do; or this is a backend problem, so the engineer needs to add in these guardrails so it's actually not possible for the admin to accidentally flip this switch when that wasn't what they intended to do. So it really takes everyone coming together to agree upon what the issues are and what the solutions are.

I feel like there are certain identities that we might project onto the Silicon Valley world, and even the responsible tech world, right now.

Fortunately or unfortunately. Like, if you look at, you know, the voices that were represented in the new movie The Social Dilemma or things like that. And for me, I had a mother, or still have a mother, who broke into tech in the early 90s and 2000s and moved from being an engineer to being in ethics and compliance at Intel Corporation. And it was just very interesting, as her son, to watch her go through that, especially being a female engineer then trying to do ethics. And I'm trying not to read her story too much into your story, but there are certain similarities. And so I'm wondering, for you, as a female engineer now doing ethics work, how do parts of your identity maybe play into the work that you do?

Oh, there are a million different things racing through my head that I could tell you.

So my mom is a single mom and she raised two kids by herself. She worked two and sometimes three jobs. So my brother and I had to be really independent and take care of our stuff ourselves, often not even having enough money to be able to buy the 15-cent pint of chocolate milk for lunch. A lot of those things, those moments, still stand out to me. So issues like the shaming of kids whose parents haven't been able to pay for lunch are just absolutely appalling to me. Issues of social justice have always been really core to me. In graduate school, I spent a couple of years doing research on adult literacy software, and the number one source of adult literacy education is in our prison systems. It was true over 20 years ago when I did this research and it's true today. So I spent time visiting prisons to be able to do this research.

When I left Georgia and moved to California, I had the biggest Southern accent. You might pick up on it every so often when you hear me talk; sometimes when I'm tired or I'm particularly worked up, you'll hear it come out really thick.

Or if I'm talking to my mother on the phone who still lives in Georgia, you'll hear it come out really thickly. I had to change that.

When I was working at Oracle, I was surrounded by a lot of older white men, sysadmins, DBAs, and they weren't taking me seriously. I would try to do usability studies with them and they would speak very slowly and they would use small words and they wouldn't fully answer my questions.

They assumed I was a diversity hire and I wouldn't be able to understand what they were explaining to me. And so, in order to be taken seriously as a young woman from the South, there were very few things that I could change, but my accent was one of them. So I worked really hard to change that. And then I had a very different problem when I started working at Google very early on, where I was actually much older than the rest of the population, and I was one of very few pregnant women, and then mothers, working at the company. And so I was having to argue with bro dudes who kept using the mother's room to take phone calls or take naps when I needed to duck in between meetings to pump, and having to explain to them why I needed that room. So it's been a really interesting journey of cultural clashes and trying to match what my values might be with the folks that I've worked with, and being in this role as an ethicist.

One of the things that I've experienced is it really takes a very calm personality to have conversations about ethics because it touches on people's values.

And as soon as someone's values feel threatened, you get a visceral response and the conversation doesn't get very far and you just end up burning relationship capital.

My co-worker, Yoav Schlesinger, I think, is one of the most amazing individuals. I've watched him in those kinds of conversations where you see people in the room, like, physically rise up and their voices begin rising.

And just in a couple of sentences, he brings the whole room back down, back into agreement. And it's a really amazing thing. So when we're thinking about ethics and values, how do we decide what is right and what is wrong?

Where are the red lines, just being aware of all of these different cultures, all of the baggage we bring to the table and not judging each other is a really huge part of that.

It's interesting, as the hosts of the Radical AI podcast, we talk about what it means to be radical a lot. And I think a lot of people tend not to jump to being calm and being rational and being tranquil. And so hearing your perspective on some of these radical issues is perhaps something new that we haven't really heard before. So, Kathy, to you, how would you define the word radical? And do you think that your work is situated in that space?

I think for the tech industry, unfortunately, it's still a radical point of view to put society first in your decision making. Way too often, and I experienced this, you know, going from company to company, you are measured and rewarded for how much revenue your feature is generating, how much stickiness and engagement, how long can you get people to stay in your product, clickthrough rates, how many times can you get them to click on these different things, how much of a person's field of view can you cover in ads before they start having seizures?

All of these things are very, very twisted incentive mechanisms. And so, as an industry, we need to step back and say we have to find a new incentive, and it has to be focused on society as a whole: how can we make society better while still being profitable? And one of the things I'm really proud about, and one of the things that attracted me to Salesforce, was Marc Benioff's stance that the business of business is doing good in the world, and that businesses can do well and do good. You don't have to choose one over the other. There are too many false dichotomies that people often try to force you into, like privacy versus safety, or profit versus overall good in the world.

We don't have to choose one or the other.

You can have both.

We've been trained, either in industry or in the society that we're in, that these dichotomies exist.

And even when we know we should move past them, or we know there are other options, we can't always get there. I'm wondering, from your perspective as an ethicist, if you have any advice for people who are struggling with that, who, like, know what the dichotomy is, they know where they want to move, but they just can't get to that next step.

As you were talking, one of the things that popped into my mind was a study of social media content moderators, who are subjected eight hours a day, day in, day out, to some of the most extreme, harshest content over and over again.

And they found that at the beginning, when they were first hired, they didn't believe in some of the conspiracy theories, like Pizzagate and flat earth and things like that. But after being exposed to this content and these videos over and over and over again, it took only a couple of months before they started to say, yeah, actually, I think there is something to this. And so we know, and we can say out loud, there's no question that these bubbles, these echo chambers, reinforce what we believe.

And as we move towards closing out this interview: first, Kathy, thank you for bringing up the importance of discomfort. That's something that Dylan and I like to really incorporate in this show, especially as we talk about radical, good ideas, because it's so true. Sitting in the feelings of discomfort, the feelings of things that maybe make you squirm a bit in your seat, is important if we're going to progress forward and make good change. And so I'm wondering, for you, as maybe a closing thought, what is something that you are sitting with in discomfort that you think is helping build change for good in our society?

I think one of the hardest challenges, just following on what we talked about a moment ago, one of the hardest challenges is figuring out how do we deal with disinformation.

So whether it is people posting conspiracy theories or people posting altered images and videos, it becomes this game of whack-a-mole that is very similar to the security industry, where hackers are constantly trying to find new ways to break into systems and do harm.

From an ethics and democracy and information standpoint, synthetic content is, I think, one of the greatest challenges that we are facing right now. And every time I read a report or a research study that shows different ways that they are able to pick up on whether something is a synthetic video, for example, the last one I read was about A.I. being able to detect somebody's heartbeat in their skin.

Synthetic videos can't get that right, but in a real video, the A.I. can detect that heartbeat. I mean, that's banana pants. You know, that's so amazing. But as soon as I read that, I was like, dang it, now the hackers know some other way to get around this. And so it's just this nuclear arms race of how do we protect what is truth, not alternative facts, what is truth in the world, so that we have some way of grounding ourselves in factual decision making. And you can disagree. You can say, that might be true, but I still don't think that's the right thing to do.

That's where we should be. We can say, I still think X should be the decision. But if we have no way of all agreeing on what is factually true, I don't see how we can come together as a society and find common ground. So that, for me, is one of the biggest challenges that we need to address. And I don't know how to do that.

Kathy, for folks who want to possibly follow up on this conversation or anything else we've discussed today, where can folks find you or connect with you online?

I am on LinkedIn, and you can follow me on Twitter @baxterkb, and you can read some of my blog posts at einstein.ai/ethics.

And we very much recommend those blog posts for anyone interested in any of this. And Kathy, thank you so much for joining us today and for all the work that you're doing for ethical A.I. out in the world and at Salesforce.

Thank you so much for having me. It was wonderful to be able to talk to you.

We want to thank Kathy again for joining us and for her perspective on responsible A.I. and how to build it out in the technology industry. And that includes, as I just said in our intro, how we take these theoretical concepts of responsibility and ethics and actually build them into teams on the ground, into the folks who are building out all of our A.I. tech and all of our technology in general. So, Jess, from this conversation, what did you take away in terms of how we take these theoretical concepts and actually make them happen on the ground?

Well, Dylan, I'm glad you asked, because the first thing that stuck out to me in this conversation was just how pragmatic Kathy's approach is. And, of course, this probably comes from the fact that for decades she's been actually implementing these concepts into practice. And it's different for us, because coming from the world of academia, we're used to just, like, talking about stuff, but not actually doing it. Not that everybody and everything is like that in academia, but that happens a lot, right? We talk about what an ideal ethical A.I. might look like, but very rarely are people actually implementing these things. And so with Kathy, one of the things that she mentioned in terms of actually implementing this is to try to make the language that we're using specific, and the conversations that we're having specific and contextual, based on the teams that we're talking to. And I loved her example and her story of, you know, she gives this great, amazing talk about A.I. ethics and everybody loves it at this company that she's working at. And then she goes and checks in on them and they're like, well, we're not making predictive policing A.I. software, so, like, this isn't relevant to us, but we loved your talk. And so it's great that we have these examples of these, like, dystopic technologies that are awful when it comes to ethics. But if we only use those same examples over and over again, and we don't actually get specific about what people are actually working on in their teams and what their actual roles are and what they can actually do day to day, I feel like I said "actually" a lot, then we're not going to make any change, right?

Yeah, absolutely, Jess. And I think that really what I kept thinking about in this conversation is communication, honestly. Like, when we're talking on this theoretical level, so when we're over here in, like, the ivory tower, as, you know, Ph.D. students or as professors or as people who are publishing on these topics, what language are we using, and can it actually get heard? We may have, like, the smartest, best idea for the best model for ethical A.I. that's going to, you know, solve everything in industry forever. And it might be true, but it doesn't mean anything unless we're actually able to communicate it effectively. And same thing with industry, right? Like, if industry folks are trying to partner with people in other sectors, whether that's in education or nonprofits or in academia, it's just so important that we think about who we're talking to, and we start to change and shift some of our language so that we can, you know, talk to each other instead of talking past each other.

Yeah, and I'm actually really glad that you said that, Dylan, because something that I've been struggling with in the responsible tech field, within the academic bubble, is the way that we can be really hypercritical of industry. Which, I mean, I think it's really good that we're critical of industry, but I think that sometimes the language that we use to critique is not really productive or effective. And it doesn't really give the engineers and the managers and the designers and the executives at the tech companies that are being critiqued anything to work with, you know? So, like, we tell them, hey, don't make this A.I., or make it less sexist, make it less racist, fix all these problems with your accuracy and error metrics. But we don't really say how, or what, or why this matters for them and their bottom line and their day-to-day jobs. And so I just really appreciate that Kathy is so passionate about this stuff, but she's also on the inside. And so she kind of knows, like, what works, what doesn't work, and how to actually take this passion that we have, even in the academic bubble, and put it into practice.

Yeah, and I appreciated how she almost put a face on the people who are doing the work. Sometimes, when we're critiquing, whether from an academic standpoint or just as, like, someone who's publishing an op-ed in Wired or something, we're like, the A.I. industry needs to fix this or industry needs to fix that. And really, I think it is important to remember that, you know, not all industry is the same. It's not a monolith, right? There are a million different actors out there within this umbrella that we call industry, and they have something in common, but there are a lot of, like, individuals within those systems. And I think that's, I guess, my second point, which is that, like, I think Kathy's witness is important because it allows us to separate out these systems from the actors within them. And obviously that's a hard thing to do; we can't do that across the board, because they're within the system. But I guess what I'm trying to say is, there's the amount of personal attacks that we have on individual engineers, or, even with facial recognition technology, the way we assume bad intent from the people that put it together. Which is also, right, another fine line: there needs to be some culpability and responsibility and accountability. But even if we want that, it's not going to be functional for us to critique the personhood of the people who are creating those technologies in the first place. We have to come up with a more nuanced strategy for seeing them as a whole person, part of, but also separate from, the system they're within.

Yeah, I mean, Kathy even said it herself, right? When I asked her who's actually responsible for this, she said, well, you know, it takes a village. And I think that's so true. So maybe when we're thinking about our critiques of these technologies, instead of attacking a single person, even if they are, like, the executive of a large company, it might be better to critique the system and the village that is creating this, and see in what ways we can actually effectively start to incite systemic change to help fix some of these problems, instead of just promoting, I don't know, cancel culture against specific engineers who might be trying their best in the first place.

Yeah, although we also have to make room for the fact that there are other engineers, just like in any field, who are not doing their best, right? Or who are not trying to instigate or institute, like, ethical standards, partially because they don't see the point. Which, again, I think is Kathy's argument too: that we need to get better at describing why this stuff matters, first of all what it is and then why it matters. And I think, yeah, in terms of the village argument, the systemic sense, that's the tough part, right? It's easy for us to say, yeah, let's get systemic. And then it's a whole other level to conceive of, like, OK, well, now we're in these teams, how do we actually build out systemic change if we're not just saying get rid of the whole thing, which we're not. We're saying we have this thing, now how do we make it better? And there's just so much minutia in that, and, like, cultural factors that we need to take into account. And that's why I think we've moved to this interdisciplinary space. And it sounds like Kathy wants us to move there too, because it's going to take a village to address the village. That's fair.

Lots of villages working together here. Yeah. I mean, it's reminding me of the phrase, if you can't beat them, join them. And I think this doesn't mean everybody who's passionate about ethical A.I. should go join a large tech company. But I think people who are within tech should consider joining academic and research circles, people who are within academia should consider joining the large tech circles, and we should all at least envision what a world would be like if we started to work together a little bit more nicely in the sandbox. And those systems and those villages need to make room for those conversations, too.

I mean, there are a lot of places, I would imagine, that would be toxic to an academic coming in and saying, we need to talk about ethics, right? So how do those companies, including startups, including, you know, all these people that make up this tech sector, how do we build out those spaces in those companies, in those systems?

Oh, a great ending point here. It brings it back to what Kathy was talking about in terms of the kinds of people who effectively talk about these issues, and how it's better, instead of acting emotionally and maybe irrationally, to act with a little bit of tranquility and calmness, being kind to one another and empathetic, and understanding that at the end of the day, maybe everybody is trying their best, and all we can do is at least attempt to come together and work together on these issues, to try to solve them and work towards a better, more just future.

For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe and review the show on iTunes or your favorite podcatcher. Catch our new episodes every week on Wednesdays. Join our conversation on Twitter at @radicalaipod. And, as always, stay radical.
