Transparency as a Political Choice with Rumman Chowdhury and Mona Sloane



What is the relationship between the government and artificial intelligence? To unpack this timely question, we interview Rumman Chowdhury and Mona Sloane.

Rumman Chowdhury studies artificial intelligence and humanity. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI.

Mona Sloane is a sociologist working on inequality in the context of AI design and policy. Mona is a Fellow with NYU’s Institute for Public Knowledge (IPK), where she convenes the ‘Co-Opting AI’ series and co-curates the ‘The Shift’ series. She is also an Adjunct Professor at NYU’s Tandon School of Engineering.

Follow Rumman Chowdhury on Twitter @ruchowdh

Follow Mona Sloane on Twitter @mona_sloane

If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript




Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence.

We are your hosts, Dylan and Jess, and welcome to our fifth and final episode in our series on technology and democracy. Jess and I are recording this introduction and outro literally while the votes are being counted in the U.S. 2020 election. And I don't know about you all out there, but by the time this airs, we may not even know if we're going to have a winner, or who the future president of the United States is going to be. So you can imagine there's just a lot of different types of energy in us, and possibly even in you, depending on when you're listening to this. But this is just such a special interview for us to share, especially right now, in the midst of the history that is happening around us, both for our country and for the world. Jess, tell us about why this episode is so exciting and important.

Yes. So in this episode, luckily, we will not be covering or talking too much about the election, because I'm sure you're all well aware of what's going on around us. Instead, we're going to zoom out a little bit and focus more broadly on the relationship between government and artificial intelligence at large. And this episode is super exciting because we had the opportunity to interview Mona Sloane and Rumman Chowdhury. We've been looking forward to this interview for what feels like months, but it's probably just been a few weeks since we first spoke with Rumman. And they are freaking brilliant, you know. So in this episode, we interview Mona and Rumman. Mona Sloane is a sociologist working on inequality in the context of AI design and policy. Mona is a Fellow with NYU's Institute for Public Knowledge, where she convenes the 'Co-Opting AI' series and co-curates 'The Shift' series. She is also an Adjunct Professor at NYU's Tandon School of Engineering. Rumman Chowdhury studies artificial intelligence and humanity. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI.

And this was one of the hardest things we've ever had to do, figuring out what to put in these bios for the beginning of this episode, because these are folks who have done so much in so many different areas, including in government and in AI. So if you're interested in their full CVs, please do see the show notes for their full bios, because the work that they're doing and the research that they've done is just so exciting and vital to everything to do with technology and responsible tech and artificial intelligence. And again, the list goes on and on.

And so now it is time for our final episode to conclude this series on technology and democracy. We hope you enjoy.

So we're on the line today with Rumman and Mona, and this is a conversation that I have been looking forward to for a long time. We've obviously admired both of your work individually, and the fact that you all are working together now is almost like a dream team for us, looking on from the outside. So it's just wonderful to get a chance to talk to you. And just to name where we're at right now: it is Friday, October 30th, which is four days before the U.S. 2020 election, and I'm definitely feeling some feelings around that. Our conversation today is somewhat related to that, since we're talking about government and AI. So I just wanted to first check in with folks and see what's going through your mind right now as we get to this pivotal point in U.S. policy, maybe as it has to do with AI as well.

I'll put my input in first. This is Rumman. I just received my Texas voting card. Yay! I would like to say, first, voter suppression is real. I voted by absentee ballot for over a decade because I lived in New York and California. You are not allowed to in Texas unless you are over sixty-five or disabled, or you can prove that you are not in town during the entire early voting period and voting day. Number two, my card is only active as of Election Day, so I can't vote early; I have to go and vote on Election Day. And third, it expires next year. That's odd. So that's very real. And then, since this is airing after the election, depending on how that goes, either sorry, I tried, or you're welcome.

Yeah, I guess I'm going to take that cue. This is Mona. Well, first of all, thanks so much for having us, and me. I'm so excited for this conversation; I'm really looking forward to it. I'm not American, I'm European, and I've had the privilege of witnessing everything that's been going on this year in this country. And the reason why it's a privilege is that I really do feel that democracy is happening here. People are really taking to the streets, are queuing for hours and hours and hours to cast their vote. The conversation is alive and kicking. A lot is broken, for sure, but I'm inspired in a way by what is happening and by witnessing that. And, you know, at NYU we've been having really important conversations and initiatives around everything that's been going on. With my students, I've been able to fold all of this into the classes that I teach at the School of Engineering as a social scientist, to really get those kids to think through what all of this means for the intersection of technology and society, and vice versa.

So it's a really tough time, but it's also a fascinating time.

And I'm keeping my fingers firmly crossed for the United States.

Thank you so much for sharing your feedback and your thoughts in this crazy time right now. So let's talk about the intersection of technology and society, keeping with the motif of government today. What's the deal with government and AI? Really, what should we know about how the government uses AI today?

So Mona and I are working together to understand specifically the government procurement process, and it all sounds really boring and snore-inducing, like, wow, vendor procurement, so hot right now, but it actually is. So if we look back on the past few years at some of the biggest instances, the big aha moments in government of, like, wait, you were using AI to do what? What went wrong?

As we were critically thinking about it and digging into it, you know, we really started to see that there is this failure of governance. And what we mean is that we don't have the right processes in place yet to understand the impact of these technologies. And it's quite different, you know, to use a technology in the public interest or for public use than it is to make a technology for a private company to sell to citizens.

So say I were to make a startup, and the startup, let's say, I don't know, is a doorbell that identifies who's at your door. And I sell it to private customers, and I say, well, it works about 80 percent of the time.

And that's actually fairly good in terms of recognition. You're like, all right, cool, private use product. If I then go and take that specific technology and, let's say, sell it to the government to identify people on the street, and it is only 80 percent accurate, then (a) there should be a discussion of why government is using this technology to identify people at all, but also (b) is 80 percent accuracy good enough for something that's being used for public purposes? And in short, the answer really is no. Government is not accountable to shareholders. They're not accountable to a paying customer client base who can just walk away; they're accountable to every single citizen in their constituency. So whatever products or policies they come out with are supposed to be for the benefit of everybody who lives there, not just for a particular subset. And that's really the core of the issues we're running into. And there have been some key cases where it's not even just that the oversight didn't exist; we have cases where the city council didn't even know that an invasive technology was being used. That happened in New Orleans in 2018. They had been using predictive policing technologies for six years without even the city council knowing it, let alone citizens knowing it. And it was because the vendor offered it pro bono to the mayor's office as part of a clean-up-New-Orleans type of initiative. And that's how they got their technology used.

Yeah, I can just chime in on that and sort of second it, if you allow me, and maybe zoom out a little bit and create a bit of a backdrop.

So what we've really been seeing on a large scale, not just at the city level, the local level, the state level, the federal level, but at the international level, is the formation of a narrative of artificial intelligence as the gateway to a prosperous, safe, secure future.

And so we've seen, and we do see, national governments really heavily investing in that space through their national artificial intelligence strategies, for example, but also all kinds of other interventions that are public, that are private, that are public-private partnerships, and so on. And let's not forget that governments are actually investors themselves in innovation and technology. Mariana Mazzucato has written a wonderful book, The Entrepreneurial State, where she explains that the national government is actually responsible for funding GPS, the iPhone and so on. So we're seeing this large narrative forming that positions artificial intelligence as a global arena for power.

And what's really interesting to see is, as Rumman said, how it trickles down to a local level and creates huge problems at the local level that need unpacking. So the question that we're asking is: how can we address this? How can we address this at a local level, with local issues?

Because let's not forget context. But how can we also be mindful of the bigger narrative that's been forming, and of society as a whole and the large-scale societal issues that we're seeing, among them, of course, inequalities that have been through the roof, and increasingly so, racial injustice over decades and decades and decades, the climate emergency, and how AI is tied to all of these? So we need to think these together, but on the ground. That's really the challenge that we're trying to address with the project that we're putting together.

And to think through why governments are even using AI: I mean, the case seems to be pretty clear, frankly. We know that most governments are strapped for cash; this is always a problem. It's really difficult to provide services for constituents. We know that often there are people whose needs are not addressed, because it's really hard to make a blanket policy or a blanket program that addresses everybody. And also, we know that there can be biases, and human bias is very real. And, to an untrained eye, a lot of folks have adopted the mantra of, like, oh, that's what the data says, or, if the data says so, it must be true, et cetera. So it seems like algorithms and creating specific models would be a more objective way of doing things like different types of government processes. And in California, actually, since we're talking about the election, the ballot (and I guess we may also know the outcome of this on Wednesday when this airs) includes Prop 25, which would replace cash bail with an algorithm to help decide bail decisioning.

And it's really fascinating to see that the Democratic Party is largely in support of this and the Republican Party is largely against it, so it isn't necessarily a partisan issue. It's really an issue, as Mona said, of thinking through what the needs are of the constituencies, getting all the people together and having an informed discussion. But it's hard to be informed when a lot of the people who are in the room, who are either making the decisions or who would be impacted by the decisions, don't really have visibility into how this technology works. So specifically, to go back to what Mona and I are working on: we're trying to identify the key gaps in vendor procurement policies. So what is vendor procurement? It's the process by which the government figures out what companies to work with to provide these technologies. So if a law passes and the state of California institutes algorithmic bail decisioning, they're not going to create their own models to do this; they're probably going to hire companies. How do you pick and choose a company? And of course, they have processes for assessing different vendors. If you've ever worked for a small company or had a small business of your own, you've gone through the vendor process. But it never asks you about things like ethical use or responsible use, and it definitely would never ask you about your technology and your algorithms, if you're using them to provide the output. So we're really focused on artificial intelligence specifically, but also on the processes that governments can use to assess people trying to use technology that's often been built for private consumption in the public arena.

Looking maybe historically, who have the stakeholders been, and how have these decisions been made on procurement? I guess, who has been in the room? And based on your research and your work, who should be in the room in order to make these decisions, especially ethically?

I think a big part of the procurement process, and this is what's been highlighted by so many of the examples of the public use of technology, is that often citizens are the last ones to hear. So one thing that I think a lot of folks would like is a more proactive, standardized form of oversight. So often in the space of ethics and technology and responsible use of technology, we're in a reactive position where something harmful happens, and then we're all running around like, oh my gosh, this bad thing happened: (a) let's inform people that this bad thing happened, and then (b) let's do disaster mitigation or harm mitigation. What we're trying to do, and honestly what the purpose of these procurement processes often is, is to think about what the risks are, what risks could be introduced, whether this is a risk we want to take as a government organization or entity, and whether we have buy-in and approvals from the right kinds of people. And this would also likely include some sort of public input, or maybe city council hearings or town hall meetings where this is openly discussed and debated. What's problematic is if decisions are being made behind closed doors, frankly, without even elected officials knowing about these technologies.

If I can just add to that as well, coming back to the business models and the procurement process and the ways in which these situations unfold. I just want to point to Virginia Eubanks' wonderful book, Automating Inequality, where she basically says that these kinds of technologies embody a neoliberal narrative of scarce resources and are deployed as such, and that they are about scale. Scale is something we should really bear in mind here: they are about scaling up decision making, or the automation of whatever process we're dealing with. And they are also used strategically to disguise stories. Virginia Eubanks has this wonderful point that numbers hide stories, and the way these stories get hidden in these systems is really problematic and is deeply entangled with the neoliberal narrative of scarce resources. That's going to become acutely important as we try to navigate the impacts of the pandemic and this recession.

So, you know, maybe think about that as well when it comes to bringing in the people who are affected by these systems. There's a lot of interesting and important work going on. But there are also really innovative local governments who are taking a decisive step towards creating transparency, just in terms of who they're working with. For example, two weeks ago, the city of Amsterdam and the city of Helsinki created a register of the artificial intelligence systems and data science solutions that they're using in their governments. So that's a really interesting step to see, just as a way in which the conversation can be started. And I think we don't talk nearly enough about these kinds of relationships between companies and local governments, and how this is related to, as Rumman says, literacy: literacy among key decision makers about where to go and what to do when they face a certain problem, or just a narrative whereby they feel pressured to automate something.

That's a really good point, Mona, and you're bringing up this notion of power dynamics, which features a lot in the work you do, when you think about institutional power dynamics. When we introduce algorithmic decision making, let's say, to make Medicare and Medicaid decisions, which has happened, or, where I am now in the city of Houston, where it was actually being used to determine public school teacher promotions based on student performance metrics, the power dynamic is such that the person on the receiving end of the algorithmic decision is often not empowered to protest against the output. And what it adds is another layer of difficulty, another layer of bureaucracy to go through. So let's say, and this actually is what happened, teachers in Houston were either not being promoted or being reprimanded, et cetera; obviously there were some sort of decisions being made about their performance based on this algorithm. They did not know, or did not have the resources, to push back. How do you question an algorithm when you're not a trained data scientist? And on top of that, these are private companies.

You're not going to be granted full rights and access to their data and their models. I think a lot of people's first exposure to algorithmic inequalities and biases, and how they impact the public, was ProPublica's amazing piece on COMPAS, the Northpointe algorithm that did parole decisioning. And the thing is, the only reason we know how the COMPAS model works, the questionnaire behind it, is because there actually had to be a court case, and the documents that were used had to be put into the public record. Otherwise we'd have no visibility. And this is not to say necessarily that the public needs to have visibility into every single aspect of things, but definitely the individuals making the decisions should be able to interrogate these systems meaningfully. So what is the way we can think about meaningful interrogation and understanding, by framing the appropriate kinds of questions and then being able to actually ask those questions of the system that's being built?

Why are people doing this? And I guess more specifically, why are companies wanting to work with the government in the first place? What do Northpointe and Palantir have to gain from putting predictive policing in? What do different companies have to gain from putting algorithms into schools to grade teachers? Is it efficiency, perceived neutrality, money? What's going on here?

I mean, I think it is a good sector to be in, frankly. And also, to go back to the point that Mona made, the history of technological innovation is actually the history of government, right? And often specifically the history of the military. So the very first computer that was created in the UK would not have been created without a grant from the British government. The backbone of the Internet comes from DARPA. It is unsurprising that companies go for public funding, because it is often a place where you can get unfettered dollars. Another way to think about it: often when a private company gets investors and investment, it is with an expectation of return. So if I were raising money for my company, I'd go to a venture capital fund and they'd give me money, but there is an expectation, right, that I will deliver on that return; they will get some sort of money back. If you get funding, let's say, from the National Science Foundation, they're not going to ask you for that money back. And actually, the NSF has extensive grants for startups that can go into the millions of dollars.

It can actually mean free cash in some of these cases. Also, why would a company do, for example, pro bono work? I think it's a good way to illustrate the capabilities of the technology you're building. I don't want to necessarily ascribe malicious intent to every single entity. I mean, this is the thing about responsible and ethical use of technology: if this were about malicious intent, we'd all be working in cybersecurity. This is actually about unintended consequences. People have the background to create technological tools; they don't have the background to implement said technological tools in society. Those are two different skill sets. So I think there is also this feel-good moment of, wow, we made this technology and it helped improve this problem, this perceived problem that people seem to have in the world. But again, going back to the literature of all the folks who work in the ethics and technology space: we solve the problems that we see. We think we're solving problems, but really, they're the problems of people who are just like us.

Yes, I would definitely agree. It's a good sector to be in. It's absolutely safe money. It's, you know, the crown jewel to have a government contract. And let's not forget that the large tech companies that we have now, many of them are actually on large government contracts, not research funding but government contracts. So that's a very good way of getting your business solid. The other thing is that working with governments, even pro bono, and even with just, you know, small local governments, is a really good way of understanding that sector and developing bespoke solutions that are then there to stay.

The one example that comes to my mind is not actually a current one; it's one that I teach in class, which is the collaboration between IBM, or the German subsidiary of IBM, and the Nazi government, whereby it was an explicit business strategy to work with German government officials, with the Department of Statistics, in close research collaborations to develop bespoke solutions that were deeply entangled with the larger ideological project of the Holocaust.

And they saw potential there because they saw it would be scaled. That same mechanism is still in place.

And it's so interesting, because you've brought up scale in amazing contexts twice, Mona. Even this notion of scale doesn't really work with how we think of government. We think of government as an entity that's supposed to provide for me specifically. Frankly, I can make the case that government isn't supposed to scale; government isn't supposed to be a hyper-efficient entity. I mean, it's such a common refrain in the tech space, or actually increasingly a mantra being adopted in the government sector, that regulation and law need to keep up with the pace of innovation. I actually don't know if they do. I don't think they're intended to. And literally, as a scholar of political science: if we look into not just constitutional law specifically, but democratic processes and what leads to longer-lived democratic institutions, one of the factors actually is that it is not so easy to quickly change the basic laws of your country, because otherwise the country would just suffer at the whims of whichever party is in power, which leads to significant instability.

If you want democracy, what you're actually going for, often, is stability. And obviously it needs to be stability that is balanced with people's voices being heard, et cetera, because otherwise a form of, quote unquote, stability would be some sort of crazy authoritarian regime.

Definitely stable: you'll have the same leader in power for 80 years.

But that's certainly not a democracy. But, you know, I always wonder if people really understand what they're saying when they say they want to have regulation and law and government move at the pace of innovation, because I don't actually think it should. And I don't think those people would want it to.

Transparency seems to make sense to me.

Even when I think about it as a citizen of the United States, even the fact that my tax dollars are going to public funding at some level.

And I would assume that a lot of folks would, at least at the 30,000-foot level, be like, yeah, transparency, this makes a lot of sense. And I'm curious: what are the barriers to that? Why don't we have transparency in these systems? Because I think it is easy to get to the point where we're saying, oh, there are malicious actors out in the space who are blocking X, Y and Z. Is that really the case, or is it more that the systems we're in, as you're both talking about, are at a level of scale where transparency is just very difficult to achieve?

Yes, sure. I think, frankly, transparency is a political choice. I do think it is feasible. We talk about the so-called black box problem very often, where we hear: the algorithm is so complex, it's a deep learning technology that adapts to the context, we can't actually tell you why it got to the decision that it got to, and we're sorry for the negative impact it created on you. And Frank Pasquale has written, in his book The Black Box Society, about how it's actually not just that, but also the bureaucratic box that is built around these algorithms and the legal protections, the technology being proprietary, or the algorithm being proprietary, and so on. I do think all of this can be remedied. I do think we don't need to know the ins and outs of the particular algorithm to understand the potential social implications. This is what a lot of wonderful critical tech scholars, Safiya Noble, Joy Buolamwini, Meredith Broussard, and so many more, have been saying for a very long time. These issues have been going on for a long time. I also think that we maybe need to think about transparency as a sort of multilayered issue, or something to aspire to that has different steps to it. A first step can be, like we've seen in Helsinki and in Amsterdam, just making public what technologies are being used. Then we can work on what they need, who maintains them, and we can work on things like procurement, as Rumman and I are doing. But there's no silver bullet for fixing the transparency problem.

And the other thing I'd add, to take a step back: why do we ask for transparency? And the term that gets thrown around a lot is explainability. It's so interesting, because it's rooted in the General Data Protection Regulation, which had nothing to do necessarily with the use of artificial intelligence; it had to do with data protection. And it was also in Europe. And it's really interesting, when we think about government and law, how much of it is often like a flag-planting exercise. So because the GDPR mentions transparency and explainability (without actually, by the way, defining those two things, but in regard to the use of automated decision-making systems), those two things became things we talked about. It wasn't as if we had always talked about transparency and explainability; the GDPR came along and they sort of became part of this narrative. So it's really interesting to think about why we talk about these terms, this historical flag-planting exercise, because it does actually route back to government. But the other thing I'd say is, transparency is a really interesting term, as is explainability. And again, going back to power dynamics: I can have full transparency into how a process happens, how a decision is made.

But if I'm not granted any agency or any ability for redress or recourse, then who cares, to be honest, right? There's all this talk about making credit decisioning transparent, and I'm like, that's good, but let's say it doesn't go in my favor. Do you have any processes in place so that I can do something? Am I just supposed to know that the algorithm screwed up, and that's that? But also, when we think about transparency, we do actually have some of those protections built in in the United States. So some of the earliest successful litigations of algorithms have been on the grounds of a lack of due process. So we do have a right there. And this is actually how the Houston public school teachers won their case, and a few other cases in Medicare and Medicaid: if it cannot be sufficiently explained how a decision was arrived at, then it is actually in violation of due process. And I'm not a lawyer, so I'm not going to try to explain this any further; that is my political scientist, social scientist understanding of it. Which is in a sense great, and kind of leads to another related topic: we don't necessarily have to create brand new laws. And again, this is why Mona and I are thinking a lot about what the existing procurement infrastructures are and how we support them and build them out. What exists today, and what can we use to bolster it? So it's actually really wonderful that we have a right to due process, because right now it gives us a foothold when algorithms are being used in a way that's harmful to us; we wouldn't have anything else otherwise. So great, given we have that, that's good. What else do we need to ensure the right kinds of protections for the average citizen? And, as Mona also touched on, transparency also ends up relating to literacy.

Once you put information out there, we're actually not sure how people are going to understand it, or if they will. Because one of the assumptions I think we're all making is that, oh, we're going to put this information out there and then people will understand, or realize, that this is something we should be worried about. People could very well just look at it, or not look at it, frankly, and be like, OK, I guess. And then, in a sense, it would be the same sort of situation where government entities or other parties can say, well, we put it out there, no one seems to care. Anyone who's worked in privacy will tell you that this is what they've been suffering through until literally last year, maybe; in tech they even called it the privacy paradox. And the narrative for so long has been: oh, people absolutely are willing to give up their data and information, they don't care, as long as you give them a new app they're more than happy. And even: look, they all signed end user license agreements, we're fully transparent in our end user license agreements, they all agreed to it, what's the problem? We did all the things. So education, literacy, et cetera, become really, really critical here, because none of this is of any use unless people know what to do with the information.

So maybe this is a really naive sentiment, but I guess I'm wondering if this has to be something that is a legal solution, because of the problems that companies have created by choosing not to implement transparency in their design process. Because I feel like when we talk about transparency and how important it is for the end users and the general global population, it's clear to us that this would be an easy solution. But I don't understand why companies aren't doing it. Does it have to be something that they're forced to do legally, or is there a way to, I don't know, maybe do some norm-setting or some sort of value motivation here to get companies to realize this is important and good for them? Where is the disconnect?

I mean, in a sense, there is one: companies want to protect their intellectual property. And if I were to put out there how my technology works and what it is, there's really nothing to stop a bigger competitor from just building the same thing, right? In a sense, it's kind of why things like the patent process exist. The patent process literally exists so that you can put your innovation out there in the world and other people can use it; they just can't make money off of it, and you have some sort of rights to it. So it's a really cool exercise to think about what might be the groups and bodies to enable that kind of transparency. But also, again, when you think about who would have this sort of transparency and visibility, ideally there would be some sort of trusted entity that would have this knowledge and information and be able to, in a sense, do this for us. And there are a lot of folks, like Lilian Edwards, who do work on data trusts. And the whole concept of a data trust is, you know, similar to going to a credit union or a bank: you trust that they're going to do the right thing with your money, or whatever it is.

A data trust would be: instead of having to work with every single app and tell every single app your preferences, there would be a sort of third-party body, and you would align with the body that has the kinds of values that you want applied to your data. And then they act as that data broker, a data entity, for you, so you don't have to worry about it. But that all sounds really great in theory. In practice, it's like, who is going to go do this? Who's empowered? And we don't really have those bodies in the United States. It's really interesting, because just this week there was a report from the Information Commissioner's Office, the data protection body in the United Kingdom, on how Experian in particular, but also TransUnion and the other big credit bureaus, were actually repackaging and selling people's data without their knowledge. So in the UK, they do have a body that protects rights, and this was in violation of the GDPR, which I mentioned earlier, and certain data protection laws in the UK.

And it's an interesting mirror when I think about the US and what government structures we have and what sort of protections US citizens have. So in California, there's the California Consumer Privacy Act, which is kind of a mirror of the GDPR in some ways. So we do have more and more protections and laws coming up, and it's been really interesting to see how so many of them come from the state and local level. They kind of come from the bottom up rather than the top down, which, at least to me as a political scientist, is a really interesting exercise in democratic institutions and democracy. But also, back to Mona's point about scale: it's interesting, because scaling is actually about creating the most generalizable possible application of something. Right? That's what scale is. It's not just making it bigger; making it bigger means it has to be as generalizable as possible. And yet, as we're seeing, and as the phrase goes, all politics is local. So all of the stuff that we care about, frankly, is about our local lives. It's not necessarily about these globalized, general human needs. It's often just about the things that impact our day to day.

I think what is also important to note on that one is that, you know, we hear the tech industry crying, not crying, but kind of saying, well, we need more regulation, why don't you regulate, the ball's in your court, why don't you do it, we want to be regulated. We've had so many official calls from tech leaders saying, we want regulation, please, you know, please tell us what to do, we can't possibly do it on our own. And I think, again, this kind of links back to a silver bullet kind of mindset, whereby the idea is perpetuated that one single regulation can fix all these problems. And we've had these hopes put on the GDPR, and the GDPR does a lot of things, but it does not do everything. And it's been really interesting to see how it's been unfolding in Europe, but also, for example, for American companies operating in Europe, and so on. And one of the things that is really important to look at with the GDPR, to see the problem with these, we call them omnibus regulations, is the notion of consent. And Rumman has just alluded to that. The question is, in what ways can an individual user give meaningful consent, given that there are things going on such as data trading, data brokering and so on, in a context where it is simply not clear what the technology is that you're actually using when you're signing up for it for the first time and exploring it, or where you just get tired of clicking consent.

So the mechanisms by which protection is enacted, for example through the GDPR, are something we also need to think and talk about. The other thing that we need to think about and talk about is the notion of data being the new oil. We've heard this so often. And, you know, Shoshana Zuboff wonderfully came out last year with her book The Age of Surveillance Capitalism, really describing how we're being brought into a context where our technology experience is captured for a market dynamic, and everybody's trading what she calls behavioral futures. But when we talk about oil, we talk about extraction. And that's a very violent process, and a process that is damaging in the long term. So what I'm trying to get at, again, is what I started with, which is: let's think about the narratives that enable these kinds of harmful practices, because these may very well end up being the ones that we come up against when we try to come up with meaningful regulation.

That's a really good point, Mona. And actually, to Zuboff's book: the thing I worry about a lot is what I call the privatization of our public infrastructure, our public digital infrastructure. And oddly enough, I wrote my dissertation in political science not specifically about tech, but essentially about the same concept with regard to the military, and how military cities' public expenditures, social capital and infrastructure are entirely shaped by the fact that they are military cities; their political economy shapes everything about them. I went to the University of California, San Diego, and San Diego is an artifact of significant military influence. But we see the same thing with technology as well. And what's interesting is, I think for a lot of people, you know, the average person, you don't see these things.

These are not physical, tangible things. I can go to San Diego and I can tell you exactly why the beaches in particular areas aren't as well built up, why the airport is located where it's located, and why there isn't good public transit. You can point at very particular things. And the difficulty with a lot of technology is that you don't really see it; you can't visualize an algorithm. So if I were to say, you know, our highways and roads are now owned by Tesla, and Tesla is in charge of making sure that the roads are OK, and anything that happens on those roads, you've got to take it up with the Tesla folks, I think people would be horrified, because that is considered to be a public good, and we would demand certain public entities be in charge, et cetera, to ensure equitable use and care and accountability. But that's kind of what's happening: our digital public infrastructure is being privatized. It's kind of scary to think of. When we think of, let's say, a smart DMV, that sounds great. Wow, wouldn't it be wonderful if I didn't have to go into the DMV, and I could do all the stuff online and blah, blah, blah, and they could check my biometrics to ensure it's me, so I don't have to go in person and try to figure out where I put my Social Security card.

But when we think about the fact that it is actually a private company that has built that infrastructure and now has access to that information, that's really critical. So when we think about public use of technology, it's really top of mind now because of COVID; like, how do we not talk about COVID right now, right? These are all private companies building technologies and selling them to the public sector. And because there's an immense amount of pressure to adopt a solution and we don't have the infrastructure in place, that becomes kind of a dangerous combination. And what you're now getting are private companies that have access to really personal information about people, and sometimes, actually in almost all of these cases, health information, which is traditionally PII, protected personal information. But now all that's being broken down, because if I am selling a health tracking application to the government, they can't then say, well, you can't have access to health information for your health tracking app. Of course, they're going to have to relax that law.

And now this private company owns this information about me. And it's also worse in places where these things are being made mandatory. So, again, this is the difference between private use and public use: in private use, you can say, well, don't use the app, don't buy the thing, or whatever, but you can't actually do that in the public sector, because I can't not live where I live. If my government requires me to download a health tracking app, I actually have to do it by law. And again, going back to this idea of agency and accountability: I can't go to anyone and say, I don't want to participate in this, or, where is this data being sold, or, I want my data back. I don't necessarily have rights.

I think this is a humongous problem, the private ownership of public infrastructure, of data and so on. It's a problem for the long term, for sure, because we are locking ourselves in. But I want to link this back to how we started this conversation, which was on the election, and I want to share some research experience from my time in London, when I was at the London School of Economics and did research on a public housing estate that was sold to a private developer who was then mandated to put in a public park, but on private land. We were working with the local community there, and to our surprise, what we were hearing was that people were quite happy with the development, because they had lost all trust in their local council, that it would be able to maintain that infrastructure, that it would be able to provide the services needed for this to flourish. They had way more trust in this large corporate developer than they had in their local government. And so the issue of trust in government is really something that trickles down to the bottom and that enables the kinds of shifts that Rumman has just described. And this is something we need to address and think about and talk about and change when we talk about the harmful impact of technology.

And I will disclose, as you were both talking, I was thinking about my time at the DMV last week and how easy it was for me to fall into, well, why don't they just automate this? Right? And that's kind of coming from a place of privilege; it's also coming from a place of just not being critical about some of the downstream impacts of it.

I mean, but it's also a really valid one, right? Going back to the value proposition of why technology is so lucrative to government: these things still hold true. Government lacks sufficient resources, and technology can actually be used to improve things. We should actually be able to get our driver's licenses without having to wait in line for 18 hours, go through a painful process, and, right now, potentially expose ourselves to a disease. That is very much an OK thing to want in life. The issue here, and where the gap is, is just having the right kinds of protections, right? And I think Mona made such a great point. One of the things about the pandemic that I haven't seen as much discussion about, but that was kind of a fear of mine early on in the pandemic, was that we would see a colossal failure of government and a colossal win for big tech companies, because our survival in the pandemic is enabled by all the technologies that we have. Right? We're talking now over Zoom because we're all in different places, but also because we can't even meet in person.

So even if we were in the same city, we would still be doing this on Zoom. We have our supplies, those of us who have the privilege to, because of companies that do delivery services, whether it's food, et cetera, or just everyday needs. So, to Mona's point, I really do think we place a lot of trust in Silicon Valley, and tech companies really did build a lot of trust capital. We have been provided, presumably for free, but we know not really for free, with so many really innovative and revolutionary technologies. But now, I think, especially when we think about its encroachment into our public lives and also its invasion into our homes: there is no difference between work and home anymore; there is no difference between school and home anymore. And that public/private barrier, where maybe we had some sort of right to privacy in our private homes, those are all broken down now. So we do really need to think about what sort of rights and protections people have, because obviously the use of this technology is helpful to us, and we should use it to improve government services.

As we move towards closing, I'm going to take us in a slightly different direction. So previously, in my first career as a minister, we used to talk about this idea of calling, and we used to talk about calling as following what makes you come alive. And one of the things that Jess and I always appreciate about the two of you, when we see you talk, or when we're talking to you right now, is that when you're talking about these issues, there's a certain aliveness in your voice and your inflection and in your eyes, et cetera. And I'm just wondering, for you all, right now: what is making you come alive in this work? And maybe a little of your story of getting to this point?

I'm a political scientist by background; I'm a quantitative social scientist. And the book that first got me into society and technology was The Filter Bubble. And it happened because I was teaching a class at Grossmont Community College in El Cajon, California.

And I always really loved teaching my community college kids, because they're always really honest. I mean, for those of us who were always good kids in school, and I was also always a good kid in school, you would do your best to make sure you said the answers the teacher wanted you to say. But when I'm on the other side of the table, I want my students to challenge me, and my community college kids never disappointed, and I love them for that. And I once had a student tell me, I don't believe in climate change. And I really thought about it, because I certainly respected her; she was smart. And to me it was just such an incongruous sentence, because I'm like, climate change is science. It is fact; it is not an opinion. What do you mean you don't believe this? I mean, you can believe or not believe whatever you want. You know, I could not believe the sun's going to come up tomorrow, but it's going to come up. That's science, right? That's just how it works. But then I really started to think about what would make her think that way.

And then I just sort of did some research and kind of fell into learning more about filter bubbles and social media. And especially, you know, again, we were talking about the election: it's really important to think about human confirmation bias, our unconscious confirmation bias. And this is a long-standing tradition in political science; we learn all about confirmation bias. We can be presented with two different pieces of media, one that supports us but is spurious and one that is well researched but goes against us, and we will believe the one that is spurious. And, you know, now it's a constant topic of discussion when we think about the use of algorithms, et cetera, and how technology impacts even our perceptions of the world. So my first entry into technology and society was actually thinking about the impact of our perceptions on politics and society, and, especially when we think about the elections today, the role of social media in shaping our understanding of what's even happening around us.

What makes me come alive in this space is all the mind-blowingly brilliant people who are in it, who are leading it, who have been leading it for a long time, and whose work inspires me. I already mentioned a couple of names in this conversation, but, you know, the usual suspects: Ruha Benjamin, Alondra Nelson, Safiya Noble, and so many more. Rumman is one of them. I was actually fangirling before I met her in person, and I finally met her at a conference in Oxford, and I am very fortunate to work with her now. What I also really appreciate is that there is a real commitment in the space that we're in, in the scholarly space but also beyond it, to facilitate change. In that sense, it's really tied into the atmosphere that I'm experiencing here in the United States, which is very vibrant, very alive; there is something in the air that I find inspiring. And the way I got into this space: I'm a sociologist by training, and I've been working on developing a sociology of design, and of inequality more broadly, for many, many years. I've worked in architecture, worked on cities and technology, worked on electric lighting and social housing. And I actually came out of my PhD and interviewed with a big tech company for an ethics job. And I was interviewed by a philosopher who asked me what kind of values I would recommend the engineers code into the machine. And that really rubbed me the wrong way, and I thought, if this powerful company thinks about the social in that way, we have a problem. And I jumped into the field, and what I'd been working on before is just a really neat fit for what is going on with AI specifically. So that's how I got here: I got upset in an interview. I could see that happening. Mona and I mutually fangirled over each other.

This was the One Hundred Women in Ethics conference that was at Oxford last year. I mean, I think that entire conference was everybody fangirling over literally everybody else. It was also one of those amazing moments.

Speaking of social media, where it's like you met all the people that you loved on Twitter and then you realize that like they liked you, too.

And it was like a digital equivalent of passing a note in third grade, like, do you like me, check yes or no. And then you realize that, like, Lilian Edwards knows who you are, and you're like, oh, that's surreal.

It was an incredibly, incredibly fun conference, and it was really great. And I'll echo Mona: one of the things I love the most about the ethics and tech community is what a genuinely collaborative space it is. And it's interesting, because now that it's such a popular field, there are more and more people coming into it, and more and more people from other disciplines. And when I talk to people from other fields, other than the open source community, which is another pretty open and collaborative space, a lot of people are really surprised at how much people share the research they're doing, share their work with each other, and are constructively critical. Whatever magic it is that built this community built it well, and I hope that it sustains, to be honest; we have something really wonderful here. And I think it's unfortunate how sometimes the people who work in ethics and technology and responsible technology are kind of painted as this wet blanket, or the naysayers, or the pessimists. I actually just wrote an intro for the Montreal AI Ethics Institute, which just released its State of AI Ethics 2020 report; I wrote the intro to the Future of AI Ethics section, and that's kind of what I wrote about. We're often framed as the pessimists, and I argue that we're actually the true optimists.

We're the people who see the technology performing at the bare minimum and say, no, no, we can do more, it can do better. Because if we were truly pessimists, we would just not be doing this at all; we would be living in cabins in the woods, right? But we choose, as Mona said, to jump into it and try to improve it. And sometimes that means pushing people to face uncomfortable truths. And that doesn't make us bad people or negative people. It frankly makes us the most optimistic people, because we actually believe in the human condition, we believe in the potential of the technology, and we believe that it can get to a better place.

Yeah, and just to round that out a little further, I want to give a shout-out to my students. As a professor at NYU, I'm so privileged to work with fantastic students every semester who really amplify that energy, who soak up everything that comes out of the critical tech space, and who are really, really ready, or at least intending, to hit the ground running and do something. And I'm surprised, as you can see from my Twitter every semester around midterms and finals, at how well they're doing and how ready they are to take this on. So when we say this generation is not political, that is absolutely not true. And I can't wait to see them graduate and enter the workforce and move things around and shake things up.

I love how this conversation went from critiquing government and AI to fangirling about ethics and the critical tech community; that's always our favorite turn to take. But Mona and Rumman, thank you so much for coming on and being two of the people that we really look up to in this community, and for doing all the work that you both do to facilitate change in this space and in many other spaces in this field. So thank you for coming on. It's really been a pleasure.

We want to sincerely thank Rumman and Mona for a wonderful and very timely conversation. And for this episode, in lieu of our normal banter and debrief and conversation, we are going to let their words speak for themselves.

We recognize that at the time of us recording this episode, which is the eve of November 3rd, and at the time that this episode is airing, which is the morning of November 4th, there is a lot going on in the world right now. And whether or not you are a resident of the United States or are even associated with the election in any way, there's a lot of hardship happening around us.

So we just want to sincerely express to all of you that we're in this together, and we hope that you're doing whatever you can to take care of yourself right now. For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Catch our new episodes every week on Wednesdays, join our conversation on Twitter at @radicalaipod, and, as always, stay radical.
