Mar Hicks is an author, historian, and professor doing research on the history of computing, labor, and how hidden technological dynamics change the core narratives of the history of computing in unexpected ways. Hicks's multiple award-winning book, Programmed Inequality, looks at how the British lost their early lead in computing by discarding women computer workers, and what this cautionary tale tells us about current issues in high tech. Their current project looks at resistance and queerness in the history of technology.
Kavita Philip is a historian of science and technology who has written about nineteenth-century environmental knowledge in British India, information technology in post-colonial India, and the intersections of art, science fiction, and social activism with science and technology. She is author of Civilizing Natures (2004), and Studies in Unauthorized Reproduction (forthcoming, MIT Press), as well as co-editor of five volumes curating new interdisciplinary work in radical history, art, activism, computing, and public policy.
Follow Mar Hicks on Twitter @histoftech
Follow Kavita Philip on Twitter @techno_kavi
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.
Relevant Resources Related to This Episode:
How to buy “Your Computer is on Fire”:
Buy from a local indie bookstore near you: Indie Bookstore Finder (Indiebound)
MIT Press: Your Computer is on Fire
Additional Links:
Programmed Inequality by Mar Hicks
Facebook, Twitter, WhatsApp face tougher rules in India
Simone Browne (author of Dark Matters)
Transcript
Computer_On_Fire_mixdown.mp3: Audio automatically transcribed by Sonix
Welcome to Radical AI, a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess.
In this episode, we interview two of the editors and contributors for the new book, Your Computer is on Fire, which launched yesterday, March 9th, from MIT Press. As you may have guessed from the title of this episode, we are so excited to be speaking with Mar Hicks and Kavita Philip. Your Computer is on Fire is a breakthrough collection of essays which explores how we can begin to fix our broken high tech infrastructures.
This book includes reflections on topics such as decolonizing the Internet, sexism as a feature instead of a bug, the non-neutrality of robots, and much, much more. We had the opportunity to read the collection before this interview, and we have to tell you, we cannot recommend this book highly enough. So if you haven't already, go read it and go buy it. You can find a link to get the book in our show notes. Now to introduce our guests for this episode.
Mar Hicks is an author, historian, and professor doing research on the history of computing, labor, and how hidden technological dynamics can change the core narratives of the history of computing in unexpected ways. Mar's multiple award-winning book, Programmed Inequality, looks at how the British lost their early lead in computing by discarding women computer workers, and what this cautionary tale tells us about current issues in high tech. Their current project looks at resistance and queerness in the history of technology.
Kavita Philip is a historian of science and technology who has written about nineteenth-century environmental knowledge in British India, information technology in post-colonial India, and the intersections of art, science fiction, and social activism with science and technology. She is author of Civilizing Natures and Studies in Unauthorized Reproduction, forthcoming from MIT Press, as well as co-editor of five volumes curating new interdisciplinary work in radical history, art, activism, computing, and public policy. This is one of those episodes where the guests on the show are scholars that we've looked up to for quite a long time and have always wanted to interview individually.
And so the fact that we get the opportunity to interview them together about this exciting new collection was just very meaningful for both Jess and I.
And one last thing before we start the interview, Dylan. Do you know what is happening this week?
Is it a certain conference that has to do with fairness, accountability, and transparency?
That's FAccT, that's FAccT. That's actually the name of the conference: Fairness, Accountability, and Transparency.
It is the FAccT 2021 conference, which is currently still happening when this episode airs, which is amazing.
I mean, the speakers there, honestly, it's been really cool for us, because some of the people that we've had on the show, like Yeshimabeit Milner from Data for Black Lives and also Mary Gray and some other folks, it's been really cool to be like, we talked to you, and now you're keynoting at FAccT. This is amazing.
Yeah, it's a little bit unreal. And also it's a little bit full circle, because the reason why FAccT is so important to us and to the show is because me and Dylan met at FAccT last year in Barcelona, back when it was called FAT*, before they did a timely name change.
And if it weren't for this conference, this podcast wouldn't exist.
We traveled all the way to Barcelona to meet each other when really we were 30 minutes away from each other here. That's always how it works.
But it's very exciting for us to be able to celebrate. It's not the anniversary of the show; we'll do a big celebration for that in April.
But it is cool to celebrate, with this episode, which is such a big episode for us, the anniversary of the beginning of this journey that became Radical AI.
And now that we have been successfully gushy in this intro, it's time to dive right into our interview.
We are on the line today with Kavita Philip and Mar Hicks, two editors from the new collection, Your Computer is on Fire, which was just released yesterday, on March 9th, 2021. So, Kavita and Mar, welcome to the show.
Great to be here. Thanks so much for having us. Of course. And let's start off by just asking the first question about this collection, and I'm going to throw it over to you.
What is Your Computer is on Fire?
Well, it's kind of a weird title, so let me explain what it is. It's a book that's meant to get attention for so many of the problems that we're facing right now, but problems that have been a really long time in coming. They've been fulminating. They've been sort of making ripples before they made waves. And these problems range from things like biased data to racism and sexism in the high tech industry and all sorts of other things that we've known were big problems for a while. But it wasn't until recently we really started seeing our chickens coming home to roost, so to speak, and realized just what tremendous impacts they were having at all levels of society, not only impacting the groups most directly affected, which is not to say that that isn't very important, it is, but also impacting really everybody, impacting all of our major institutions, impacting our democracy itself. And so the reason the book has this kind of hopefully catchy title is because it's really pitched at an audience that is interested in reading about and learning about these problems and thinking about solutions. And that audience wouldn't necessarily be just an academic audience.
This audience would probably be everybody from undergrad students to recently graduated folks who work in tech, to even people who are not so, you know, recently in school, or who never went to college, who are interested in trying to figure out, OK, where do we go from here? Because for a long time we were undergoing this period of what got coined the techlash, but for a long time it was unclear what the ways out of the mess were. And I think now we have enough expertise, we have enough of a discourse, about, OK, not only do we have to do something, but here are specific actions we can take. And there's actually a really long and well-worn set of historical precedents for the sorts of things we need to do when industries overstep their bounds and get overly powerful in this way. So we do have a playbook. Not to say it's going to be easy, but this book is hoping to be part of the solution to some of these problems that we're facing right now.
So let me couple that audience with the context that got us thinking about the problem.
Everybody notices the contentiousness in discussions about tech today. Right. One of the things the book tries to do is take a step backwards and assume, at least for a moment, good faith on all sides, and give ourselves as writers, as historians, anthropologists, sociologists, activists, give ourselves a task: how would you explain this problem historically, but with an eye to its solution, not only with an eye to understanding history for its own sake, but to understanding the society in which we now live?
So with the audience, maybe it's an intern who works for Google, or a tech executive who's trying to figure out how to get into emerging markets, so-called. What are the ethical, historical questions we'd like them to think about before they get into that big project? Now, if the audience is contentious, on the other hand, the context is amazing right now. Could the context be better for a book like this? We work in the context of incredibly supportive colleagues, crosscutting, productive, catalyzing conversations. So in addition to an audience of people that we hope will read this in good faith, we also work with the people not in the book, right, incredible activist groups all over the world, the Data Justice Lab, people who are working in activism in labor and tech, like Meredith Whittaker, Lilly Irani, Simone Browne, the work of Ruha Benjamin. So even as we're citing so many people in the book itself, we're incredibly fortunate to have gathered such amazing authors. We're also speaking to the people who made these conversations possible in the first place.
That's something that's really struck me about this collection, too. So I have a personal story about this collection, which is that after we were talking about possibly doing an episode on this, I started looking through the authors of the collection, and I saw someone by the name of Dr. Andrea Stanton, and I was like, I know you! She's a professor of mine at the University of Denver. And I started looking through some of the other authors, too. So she's in religious studies, but you have folks from all these different disciplines coming together to talk on these issues. And I was wondering if you could talk a little bit about how you curated the collection, and then also the importance of having all those different voices represented.
Oh, I actually wrote an afterword. We have two introductions and two afterwords, because we have four editors. We decided that might be more useful to the reader than having one introduction. And so I tried to talk about why we need a difficult interdisciplinary conversation.
Right.
This is why we have people from so many different fields.
It's because we're trying to stage the conversation we'd like people to have in their own context. As you pointed out, that professor is at your university. Perhaps when you studied with her, you didn't have this conversation, because the topic was, I guess, religious history. Right.
And what we say is that there are a number of errors underneath the conversations we fail to have.
There are certain errors, and I kind of list the errors that we often make when we're trying to have interdisciplinary conversations.
And often when Steve Jobs says, oh, we need humanists to design computational interfaces, right, what are the kinds of things we're assuming about why we should look to the humanities?
Right. And I made a list of four things we assume. One, we should appreciate the great books, which usually means the canon of dead white men. Two, we should find mediating brokers to dumb it down, which assumes a dumb audience, which we refuse to do. Three, we should mix and stir, which assumes hermetically sealed, static bodies of knowledge in different fields.
We can just mix up and stir together. We don't assume that. We assume the fields of tech and humanities fundamentally change when you think them together. And four, we often assume we should, as they say, move fast and break things. We refuse to do that; we move slowly and we try to fix things, or to think about how we can collectively fix them. And so those general principles led us to look for the people who would help us have those conversations.
Yeah, I really like that story, and I agree with everything that Kavita just said. And it strikes me, Dylan, that what you experienced was sort of the same thing that we were experiencing as we were trying to write this book and assemble the volume and the people in it. And that's that this is not a problem that is confined to any one field. So it's not confined to CS anymore. It's not confined to the STEM fields. It's not confined to any one or multiple humanities fields. And that's why you see people all over working on these same problems, because essentially what we're facing right now is not a technological problem. We're facing a problem of infrastructure. We're facing a problem where our techno-social infrastructure, and even our economic infrastructure, our political infrastructure, has been so fundamentally changed by the way these technologies have scaled within the past 20 to 30 years that this is an issue that it's now everybody's business to be talking about and thinking about. And that's not to say that everybody has the same insight, certainly not, or, you know, kind of equal contributions. But it is to say that, you know, we are seeing some incredibly sharp thinking and solutions coming out of fields that I think previously, maybe a decade ago, we wouldn't have even thought about as, oh no, that's where we have to go look for solutions. Or maybe some people would have thought that's exactly where we have to go look for solutions, but the popular discourse was, oh, leave it to Silicon Valley, leave it to CS, they'll work out the bugs eventually if you just give them enough time. And we see that that was a very, very flawed understanding of how you get good infrastructure for a very large and diverse population.
In terms of where we look for solutions, I'd love to read a paragraph at the end of Paul Edwards's chapter, Platforms Are Infrastructures on Fire.
That's his title. And he says this is where we should look for solutions. Quote, if Africa is the Silicon Valley of banking, perhaps we should look for the future of infrastructures there, as well as in other parts of the global south.
Yet, he says, despite the glory of its innovations and the genuine uplift it has brought, this future looks disconcertingly like a large-scale, long-term strategy of the neoliberal economic order, enabling microtransactions to be profitably monetized while collecting the also-monetizable data exhaust of previously untapped populations.
These systems enroll the, quote, bottom of the pyramid in an algorithmically organized, device-driven, market-centered society, end quote. And so we also want to say the solutions are where you might not think to look. But when we look there, as in the various parts of Africa that have had French, British, Dutch, all kinds of colonialism inflicted upon them, and that have come out from a period of decolonization only to be slammed by this new thing we call neoliberalism, and to have data-driven monetization now define their futures: what do we gain from that space that teaches us about the future of tech, not what do we gain from Palo Alto alone?
And Kavita, you actually wrote a chapter, or I guess an essay, in the collection that's titled The Internet Will Be Decolonized. And so a question I have for you is, what does it mean to decolonize the Internet?
Great point. And here I took the words out of the mouths of activists and so forth.
I want to say I approach these questions with humility as an academic who learns from people in the field who fight these battles.
And what I noticed is there was a conference, just as we were working on this, in South Africa, called Decolonizing the Internet. And so I looked into it. And in fact, I have an image of the slogan from that conference. And I start out by saying, why are activists using the slogan Decolonize the Internet? And so I want to think about why activists say we need to decolonize the Internet, rather than what academics might mean by it. And I think that gets us around the problem of jargon. I mean, so many people find it difficult to approach humanities theory because we seem to use so much self-referential jargon. And I'd like to think of decolonizing the Internet in terms of what the Internet promised, the utopian solution of knowledge and infrastructure everywhere to everyone, and what it in fact delivers, right, very uneven information in an unevenly distributed future. And then think of how the legacy of imperialism and imperial institutions shapes the way we access infrastructures. And so for me, decolonization is attention to infrastructures, attention to who's getting knowledge where, and attention to all of the levels of communicative and physical infrastructure in between: the law, policy, activisms, state governments.
And as an example, in the last week the world has been watching as India rolled out its new Internet rules, they call them IT rules, which are really about intermediary liability, which means: if you're a big tech company providing infrastructure, or an ISP, are you responsible for the content that people put on there?
So these are highly technical, infrastructural, and legalistic questions that have to do with how the domain name system works and whether a government is allowed to break the domain name system in order to, quote unquote, keep its people safe from non-sovereign acts. Right. And so to understand what it means to be a citizen of the global south, to be a person living in the wake of imperialism, we have to understand how these infrastructures work.
Mar, before we turn to your contribution, I do want to start kind of at the beginning. So I think we've gotten a sense of some of the topics that are covered in this collection. But at the top of it, you do have this introduction that asks, when did the fire start? And I'm wondering if we could take a step back and talk about that. So we've laid out some of the problem, but like, how have we gotten into this mess in the first place?
Yeah, that's something that I talk a bit about in my introduction. And my introduction talks about this course that I teach at my university, which is called Disasters. And it's a global history course; it's basically a history of industrialization, if you want to boil it down to its less exciting title. But I teach it through the lens of disasters because it helps students see how these problems of technological scale that we're running into right now with computerized technologies are nothing new. These are things that we've seen in the realm of many, many other technologies, from industrial manufacturing to even things like public health, like sewage systems. We start with the London cholera epidemic, in fact, as a case study, and throughout the course, you know, we see echoes of the same thing, which is that people have to do a lot of sort of literal and figurative shit eating when these new infrastructures come into place and they're essentially being beta tested on whole populations. And as Kavita very eloquently talks about in the book, and also has spoken about here, there are these huge differentials of power between nations, and these differentials of power are explicitly leveraged to, in fact, get technologies to scale up more and to become more profitable and to gain more momentum. So, for instance, when you see the harms that, I don't know, for instance, Facebook is doing in other nations, that isn't a coincidence. That is not just a bug in the system.
That is actually how these technologies start to snowball, start to gain more and more power, because they're deployed in one context without really thinking about what the potential causes or the potential effects or the potential harms might be in another, let's say, national context, and that makes them tremendously dangerous. But that doesn't mean that what's happening isn't part of the design: to get more and more users, to get more and more power, to get to a point where that technology is infrastructural and indispensable. And what Kavita was just speaking about regarding what India is doing with new IT rules, and what we saw with Australia trying to push back against Facebook to protect news, essentially, in the Australian context, these are all examples, I think, of how, as Sarah Roberts puts it in her chapter, which is called Your AI Is a Human, these technologies have kind of run a game on us, saying that they're neutral, that they're platforms, that they're intermediaries, that they're pass-throughs for information in particular. And that has never been true. There's always been content moderation, and there have also always been very specific decisions made for how to best approach markets in a way that is anything but neutral, so that they can get the most ad dollars and not necessarily have to be responsible for any of the harms. They're protected, as Dr. Roberts talks about in her chapter, by the protections of the 1996 CDA and Section 230, and it's really, really interesting to see how that legislation has caused a lot of unintentional harms as technologies have scaled.
Mar, a follow-up to you, to bridge into your chapter in the book as well. You just used this metaphor of bug versus feature, and your chapter, or your essay, is called Sexism Is a Feature, Not a Bug. Would you mind sharing a bit about that?
Sure. Well, my chapter is called Sexism Is a Feature, Not a Bug because it looks at a nation that was computerizing, the UK, and was doing so pretty well early on, and then everything kind of goes to hell. And there was for a long time this discourse that that was due simply to American competition, that essentially IBM came in and beat the British computing industry. But when you look at it a bit closer, as I do in my chapter and as I did in my book Programmed Inequality, what you see is that sexism was actually a really, really important flaw in the system of setting up a technological labour force. And I won't go into that too much here, because you can read about that in the book, but I want to draw some connections between that chapter and a couple of other chapters that are making similar claims and have similar insights about different areas. So in one of the chapters, Dr. Halcyon Lawrence looks at voice recognition technologies and accent bias. It's called Siri Disciplines. And it talks about the ways in which voice recognition technologies discipline users into speaking with essentially one of three of what they claim are mainstream accents, even though the numbers of people who speak with those accents aren't actually as high as the numbers of people who speak English with different accents.
So Australian, United States, and British accents. And Dr. Lawrence shows how people who, like her (she's from Trinidad and Tobago), have to speak with essentially a false accent, to code-switch, in order to be understood by these machines. Compare that, for instance, with Dr. Safiya Noble's contribution to the book, where she talks about how robots are kind of reenacting these racist and sexist power relations. But because they are inanimate objects, or rather not conscious objects, there is an attempt to ignore the fact that we are rebuilding all of these really problematic power differentials in a new sphere and getting people, you know, real human people, to interact with robots in a way that extends these really, really problematic assumptions and stereotypes, in a lot of ways, especially when these robots are used to do things like replace workers, for instance. So the title of my chapter, it's kind of a through line, that theme in this book, that a lot of times things that are supposedly errors or bugs are actually really fundamental parts of how the system works. And maybe they didn't intend for that to be, you know, as negative as it was. But taking that out isn't a simple matter of patching a bug. It's actually a feature. It's the way that the system holds together in a lot of ways, or the way that, you know, certain users get a lot of value from the system while certain other users are completely left behind by it.
I'm going to latch on to something that you mentioned earlier, Mar, and this perceived notion of neutrality that we tend to have for our technological systems. It seems to come up throughout the collection, and something that Dylan and I found really hard hitting to start off the collection, in the introductory chapter, was that the title Your Computer is on Fire is stated as a manifesto. And I found that really interesting. So I was wondering if, Mar or Kavita, you could speak to why the word manifesto stood out to the editors and why you went with that language.
Well, you probably noticed that the titles have a structure: the structure is X is Y, right? Every single chapter follows that structure for its title.
We did work quite hard on that with the authors. And I would say for all of us, that was the most challenging part, and also the most exciting part for me: translating things that we already think we know, but we know in the context of a classroom or a research seminar. How do you translate that into a kind of to-do list for a public, at the same time keeping the complexity of the history that we bring to it, not making that to-do list a kind of overly dumbed-down list of things that supposedly anyone can do? We do think anyone can do it, but it will change you in doing it. You're not going to stay the same person who started it. This is not a to-do list like a laundry list that leaves you the same at the end of the day. This is a manifesto because it calls on all of us to be willing to risk our sense of self, our sense of autonomous, self-sovereign production, and realize that we are made along with those robots and along with those low-paid workers at the other end of the world. We are all constituted as collective, relational subjects.
And to that extent, this is a call for people to reimagine themselves in a different kind of world, one that we give you some outlines for. But it's by no means complete. The reader is part of that process.
Yeah. And I think with the title just being what it is, we really wanted the title to be both approachable and urgent, and to make it clear that the people who are reading this book aren't alone. You know, it's your computer is on fire in a possessive sense, but in a very collective sense. It's all of our computer. And we have a responsibility to fix it. And we have the skills and power to fix it as a collective, or as groups of collectives who come together to do this, and to get away from the sort of atomizing of people into neoliberal individual subjects and saying, well, make change by the personal choices you make, especially as a consumer. That's something that we've seen doesn't work. The free market is not going to solve these problems. Tech corporations can't police themselves.
So now it's a collective problem. It's a communal problem. And so that's why the title is sort of a call, a personal call to action in that way, but a call to action that's about acting in concert, not acting simply as individuals. One of the areas, I guess, this collection critiques is not just the neutrality of technology, but also something called techno-utopianism. One of the taglines of the book is Techno-Utopianism Is Dead. And so I'm wondering first if you could define what techno-utopianism is for folks who don't know, but then also, if we're no longer living in this techno-utopian world, what world are we living in?
I can give you a couple of examples to think through that. So techno-utopianism is the idea that technology gives us tools to fix thorny political and social problems.
And so technology there appears as this god from the sky, a deus ex machina.
It drops down without any of the social historical problems that people have because technology is supposedly separate from people. And now you've already got that hermetically sealed dualism that helps us think that there's a utopian hope coming from outside humans. We suggest humans and technology, things and words, histories and futures are coproduced and therefore we have a more complex process by which to think of what kinds of futures we want. It's not yet set. So here's an example of techno utopianism.
Maybe you'd look at the history of imperialism and say, oh, that was terrible. We really messed up the Third World and now we're going to fix it in this new era and we're going to bring digital technology to these formerly colonized subjects. And they're all going to be good workers in this wonderful global economy right now.
Sreela Sarkar, in her chapter Skills Will Not Set You Free, says, quote, popular skills programs in the Third World, and she studies a program in Delhi, mainly produce employment at the lower rung of the information economy that is temporary, gendered, and vulnerable to exploitation, end quote.
So there's a place where we can look at a project in the developing world that claims to bring a utopian solution to suffering and exploitation. And then we can look at histories like Mar's Programmed Inequality and say, well, we've seen before that the production of supposedly meritocratic jobs in the global economy is in fact a redistribution of sexism, and in many ways classism. Right. And so then we can ask different questions: are these skills courses actually doing something other than a kind of charitable donation of skills to under-skilled populations? Is it, in fact, producing a new low-wage population that will serve the interests of big tech as they penetrate what is an unimaginably huge global market, 1.3 billion people waiting to be connected with somebody's data plan? So you see how we have a simple ethnographic investigation that takes on different kinds of questions and global ramifications, because we read these in tandem with each other.
Yeah, and I love that, you know, you brought up Sreela's chapter, because it shows that so, so clearly. And again, as you point out, there's a through line in other case studies in the book. So while you were talking, I was thinking not just about the connections that my chapter has to Sreela's, but also about Janet Abbate's chapter in this book, called Coding Is Not Empowerment. Sreela's chapter is called Skills Will Not Set You Free, so even in the titles there's some similarity, but they're looking at very different cases. And Janet's looking at coding camps and sort of reskilling programs that have been pitched in particular towards black women and girls and black people of all genders in the United States, to say, look, you can retrain and you can essentially live the Silicon Valley tech worker dream if you just learn to code. And what Dr. Abbate shows in her chapter is that, in fact, again and again and again, for really decades this has been going on, and not only does it not work, it's also enacted in somewhat of a predatory way a lot of times. So people end up in debt in a lot of cases to these coding schools, or working, you know, like 80-hour weeks for far less than they would have gotten had they gotten, for instance, some degree at a university initially. And then the other thing that happens in this, and this is something that's a through line in my work, is that, as Kavita alluded to, once you are able to make these skills essentially more common, they become less valuable.
So the people who benefit from being reskilled or upskilled exist in a small group at a very particular moment in time. But in general, who benefits from the proliferation of these skills isn't the labor force. It's the corporations and the management, who are trying to get more of these workers at cheaper rates and more easily, so that they're more replaceable as a workforce, and so that if they try to flex their might as workers or, say, unionize, as we're seeing more and more tech workers try to do right now, it's easier to simply replace them. You can't do that if the skills are a very scarce and valuable commodity. So there's a very, very important historical pattern of labor deskilling that we really need to be attentive to, because the people who are at the highest rung of our skills economy are not there because of their skills. That might sound counterintuitive, but it really isn't. It's because of everything that's going on around them. And to a large extent, as Kavita and Sreela and Janet point out, it has to do also with who they are and the prestige that attaches to them as people, the, you know, the privilege that attaches to them as people, not simply, oh, you have these skills or you're doing this, and so you're rising to the top of a meritocracy. That's really historically not what's going on. And it's not what's going on now either.
And we're starting to see that more clearly. I'll add a couple of things to that wonderful elucidation of the links among the other chapters that really speak together.
For instance, Andrea Stanton's Broken Is Word, which, as you can imagine, is meant to be read also right to left: Word is broken. And Tom Mullaney's chapter, Typing Is Dead.
Both of them point out that, OK, maybe a reader might think, you're just talking about people exploiting people. But my actual objects are neutral: the typewriter, the keyboard, the machine I work on, the server cable that links me across the Atlantic. Right? Surely these are neutral.
We've already seen that Safiya Noble's chapter tells us no, our robots are not neutral. They embed histories of sexism and racism. Andrea tells us that because typewriters were designed with a left-to-right script in mind, quote, Arabic script is seen as a particularly thorny challenge. And she unpacks that to tell us how, in fact, this construction of Arabic as thorny, and even, quote, pathological, as a script originates in kind of Orientalist understandings of the language itself.
Right. So we see ways in which a 30 year history of the study of Orientalism, the representation of Middle Eastern cultures as backward and pathological, comes to rest within the design of keyboards. And then we see Tom, who tells us through his work on the typewriter that, in fact, the Anglophone keyboard rests on an Anglocentric assumption, and that making that universal is the problem of design that lies at the heart of how we understand machines today.
So we've introduced quite a few fires, or quite a few problems, that this collection discusses in detail. And, Mar, you also mentioned earlier the collective power that we can have to help solve, or at least start to solve, some of these problems. So without giving anything away, I'm wondering if maybe we can have a little bit of a sneak peek of what some of that solution space looks like.
Sure. Yeah, I'm happy to speak to that. And I'm happy to give away as much as I can because, yeah, I would really like to see these problems solved. And you've gotten hints of some of the things that we need to do so far, especially through everything that Kavita has been outlining regarding the history of colonization, the history of imperialism, the ways in which these global power differences are not just, you know, it's not just that they're not past yet, they're actively reconstituting a lot of the power relationships that are getting built into our technologies. So one of the things is to look for political solutions. And the book talks in detail about some of the ways to do that. And Kavita's chapter about decolonizing tech, decolonizing the Internet, and, you know, really looking to activists and learning from activists, I think, is so important. One of the things that I'll say for my part is that I'm very heartened. You know, you said, well, if we're not in a techno-utopian moment anymore, what moment are we in right now? So we could be very pessimistic and say we're in a dystopian moment, or we could say, you know, we're in a dystopian moment, but we're sort of activating to recognize that we never had a shot at utopianism, because that's not a real thing.
And so one of the things I find really heartening right now is how labor forces are coming together, and more and more realizing that white collar labor forces have to organize. Labor forces that are working for tech companies in really, really terrible conditions, like in Amazon warehouses or driving for Uber, are finding ways to organize more and more. I think labor organization is absolutely essential to this moment. Looking from the US perspective in particular, we need to have two things, or at least two things, happening at once. One of those things is the sort of bottom-up, grassroots activism and labor organization, and then pressure on our elected officials to do top-down regulation of these industries, to break up corporations that have become, honestly, so powerful that they have become almost pseudo-governmental.
They are affecting the political contours of the United States and the world in ways that you might expect from a government, from government agencies, not private corporations. And that's incredibly dramatic. And then the other thing, too, is, you know, we've been talking about the political context and the economic context. I'll just flag that one of the really critical things that connects, of course, to both is our environment, our context of, OK, where are we going to get clean drinking water? Who is going to be allowed to have clean drinking water? There are people in the United States right now who have been without clean drinking water for weeks, essentially because of infrastructures being privatized. And so they just don't have drinking water now in a disaster. And Nathan Ensmenger's chapter, The Cloud Is a Factory, I think is really good on this issue of, OK, let's really think about how these cloud technologies, or how computing technologies that seem relatively clean, that seem somewhat ephemeral in a lot of cases when we're talking about software or information, are anything but. They're rooted in very similar industrial logics, in similar ways of leveraging and, honestly, abusing the infrastructure of our environment.
And it's making things much worse in sort of exactly the ways we don't want at exactly the moment we don't want. So if I were going to give one catchphrase for what kind of a moment we're in: we're in a moment of resistance, we're in a moment of maybe approaching something like a computing revolution. You know, we never really had a computer revolution. We use that term a lot, but the power structures didn't change. That's not a revolution. So maybe we're approaching that moment, I don't know. One can hope, right?
Yeah, I love that. We're almost out of time, but I'll underscore again how these chapters, likewise, are not complaints as much as they are pointing toward work with revolutionary solutions.
For example, Corinna Schlombs, in her chapter Gender Is a Corporate Tool, outlines how IBM's Thomas Watson managed IBM like a, quote unquote, family, and what that does. And here's a line from Corinna's chapter: Watson's model of equal individualistic purpose meant that workers always confronted management alone. This weakened their position and deprived both men and women of a say in IBM's corporate office. And so this reminds us that looking at people as individualistic, atomized workers doesn't empower them.
It is underscoring the way that collectivity empowers all of us, and that these kinds of gendered, race-based, class-based, imperialism-focused analyses actually are not just complaints. We're not just a bunch of people whining; we're people trying to figure out how to work this out together.
So I'll submit from my final chapter, where I say, OK, if we shouldn't move fast and break things, what makes sense?
So what should we do? What we can tell you, as historians, sociologists, anthropologists, is that language, history and politics matter. We can tell you what we think matters. And then it's up to us to build this conversation and practice together. And so I say, quote, To navigate the complexities of power, we should pin our hopes not on axiomatic rule-based ethics, nor on the hope of finding value-neutral data. We should look instead to the conversations we need to make, I say, among technologists, political theorists, activists and academics. And so this revolution comes in conversation and exchange.
As difficult as it is, we urge you to take this book to your own workplaces and have those conversations in ways that might be completely different from ours.
But we'll be in dialogue with the histories that we summon up.

The title of that last chapter, which is How to Stop Worrying About Clean Signals and Start Loving the Noise, is just such a powerful metaphor. And I'm wondering, as we close this interview, for you both: can you say something about the impact you hope this collection will have, maybe in 10 years, looking back? Like, if you had just one thing that you'd like this book to change in the conversation, what would that be?
Well, I want to say to people: don't be afraid of complexity. Axiomatic, rule-based ethics has been so powerful.
If you look back to any supposedly thorny period in science and technology, we seem to come out with a set of rules and we think we've solved it. Right? So human subjects research: OK, I've got my IRB permissions, I'm going to go be ethical. OK, great. But are you really having difficult conversations in the context that you're working in? We had ELSI, the ethical, legal and social implications program of the Human Genome Project, and once again we had ethics commissions, but we didn't necessarily have the kinds of difficult conversations that we could have had.
And again, I would say in this proto-revolutionary context, we could have all kinds of discussions if we didn't flinch from the complexity of the influences and the possibilities of the multiple futures ahead of us.
And here I'm really inspired by my youngest students. The undergrads coming into our classes are just incredibly fearless, and they're not afraid of taking things apart and building them up together. So I would say I take my cue from the next generation, and we're here to follow, not lead, that conversation.
Yeah, I one hundred percent second all of that. And I'll say that, really, I wrote this book largely for my undergrads, with my undergrads in mind, and the sorts of questions that they were asking in the classroom, and the sorts of really thorny, difficult moral and ethical problems they were having there. They're mostly engineers, mostly people who are training to go into CS and other engineering fields. And they would say, OK, what do we do? What can we do? You know, they saw the problems. And once they understood the complexities, they really, really wanted to not live their lives as conscientious cogs in a broken system. And so if there's one thing that I hope this volume does, and we do try to do this, it's not all a downer, title notwithstanding, I hope it gives people hope and starts to help them realize their power. They have power. We have power to change these systems. And it's very hard to see that we do until we really start exercising that power. And so I would just hope that, if nothing else, this volume gives people who are maybe similar to my students the idea that they are not trapped, and they don't have to think, oh, you know, if I don't do it, somebody else will. That's not true. There are always ways, as Kavita pointed out, to do things differently, or to make different, better futures. And we're going to stumble in getting there. We're definitely not going to get there easily, but it's possible. And we're seeing, I think, right now how it's becoming more possible than it was, certainly, in the 1990s, at the height of techno-utopianism, or even in the early 2000s.
You know, we're really seeing this is the moment to seize. And so I'm hopeful that this volume will maybe be helpful in this moment. That's what I'm rooting for.

Well, Mar and Kavita, thank you so much for seizing this moment with this collection, which, by the way, for those listening, if you want access to this or you want to buy your own copy, we have plenty of links in our show notes, as well as information for how you can explore Mar and Kavita's research and scholarship a little bit further. But for now, thank you so much to the both of you for joining us today for this wonderful conversation.
Thank you so much. Thank you.
Jess, what an awesome interview and an awesome collection. I'm so excited that this collection is out in the world right now. It's a few days after we recorded the interview, and I still have chills from some of the things that Mar and Kavita said. And I'm wondering, first, if you feel the same way.
And second, what stood out to you in this interview now that it's been a few days since we conducted it?
Yeah, it's really awesome that this was the first interview we've ever done about a book that was about to come out where they actually sent us a pre-print version of it. I felt very VIP when they were willing to give us access to it. And also it's amazing because now we can say firsthand, go get this book, because it's actually really amazing. And if you aren't captured within the first few sentences of the introduction, then I don't know what else to tell you, because it's so well written.
Which is to say, you will be.
And if you are an amazing scholar in the responsible technology or ethics space and you have a book that you would like us to check out, we would be happy to.
But again, thank you, seriously, thank you to MIT Press for providing a press copy for us to be able to prepare for this interview.
Yeah. And of course, we also would love to read the rest of your books, because now both Dylan and I have sections of A.I. ethics and responsible tech books in our bookshelves at home, and we are looking to build the collection. So very happy to be adding another book to that collection. And now, on to the book.
So I think one of my first takeaways from this conversation was when Mar was talking about their chapter, or I guess their essay, on sexism being a feature and not a bug. And this was really interesting because it was actually one of the first times that I heard this framing. And since the interview, I've heard it like once or twice more, and so I feel like it's probably going to keep coming up again. It's like one of those people who, once you meet them, you see them everywhere, and then you realize that they already were everywhere and you just didn't know who they were. And so I think, for me personally, I've been thinking about things like sexism and racism and discrimination and, like, inequity as they exist in algorithms as bugs. That's what I've thought about in my mind when I think about the framing of the problem, and then also the framing of the solution. And this, like, flipping of the narrative, this infrastructural inversion, if I want to put my theorist cap on, of thinking about them as a feature, as something that's explicitly put into the system, is just a really interesting reframing.
And I've been thinking about that a lot since this interview.
Not even that it's put into the system, but that it is the system. Like, the system itself would not function without these isms underneath it, sexism, racism. Like, they were built out of the same meta systems or social systems. Right. And the technology is not separate from that, which I think is, first of all, just super overwhelming to think about. Right. But also is, I think, spot on. And for me, like, I just keep coming back to Kavita's chapter, that essay titled How to Stop Worrying About Clean Signals and Start Loving the Noise, because after this interview and reading through these essays in this book, I'm like, yeah, this stuff is still so overwhelming.
And we talk about that, and I've, like, yelled about it before in some of our outros.
But maybe the solution, quote unquote, is not to find a solution, at least not yet, but to first know what noise and what static we're in in the first place, because sometimes I think that we jump too quickly to that solution space, which is really just going to replicate those features and keep us thinking of them as bugs and not as features. So I just love that framing and that metaphor. I'm going to come back to that a lot. Definitely.
Well, that was one of the reasons why I also loved that both Kavita and Mar encouraged, or I guess they sort of explained, that this was not a book of complaints, like, this is not a book just full of people complaining about the system. And I want to kind of highlight that thought for a second, because this is something that the A.I. ethics community, I think, gets a lot of flack for: the fact that we're constantly, quote, complaining. I'm using air quotes you can't see, but we're, air quote, complaining about all the things that we are actually just critiquing. And personally, I do think that maybe critique can go a little bit too far, and, you know, there is a time and a place for that.
But complaint doesn't really have to be seen as a negative thing. And honestly, I don't really think that the ethics community is complaining when they're being accused of complaining. I think that, as Mar and Kavita were saying, it's actually just an acknowledgement. And so maybe acknowledgements sound like complaints when the system is broken. And I think maybe that's a lot of what this book is getting at: acknowledging a broken system so that we can take that acknowledgement and that awareness of what is broken and begin working together towards building solutions.
And I think one thing that needs to happen in order for us to begin working together is that we need to continue to see the leadership that we've been seeing, especially within the past six months, at Google and from other folks who are speaking out against these systems of oppression and abuses of power in these spaces. But another way to lead is through what we saw Mar and Kavita doing in this episode, which is centering the work and centering the voices of so many folks that maybe people listening to this program haven't necessarily heard of before. But you need to, because they're real forces from a very interdisciplinary space, all talking about this topic of how do we transform these systems that are so deeply embedded with these oppressive tendencies.
And so I do want to give a quick shout-out to the other two editors of this collection, Benjamin Peters and Thomas Mullaney. I mean, really, just check out the book. As Mar just said, each of the editors took either an introduction or an afterword.
And then in the center of the book are the other contributors and their work. And I just think they did an amazing job of curating this collection and, again, centering the work that needs to be centered right now.
Have we sold you on the book yet? Have you bought it yet? Because we can keep going.
But if you haven't bought the book yet and you're looking for a way to do that, you can do that in our show notes. And also, we wanted to make sure to mention, because this is important to the editors of the collection, that if you have the ability, please order the book through a local indie bookstore near you. And if you don't know where you can find a local store that sells this book, we have a link in our show notes that will help you find the closest one near you.
Yeah, and this doesn't just stop at this book, right? Like, go support your local indie bookstores everywhere for any of the books that you are buying. And I'm not coming out against various companies that may have a corner on the market of online stores.
A certain company, this could be anyone, I'm using air quotes as well. But please do.
It's important, especially this year, in which indie bookstores have been hit so hard by COVID and many have had to close. So please, even if it is a little more expensive, or you have to go out of your way, or you have to wait a little longer, it's so important.
And so in order to check out those links, and for more information on today's show, please visit the episode page at radicalai.org.
And if you enjoyed this episode, we invite you, as always, to subscribe, rate and review the show on iTunes or your favorite podcatcher. Catch our new episodes every other week on Wednesdays, sometimes catch our bonus episodes on Sundays, and join our conversation on Twitter at @radicalaipod.
And I feel like I've said "and" a lot here, but as always, it's not "no, but." It's "yes, and." And as always, stay radical.
I'd like to think we follow the rules of improv here in everything we do.