Episode 11: Robot Rights? Exploring Algorithmic Colonization with Abeba Birhane


Should we grant robots rights? What is moral relationality, and how can it be useful for designing machine learning algorithms? What is the algorithmic colonization of Africa, and why is it harmful? To answer these questions and more, The Radical AI Podcast welcomes Abeba Birhane to the show. Abeba Birhane is a PhD candidate in cognitive science at University College Dublin in the School of Computer Science. She studies the relationships between emerging technologies, personhood, and society. Specifically, Abeba explores how technology can shape what it means to be human. Abeba's work is incredibly interdisciplinary - bridging the fields of cognitive science, psychology, computer science, critical data studies, and philosophy.

You can follow Abeba Birhane on Twitter @Abebab. For more of Abeba’s work, check out her website.

Relevant links from the episode:

The Value of Machine Learning by Ria Kalluri

Towards an anti-fascist AI by Dan McQuillan

Counting the Countless by Os Keyes

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.

Abeba Birhane_mixdown.mp3 was automatically transcribed by Sonix. This transcript may contain errors.

Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess. Just as a reminder for all of our episodes, while we love interviewing people who fall far from the norm and interrogating radical ideas, we do not necessarily endorse the views of our guests on this show.

In this episode, we interview Abeba Birhane, a PhD candidate in cognitive science at University College Dublin in the School of Computer Science. Abeba studies the relationships between emerging technologies, personhood, and society. Specifically, she explores how technology can shape what it means to be human. Abeba's work is interdisciplinary, bridging the fields of cognitive science, psychology, computer science, critical data studies, philosophy, and more.

Several questions that we explore in this interview: Should we grant robots rights? What is moral relationality, and how can it be useful for designing machine learning algorithms? And finally, what is the algorithmic colonization of Africa, and why is it harmful? Without further ado, we are so excited to share this interview with Abeba Birhane with all of you.

We're here with Abeba Birhane. How are you doing today? Great. Thank you for having me. Absolutely. Thank you for being here. As we get started, we were wondering if you could tell us just a little bit about yourself, about who you are as a person and maybe as a researcher.

Yeah, sure. So I'll start with my academic background. I guess that's the easiest. I am from Ethiopia originally, and I first studied physics. But I was failing in calculus, so I moved to study languages. That was my first degree in Ethiopia. And then I moved to Ireland and totally switched to a completely new area. I studied psychology and then philosophy, and then I went and did a master's in cognitive science. And my mom is always complaining whenever I am on the phone: when will you stop studying? You are always studying. Yes. And after my master's in cog sci, I kind of took a year out. I worked for an advertising agency. I didn't like it very much, but I was advised to kind of experience the real world, as they say.

And then I came back to do my PhD.

I'm on my third year now. And I'm really interested in how you got started in cognitive science. I'm also interested in your answer to that question about whether you'll ever stop studying.

Yeah, yeah, totally. And I guess I have a love-hate relationship with academia. I love reading. I love studying. I love discussing research with my peers. But I'm also constantly frustrated as a black woman, because there are so many barriers. Your work is way more scrutinized. You are made to feel like an impostor constantly. You know, every semester you walk into your class and your students are surprised you are there to teach them. I guess people are just not used to young black women teaching in classrooms.

Yeah, but also, generally, you know, the kind of ingrained structure that takes the Western white man as the status quo, as the quintessential representative of academia, is directly or indirectly constantly frustrating, because, you know, you are being excluded, being questioned, and you face various challenges. But despite that, I guess it's an ongoing tension. I still love reading and research, and I hate that other aspect, but I keep pushing through it. And hopefully I will find the perfect balance, which itself is a moving target.

Yeah. And I got into cog sci because, I guess this was a few years back now,

I really liked the interdisciplinary aspect of it, especially in Dublin, in Ireland, where I studied for my master's and where I'm also doing my PhD. The cognitive science program is super interdisciplinary. You do modules from neuroscience, from computer science, from philosophy, and from, you know, anthropology. So it really kind of gives you a great insight into various fields.

And I really liked that aspect of it. The master's program in Dublin is very interdisciplinary. There were a few modules in computer science where you did various sorts of modelling, cognitive modelling. But apart from that, it really is a collection of fields. Not just a collection: you also have to integrate them all into something coherent. And through the master's program, I came to this kind of narrow field in cognitive science known as embodied and enactive cognitive science.

You're probably familiar with it. It's kind of a pushback against traditional cognitive science, which in a nutshell takes cognition as something, you know, separate or separable, something individualistic. You can look at the brain, or you can take the individual out of their ecology and put them in a lab and study their learning or their intelligence or their cognition. And it's a pushback against all that. It's like: no, you, the person, are inherently and naturally interlinked with a web of relations, with other persons, but also with the physical environment as well. So if you want to understand a person's learning process or intelligence or cognition, you really have to take account of the whole milieu. You have to study people as active beings as they interact. So that's the core of these embodied and enactive traditions. I was very interested in that. So when I initially proposed to do my PhD, my initial research question was: yes, the person is inherently interlinked with others. You cannot exist without others. And also the environment is crucial.

But within that paradigm, the embodied and enactive traditions don't focus so much on the digital aspect of the environment. So I came in to look in depth at how the digital sphere contributes to what it means to be a person, to what cognition is. As I went further and further into my studies, what I discovered, what was really gripping my interest, was the fact that, yes, everybody is impacted by digital technology, whether it's something you voluntarily use, upload, or interact with, or whether it's, you know, technologies that are installed out there and that you come to interact with involuntarily, where you have no choice. Everybody is impacted by that, and to some extent it does affect how you react, how you behave, and how you become. But what's missing, what is not explored, is the fact that not everybody is impacted equally: the higher your position is in the societal hierarchy, the more agency you seem to have to choose what can influence you and what you can avoid. So then ethics becomes incredibly unavoidable.

You cannot study these aspects of cognition and what it means to be a person and not look at ethics. So now, you know, near the end of my third year, I see myself more at the ethics side of cog sci rather than at the cognition part of cog sci where I started out.

Part of my research right now is even just trying to get a handle on the word ethics and what it means in some of these spaces. And I'm wondering, for you, how do you define ethics? Or, like, what do you think is the goal of these ethical conversations in technology spaces?

Yeah, that's a really interesting question. So on my Twitter bio I used to have, I still have, hashtags like complexity, embodied mind, dialogism, and ethics. And I recently took ethics out because it has become kind of vacuous. It means so many different things to so many different people. So I find myself kind of avoiding that word.

And even though, generally, I mean, in terms of where my study sits in the landscape, of course I do see myself more on the ethics side, to pinpoint what exactly ethics means has become really problematic. And I think the main reason for that comes down to, at least for me, you know, the various exploitations we see in various big tech companies, where ethics has become just a PR stunt; or you have this very narrow field of ethics where people are intensely working on finding a good formulation or the perfect formula to debias your dataset, or to find a way to avoid discrimination, that sort of stuff. So in that sense it has become so narrow. For me, yes, it's part of the solution, but it misses the point. And kind of associating ethics with that narrow idea, or with, you know, the general PR stunt being used by big companies, is probably some of the reason why I have come to try to stay away from that word.

And part of what makes your research so influential, for me, is that you're bridging some of these disciplinary gaps. So, like, you're bringing in the cognitive science, you're bringing in the computer science, and then you're also bringing in the philosophy. And as someone studying moral philosophy, I'm curious about your take on, you know, what the role of philosophy is here. I know you've especially done some work with, like, Descartes and pushing back against some Cartesian ideas. And we don't have to go too far down that rabbit hole. But I am curious, just, like, in general, where you see the role of philosophy in some of these conversations.

Yes, I do have some philosophy background. I do have a little bit of computer science background. I do have a cognitive science background. I do have a psychology background. But funnily, I don't see myself as being, you know, a philosopher or a psychologist or a cognitive scientist, because I really am at the intersection of all these. And yes, generally speaking, my work is more philosophical than anything. But I would still find it difficult to fit in, to take that label of philosopher, because the kind of philosophy I do doesn't fit well with the traditional Western analytic type of philosophy. And that type works in many respects.

But for me it kind of gets too abstract. It's too high up, kind of constantly trying to generalize, trying to find some general principles. It really misses the particulars, the ground. That's where I find the philosophical thinking inspired by, you know, systems science, embodied cognitive science, and enaction much more usable, much more practical. You are not philosophizing in the abstract or trying to create some logical coherence through your thinking; you really are talking about a particular incident, a particular event, in a particular context. Context is always there. So in that sense, for me, that type of philosophy is really important, and it has a role in ethics. This is, I guess, where your question touches on my robot rights paper. This is the kind of thing we also mention in the paper a little bit, because when you look at, you know, the general field of ethics that is so heavily involved with robot rights, it really is up there, way too abstract. It's about what intelligence is.

You know, do thermostats have intentions? That sort of thing, to me, almost sounds a bit insane. Yeah, it's good to theorize. It's good to find clear, logical coherence in your thinking. It's good to be rigorous. But when you are way far removed from actual happenings, you are actually doing more harm than good. This is how you end up arguing whether robots should have rights or not, when you actually have issues such as the fact that the very robots we are discussing operate on the backbone of microworkers, you know, people constantly plugging in, labelling images. And also the whole conversation moves the discussion away from what's happening on the ground. You have, for example, I think this was last year, a case where a wheelchair user was blocked by a Starship delivery robot and couldn't get out of its way. And it should be those incidents, those, you know, actual events, that we need to talk about, instead of whether robots should have rights or not.

So I guess this is another example of how philosophy is important. But the too-analytic, too-far-removed philosophical thinking that strives to find a unifying theory or some generalizable principle, I don't see it helping in ethics.

I think I was just about to ask you, should robots have rights? For now, I feel like I shouldn't ask that question. So maybe a better question would be to ask: if we were to grant some sort of rights to robots, like, pragmatically speaking, what would that look like, and what are some of the possible negative and positive outcomes of choosing certain scenarios over others?

To me, granting robots rights is just extending rights to companies, to big tech companies, monopolies that already have rights. So by giving rights to their products, you are extending their rights and allowing them to even further abdicate and abrogate their responsibilities. Because, on a philosophical level, there is no autonomous entity.

Not at the moment; there might be in the future. There is no autonomous entity that can be granted rights. If you look at, you know, the very basic components of how robots operate, and even how machine learning systems work, because there is no actual clear definition that demarcates machine learning systems from robots, if you look at how those things operate, they are never just autonomous systems. They are always human-machine systems. So when we are talking, at the philosophical level, about how intelligent a machine has to be in order to be granted rights, we really are erasing the whole backbone: the humans in the loop who are maintaining, who are creating, who are making this operation smooth. So for me, it's silly to talk about even the very idea of granting robots rights. Rather, it's much more important, if we think we are being ethical, if we think we are involved in ethics, to look at what is behind the scenes. For example, Amazon's Mechanical Turk workers are among the lowest-paid workers, and they are heavily involved in everything from tagging raw data to maintaining these various systems, whether machine learning systems or robotic systems.

So let's talk about, you know, how we are maintaining their welfare, how we are protecting their health, and what systems we have in place for their rights and their quality of life. That's much more important. But that is on a philosophical level. When you then look at it from another level, what you find is that robots and machine learning systems are actually tools that are being used to harm people, whether it's hiring systems, whether it's policing systems, or whether it's machine learning systems used in various other spheres. What you find is that the very development and deployment of those systems actually puts vulnerable people, poor people, marginalized people at a much higher, disproportionate risk of harm compared to, you know, the status quo. So the harm that's being caused by robots and machine learning systems is what we really need to talk about, if we really are concerned about ethics. That's my position anyway.

One thing that we've heard from some other guests who are questioning the status quo is that they've received some pushback from industry or from other folks even in the field. And I'm curious if you, especially around this topic of robot rights or centering marginalized voices, have received any pushback to your scholarship?

I haven't received any pushback from any notable industry or organizations; I am not that big. But when our paper came out, there was pushback from individual researchers who were really invested in robot rights. And I do sometimes wonder whether those people who work to advance robot rights are missing the point of what they are advocating for, or whether they know and still go ahead and work towards it.

Because for me, it just doesn't make sense to call yourself an ethicist if you kind of underplay or downplay the harms being caused by the deployment of these systems. So there was some pushback, especially on Twitter, and it was a little frustrating, but it also gave our paper a lot of coverage.

So, you know, overall, do you see a relationship, or maybe a better word would be a similarity, between the way that humans function and work in the world and the way that computers and algorithms function and work in the world? And how have those two played into the way that you view ethics?

I think I can interpret your question on two levels. The first one is that there is a relation: humans only exist in some relation with robotic systems or AI systems. So for me, when I talk about, say, my phone or a robot system, it's a tool that humans have created that extends my abilities, that really extends my cognition.

But to think of my phone or my robot as something that, you know, can have its own rights, or that can extend its own existence, is almost absurd. It's like talking about giving my hand a right. So, yes, robotic systems and AI systems exist in relation with humans, and they are part of our milieu. But they are not things in and of themselves that can be intelligent, that can be fully autonomous.

And the second part of your question, if I got this right, is about the complexity of things and the fact that we exist in a web of relations.

So, look, I have a recent paper coming out, so I'll use that as a reference and talk about that and see if I can clarify your point. Yes, coming from systems science and embodied cog sci, as I said earlier.

You see human beings, and the very idea of cognition, as fundamentally existing with others, you know, in an indeterminable relationship with others, and also as embedded in historical, cultural, and social norms. So any understanding of me requires all that is my background. In a sense, if you follow the systems and cog sci thinking, what you find is that humans are always active, always moving, always changing, always adjusting to the situation, to their context. And there are infinite ways and different forms for me to be in my next step. I am indeterminable, because how I react or how I might behave next could go infinite ways. You really cannot say: this is you, and you will act in this way in the next event or, you know, in the future. So that makes the very idea of a human being something you cannot predict, something that is contextual, something that is continually changing. But on the other hand, you have machine learning systems, especially those that try to predict social outcomes, whether it's in prisons, whether it's in social welfare, or whether it's in social interactions.

Almost every day, you have some new system that is trying to predict a social outcome of some sort. And what you find then is that we are trying to employ machine learning systems to predict the inherently unpredictable. In the process, the very attempt to predict something kind of carries the outcome forward: by predicting something, you make it happen, in a sense. So when we are employing these machine learning systems, we are creating a certain type of future that resembles the past.

And we really are narrowing possibilities and narrowing opportunities, especially for those who are disproportionately impacted. So I guess my embodied cog sci and systems background comes in handy in interrogating how the very idea of, you know, cognition, intelligence, and learning is not something you define once and for all, pin down, and then predict, because that's impossible, because we are complex adaptive systems; that's the very definition of a complex adaptive system. And then, when you are deploying these machines, what you find is that you get into all sorts of ethical problems, where you have discrimination, where you have disproportionate harms.

But also at the general level, you find this general discourse of trying to think of machines as a solution, as a quick fix, as a way to formalize, or as a way to kind of narrow down and simplify what has usually been, you know, difficult to define or difficult to understand.

So we get into the habit of, or introduce this discourse of, thinking of machines as a quick fix, almost giving them the ability of a god, because we are attempting the impossible just because we have mathematically based machine learning systems.
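[To make the dynamic described above concrete, here is a minimal, hypothetical Python sketch, not from the episode: a score learned from past outcomes gates who gets an opportunity, and the predicted future comes to resemble, then entrench, the past. The groups, rates, and drift values are all invented for illustration.]

```python
# Hypothetical sketch: a predictive system trained on past outcomes
# reproduces and amplifies those outcomes once its scores gate access
# to opportunity. All numbers are invented for illustration.

# Two made-up groups with unequal historical "success" rates;
# the gap stands in for historically uneven access, not ability.
rates = {"group_a": 0.7, "group_b": 0.3}

def train_score(past_rates):
    """'Training' here just memorizes past base rates per group."""
    return dict(past_rates)

def simulate_round(rates, score, threshold=0.5):
    """Grant opportunity only to groups scored above the threshold."""
    new_rates = {}
    for group, rate in rates.items():
        granted = score[group] >= threshold
        drift = 0.05 if granted else -0.05  # opportunity compounds
        new_rates[group] = min(1.0, max(0.0, rate + drift))
    return new_rates

for round_number in range(5):
    score = train_score(rates)            # retrain on the latest "past"
    rates = simulate_round(rates, score)  # the prediction shapes outcomes
    print(round_number, rates)

# group_a's advantage compounds while group_b is locked out:
# the predicted future resembles, and then entrenches, the past.
```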

It does seem that a lot of people tend to really trust machine learning systems and machine learning decisions as if they are divine predictions coming from a seemingly objective source in the world. And something that you mentioned that was really, I think, spot on is that machine learning systems are creating a future that resembles the past. And this really touches a lot on the note that the data that's fed into these systems is a reflection of the past. I know you've done a little bit of work on data and objectivity, and I'd love for you to shed a little bit of light on that idea, and really whether you think data is objective, or could be objective, or not.

I think many people have been writing and speaking about this. The very idea of objective versus subjective is reminiscent of, you know, Western philosophy and Western science. It comes from the thinking that you can put your intuitions, your background, you know, your feelings and emotions aside, and look at the world from a totally detached perspective. But that's impossible, because we are humans; we are not gods. We can't do that. And so whether it's science, scientific investigations, or whether it's data, the very idea of, you know, objective data or objective science really just reflects the status quo.

Because when we think we are viewing something from a "view from nowhere," we really are adopting the status quo as the normal, as the standard, and we are measuring things from there. So the very distinction of objective and subjective is really problematic.

And a lot of people from this space, I think going back two decades, have been speaking about this. I'm involved in teaching here in Dublin, in the data science module, and even after so much work, you constantly find your students, and people in data science in general, thinking, you know, data just exists out there, and you go and collect it and you do your analysis, and then you find your results. But that's far from the truth. You know, even at the very beginning, by asking a certain question instead of other questions, you are bringing your subjective interests. And when you select a certain dataset, you are excluding others by definition. And datasets are never clean and, you know, never complete. You have to do so much massaging, so much cleaning of your data, in order for your data to make sense. You exclude some values; you have, you know, missing values, so you somehow compensate for that. And all of that involves the data scientist, the person, doing all the cleaning and the analysis. And then how you interpret your results also really reflects what your interests are, what you want to achieve.

So yes, the very idea of objective data really is, I think, so yesterday.
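[As a concrete illustration of the point above, here is a small, hypothetical Python example, not from the episode: the same invented dataset yields different summary statistics depending on how the analyst handles missing values, so the reported "result" already embeds the analyst's choices.]

```python
# Hypothetical sketch: two defensible ways of handling missing values
# produce two different "results" from the same raw data.
from statistics import mean, median

# Toy survey of incomes; None marks non-responses (invented data).
raw_incomes = [21_000, 25_000, None, 30_000, None, 250_000]

# Cleaning choice 1: drop the missing values entirely.
dropped = [x for x in raw_incomes if x is not None]

# Cleaning choice 2: impute missing values with the observed median.
med = median(dropped)
imputed = [x if x is not None else med for x in raw_incomes]

print("mean after dropping:", round(mean(dropped)))  # 81500
print("mean after imputing:", round(mean(imputed)))  # 63500

# Neither number is "the" objective answer: the analyst's choices
# (drop vs. impute, mean vs. median, which outliers to keep) shape
# the reported result before any interpretation even begins.
```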

Absolutely.

And I'm really taking to heart what you're saying about how sometimes, even without thinking, we're grounding ourselves in these classic assumptions of Western philosophy or our Western understanding. I mean, I was reading your blog post recently about the algorithmic colonization of Africa.

And I was wondering if you could talk more about, I guess, the politics of place in all of this.

So, my blog on the algorithmic colonization of Africa, and I have since written it into a paper, so it should also come out very soon.

It came from this huge frustration, again, with people totally, grossly overhyping the power of AI and totally misunderstanding to what extent it's a solution.

Last year, I think in June, I was invited to one of the biggest AI conferences in Africa.

It's called CyFy Africa.

At first I was startled; I just couldn't believe I was invited there, because it was a lot of people from policy, from governments, regulators, a lot of UN ambassadors and representatives, and big tech companies.

A lot of AI researchers also went there, and I was super excited to be part of it. But as the days went on and the conference progressed,

it was just the same thing again and again and again. People would bring up some state-of-the-art algorithm or state-of-the-art tool that is being used, you know, in Germany or in England or somewhere in the US, and then bring it to health services in some sub-Saharan country. Or they are importing, you know, X and Y and Z to help various women in very small villages in various parts of Africa. So I tried to be critical in a constructive way. But as the days went by, I was really frustrated, because there was no ear for critical thinking. People were just too excited; "leapfrog" was the term that was overly used, to leapfrog the continent into development. I have also come to really object to that word now. And I'm not totally objecting to the importing of various tech into Africa. But my worry was, first of all, you know, any tech reflects the values, the interests, and the problems of a certain society. So tech developed, for example, in England really is developed for a certain purpose, with certain interests, with a certain philosophical background, with certain cultural norms. The philosophy, the interests, the questions, the problems, the solutions are totally different elsewhere. I know this from, you know, coming from Ethiopia to living in Ireland; I'm still trying to adjust to the culture shock.

People think differently. People find different solutions to a problem. What might be seen as critical here, say, in England, for example, may not be a problem at all in some sub-Saharan country. So one of the issues for me was that context matters. And the second one is that when you are importing Western tech products, whether that tech is going to be applied in banking or in finance, you really are bringing your Western ideas and your thinking to be normalized, to be accepted as the standard. So that really resembles colonial power. And also, on a more cynical level, if we are being frank, the people that are exporting and importing really have one interest, and that is the accumulation of wealth. They want profits. It's really not about the unbanked women of Africa, you know. Can you imagine, you know, a group of CEOs sitting at a round table in the US worrying about unbanked African women? I can't. So that article came out of frustration at the lack of critical voices when talking about technology in Africa. I was attempting to show how the importing and exporting of Western tech really is, you know, a reincarnation of colonialism, but now in a digital form, and, because everybody's hyped up, people are not questioning it as much.

So what do we do about that? Is there something that we can do? Like, how do we change our thinking? Do we have to just, like, get rid of the whole system of colonization that's been embedded over the last, you know, forever? How do we address this?

So, you know, some people argue that we are not over colonialism, and some people really object to the very word post-colonialism, because we are not past it. Can we import tech without colonialism? No, because of how a lot of tech is built: colonialism is not a bug, it's a feature. So we can't. But for me, at least, I see a lot of African entrepreneurs, a lot of local experts, working from the ground up. So for me, if people really want to help, if people want to do good and they want to be involved, the most straightforward thing to do is to support local experts, to support technology that is homegrown, that comes from the concerns of, you know, local problems. That's one way forward. And I know I said there was very little critical voice at the conference, but there still exist a lot of critical voices throughout the continent, and you see them organizing in various ways. Seeing that organization kind of gives me hope, and I see that as also a good way forward, if anything else, at least in bringing about awareness of, you know, the underlying implications and intentions of Western technology.

And as we move towards the latter part of this interview: something that Dylan and I do as part of this project, The Radical AI Podcast, is work to define the word radical as it exists in the field of AI. We are working towards a definition, but we're definitely very curious what you think that definition might be. So: what does the word radical mean to you? And then, of course, how do you situate yourself, your story, and your research in that definition?

As I mentioned earlier, my background is in embodied cog sci and systems thinking, and from that perspective, you cannot define something once and for all. If you define radical AI, or what radical means, now, it only serves for now, because what's radical moves and changes with time and context. I'm not saying we shouldn't define radical; I'm just saying it's a moving target. I don't really have a definition of radical AI myself, but I have a definition from a friend of mine. You might have heard of her: Ria Kalluri. She's doing her PhD at Stanford, and she is creating a community of radical thinkers. Their definition of radical is: radical work begins with a shared understanding that there is a root problem, that society distributes power unevenly. Growing from these roots, radical AI examines how AI rearranges power, and critically engages with the radical hope that our communities can dream up different human-AI systems that put power back in the hands of the people. So I think that's a good working definition. But again, what is radical varies. Some people take my work as radical. If you come from systems thinking, you might not find my work that radical, but it would be radical for the AI crowd.

So what's radical really is relational and contextual. But even given context and conditions, still, some people for me stand out as doing radical work. One, as I mentioned, is Ria Kalluri. Radical work, for her, is all about shifting power from the most powerful to the least powerful. This can happen, for example, by involving people who are disproportionately impacted and harmed by AI in decision making, by putting them in key decision-making positions. Then we shift the power; we give them more power to decide. And another one is Dan McQuillan. He's an experimental physicist by training, but the kind of work he does also, as far as I'm concerned, qualifies as radical, especially his recent work on anti-fascist AI. He describes the very idea of the AI project as something that aligns with radical right-wing thinking. He explains how AI has allowed for thoughtlessness: we stop being critical, we just adopt the next big thing. And he goes through all the characteristics of AI that align very well with, you know, right-wing thinking. And for him, in order to get us, you know, an ethical or social-good AI, we really have to dismantle that.

We have to come to terms with the fact that AI, as it is now, is something that is at odds with, you know, radical left-wing thinking, and for him, we work from there, by, you know, organizing, by resisting, and by devising various methods and alternatives. And I guess I should mention another one, which is Os Keyes; you might have heard of them. One piece of theirs especially comes to my mind, which is "Counting the Countless," which, again, questions the very fundamental idea of what data science is for. I think they explain at the start of the piece, which is in Real Life magazine, that they were asked to give a talk on how data science can be used for good, to help queer people and trans people. And what they arrive at, what they find through the piece, is that the very existence of data science is actually a fundamental problem, is something that harms trans people, queer people, because queerness and transness are contextual, and by, you know, pinning a person down in data points, you really force them into some sort of category. So data science, you know, may be doing more harm than good. That kind of work all qualifies, for me, I think, as radical, and it's inspirational; it gives me vision and hope.

Yeah. And of course, for listeners, we will include the names and links to those thinkers and some of those publications in the show notes for this episode. But as we look towards ending this interview, I'm wondering if you'd be willing to give us your own vision or hope, because there's a lot of folks that listen to this podcast who are just getting into this work and getting into these conversations. And if there's either one piece of advice or one particular vision you have of all of us coming together in this work, that would be wonderful.

I have just realized I have been super negative, so it's fair that I kind of outline some sort of hope and vision. Yes. As I said earlier, I have a paper coming out called "In Defense of Indeterminability," and it's all about how machine learning systems are forcing and coercing determinability, narrowing possibilities, and creating a future that resembles the past. Towards the end of that paper, I outline a few ways forward, you know, aspects that give hope. And I guess one of them, for me, is this: we can talk about policy, we can talk about, you know, debiasing datasets, we can talk about various tools to combat surveillance systems. That's all good. But one central thing, one vision for me, is a system where, you know, radical work, work that empowers the least powerful, is incentivized; where we create a discourse, where it becomes the norm, to do AI work that might give you very little profit or no profit at all. Because at the moment, AI is based on profit and efficiency. So, work that puts that objective of, you know, gaining as much profit and greater efficiency aside, and that strives to empower the least privileged and the most vulnerable. It's not that we are lacking that; there are various people working on that. But we don't have a system that incentivizes, that encourages, that. So my hope and my vision is for creating awareness, and creating a system that makes that sort of work cool and rewarded. I don't know if it's possible, but that's my hope.

Thank you for saying all that. Now that we've reached the end of our interview, for all of our listeners: if they would like to engage a little bit more with your work or with your online presence, is there a best place for them to go?

Yes, I can put in my email address, and I'm also very active on Twitter at @Abebab. I'm happy to interact and to discuss on Twitter as well.

Wonderful. Thank you so much again for coming on for this conversation. It's really been a pleasure. Thank you so much for inviting me.

We again want to thank Abeba for coming on the show today. Coming out of this interview, I am feeling pretty challenged overall, especially on this concept of ethics in general. I think one of the most important sets of questions that we started asking in the middle of the interview is whether ethics means anything anymore, or whether it's just a brand that we're putting out there. I think Abeba used the term vacuous when she talked about removing hashtag ethics from her Twitter bio. And for me, as someone who considers himself an ethicist, and for us hosting this podcast about AI ethics, that question of, you know, what ethics is, is just so important. And I never want us to get to a point where we completely lose track of what is core in questions of ethics. But in order to do that, we all need to know what we mean when we're using the term ethics in the first place. And if it just becomes a corporate slogan or something about compliance, right, like in the legal field, then we have really lost the heart of the project. So I'm feeling challenged to figure out, you know, how do we maintain standards and meaning and content in the bucket of ethics while we're having these conversations.

You know, Dylan, I completely agree with you. I actually felt challenged quite a bit by this interview as well.

First in just questioning what the word ethics actually means and then questioning what it means to be an ethicist and asking myself if I am an ethicist.

I know you said you label yourself as an ethicist, but I was remembering the first time that you and I did our very first welcome episode together, and you asked me if I considered myself to be an ethical person.

Well, that's a good question. And I didn't really know how to answer that question. And I think that as an ethicist, that is a really important question to be asking yourself, probably constantly.

And Abeba kind of brought that up in this interview. At one point, she was talking about the harm of universalism: if we create ethical standards for things or for technology, and we just assume that one way can be the best way for everyone, that's super harmful.

And I just find myself questioning, first of all, if standardization is even possible in ethics. And if it is, what would that standardization look like? And am I even someone who is able to make a call like that, someone who is just working on AI and ethics, who considers myself maybe an ethicist, if I don't even really know what it means to be an ethicist, and if I don't even know if I consider myself to be an ethical person or not? I don't know. I just have a lot of thoughts coming up around this topic, and it's definitely making me question quite a bit about the field of ethics as a whole.

A lot of really great existential questions, or ethics-existential questions, there. Some other things based on what you're talking about come to mind too, like when Abeba was talking about, you know, robot rights. I never really considered myself a robot rights activist, but as someone who studies, like, different types of intelligences and questions about, you know, general AI in the future and topics like that, I don't think I've ever really connected the topic of robot rights with capitalism and with the issues of capitalism, the way Abeba connected it with the rights of corporations.

Really, if we think about robots as products, robot rights become extensions of increased protection and legal support, essentially, for the corporations themselves.

And that's something that I have, you know, various thoughts about that I need to think more about before I share. But this question of whether robot rights might really just be, I think as Abeba said, a reincarnation of colonialism in a digital form is just such a pivotal question that really needs to be asked. And that's why it's so important that Abeba's interdisciplinary background is represented in these spaces, and why we have these conversations in the first place about social sciences and computer science coming together, because you need the historical background of colonization to really understand how even the questions of robot rights might be playing directly into historical systems of oppression.

Yeah, there are clearly a lot of topics that we still need a little bit of time to digest and to debrief together. But of course, we'll get much deeper into this conversation in our mini-sode in a few weeks. Until then, for more information on today's show, please visit the episode page at radicalai.org.

And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @radicalaipod. And as always, stay radical.

I always like to play on Zoom.

Do you consider yourself an ethicist? Are you ethical? Two very different bites. I consider... I don't know. We should talk more about the ethics soup. You wouldn't put ethics in your ethics soup? I would. Now, back to the carrots.
