Episode 1: Welcome to Radical AI

In this first episode of the Radical AI podcast, hosts Dylan and Jess explore what Radical AI is, why they created this podcast, who it is for, and what listeners can expect from future episodes. Specifically, Dylan and Jess review their backgrounds and what Radical AI means to them. Please see the full transcript of the episode below, and welcome to the conversation!

This episode was automatically transcribed by Sonix. The transcript may contain errors.

Dylan Doyle-Burke:
Welcome to Radical AI, a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. I'm Dylan.

Jess Smith:
And I'm Jess. And as the first episode of our podcast, we figured, what better time than now to introduce you to what we are, who we are and why we're here?

Jess Smith:
So, Dylan, tell me, what are we?

Dylan Doyle-Burke:
We are a podcast. Is that right? OK, we're on. We're live. This podcast is a podcast about artificial intelligence ethics. And really the idea behind this podcast was that there are stories out there of radical people who are doing really cool things in artificial intelligence ethics that don't always get the limelight and don't always get to have their stories told. And so the idea of this podcast is to give those folks a platform. Great.

Jess Smith:
So now that we've introduced what we are, maybe it's a good time to segue into who we are.

Dylan Doyle-Burke:
Mm hmm. So Jess, who are you?

Jess Smith:
This is a question I ask myself every single day when I look in the mirror. It's a big question, huh? Man. Yeah. Well, I guess in the context of this podcast, I am what some people might call a technologist, though I honestly have no idea what that word means. So I'm not going to use that word. I would call myself an ethicist. Surprising, right? On this show that we have created together.

Jess Smith:
Now I'll talk a little bit about where I've come from. So my background is in software engineering. That's what I got my bachelor's in. I just say computer science. They're basically the same thing. Don't ask me the difference. I forgot it. And I started getting into the ethical aspects of technology a few years back. Decided to start my PhD researching AI and ethics. And by AI I mean machine learning. But we're going to dive more into what AI ethics really is later, so I'll leave that for later.

Jess Smith:
And I am currently getting my PhD at CU Boulder, and I'm researching just that: how algorithms and ethics interplay with and influence each other. And that's really what I need to be focused on right now.

Dylan Doyle-Burke:
Are you one of those people who has really been ethical your entire life? Like when you were a child, did you, like, dream of being an ethicist? Like when there's a cookie jar or something you, like, always told on the kid who stole from it?

Jess Smith:
This is a really hard question for me to answer honestly with a microphone. But, you know, I'm going to stick to authenticity and openness on this show. And I'm going to say no. I'd like to think that I am a quote unquote good person and that I try not to harm people. That might just be the consequentialist utilitarian person in me. No. Yeah. I'd like to think that I'm a moderately average ethical person in today's day and age, hopefully a little bit above average.

Dylan Doyle-Burke:
And was this your dream then to talk to other people about ethics or did you want to be like an astronaut or something?

Jess Smith:
Actually, I did want to be an astronomer when I was younger. That's true. NASA is taking astronaut applications, by the way, if you haven't seen that yet. No, I definitely wouldn't say that it was an aspiration. Maybe in the last few years it's become one, because that's where my career is heading. But I think the idea of thought leadership is really cool to me. I don't really like the idea of prescribing things to people. I don't think that I'm a very opinionated person. So I'm not one to really shove my opinions down people's throats. But I do love the idea of throwing out ideas and building awareness about what we do know, so that people can kind of think and question their own thoughts and opinions and come to their own conclusions. I think there's a lot of power in that.

Dylan Doyle-Burke:
And as we're seeing, I mean, there's more and more opportunity to ask those ethical questions out in industry, or in the academy, or in the research that we're doing, or anything like that. It's important.

Jess Smith:
Definitely. I think it's important to get people thinking and asking themselves not only how we do things in technology, but maybe why we're doing them in the first place. So enough about me. Dylan, let's talk about you. Who are you?

Dylan Doyle-Burke:
Oh, now I have to confront the existential crisis. So I actually come from... not a computer science background. I come from a religious studies background. So as I was graduating college, I was choosing between going to law school or going to divinity school to become a minister. That last week I had taken the LSAT, and I was choosing between these two programs, and I ended up choosing to go to Union Theological Seminary instead of law school. And Union Theological Seminary is a seminary attached to Columbia University with a huge focus on ethics and on racial justice. It's where black liberation theology started. And it really shaped the rest of my career for, at this point, the next decade.

Dylan Doyle-Burke:
So from there, I started looking at psychology and religion. And from there, I started researching online preachers and how technology plays into how religion is spreading in a new way, and the ethical considerations in that. I began working in the hospital as a chaplain, and I saw how technology could be used to, say, help intubated patients communicate with folks, and just the power of technology to do that. And then also some of the ethical considerations there. Eventually, I got ordained as a Unitarian Universalist minister, but then started to make a career shift to really looking at these ethical questions more specifically. Although I will say that my ministry plays into all the questions that I ask now, when I consult or when I talk to folks in the ethics world, because for me, at the core of all of this are questions of meaning and purpose and what it means to be human in the first place, and then what role technology and artificial intelligence play in that. So that's a little bit of where I come into this conversation.

Jess Smith:
Tiny piece of the pie.

Dylan Doyle-Burke:
I should probably say that I'm also a PhD student at the University of Denver, and I'm studying human-computer interaction, and specifically what it might mean to have a theory of mind for robots, or for artificial intelligence, or for machine learning. There are so many terms, it's sometimes hard to keep them straight.

Jess Smith:
We'll break it down eventually. It's going to take some time. And if you all didn't already notice, something that's a little bit unique and nice about both Dylan and me being on this podcast together is that we have very different backgrounds and areas of expertise, and because of that we have a pretty different approach to how we're going to tackle some of the issues that we'll be going over. So I'm going to have a little bit more of an engineer, computer scientist mind, and I'm trying to learn from Dylan's philosopher and ethicist mind.

Dylan Doyle-Burke:
I come from a moral philosophy background and know next to nothing about computer science. Although I am learning very quickly. So I know a lot about ethics, but in terms of actual algorithms, you know, about 50 percent of the words that Jess uses I just don't even know. So I have to go look them up and create a glossary. And it's great. It's great for my learning as well. But we hope that that energy and that learning that we're doing from each other will also help, you know, you learn and grow, and the questions that you're asking out there about what radical AI can look like. Which really does bring us to this question of radical AI. I feel like the word radical sometimes gets a bad connotation in the world.

Jess Smith:
Radical.

Dylan Doyle-Burke:
Right. Well, I mean, really, at the end of the day, what is radical?

Jess Smith:
I'm not going to lie. I was thinking about this earlier. I was having a really hard time coming up with something. And I feel like radical is something that is defined based on the person. I think it's kind of unique because radical in itself is not really saying like, what does it mean for something to be radical? But it's more like, what does it mean for something to be radical to you? So, Dylan, I'm curious, what does it mean to you for something to be radical?

Dylan Doyle-Burke:
So this is like a choose your own adventure of radicality.

Jess Smith:
Yes.

Dylan Doyle-Burke:
I'm not sure if that's a word either. Maybe I should add it to my glossary. So I'll say a little bit about what I consider radical and what this podcast considers radical. When we're talking about radical people, we're talking about folks that are underrepresented, who might have a controversial background or thoughts in some way. So people that are not necessarily the status quo. And if we lift them up, it might not necessarily be to endorse their opinions, although we hope to bring folks who are bringing good things to the table. But sometimes ideas that push the envelope, or that people are responding to, are important for all of us to be in dialogue with. When we talk about radical ideas, we're really talking about ideas of fairness. So fairness, accountability, transparency, justice, talking about, you know, who's in the room with artificial intelligence ethics, about identity and identity politics within AI development, not only in terms of engineering but in terms of algorithm development, and all that. And when we're talking about radical stories, this might be the most subjective, but we're talking about events or actions taken that were radical in the spaces of AI ethics.

Dylan Doyle-Burke:
So you might think of, say, someone who was an ethics whistleblower. That might be a radical statement that they made through their actions. And we can talk about the ethics of even that process of whistleblowing. So that's what we consider radical. What I personally consider radical is like a Venn diagram with that. But for me, I think what's radical is what makes me think in a completely different way. So for me, as a straight white man coming into engineering and the ethics conversation in AI, what really makes me think is someone who forces me to take accountability in a different way, or who shifts some things that I take for granted in a way that makes me really have to readdress myself and my identity in that conversation.

Jess Smith:
Yeah, I really like that. That has something to do a little bit with how I was thinking of radical before too, because I think that radical tends to be something that people have a visceral negative reaction to, or they at least tend to slightly push it away, when it comes to people, ideas or stories. Really, radical kind of just means out of the norm. And that's one of the reasons why I like this idea so much, because to me personally, radical means authenticity. And hear me out on this one. It's a little bit vague, or not what you would really consider radical to be.

Jess Smith:
But I think in today's day and age, everybody is so online and has created so much of a brand for themselves that it's really hard to come by how people are really feeling about things, especially when it comes to social justice issues and ethics and values, and really digging into the heart of who we are and what it means to be human and what our likes and dislikes and goods and bads are in society. And so I think it is, in a sense, kind of radical to be authentic and open and vulnerable when it comes to a lot of the issues that we're going to talk about on this show. And I think that is probably going to be one of the most powerful things that we're going to try to uncover as we interview people in this space, and also as we talk to each other and kind of figure out for ourselves what this means. Because I think, in a sense, we kind of also don't exactly know what radical, or radical AI, is quite yet. And that's part of this journey as well in making this podcast, right? To sort of figure out what that is.

Dylan Doyle-Burke:
Yeah, I was just about to say that, Jess. Great minds think alike, in that I think we can sometimes, again, take for granted certain elements of what's true, or even what's radical. And I think it's clear, from the twenty-five definitions each that we've given of the word radical, that it's still in process, even for us as the experts of radical AI. So I'm going to trademark us as the experts of radical AI. But I think it's important. The project of this podcast is to help define what radical AI is, and in that way also challenge, you know, the industry and the academy and the powers that be to really push themselves to look at AI ethics in maybe a slightly different way, or through a different set of lenses, through some of these stories.

Jess Smith:
Yeah. And I also like what you said before about how there might be times when some of these ideas are so radical that we probably won't endorse them. And I think that's important to talk about, because there are going to be a lot of things and people that come up on this podcast that might make you squirm a little bit in your seat. They might make us do the same. And I think that's when we know that we're doing our job, because we really want to bring up thoughts and opinions and ideas and paradigms that we honestly don't fully understand, that we can question in ourselves and in the society around us. And sometimes we might really disagree with the people we're interviewing. You might disagree with us. We might disagree with each other. And that's okay. I think that's also part of this process: to bring things out into the open so that we can all digest them, and so that you can take a step back and let them simmer and really think for yourself about what a lot of this means to you. And by you, this is something else that we are really passionate about when it comes to Radical AI: everyone should be included in this conversation. So if you as a listener are wondering, "Who is this for? Is it for me?", which is a great question to be asking, the answer really is everyone. Anyone. Something that Dylan and I noticed when it comes to podcasts about technology, and especially about artificial intelligence, is that there are a lot of people being left out of the conversation, which is really a shame, because we are all impacted by and influenced by artificial intelligence every single day. So we should all be a part of the conversation. Whether you're a coder, whether you're an ethicist, whether you're an academic, whether you're in industry, whether you're a high schooler or whether you're retired. No matter what your age or profession or interests, this is going to impact us, and already is impacting us, every single day. And we should all be included in this discussion.

Dylan Doyle-Burke:
And one thing that I often hear in talking about ethics is the question of why it matters. So, for example, someone in industry, right? Why do ethics matter? What are ethics? What is an ethic, perhaps, would be the question they would ask. And I actually think that's a really important question for everything that you just said. Right. So AI in general is impacting us on a daily basis, whether we know it or we don't. You know, whether it's my wearable tech in my watch, or whether it's my phone, or how I'm getting screened at the airport. Artificial intelligence really is everywhere at this point. And because of that, there needs to be a movement towards intentionality in how it's designed and how it's utilized. And the question of intentionality, for me, is the question of ethics. Right? It's a question of how we are living our morality out in the world. So that's my definition of ethics and why it matters. I am curious, from your perspective, coming from a different background, if you have a sense of why ethics matter.

Jess Smith:
Yeah, I think when it comes to the realm of technology... I can't speak to ethics as a whole, because I'm very, very new in this space. But I will say, when it comes to technology and ethics, I think for me it's really about that awareness piece. It's about understanding what the different values are that are encoded into our systems. What are the values that are being forced upon us by the systems that we use every single day? And why is that? So it's awareness, understanding. And then once the understanding is there, we can dig into what is causing this. What is the impact of it? Why did designers of technology do this? Why are we so willing to accept it? And something else that's kind of interesting to me is the idea of ethics throughout time and space. So as we change as a society and as humans over time, not even just in our own lives but throughout generations of people, our ethical systems and our moral systems change, as you know, as an ethicist. And not only that, but it changes and is very, very different between people, between cultures, between geographical locations. And a lot of the people who are making the big decisions about the technology that everyone is using kind of come from the same place, in the same area, of the same bay, in the same state.

Dylan Doyle-Burke:
Which all will go unnamed, all unspoken. Perhaps a bay area, you might call it, some might say. But even that is changing, slowly. But it is changing: the international scope of where some of this technology is being built and circulated and then used. And that's something that I definitely hope we get to on our podcast, that intercultural dialogue around ethics. So I'm wondering if it would be helpful for us to say a few things or give a few examples of what we consider issues of ethical AI. Like, are there some examples that we would give that listeners might hear about in the future? Because when we say ethical AI in particular, we're not just talking about ethics, right? We're talking about artificial intelligence specifically.

Jess Smith:
Let's see, the first few things that come to my mind: algorithmic bias. You've probably seen that in examples like risk assessment tools in criminal justice. In that same world of crime, I can also think of things like predictive policing that raise a lot of ethical concerns, like surveillance technologies. Think about facial recognition, facial classification. You want to jump in here? What comes to your mind?

Dylan Doyle-Burke:
But yeah, I mean, anything from surveillance to accountability. Like, in my context, when I was working in the hospital, there were certain, you know, artificial intelligence technologies that were being used in surgery where, you know, a millimeter of mistake could be fatal. And so if something goes wrong, who is accountable at the end of the day? And tied along with that is this idea that artificial intelligence is still a black box in a lot of ways. So it's not intelligible. We can't necessarily look at the algorithm and see how it made the decisions that it made. So if the algorithm is acting on its own, if you will, and we can talk about that language at some point, then who is actually at fault legally, but also morally? Who is morally responsible for that?

Jess Smith:
Yeah. And that's a question that comes up a lot. I think the go-to example for that is self-driving cars and the future of autonomous vehicles, the future of autonomous machines. And another example that came to my mind in this world of AI and ethics is notions of fairness. So how are our algorithms treating different groups of people? It could be people who exist in protected classes, like sex or race, or, say, socioeconomic class. Right. And how are these algorithms treating different groups of people differently? And is that okay? And another thing that also tends to come up a lot in my research is this notion of bias. There are a lot of biases that exist in society, and there are a lot of ways in which we teach these machines to learn those biases from us. And so a lot of where this ethical conversation comes into play is not necessarily fixing the technologies, but taking that mirror to ourselves, taking a good, hard look at society, and asking: where do these biases come from? Where does this unfairness and social injustice come from, and why? And how are we putting it into these systems?

Dylan Doyle-Burke:
Also, in what you said, I think there's a concept of accessibility and who has access to these technologies, both in terms of access to abuse these technologies and also access in general. Say, who can use Google Maps in the first place? Well, it's people who have access to a computer or to a phone. And so even those basic questions of who has access to these technologies, I think, are pretty key. And obviously we can go back and forth on these topics. There is a lot here.

Jess Smith:
There's a whole podcast's worth. A whole podcast series, if you will.

Dylan Doyle-Burke:
We're not going to give away all our trade secrets on the first episode, but we'll give you a little taste, a little teaser of what we might be talking about in future episodes. So going forward, what you can expect from this podcast in terms of format is that it's going to be a little bit of Dylan and Jess, but much more focused on our guests.

Jess Smith:
And our guests are going to be from all walks of life. They might be from academia, industry or anywhere in between. And they are going to be people who have radical ideas, radical stories, or maybe are just radical in themselves.

Dylan Doyle-Burke:
And we hope that, as we go forward, at least our vision and our dream for this podcast and this platform is that it can really be a conversation for all things radical out in the AI ethics world. As much as we love to hear ourselves talk, we want you to be part of that conversation as well. And we hope to have opportunities for that going forward.

Jess Smith:
And we just want to thank you all for your support. And we're excited to have you along for the journey.

Dylan Doyle-Burke:
Stay radical.

Jess Smith:
We should keep that.

