Episode 17: Science Fiction, Science Fact, and AI Consciousness with Beth Singler



How can Science Fiction be used to get the public involved in the AI Ethics conversation? What are religious studies and how can they relate to AI? Why is it important to distinguish between Science Fiction and Science Fact when it comes to the future of AI? To answer these questions and more we welcome Dr. Beth Singler to the show. 

Dr. Beth Singler is a Junior Research Fellow in Artificial Intelligence at the University of Cambridge. Previously, Beth was the post-doctoral Research Associate on the “Human Identity in an age of Nearly-Human Machines” project at the Faraday Institute for Science and Religion. Through her research, Beth explores the social, ethical, philosophical, and religious implications of advances in Artificial Intelligence and robotics.

You can follow Dr. Beth Singler on Twitter @BVLSingler.

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at @radicalaipod.

Relevant Links from the Episode:

“Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse

Beth’s Personal Website

Beth Singler_mixdown.mp3 was automatically transcribed by Sonix. This transcript may contain errors.

Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence.

We are your hosts, Dylan and Jess. In this episode, we interview Dr. Beth Singler, a junior research fellow in artificial intelligence at the University of Cambridge. Previously, Beth was the post-doctoral research associate on the human identity in an Age of Nearly Human Machines project at the Faraday Institute for Science and Religion. Through her research, Beth explores the social, ethical, philosophical and religious implications of advances in artificial intelligence and robotics.

We cover a lot of ground in this interview, and some of the topics that we discuss are what's up with A.I. and consciousness? What is religious studies and how does it relate to A.I.? How can science fiction be used to reimagine speculative futures and get the public involved in the ethics conversation? And finally, why is it important to distinguish between science fiction and science fact when it comes to the future of A.I.?

I want to thank Beth so much for coming on the show. Beth is someone who is really leading the way at this intersection between philosophy, religious studies, and artificial intelligence. And for me, as someone who's in a religious studies program, it's just really cool to see other people out there who are paving the way for folks like me to, you know, even have a career in the future.

And so I'm just so grateful to Beth, not only for that trailblazing, but also for her mentorship as I continue to discern my own PhD research.

So without further ado, we are so happy and excited to share this interview with Dr. Beth Singler with all of you.

We're here on the line with Dr. Beth Singler. Beth, how are you doing today? I am melting. It's very, very hot in the UK at the moment. Sorry if I just fade away. That's the reason.

Well, thank you so much for joining us today. And I guess let's begin at the beginning.

So if you could tell us a little bit about your journey to what you're doing now and a little bit about what you're doing now and what motivates you.

Well, I was born on the south coast of... no, I won't go quite that far back. So, where did it begin?

Well, I am a research fellow at Cambridge, at one of the colleges. And that basically means I am paid and employed for a period of time to look at my area of interest, which with this fellowship is artificial intelligence. That follows on from my last postdoc position.

But it didn't really start out looking at A.I. So my background, in degree terms, is what would be called theology and religious studies, but I've always been more on the social anthropology of religion and new religious movements side of things. So I did all my degrees at Cambridge; they kept having me back. Seems silly, but they did. And I focused on contemporary new religious movements that have a strong presence online. So I was looking at digital identity, social formation, communities and groups. And then out of that, I got my first postdoc after my PhD, and that was the role that took me into looking at artificial intelligence. And I do see overlap in my work. I suppose there's a tendency, perhaps, if you've done religious studies, to think of everything through a religious studies lens. But I still think, looking at the artificial intelligence ecosystem, kind of the collective network of concepts and ideas around this very nebulous thing, this object, this entity, this field, there are similarities with some of the work I was doing previously on new religious movements and groups that focus on technological answers to their problems. So there's an overlap, I feel, between some of the New Age groups I was looking at before, and Jediism and Scientology that I'd looked at as well, and modern forms of transhumanism and artificial intelligence futurism and the kind of utopianism that we see in some of the discussions around A.I. So I'm still primarily anthropological in focus.

My current work has taken me a little bit into discussing the issues around ethics and society when it comes to A.I. But I don't generally claim to be an ethicist. I think that's too big a hat for me to be wearing. But I'm very interested in the different discussions that are going on and who is having them.

So I'm in a religious studies program currently, and it's taken me even a while to understand what religious studies is and what it isn't. You also mentioned new religious movements. I'm wondering, for folks at home who might not know what those things are, if you could just unpack them.

Yes. Religious studies is basically an approach to looking at religion using a broad range of methods and methodologies. So, as I say, I'm primarily anthropologically focused.

But you also get people who take more of a historical lens to the history of religions, or sociological lenses. You can find literature studies people. I think one of the wonderful things about religious studies is that it's very accepting of different approaches and methodologies. It's not the same thing as theology, although, as I said, my degree originally was theology and religious studies. Theology, from my experience here in the U.K., tends to be a little bit more confessional: people who hold beliefs want to explore primarily monotheistic interpretations of God, of the Abrahamic faiths. So I'm not coming from that kind of perspective. I'm looking at religion, as a religious studies scholar, as a social phenomenon, a cultural phenomenon, something that integrates communities and expresses the stories and ideas we have of the world. And I'm very interested in the diversity of those. And in my particular research, I kind of started to think more and more about how technology is a worldview as well. As for new religious movements, there's a lot of discussion, as with the discussion of what the definition of a religion is, about how new a new religious movement has to be. But what I relate to is primarily groups and new forms of religion that have emerged online since the late 90s, with the emergence of the Internet as a really public, popular force. Some people doing religious studies will go back to groups that were formed in the 1800s and 1900s, but for me, it's 20th and 21st century groups.

So Dylan is the resident religious studies scholar. I am the resident, quote unquote, technologist, though we still aren't exactly sure what that word means. So what I'm curious about is when A.I. came into the picture for you, and if there is a particular moment

you can remember where you realized that A.I. was a part of your work?

Well, personally, it probably came about as an interest for me through science fiction. So I've always been a geek, though it took me a long time to be a self-proclaimed geek and to say that in more general society.

I'm very old; when I grew up, it was a little less acceptable to say you liked things like Star Trek and Babylon 5 and all the various things I grew up watching.

So it did take me a while to kind of accept and take on board that personality and that persona. But yes, from the very earliest days of my television watching and my book reading, it was always science fiction and fantasy. And, you know, the A.I. element came through the science fiction. And yes, Lieutenant Commander Data was very influential in my original thinking about what we mean by A.I., what an artificial being is. In terms of my academic career, that was never really a strong through line. I did some work around Jediism, and I have some more bits coming out about real-world Jediism, and obviously that's science fiction slash fantasy with droids and robots in it. But it really wasn't until my first postdoc, my first position after my PhD, that I was hired onto a project that was already going to look at artificial intelligence. And they wanted, basically, a social scientist, someone who could do the ethnographic work, who was also a geek: great, me, someone who could look at artificial intelligence.

And I sort of quickly boned up on A.I. before the interview, to say, I haven't really looked at this before, but I've been reading X, Y, Z. And that's the position that really brought me into the field of A.I., and that's what I've continued to do with my current role.

Yeah, I really appreciate what you said about science fiction leading people towards geekdom, or whatever we want to call it. And I noticed actually on your website that you have some fiction stories that you've written before. So could you tell us a little bit about the role that you think science fiction can play in artificial intelligence and imagining potential futures?

Yeah, I mean, I have got a couple of not-brilliant short stories that I like to call doodles. You know, this isn't me launching into a career as a science fiction author, because I am by no means that good. But I do enjoy writing. My previous career, before I came back to academia, was as a screenwriter and a script developer. So that's something that I occasionally pick up and play with.

I think there is an interesting conversation to be had, definitely, around the role of science fiction and the boundaries between science fiction and science fact. So some of my more critical work, when it comes to the A.I. ecosystem, is about people who kind of push those boundaries a bit too far and are presenting things as science fact when actually they're more mythical, made up.

And I have a couple of examples which I'd probably get in trouble for mentioning, but I think it's important that the general public know when a presentation of A.I. is real and when it's science fiction. And I'm all for the multitudes of science fiction, obviously, being a Star Trek fan. But when I sit down in front of the television and press play, I know that is me interacting with a fictional story. If I see a news report that says this is the most sparkling, brand-new example of A.I. and it's going to change the world, and it's not what it claims to be, then I think that's terrible and disingenuous. So I have played around with science fiction in my own work, but I think there is a space for speculative futures.

But with disclaimers that these are speculative futures. It almost sounds like there's an ethical element as well, and I'm wondering where that line is. Where I'm coming from is as someone who did my Master of Divinity, which is a theological degree, and now I'm doing religious studies, which is moving more towards anthropology and sociology, the study of religion, as opposed to that confessional element that you were talking about. And I'm wondering, when it comes to something like Scientology, for example, which has been historically in trouble for some ethical decisions, or non-decisions, consent, things like that, where that line is and how we delineate it. And then if you can tie it into A.I., that would be great.

And so I wrote a paper recently, which unfortunately got rejected from the first journal I sent it to, but maybe it'll find a home. It was off of the presentation I gave at the American Academy of Religion conference last year about this wavering line, this kind of blurring of science fiction and science fact, and how some of our attempts at telling stories about A.I. become more about manifested aspiration.

And I tried to tie this into a longer history of movements like spiritualism, which, if you know anything about the history of spiritualism and its attempts to kind of manifest as a science, used the burgeoning technology of the time, in some cases photography, to make real the things that they really hoped were real.

And I see elements of that with Scientology. I don't flat-out refute the ontological reality of Scientology; as religious studies scholars, I don't think we should ever do that. What I get concerned about is, as you say, the ethical issues and the potential abuses, which run the gamut through all sorts of different religious groups. But I think, again, it kind of comes down to this almost red-flag disclaimer thing of knowing the sources, where they come from, having as much education as possible. And I think with groups like Scientology, some of them limit education for their members, so they dissuade them from looking online to find out more about where some of these ideas came from, what they're connected to. And as we know with L. Ron Hubbard, he started out in science fiction. So then again, there's that blurring, when his science fiction stories then become the prime texts for the group, and all the things about the thetans and so forth.

So there has to be a kind of general literacy around religion. And that's where I think religious studies can be so useful, because we do dig around. We have historians, we have anthropologists and sociologists. We dig around and find out where these ideas come from, how they're connected to each other, and where, in specific instances, the ideas have known fictional roots and then get employed in different ways.

And then when it comes to A.I., I think that's an area in particular where religious studies has not been specifically employed, but could be. Again, as I say, you see everything through a religious studies lens when that's your background. But I think, again, you see the same sorts of patterns. Everything is sparkling and brand new, but we're actually using the same sorts of language about A.I. that we have used historically about theistic concepts. And that's where a lot of my work tends to go at the moment.

While we're still on this topic of science fiction and public scholarship and spreading information to the world: one thing of yours that I've seen and really enjoy is your series of short documentary films, Rise of the Machines, about A.I. and the future implications of A.I. and robotics. And I would love it if you could share with our audience what motivated you to create those, what the response has been like, and what the entire experience was.

Yeah. I mean, I should make it really clear that those are a team effort. So, you know, they sometimes get called my films, but there were lots of people involved if you look through the credits. And originally it came out of my first postdoc. So the institute I was at at the time, through the project I was on, funded the first one in collaboration with the university. And then we got other funding involved. So absolutely, have a look and see who's been involved, the companies and so forth. It wasn't solely off my back, because, as I said, I'm not actually much of a technologist myself.

I didn't have the skills to go and produce films, but I could be there in the creation of the storyline of each film. So, as I said, they started with my previous postdoc and ran over into this current postdoc. And really, the aim at the very beginning, for me, was, again, this feeling that I was coming into A.I. as a relatively new field for me; I had that kind of general geek interest, but no real technical skills or knowledge of the debates and the discussions. So making the films, in part, was about educating myself as well. And then also noting quite how poor the general kind of awareness of A.I. was. I recount this story when trying to get funding: once, I'd been in a taxi, and the taxi driver, very conscientiously, said, how are you doing today? What are you up to? What do you do for a living? And I said, well, I work in A.I. And he said, oh, artificial insemination? That's very interesting. So there's this general populace understanding of A.I. either as, quite often, this hugely terrifying, dystopic thing, or this overhyped, mythical thing that's going to come and save us all; or, for some people in a kind of third camp, no real understanding of what the main issues are and why there's a big discussion at the moment. So we made these short-ish films; we tried to keep them under about fifteen minutes each. And that was always quite humorous to me when we got to the fourth one, which is about consciousness: right, we're going to deal with the whole of consciousness in about fifteen minutes. And we got together some really great expert voices, and that was, again, part of my learning process: there were people out there that you revere, and you think, can we actually get an interview with them and find out what they think, and how they can explain the subject both to me and to the audience? And I think they did pretty well. The films haven't, you know, blown up massively, virally.

But the people who've seen them have been very positive, and we won an award for the first one from the AHRC here in the UK, the Best Research Film of the Year award. I got to wear a sparkly dress and go to BAFTA and pick up an award. It was brilliant.

But the main thing, again, was just to think that there are so many ways we can connect with the public and get them involved in this discussion about A.I., because otherwise they'll remain kind of fed on a diet of these headlines about killer robots. And what they should be worried about is more the invisible killer robots: the systems that are behind the scenes and already starting to make decisions about their lives, with no awareness on their part that this is happening. So having publicly accessible, engaging films is one way into helping people into that discussion.

One of the things that I really admire about your scholarship and your work is that you're such a master storyteller.

And one of the things I appreciate about this conversation is that we're talking about the stories that get told about artificial intelligence.

And again, coming from the theistic lens, the religious studies lens, I'm curious about what stories you're seeing being told about artificial intelligence in those almost, I do want to say, godlike terms. I feel like sometimes artificial intelligence gets either elevated to God or denigrated to the devil. And I'm wondering if that's an accurate statement, and what you're seeing in those theistic modalities.

Yeah, definitely. I think we respond more to stories at those very binary ends of the spectrum, the hugely dystopian or the hugely utopian, and to the impactful power of things like the Terminator series: every time there's a story about A.I. in the press, in the UK certainly, in the press I'm looking at, they tend to illustrate it with pictures of the Terminator. It's very evocative imagery; it immediately tells the audience, the reader, what they should expect. Or they get the complete opposite, the utopianism of, you know, A.I. will solve all the problems, and it will end up basically being this superpowered, superintelligent super-being. It seems to be fitting more and more into the space that some people might argue has been left by the "death of God". I'm doing quotation marks there because I'm very skeptical, if you read any of my stuff, about the whole secularization of the West. I think this is a metanarrative; some really great people have written about this metanarrative, the death of religion, basically. And it gets very tied up with A.I. narratives more broadly: this idea that humanity as a whole is becoming more rational, A.I. is a part of this increasing rationality, and therefore religion, as an irrational thing, is going to disappear. But actually, with the stories we're telling about A.I., it seems increasingly that the enchantment remains, to use a Weberian term. We're not becoming disenchanted by any means. We try and compare ourselves to other nations and locations and say, well, they're much more superstitious and irrational.

We went through the Enlightenment, and we're serious thinkers now, and A.I. is a serious project. But actually, if you look at our ways of discussing A.I., they're still very enchanted. They're very liminal. I discuss in many places how A.I. is a liminal entity. It's somewhat like a ghost or a chimera, a creature that can fulfill so many of our different desires and aspirations. And we place so many expectations upon it that it's always this changing entity. So I think it's quite interesting, the way we have this variety of views, and some are more dominant than others. Like I say, the Terminator imagery does tend to get more clicks for newspapers, so they go with that sort of thing. But you see both of them.

And on the more utopian side, recently, I've been looking at the kind of theistic interpretations of the algorithm as being in control and blessing us, or, in some smaller cases, cursing us. So people finding that the content they're producing and uploading on YouTube does well on a particularly good day, getting lots of hits and likes, will say, oh, I've been blessed by the algorithm, using theistic, religiously colored language to explain what's happening, because what seems like a very obscure process is benefiting us, or in some cases not benefiting us.

We then seem to sort of fit into existing ways of talking; we pull on religious terms and language to explain what's going on. And some of that's tongue in cheek. But I've written previously, in the case of Jediism, about how things that can start out quite tongue in cheek can end up being quite serious and part of the more general populace's conscious conception of things like A.I.

Beth, I don't mean to put you on the spot, but I'm going to anyway. So you're telling stories about utopian futures and dystopian futures with A.I. Do you envision the future of A.I. as a utopia or a dystopia? Are you an A.I. optimist or a pessimist, and why?

I'm an annoying anthropologist, and we tend to remain neutral; we try to practice this sort of methodological agnosticism. So I'm neither a true believer nor a truly fearful, scared person. I think the truth is probably a lot closer to what William Gibson said when he said the future's already here.

It's just unequally distributed. The good aspects of A.I., the things that we could leverage into something like a utopia, are going to be here, but they're going to be here for certain people, and the bad aspects are going to be here for probably more people. That doesn't sound, on the whole, optimistic, but we still have the opportunity, and the choices to make, to ensure that as many people as possible receive the benefits. What I'm also very cautious about is a sort of teleological view, although everything I just said could be defined as a teleological view, since I'm saying this is going to happen. But the assumption that technological progress can only lead to artificial intelligence of a superior kind, and that this is the thing that will change our society and this is the only way to go, to my mind seems quite limiting. If you're familiar with the Foundation series by Asimov, there's this idea that this is the direction, this is the kind of escape. The future that we want, perhaps, is a completely different future that we could look for. But if we get too tied into this one technology and whatever benefits it could bring, we might not be able to see those; we might become blind. Now, I'm not good enough of a futurist to suggest what other technologies might become more prevalent. But I think in the last ten years it has become more and more of a mono-focus on artificial intelligence, without the scope to actually say, well, maybe we don't.

I appreciate your comments on teleology, because that's also kind of how we've gotten into these oppressive structures, right? If you look back at the philosophy of, like, Hegel or Kant: oh, it's coming. And I can use that to justify colonialism or slavery and all that.

And I'm curious about this concept of technology in general. How do you define technology in your work? And is the technology of, say, a wheel the same as what we're seeing in A.I., or are those different? And if so, why?

Yeah, so I would define technology quite broadly.

I like to make a distinction, perhaps, between small-t technology and big-T Technology, because I think there are different ways of responding to the future brought about by technology, and for some people it is more of a big-T thing. I absolutely accept the argument that we've always used technology, that there isn't actually a way to distinguish the human from the technological, and that actually the human is made with the tool. As Latour said, we have never been modern; I'd also argue we've never not been technological as well. But there is this more recent narrative of big-T Technology, this thing that we can look back on, probably to, like, the 20th century, and say computers got better and better and better. And that is what we think of as big-T Technology; that's the thing that is driving what's happening next. And I think there's a worrying teleology with that, certainly. And I don't know where that ultimately would take us; that's where my agnosticism comes in about what the future may bring. But I think it's interesting that we've kind of reified a certain aspect of technology. And to go back to A.I. and whether it's a distinct form of technology: what I think is interesting about it, and someone may disprove this for me, is that it's probably the first technology that we've ever tried to personify quite as much as we have.

The discussion isn't just about, oh, computers will make things better and easier. It's that the computers, the A.I., the machines, however you're going to frame it, will be something new and separate to us, almost like the emergence of an alien life form. And that's a narrative that you don't get with pairs of glasses on your face, even though they're a technology that changed humanity, or the printing press, or the spinning jenny, or any historical example of a disruptive technology. It's only really the Internet that gets that to a certain amount. Some of the early discussions about the Internet, from a more metaphysical direction, were about whether this was a new form of consciousness, whether we would see emergent properties from it; that's kind of more the fringe ideas. And I love the film Tron and its much-derided sequel: this idea of the emergence of life and new forms of being on a digital platform, fantastic sci-fi. And that's the distinction I think I make as well with this big-T form of technology, with A.I. as a part of it: it's actually taking us to conversations about personhood and being and intelligence, however construed. And that's not something you necessarily see with the water wheel, or fire. Fire is an interesting example, I suppose, that's slightly different, because there were theistic conceptions of fire; I just don't think there were as many conceptions of it as a person.

But when we talk about conceptions of robots as people, and personification, there's this keyword: consciousness. This is something that I've seen come up in your work, and the last film of the documentary series, Ghost in the Machine, talks all about A.I. consciousness. So could you tell us a bit about what it means for an A.I. to be conscious, at least given what we know now, and whether you think that's ever possible?

Yeah, well, going back to what I said about our ambition to do consciousness in fifteen minutes. I mean, that was, again, a part of the learning process: what is this big thing that we have a big conversation about? How can we parse that for the general public, and even for myself? This is not a subject I feel I have any kind of authority on to state what consciousness is.

But I wanted, as an anthropologist, to ask: what are the dominant conversations? And the dominant framing tends to be that there is some dualistic conception of mind and body. And we come up again and again with the expression that "in the West" we believe this, placed in contrast to "in the East"; there's a lot of quotation marks I do at this point. And that narrative is very interesting to me, this distinction between mind and body, and also the assumption that people in other parts of the world don't make that distinction in the same way. And again, going back to what I was saying about the kind of teleology of our Enlightenment assumptions about rationality: there's this assumption that if we see the mind and body in this way, we're actually more rational than people who see it in a more holistic direction, and that's characterized as being Eastern.

So having the conversation about consciousness in the fourth film, again, to me, was about playing out some of these discussions, engaging with some of the more serious academics who've spent their entire lives talking about consciousness, whereas I've come along going, right, I need to understand this. And I don't think at any point we felt comfortable saying there is a solution to this question; I don't think there could possibly be one either. I think the fact that we're having this conversation says a lot about our concerns, about where we think we want to draw boundaries between groups. And one of the things I wanted to bring out, both in the fourth film and more generally, is that these conversations are obviously not new. Historically, we white, privileged Western folks have gone to other places and asked: does that person have consciousness? Does that entity have consciousness? If we ever did encounter aliens, we'd have the same conversation again: it doesn't behave in the same way that we do, so does it have consciousness? It's these boundary workings. And women as well are the other primary example of the internal conversation: do women have the same consciousness as men? That led to so many discussions and fraught incidents and tensions throughout history, and we're still not resolved on that one.

We're still not resolved on the rights, and the moral rights, to behave in certain ways toward different groups. And this will keep happening again and again, and A.I. and robotics have kind of fallen into this discussion. For some people, that's a problematic thing. Any discussion of the consciousness of robots, for some academics, is a distraction away from the rights and the responsibilities of the corporations who are using A.I. and robots. So if you start to talk about the personhood of robots, you're not looking at the corporate structure behind it. And they see it as a fudge, basically — sorry — a way of distracting away from what we should expect corporations to do. And I think that's a fair comment. But also, the fact that we're having these conversations again about personhood should remind us that we've always had these conversations about personhood, and hopefully we've become more progressive in treating other humans better. In a speculative future, does that mean we should treat robots better? Again, I always remain agnostic.

And you've alluded to an answer to this question, but I'm curious if you can say more about — I guess, why does this all matter is the question.

Because I feel like sometimes — my experience has been that, especially in religious studies spaces, it can get very intellectual. Like, I can study Durkheim for a long time and then apply it to robots, and that's wonderful. But especially in this podcast, where we're looking at ethics and some of the justice applications, for folks who may be seeing this as a purely intellectual pursuit, I'm wondering if you can put a finer point on why these conversations, especially about consciousness and A.I., matter.

Yeah, I think that point that some academics make, about corporate responsibility and the focus on robot rights being a bit of a distraction, is key. I want to also link that back to what I was saying about science fiction and science fact: some of these people who are promoting particular robots — perhaps a female-looking robot who sometimes appears on chat shows — are giving a presentation of A.I. that's disingenuous, and it leads people into a conversation about personhood and rights that perhaps could be seen as a distraction from what's actually happening.

And the same thing, as I said, with the more dystopic interpretations of robots: if you focus too much on the Terminator stories, you don't notice the examples of A.I. being used in parole services, using basically digital physiognomy — I can never say this word. Physiognomy? No, I'll go with phrenology, it's slightly easier to say, but you know what I mean — looking at people's features, and at people from ethnic minority groups, and using existing biased data. Of course you know about this, but using biased data and making assumptions based on people's appearances — that story doesn't hit the mainstream public consciousness quite in the same way as the Terminator does. And I actually think what's quite interesting about the most recent Terminator film is that they tried to tackle some of those issues, if you've seen it — no spoilers. But I think the evolution of the Terminator films is something that the representation of Terminators in news reports about A.I. doesn't take on board; we're still just seeing the original kind of T-800 Arnie coming with the gun, whereas the most recent film had some things to say about the rights of minorities, about the role of women — trying not to do spoilers, in case people listening haven't seen it — but particularly about who gets to be the messiah, like, whose role is it? Terminator films all along the way have had quite messianic narratives — quite Judeo-Christian, or Christian specifically, narratives.

And that's something that I think has evolved over time. And it's important, therefore, to pay attention to how those changes and shifts are happening in the narratives that are going around at the moment.

I have seen the new Terminator film, and I will vouch for all the listeners to go watch it, because it is very good. It's the only one I saw, so I'm not like a Terminator aficionado, but I can agree that it's a great way to view A.I. in the modern day and age.

Oh, it's just a bit of fun. And I did make the mistake on Twitter recently of listing my favorite Terminator films in order.

And then, of course, everyone came back: "No, you're wrong, you're wrong. No, these are my favorites." Yeah, Dark Fate, the latest one, is up there for me.

Mm hmm. That's when you have to preface the tweet with "unpopular opinion," so they can't say anything to you. Yeah. Well, Beth, you are on the Radical A.I. podcast, and something that we like to ask all of our guests is kind of a two-part question.

So the first part is: how do you define the word radical? And how do you situate your work in that definition, if you do at all?

So I gave this a lot of thought — I had a bit of prep time on this one. And I think how I'd like to define radical is non-algorithmic thinking. So one of the worrying things about this teleology of the direction of technological progress, and the rush towards A.I., is that we're actually instituting algorithmic thinking systematically everywhere.

And this has been happening for a long time, before even A.I. was mooted — you know, we were trying to find ways to make businesses more efficient.

You know, systems theory — this isn't entirely new. But the more and more we instill machine learning algorithmic systems into our processes, the more and more we accept that algorithmic thinking is the only way forward. And the most basic form of the algorithm is data in and data out. And if the data is biased, if the data is historically wrong, that just keeps us in the same, like, furrow line — like in farming, you stay in the same furrow. I think a good example of this is the big conference — Neural... I will get this wrong... Neural Information Processing Systems, NeurIPS now, was NIPS. One of the problems with that acronym was that, yes, it reminds people of lady parts, of nipples, and it also had racist connotations. But when they surveyed their members and said, do you have a problem with this acronym, the majority — who were majority male, white, of a certain age — said, no, it's fine. This is what it's always been, and we're happy with this. This is tradition; this is how it's been. Whereas the minority, who were younger, female, from ethnic minority groups, said, no, we're not comfortable with this; can we change it? So if you rely purely on that kind of survey data of existing biases, the algorithmic thinking will lead you to the same solution. And they had to basically go against the survey to change the name, and now it's NeurIPS. So I think to be radical is to try and find a way to get out of the comfortable zone of similar thinking, of replicating historical data — of bad data in and bad data out. And the more that we use artificial intelligence without that kind of frame of critical thinking, just assuming that when the computer says yes or no, the computer's right, the more that we as humans, broadly construed, will continue existing path lines that don't help people, that don't recognize difference, and don't recognize the ability for change as well. What was the second part — how I see my work as radical in that?

Yeah — based on that definition, do you see your work linking into it?

Well, I think, on paper, I probably don't come across as hugely radical. The expectation of someone who has spent most of their academic career at Cambridge is probably not to be that radical and different. But there's also the nature of academia itself: you know, the process of citation is about looking to the past and building on previous research while also trying to push the boundaries. And I think that's where my religious studies side is useful, because it's unexpected for anyone to be discussing A.I. with anything like a religious studies lens. Increasingly, there are people I meet saying that's great, and I've been meeting people who are doing this — and also just to be an arts and humanities scholar in the A.I. ecosystem. There are some great research institutes with people doing history, sociology, anthropology, philosophy, and so forth.

But the majority of people doing the actual physical work on A.I. tend to be computer scientists or mathematicians of some strand. So having other voices coming in from the arts and humanities side is useful — how much they're listened to does vary. So I've given talks at places like Amazon and others, and, you know, it's interesting to have that interaction with people who perhaps haven't thought anthropologically before, and to try to encourage them to think about the human elements of the research they're doing. And obviously there are lots of ethicists out there who are trying to do the same sort of push — saying, you know, the ethical concerns that we have about artificial intelligence need to be in the discussion from the very beginning, and looking at who's in the room to have the conversations. So I suppose, you know, I recognize my Cambridge privilege, and the Cambridge stuffiness that suggests sometimes that we can't be radical — but also the history: Cambridge is full of radical people. My college, actually, was founded by nonconformist Christians. Most colleges in Cambridge have a Christian background, but they were, you know, Church of England; these were dissenters. And I think there's a political history in my college that suits me quite well, too. We have connections with various different political figures in the 18th and 19th centuries, and the 20th century. So I think it's a comfortable place to be a dissenter and to push against the assumptions, again, about what Cambridge is like as well.

So as we close the interview, we normally ask some kind of advice question, and I'm really taken by your career as a screenwriter, and then our conversation about the Terminator and the ways that people spin these narratives about artificial intelligence. I'm wondering if you have any piece of advice for someone who's writing a screenplay about something like the Terminator — basically telling a fictional, possibly speculative, story about artificial intelligence. What would your piece of advice be?

So, as I said, I recognize the value in science fiction that goes full bore into the utopian or dystopian. I mean that there is an audience for that, and I think it's enjoyable, and I will never stop going to see Terminator films or dystopic films. You know, if there's a robot war, I'm there, watching it, and I will enjoy it. And I think that's the equivalent, basically, of going on the fairground rides and being scared witless, but getting off and going, "Well, you know, I know that wasn't real, right?" So I'm all for those — long may they continue. I also like to see the stories that do something unexpected, that don't play into the obvious tropes as well. There have been a few lately — have you seen I Am Mother? That's quite interesting.

You know, it does some of the dystopic things, but it's a different form of relationship. And as I say, Terminator: Dark Fate did some interesting things I wasn't expecting. So I think there's certainly space for speculative futures that turn the audience's expectations on their head, that don't always show the terrible robot war — but there's also a space for that. I'm just saying: just keep writing. There are so many stories to be told; I'd love to read and see them all.

Well, Beth, if anyone wants to engage with your work and your stories and your writing, where's the best place for them to go?

I have my own website, BVLSingler.com, and I'm pretty Google-able. I am a Twitterholic — I'm often tweeting more than I should be; I should be writing more than I'm tweeting. Yeah, I'm annoyingly around. One of the things about being a digital ethnographer is that, to do your work, you need to be online, and that's always been my excuse, ever since doing my PhD: "I need to be on Facebook — how do I do my research otherwise?" But yeah, there's a fine line between researching and just tweeting everything that's in my mind. So yes, I'm quite easy to find online, and I'm happy to chat.

And we strongly recommend Beth's Twitter feed — it's very entertaining and informative, so we wholly endorse it.

I think it's changed recently. Like, I realize there's still a certain amount of A.I. and robot stuff, but also everything else that's happening — it's very hard not to be constantly ranting about my government at the moment. But yeah, I will try and include some useful, humorous things as well.

But thank you so much for joining us today. It's been a pleasure. Thank you. I really enjoyed chatting. Thank you.

We want to thank Dr. Beth Singler again for joining us today for this wonderful conversation. And one of the first things that comes to my mind after interviewing Beth is everything we were talking about in terms of multimedia and public outreach when it comes to A.I. ethics. Maybe I'm a bit biased, because Dylan and I are currently doing public outreach and multimedia for A.I. ethics in multiple realms, so clearly we are huge advocates for this kind of approach. But I really loved what Beth was talking about in terms of getting the public involved in the discussion on A.I., so that the general public is not just hearing headlines about killer robots and Terminator stories — so that everybody's getting informed about what she called the invisible killer robots, which is the algorithms, and really the heart of the matter here.

Yeah.

My jam in all of this is, like, narratives around A.I. and artificial intelligence — the utopian visions about A.I., and then the dystopian visions, and then the reality in between all of that.

And I just love Beth's scholarship around this, especially when she starts talking about the Terminator and all that stuff. Like, I remember watching The Terminator when I was — going to date myself here — like eight years old, I think.

Anyway, I was eight years old when I watched it — I don't know, I was coming out of Mexico — and I just remember being so terrified.

And that has had such a lasting impact on the way that I think about robotics, in both, like, good and negative ways — even as someone who's, like, in the inside-baseball area right now of being in the academy researching this stuff.

These narratives persist. And you know just how many people we've talked to who are like, "You know, I got into this field because I watched Star Trek as a kid," or "I got into this field because of this science fiction story." And I think it's really important for us to chronicle and codify some of these narratives that are out there, and to understand that the way we tell these stories has real consequences, even in fiction.

And then, of course, the other reason I love talking about this is because, as I mentioned in our intro, as a religious studies PhD, I just love the work that she's done with new religious movements and all these big questions of purpose and meaning, and where A.I. plays into these big questions of what it means to be human. Because, again, that's my thing.

And I'm like, oh, someone else is researching that. That's great.

And doing such an amazing, amazing job, too.

Well, I think what you brought up about science fiction and both sides of the coin is actually really important to talk about here, because we have the positives of science fiction in our ability to reimagine speculative futures for A.I. And I think that science fiction, more than any other form of media, is just such a beneficial tool that we can use to actually take ourselves out of the reality that we currently live in, so that we can think about a future that we maybe can't even fathom in the way that the world currently exists. And so science fiction can be a great tool for actually thinking about the unintended consequences of technology, especially when it comes to A.I. ethics. And this is something that even my advisor, Casey Fiesler, is doing quite a bit, with things like the Black Mirror Writers Room in the computer science classroom. And then on the other side of the coin, though, we also have science fiction being used almost as this, like, thought weapon to scare society about A.I. and the future of A.I., and that's where we get these Terminator stories. And a lot of organizations in the tech industry are using the platforms they have to create whatever narrative they want — it's almost science fiction in a way — without informing the public about what is really going on. That's when science fiction can actually be detrimental, as opposed to science fact, which is what Beth was talking about quite a bit in our interview, and which is much more beneficial, at least for democratic deliberation.

Yeah, it's funny that you say "thought weapon," because it's such a good science fiction term, right?

Like, I go immediately to, like, Nineteen Eighty-Four and, like, doublethink and things like that.

And all of that is... it's alive, right? It's all alive in this, like, cultural milieu of soup. But it's also why it's so important that these questions of representation are not just questions of, you know, how do we better do representation in the boardroom, or in these narrow ways. That's very important, right.

But there's also, like, the representation of how A.I. gets seen in popular media as well — even, like, which science fiction authors we share and know about.

Like, I know that my reading list, say, in high school, when we did our science fiction unit, was mostly white male authors.

And that's not the whole picture. There are plenty of Black and non-white authors of science fiction out there who have written some incredible things and are currently writing some incredible things, and they don't always get the same amount of airtime — probably because of the systemic racism that's embedded in our society. But it's something that I think we really need to be intentional about as people doing ethics: that even these bigger stories — not just the people that are coding, but these bigger stories that we're telling about what we're working on — representation matters, and who gets seen matters.

Yeah, that's so true. I mean, this is also just touching on the fact that storytelling and narrative-building is a tool for power — it's a way to express power on whatever platform you're telling stories on. And we've talked about this a lot with different guests on our show, especially with Lilly Irani and with Karen, about how storytelling can be incredibly powerful, and how building a narrative can build a movement, or dismantle a movement, or entirely define a field like A.I. — and what it means for the future, what we want A.I. to be in the future, and what we don't want it to be.

And I think one of the most important parts about that — and I think Beth's interview gets to the heart of this, too — is that we're always writing a narrative to some degree. Like, we're always telling a story, and that means there's always power involved. I know I've probably said this before at some point, but I really believe it.

Yes, still true. And I mean, we clearly love talking about power on this show: algorithms are power, data is power, storytelling is power, narratives are power, A.I. is power, time is power. And speaking of time, we are out of it. So for more information on today's show, please visit the episode page at radicalai.org.

And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher.

Join our conversation on Twitter at @radicalaipod. And as always, stay radical.

You included the soup metaphor again! That's kind of, like, my goal today — it keeps coming back. I include the soup metaphor as much as I can, because it's such a good one. Wow.

This is us, always.

You should write a paper on the meta-soup that we're all swimming in about ethics. The ethics meta-soup — I'd drink it.

Let's get.
