Indigenous AI 101 with Jason Edward Lewis



What is Indigenous AI and how might it drive our technology design and implementation?

To answer this question and more in this episode we interview Jason Edward Lewis about Indigenous AI Protocols and a paper he co-authored entitled “Position Paper on Indigenous Protocol and Artificial Intelligence.”

Jason Edward Lewis is a Hawaiian and Samoan digital media theorist, poet, and software designer. Jason also founded Obx Laboratory for Experimental Media and is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as a Professor of Computation Arts at Concordia University, Montreal. Jason directs the Initiative for Indigenous Futures, and co-directs the Indigenous Futures Research Centre, the Indigenous Protocol and AI Workshops, the Aboriginal Territories in Cyberspace research network, and the Skins Workshops on Aboriginal Storytelling and Video Game Design.

Follow Jason on Twitter @jaspernotwell

If you enjoyed this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

Indigenious_AI.mp3: Audio automatically transcribed by Sonix. This transcript may contain errors.

Speaker1:
Welcome to Radical A.I., a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess, and

Speaker2:
Welcome back to our regular episodes. So this is technically season three. Is that right, Jess? Season three of the podcast.

Speaker1:
Technically, though, there isn't really any clear demarcation between what makes each season a season. But we are

Speaker2:
On the third one. Time doesn't exist, but we do, and we are back for this arbitrary third season of the show. And basically, what that means is that we'll be coming out with monthly episodes going forward, monthly regular episodes. We will, of course, still have some surprising and secret bonus episodes and our regular measurements series that is sponsored by IEEE, but welcome back to us after a summer off and welcome back to you all, and we are so excited for the lineup of guests we have this season and really we're just excited to be back.

Speaker1:
And speaking of all the guests that we have lined up for this season, in this episode we interview Jason Edward Lewis to teach us all about the 101 of Indigenous A.I. We actually originally crossed paths with Jason because we attended the Resistance AI workshop at NeurIPS, which is a machine learning conference, back at the end of 2020. And while we were there, we heard Jason speak about a paper that he had recently co-authored. Of course, when I say we were there, we were virtually there. We weren't actually there because this was in 2020. And when Jason introduced his paper, it was called the "Position Paper on Indigenous Protocol and Artificial Intelligence." Be sure to check that out in our show notes, by the way, because we have that and many more links as usual for all of you to digest. But when we heard Jason talking about this paper, he described it as a starting place for those who want to design and create AI from an ethical position that centers Indigenous concerns. And after we heard him speak about the awesome research that went into this paper, we knew that we wanted to invite him on our show.

Speaker2:
Jason Edward Lewis is a Hawaiian and Samoan digital media theorist, poet and software designer. He founded Obx Laboratory for Experimental Media and is the University Research Chair in Computational Media and the Indigenous Future Imaginary, as well as a Professor of Computation Arts at Concordia University, Montreal. Jason directs the Initiative for Indigenous Futures and co-directs the Indigenous Futures Research Centre, the Indigenous Protocol and AI Workshops, the Aboriginal Territories in Cyberspace research network, and the Skins Workshops on Aboriginal Storytelling and Video Game Design.

Speaker1:
So once again, welcome back to all of you, our amazing RAI community, to this third season. We are so excited to kick it off with this episode with Jason Edward Lewis and to share this awesome conversation with all of you. We are on the line today with Jason Edward Lewis. Jason, welcome to the show.

Speaker3:
Thank you. I'm very happy to be here.

Speaker1:
And we're happy to have you. Today we are talking about Indigenous protocols in A.I. And let's just begin with Indigenous protocols. What is an Indigenous protocol? What is a protocol, and what does it have to do with AI?

Speaker3:
So Indigenous protocols is a very kind of blanket term that talks about the different kinds of rituals and ceremonies and sort of methods for conducting oneself that you find in many Indigenous cultures. So, you know, at the top, I just want to say that the term Indigenous is a very abstract term, and it's useful because it helps us talk about a number of things that are common to a number of different Indigenous cultures. But we have to be very conscious of the fact that each one of those cultures actually is different. And so as we go through the conversation, I'll probably kind of move back and forth between this more abstract Indigenous layer and then talking about specific communities with which I'm familiar to some extent. So protocol is really kind of the ways in which our cultures conduct ourselves, so that we are conducting ourselves in the right way or the good way. And part of the way that that is often defined is: how is what you're doing benefiting your circle of relationships? So in that circle of relationships is certainly you and your family and your clan and then your larger community, but it also includes the animals and the plants and the mountains and the rivers, all the things in the natural environment that we are dependent upon in order to thrive.

Speaker3:
So the way that I think about it (this is not how it's talked about within the culture, but the way that I think about it) is that it's fractal, in the sense that, you know, there are kind of protocols that happen at the level of just everyday interactions between you and the people around you and you and the other entities around you. But then there are protocols that happen in sort of larger settings, like when you're with your family or with your clan, and then even larger settings with your whole community, or say something like the Haudenosaunee Confederacy, when you're in kind of a large grouping of different Indigenous nations together. You know, there are protocols that sort of dictate, and I think actually dictate is the wrong word, so there are protocols that guide you in how you engage with one another. And again, those protocols in general are sort of ways in which you can get into alignment with the other entities in your environment. That's how I think about protocols. And then, of course, you know, protocol is a term that's used in computer science and engineering and in science, all of which have some flavor of that sense of: OK, this is the way that you should do things, because it's been proven to be dependable in an engineering sense, or safe in a scientific sense, or because we need everybody to use the same protocol so that we can then compare

Speaker3:
What everybody's done, because if everybody does it with different protocols, then it makes it really hard to compare things. So obviously, you know, those things are kind of divergent from that Indigenous sense. But I do think at the core there is this question of: how do we engage with each other in a way that's mutually beneficial? And it allows us to think about computer protocols in terms of, OK, how might we shape the ways in which our machines engage with each other, and the ways that we engage with them, so that they're actually more aligned with the idea that you should be enacting protocols that are beneficial to all the entities in your environment?

Speaker2:
Jason, we first heard about your work with Indigenous AI protocols through Indigenous AI, and I was wondering if you could talk about Indigenous AI and then possibly bridge into this position paper that you all have written about Indigenous AI protocols.

Speaker3:
So the funny thing is that I don't often use the term Indigenous AI when I speak, though I was just looking at the website we made and it says "Indigenous AI." And you know, when we were putting together the workshops that the Indigenous Protocol and AI position paper was a consequence of, we spent some time thinking about the name. And originally I was talking about Indigenous protocols, and Angie Abdilla, who was one of the co-organizers of the workshops with me (she's a Trawlwoolway woman from Australia), has been talking since 2014 about the need to look to Indigenous protocols in terms of thinking about what they can teach us about pattern recognition for robotics, for instance. So we wanted to emphasize the right way of doing things, right? We could have called it Indigenous epistemology and AI, right? Or, you know, Indigenous cosmology. There are a bunch of different ways we could have gone about that, or we could have just called them the Indigenous AI workshops. But there was this really conscious effort to be like, OK, what we understand from our cultures is this real emphasis on doing things in the right way. You know, certainly there's a centrality of: how do we create knowledge and recognize things in the world as subjects of knowledge discovery, and pass on that knowledge? You know, we think about what it means to be Kanaka, what it means to be Mohawk, and we think about where we are in the cosmos.

Speaker3:
So we, you know, all those terms might have been used. But because of this, I think resonance with the use of protocol in science and engineering. We wanted to highlight the idea that, you know, that these protocols are are are our rules for living, right? They're not arbitrary. They're not they're not ritual in the Christian and the Christian sense. Right? Where. You know, to some extent, it's become so ritualized that I think often people don't think very hard about what it's actually trying to tell you about how to live. It's just this thing you got to do. And and also, I mean, there's just such a long history of using that kind of language to dismiss the knowledge that indigenous people have and to place that kind of knowledge in the past. Right. Back in the time when even our culture was benighted by theological thinking, determining the way that we understood the world. And so it's trying to like pull out of that context and really talk about how these these are rules for living now. So they may have been developed a really long time ago, but they've been evolving and they've been responding to the environment and changes in the world that we live in and that we should that we can draw on those when we're. Trying to engage with all sorts of things, including technology, and that's been my work of the last 20 years. And then specifically when it comes to artificial intelligence, because there's this because there's just this rhetoric and it's not just rhetoric.

Speaker3:
I mean, it's embedded in legislation. It's everywhere, right? It sort of assumes that Indigenous people are not contemporary, right? That to act like an Indigenous person is to act like somebody from the past. And so this is something I think that Indigenous people working with technology are always fighting against. It's getting better. It's not as bad as it was 20 years ago when I started this work, but I think it's still there, right? Where people are sort of surprised when we're like, oh, well, no, you know, we know how to build and run a fish pond, right? We know how to take care of the ecology in this forest so we don't end up with crazy forest fires every twenty-five years. Like, we know all kinds of things, you know. And there's just a very long history of relegating that knowledge to subsidiary status. And so part of the work that I feel always needs to be done is like: look, no, what we're talking about is not something that helped us live well 100 years ago or 500 years ago. It's something that helps us live well now, right? And by the way, it might help you live well too, right? I don't know, maybe it's not a good match for you, you know, or for your community. But we think it's actually a pretty good set of guidelines, and you might at least want to take a look at it.

Speaker1:
Why don't we take a step back for a moment? Because Jason, you mentioned you started this work 20 years ago. What got you started in this work? Like what motivates you to do the work that you do in this space? And how did this all begin?

Speaker3:
Where to begin on that. So I count myself very lucky. As a youngster, I got a great education at Stanford, where I studied both philosophy and computer science. And one of my undergraduate advisors actually was Terry Winograd. And, you know, he was at that time really thinking hard about, and writing about, where the kind of prevailing models of intelligence were just kind of falling down and not doing the job, and trying to think about why that is. And so he was looking towards philosophy and cognitive science and linguistics and things like that to just get a better, more holistic view of what is going on in our brains. And then also, really important, how cognition is embodied, right? So moving away from this abstraction where, when we're trying to think about intelligence, basically we're letting the tail wag the dog, right? We were letting what we had developed in terms of computation sort of lead us to make all kinds of what turn out to be bad assumptions about how we as embodied creatures actually function in the world. And then he facilitated me going to work for a place called the Institute for Research on Learning, which was this interdisciplinary research lab in Silicon Valley; it basically came out of some people from Xerox PARC. And I was exposed there to an even wider range of disciplines.

Speaker3:
So physicists and linguists and sociologists and artists, just a crazy array of people. And I was professionalized very young to do two things: one, to, as much as possible, not get caught in disciplinary blinders, to really try to think across disciplines in order to get a better grasp of how we are as humans in the world with technology, because that was the point. And then the second thing was really thinking about technology from a critical cultural lens, right? So, you know, I was surrounded by people who just refused to think about technology as an abstract kind of configuration of affordances, for instance, right? Very committed to: OK, these things are always being made by somebody, used by somebody, used in a particular context, right? Who does it serve? Who has the power? Who's being subjugated? Who's profiting, right? And so that took me through; I spent about 10 or 11 years in Silicon Valley working in that research lab and then another research lab. And then, to compress everything down: I met my now wife, Skawennati, who is an artist. She's a Mohawk from just outside of Montreal here, from Kahnawake, and she started a project in the mid-nineties called CyberPowwow, and that's where she had the vision. She had gone and done a workshop at a studio here in Montreal where she learned how to do HTML, right?

Speaker3:
And so this is, I think, ninety-four or ninety-five. So the web is just happening at this point, and not many artists were jumping on it; very few Indigenous people were jumping on it. She went to that workshop and she sort of fell in love with this idea of being able to present artwork online, and so being able to connect to other Indigenous artists in other places in Canada and across the world. So she started this project called CyberPowwow, where that's exactly what she did. It was an online exhibition using a technology called The Palace from way back then, which was one of the first kind of visual shared online spaces. And so when I met her in nineteen ninety-nine, she'd done two series of these and was getting ready for the third one. And we just started up this conversation about, so what does it mean to be Indigenous in cyberspace? What does it mean to be working in Indigenous arts, which even more so back then really had this heavy sense of: if you weren't working with buckskin and beads, well, then you weren't doing Indigenous art, right? You know, what does it mean to take this technology that was developed by a military-industrial-academic complex that, either in and of itself or at the behest of governments, has a long history of using technologies to kill Indigenous people and to eradicate our communities?

Speaker3:
What does it mean to take technology that comes directly out of that intellectual genealogy and try to use it so that we can say the things that we want to say about ourselves? So, yeah. We've done a bunch of work under the umbrella of something called Aboriginal Territories in Cyberspace since about 2004, you know, looking at that question from lots of different angles, building capacity in the Indigenous community so that more people can learn how to use these digital tools and computational tools, so they too can bend them in the directions that they want in order to represent themselves in the way that they want. And then I started thinking more, I guess, at the systems level about what we're doing, about how technology was being deployed. Right. I was also lucky because, you know, I was working in Silicon Valley in the 90s; I worked in the period where I think it was still possible to be genuinely optimistic about the technology: that it was going to increase human freedom, that it was going to allow humanity's storehouse of knowledge to be shared freely, you know, all this.

Speaker3:
All this kind of rhetoric that came out of things like, and laid the foundation for, the Electronic Frontier Foundation. And some of the people I worked with, my mentors, were part of that dreaming generation of amazing technologists, but also, you know, utopian hippies, who were seeing how they could bring these two things together and help transform the world, in a way that went uncompleted in the aftermath of the sixties. So, you know, but 20 years later, 15 years later, it was like, ah, you know, capitalism. Capitalism has lots and lots of, well, inertia is too impassive a word, right? But capitalism just eats everything, as we know, and I was thinking about how capitalism ate the internet, too. And this whole idea of it being used as a tool of freedom was becoming increasingly complicated. Not evacuated, right? But much more complicated than in the early nineties. So I was doing some writing, and I was also doing writing around this idea of the future imaginary: what kind of work do we need to do so that we can actually imagine different lives for ourselves as Indigenous people, as Indigenous communities? And so all that kind of came together in a series of papers that I started writing that really asked: at what levels do we have to intervene in order to make this technology hospitable to our ways of living? You know, and when you do something like that, you end up going deeper and deeper down the stack, right? You sort of figure out what you might be able to do at the level of creating web pages and training people to represent themselves on the web.

Speaker3:
And then you're like, OK, but there's a bunch of constraints that that happens within. OK, so what's setting those constraints? So you go down a level, right, and you start looking at, for instance, the protocol layers, right? And you're like, OK, well, so how do the protocol layers get defined? Well, there's basically a bunch of historical contingencies that have become ossified into fact. All right. You know, you find out the reason why SMS messages are restricted to 160 characters is because of some engineer in Germany, and I can't remember the story now, but there's just some totally arbitrary reason that came down to that one person who was in charge of some aspect of this being like, OK, this is how many characters it's going to be.
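The SMS story Jason is reaching for is usually told about the GSM standard, where an engineer's choice of a 140-octet message payload, packed with a 7-bit character alphabet, yields the familiar 160-character cap. The arithmetic of that contingency can be sketched in a few lines of plain Python (the function name is mine, not from any SMS library):

```python
# The GSM SMS payload is fixed at 140 octets (bytes). How many
# characters fit depends entirely on the encoding layered on top:
# the GSM 03.38 7-bit default alphabet packs 8 characters into
# every 7 bytes, which is where the 160-character limit comes from.

PAYLOAD_OCTETS = 140

def max_chars(bits_per_char: int) -> int:
    """Characters that fit in the fixed 140-octet SMS payload."""
    return (PAYLOAD_OCTETS * 8) // bits_per_char

print(max_chars(7))   # GSM 7-bit alphabet -> 160 characters
print(max_chars(8))   # 8-bit data         -> 140
print(max_chars(16))  # UCS-2 (non-Latin scripts, emoji) -> 70
```

Twitter's original 140-character cap, incidentally, is often traced to the same budget (160 characters minus 20 reserved for a username), which is exactly the kind of contingency-ossified-into-fact Jason describes.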

Speaker3:
And when you start digging, right, you realize that stuff's all over the place. Right. It's all over the place. And so I just started thinking, OK, this stuff isn't actually as factual as I thought it was, right? It's a bunch of contingencies. So where can we push on those contingencies to make things different now? And then around, you know, two, three, four years ago, the A.I. stuff was kicking off again for the third time. Right. And mainly because Terry Winograd was my undergraduate advisor, and also because I took classes from David Rumelhart, right, who was doing all the early neural network work, I was conscious of the fact that there had been one A.I. wave beforehand, and that there was an A.I. wave happening right then, and then it disappeared. And I kind of forgot about it. And then all of a sudden, because I'm off in a very different corner of the world, I just start seeing all this stuff around machine learning popping off, and I become interested again. Because by that time I was like, OK, going back to Terry, taking classes with Terry, and being like: OK, so actually, somehow we've gotten back, we're back to this really brittle model of intelligence.

Speaker3:
Right. We're back to this idea that intelligence is essentially rational goal-seeking. And everything I've learned, and the way that I understand people working, say, culturally, that's not true, right? There's a bunch of stuff we do that's arational, irrational; there's so much stuff that we do where we don't even know what's going on inside of us. And so to talk about it as rational seems really crazy, you know? But here is an industry that was selling, basically, statistics, right? That was selling itself as helping us discover, or helping us get closer to, what it means to be intelligent. And I was just like, oh, this is weird, right? You know, I'm not a fool, and it took a while for me to learn a bit more, because I'm like, I'm not a full-time practicing computer scientist; these are people who are much smarter than I am. You know, so let's go and look and see. And then you look and see, and you're like, oh no. And then you talk to some smart people who are skeptical of it, and you're like, oh no, it's statistical analysis, right? And it doesn't really have anything to do with intelligence. It's pattern recognition.
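To make the "it's statistical analysis" point concrete, here is a deliberately tiny sketch (our own illustration, not any system discussed in the episode): a nearest-centroid classifier whose entire "training" step is computing per-class averages, and whose "prediction" is a distance comparison against those averages. Pattern recognition in the most literal sense.

```python
# "Training" this classifier is nothing more than fitting statistics:
# compute the mean feature vector of each labeled class.
def train(samples):
    """samples: list of (features, label). Returns per-label mean vectors."""
    sums, counts = {}, {}
    for x, y in samples:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

# "Prediction" is a distance comparison: which fitted mean is closest?
def predict(centroids, x):
    """Label whose mean vector is nearest (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

data = [([1.0, 1.0], "a"), ([1.2, 0.8], "a"),
        ([5.0, 5.0], "b"), ([4.8, 5.2], "b")]
model = train(data)
print(predict(model, [1.1, 0.9]))  # matched to the nearest statistics
```

Real machine learning systems are vastly more elaborate than this, but the shape is the same: fit statistics to data, then match new inputs against the fitted patterns.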

Speaker3:
So why is it that we have this whole field that's talking about artificial intelligence getting lots of money, right? We're taking public resources in particular; in Canada, for instance, we're pouring a bunch of public money into this. And, you know, I was like, I don't have a problem with pouring public money into statistical analysis. That's cool. We've got to do that. That's a really great tool. All right, we'll pour a bunch of money into something called artificial intelligence, but that's not actually what we're doing, it seems to me. So, to cut the rest of it short anyways: there are a number of Indigenous people I've been involved with in that conversation about Indigenous ways of thinking about the world and technology and digital technology. And one of them was Noelani Arista, who is a professor, or was a professor, of history at the University of Hawaiʻi at Mānoa, who I worked with on several video game workshops that we did in Honolulu. And she was trying to figure out how she could use digital media as an archivist, right? And as a Kanaka, as a Hawaiian archivist. So both dealing with archiving, which is like super old school, right, and dealing with archives that represent Hawaiian knowledge. And so thinking about: how do we bring that knowledge into the machine in a way that we're comfortable with, right? Because also, there's a really long history of Western technologies being used to extract Indigenous knowledge and then either profit from it, or profit from it and actually make us pay for it, or use it against us.

Speaker3:
Yeah. So there were a whole lot of conversations that happened around that. She was at MIT doing some visits, and she ran into Andre Uhl, who is a postdoc; I think he's a postdoc still, or maybe he was a postdoc, or maybe he's finished his Ph.D. Sorry, Andre. And they started talking about this, because he does work around what's called extended intelligence, right? Like trying to expand our articulation of what intelligence is. And he turned her on to this Resisting Reduction essay and competition that Joi Ito, before his downfall, was sponsoring. And they asked for essays to respond to this essay that Ito wrote. You know, that was a good essay. It was criticizing the A.I. field, right? And particularly criticizing Singularitarians, right? So people who are just waiting for the day when they can upload their consciousness to the machines like it's the Rapture. But it was still very, very human-centric, right? The critique. And so we were like, OK, look, you know, our communities, our cultures do not privilege

Speaker3:
The human, right? They do not privilege the man, right? And so what does it mean to come at these problems from knowledge frameworks that actually treat the rest of the world as a kind of collection of entities to which we have some responsibility, and that have some responsibility to us? How do we talk about our technology generally? I'd been sort of edging into that in the essays I was writing. And then specifically, how do we talk about it in terms of A.I.? And at the same time, I have a PhD student, now a candidate, Suzanne Kite, an amazing artist and thinker, who had joined me in, I think, twenty seventeen. So we were about a year and a half in, and we'd been doing a series of directed readings around Lakota epistemology and Indigenous epistemology in general, because she's a performance artist and a musician, and she was interested in: how do I create performance instruments that somehow reflect Lakota epistemology, right? That are, in some sense, Lakota, right? And so we were doing a bunch of reading around this question of, what is Lakota epistemology? What is Kanaka epistemology? What are these different epistemologies? How are these frameworks used to think about the world and look at the world? And it's just such an incredibly rich

Speaker3:
You know, metaverse of these different frameworks. And coming from a philosophy background, trained, as a good Western-tradition philosopher is, from the pre-Socratics all the way down to the French deconstructionists, you know, I had these moments of two things. One: how anti-life, in some ways, that whole tradition is. It's like sometimes you just read it as a succession of attempts to get away from the body, right? And then also: how some of the things that I always thought were most profound actually existed in these Indigenous epistemologies; they just weren't put in this crazy language that philosophy uses. And we had this moment when she was talking about Lakota relationships with stones, right? So there's a very kind of rich and complex set of relationships with stones in her culture. Where we just realized, like, OK, you know, so the stones, that extends to other things that come from the Earth. And then we're thinking about what else comes from the Earth. And all the materials for our computers come from the Earth, right? So all this technology is being made from materials with which we have a relationship, right? It's just that, by the time we get them in our hands, we don't see ourselves as having that relationship.

Speaker3:
So what if we start really thinking through that? What does that mean? What does that mean if some of that material gets made into entities that are starting to act human-like, right? So, all right, what happens when this technology that we are giving agency to, that we're letting make judgments about us, right, which is not what we've done with our technology before, I would argue. I would love to be in a conversation with some people who really know the history of science and technology around that particular question. For me, I feel that's the part that's really new, right? We can find all kinds of precursors for lots of other things that are going on, but that thing of really, at large scales, handing over agency for making decisions to these systems. And so then it becomes even easier to see them as entities acting in the world with which we should have, which we do have, a relationship. And actually, what we're obligated to do is recognize that we have a relationship and conduct that relationship mindfully. Instead of just assuming, like we always do in the Western context, that nothing other than the human has agency, and nothing other than the human really matters, and we don't have relationships with anything other than humans, except for maybe some pets, right? And some plants.

Speaker1:
So now that we've heard the history of where all of this comes from, and a lot of the motivations for why this work is important, and honestly why this work is just so interesting, I would love to hear a little bit more about some of the nitty-gritty, specific details of this Indigenous protocol paper and the methodology that you're talking about. You also spoke a bit about the epistemologies of different Indigenous cultures, and also the epistemologies of research in AI in general. And so for our audience, could you just briefly explain what you mean by epistemology, and then get into how Indigenous protocols impact our methodology as well?

Speaker3:
Yes, so epistemology: there are a couple of ways you can describe it. One is, you know, it's how we know what we know. It's the study of the nature, the origin, and, you know, kind of the context of human knowledge. So epistemology is really about how we go about understanding the world. And it's often contrasted with ontology. So ontology is the sort of question of being, like, what are we, right? What does it mean to be human, right? What does it mean to be an entity in this world? And then cosmology, which is: what is our place in the universe? So epistemology, your understanding of how you know what you know, is going to determine what you know. So if part of your epistemology is that, for instance, a river is simply just a collection of water molecules, you know, constrained in some physical, some geographical feature, and it's there for you to just use however you want, that's going to be different from an understanding of that river where the river actually is there to teach you something. Right? The river is there; the river contains information that is not just about itself, right, but is actually about its environment, and is useful for your living. And there are lots of different ways to articulate epistemology. There's the classic kind of Western philosophical way of doing it. But one of the things that protocol does, to go back to earlier in the conversation, one of the things that protocol does and ceremony does is embed knowledge.

Speaker3:
It's how knowledge was transmitted and retained. And so it's not just, you know, like in the Western tradition, the Christian tradition, where it's like, oh, these rituals are just about your soul. Right. That's not the case with the Indigenous epistemologies I know about, right? It's like, no, actually, this ritual, these protocols are telling you how to live in the world, right? In a way that works with all the other things. So the Indigenous Protocol and Artificial Intelligence position paper came out of two workshops that we did in 2018-19, held in Honolulu, Hawaii, where we brought together 35 people from across North America, the Pacific, Australia and New Zealand. Thirty of us were Indigenous, five non-Indigenous. We brought them together to talk about the question that came up in an essay that I had published with three co-authors previously, called "Making Kin with the Machines," which is, you know, should we be thinking about artificial intelligence in terms of kinship relations? Right. So we felt that, you know, the Indigenous cultures which we were familiar with place a very high premium on understanding your kinship relations. And again, those kinship relations are not just to the people in your life, but to a number of the other entities in your life.

Speaker3:
And so what would it mean to bring an AI into that circle of relations? What does that mean? Does it mean anything, really? Is it just a metaphor, or is it all just metaphors? Or is there actually something concrete here we can sink our teeth into? And so the first workshop was two days of just this amazing conversation, you know, people talking from Kanaka, Mohawk, Māori, Cheyenne, Coquille, Australian Aboriginal, etc. perspectives, trying to understand how to root their engagement with technology in their particular culture. So this is the key, I think, right? It's about recognizing that our cultures all have long, long, long engagements with technology, right, that we all have languages and protocols that tell us how to create technology in a good way. Right. And there's no reason why we should not be bringing this to bear on digital technology, computational technology, AI technology, other than the fact, like I said earlier, that we've spent hundreds of years being told that we are not technology innovators, we're not technology developers, right? But we, our cultures and our languages, retain those capabilities, you know. So that was what was really fun, right? Because, you know, as a professional, more often than not I'm having to explain myself as an Indigenous person to the other people in the room, if I even bring it up, right, and particularly in technology spaces, as you can imagine. And so it's really great to be in a space for two and a half days where we didn't have to do that.

Speaker3:
We didn't talk about "Indigenous" in general, right? We talked about specific communities, and dug into the different epistemologies those communities have, and the different protocols that come out of those epistemologies. So then about two and a half months later, a group of about a dozen of us came back to Honolulu and we did a writing retreat for a week, where we were like, OK, we had these amazing conversations, we all went back and talked about it with people in our communities, and there definitely seems to be something worth capturing here. What is it? You know, and that's where things always get challenging. But one of the big things is we made a commitment to heterogeneity, right? So starting with the "Making Kin with the Machines" essay, which there were four of us authors on, because we were like, look, you know, Lakota is going to be a bit different from Kanaka, is going to be a bit different from Cree. So we have to really represent those voices as best we can while still talking about what spans across them.

Speaker3:
So it was the same thing with the position paper, except now we had 15 different communities represented. And it's like, OK, are we going to try to write one document? What are we going to do? Our primary funder was the Canadian Institute for Advanced Research, and they were awesome. You know, usually they fund these things and there's a policy paper that comes out of it, and we were like, well, that doesn't quite fit right either. Anyways, so we were like, OK, what we're going to do is essentially a collected edition, right? So we're going to make one document, but that document is going to have really different contributions in it. And we're going to try to write at least a couple of texts that we can all sign off on, right, so that we can capture what we all did feel was common in our conversation, you know, because that was important too. Right. In the same way that it's important to keep the individual voices there and keep that individual texture and character, it's also important to call out where there are commonalities and where we can have a united voice, in order to push things in the direction that we want them to go. Right. So the position paper, it's about two hundred pages, and it's a collection of academic essays, journal articles, poetry, artworks,

Speaker3:
and short stories, where we said to everybody, OK, look, you write in the mode that's most comfortable for you to say what it is you want to say about artificial intelligence from within your community's epistemological frameworks. And so that's what's there. And then we have three unsigned essays, which are meant to represent the group, you know, which sort of introduce the project and talk about the problems in general. And then we have a two or two and a half page guidelines document where we're like, OK, I think it's seven, here are seven approaches to ethical AI design from an Indigenous perspective that we feel pretty comfortable most, certainly all, of the Indigenous communities that were there would endorse, right, would recognize themselves in. But we also called it version one, because we're like, this is going to change. And what we hope is that individual communities can take that as a starting point, not because they need to start with us, but because it's just hard to start, right? And then they customize it to their particular set of protocols and the particular context they're working in. And there were differences of opinion in that group. Right. So, you know, some people were like, yeah, we actually want to develop the technology so that we can create what they called holographic aunties.

Speaker3:
Right. So these computational systems that, you know, capture a bunch of their traditional knowledge and can then be consulted by future generations. Right. And then some other people were like, oh, no, you're not going to do that with my aunty, right? But it just ended up in this really great conversation about, like, OK, so what does it mean to embody this knowledge? What does it mean to simultaneously do the work of preserving the knowledge, which computational technology can be incredibly useful for, right, but at the same time not give up on the primary goal of preserving the bodies, right, who are the inheritors of that knowledge and have that knowledge in them? You know, and that, I think, is part of our issue in the Western context: we're really happy to throw away the bodies. We're like, OK, the bodies, they're slow, they're expensive, they talk back, right? Let's get rid of them so we can get automated systems in place as much as possible. Right? And so that was one of the fears of the people on the one side of the conversation. They're like, whoa, we see where this is going in the Western context already.

Speaker3:
And that's not where we want to go. Right. But the other side was like, no, no, we're not talking about that. We are conscious of, how do we do that and continue to uphold and support and ensure the continuance of the physical bodies that hold this knowledge? Right. And that's part of where one of my contributions to the position paper comes in, which imagines this young Kanaka who grows up with three different artificial intelligences, right? It's like, OK, how do we both take advantage of the amazing things that computation can do for us, but also keep the human and the body there and present, and not just learning, you know, but teaching? Right. That's one of the weird things, too, about all this machine learning stuff: teaching. It drives me nuts when I go to these things, you know. They've done this weird thing with learning where, I mean, there's teaching, but there's not really teaching, right? It's sort of like the learning just kind of happens with all this data and all these algorithms, and they never friggin' ask, up until recently, right? They don't ask, oh wait, who's doing the teaching? Which we always ask with our kids, right? That's like the number one question for parents: who is teaching my children? And in Indigenous cultures that's a super important question, right? Who are they learning from? Right.

Speaker3:
That's why the aunties and uncles are so important. You know, and now as somebody who's the father of two teenage boys, I'm just like, oh yeah, aunties and uncles are really important, because my kids won't listen to me. Right, but if it comes from somebody else, even if they say the exact same thing, they'll listen to them. Right. That whole aspect of what teaching is and what it has been for humans forever is just completely leached out of the whole machine learning discussion. And this is another reason why they just need to call it statistical analysis, right? Because it's not learning, it's not learning the way that humans think about learning, and it's super dangerous, actually, right? Because what's happened is that there's so much money behind it, and so much money to be made, that it's displacing that sense of learning and that sense of teaching and the importance that we've traditionally given, even still in the Western world, to embodied teaching. Like, even today, parents are like, I don't want my kid to learn just from a computer, right? You know, I want a body in that classroom with them, or at least on the Zoom with them, you know? And why is that? Part of it is because we do understand that there is a bunch of information that is carried in our embodied actions.

Speaker2:
That's something that I really appreciated about this position paper: the embodiment, but then also the vignettes and the prototype. So you have the storytelling element, and then also you have this forward-thinking, almost, you know, what can we imagine here? What can we imagine as the future in this? But as we close, I do want to focus in on this as a starting place for those who want to design and create AI. And I'm wondering, what do you want people to know? Like, what do you want designers and people who create AI to know about all of this?

Speaker3:
So I guess there's a couple of things, you know, and I really do recommend, particularly for people who are building systems, that they go and look at two essays in particular. One is "How to Build Anything Ethically" by Suzanne Kite, who I mentioned earlier. And the second one is actually a series of texts about building the Hua Kiʻi language prototype. And so this was a group of us who built a prototype, one of these apps where you point your phone at an image and it recognizes the image and then gives you, in this case, the Hawaiian word for that image, right? So these apps exist already, and they were drawing on off-the-shelf technology to do it.

Speaker3:
But in the process, in how they worked as a team, they tried to enact Indigenous methodologies, around respectful listening, for instance. You know, the reason why it's Hua Kiʻi and it's Hawaiian is they're like, we're in Hawaii, right? The appropriate thing to do is to create a piece of technology that might be useful to Hawaiians, right? And there are a couple of Hawaiians on their team, you know. And then also in the way the app itself, say, verifies information. Like, where do you go, who do you talk to, to see that that translation is actually correct? Right? Who in the community is going to be the person that helps you decide that? So I think both of those essays really focus on process, right? They really focus on, like, OK, how do we develop this technology, not necessarily what the end result is, right? Because in some ways the end result looks very similar to other technologies, but how it actually works is a product of that process, which is hyper-local, right? And I think this is very hard for technology developers to do. As young engineers, one of those magical moments, particularly working with code, is when you realize that you could write three lines of code and they can operate over, you know, five hundred thousand instances.

Speaker3:
You know, in the blink of an eye, and you're just like, the power, right? You're just like, oh my God, this is amazing. And it is amazing, you know? And so we're taught to scale. Silicon Valley and venture capital are all about scale, you know, and capitalism is about scale. And so one of the things is like, OK, no, right? Part of what you're doing when you scale like that is you're running over difference. OK? You're crushing difference. You're excluding it from your results. You're deciding that it's not important. And for Indigenous communities, this is what our experience has been for five hundred years, right? And we are like, no, you know, we're not going to engage with you as a technology maker if that is what your attitude is going to be, right? Come work with us to make technology that fits us. Right. And because we're small, hardly any technology makers want to do that. So what we have to do is build the capacity, we are building the capacity, in our own communities, so we can build these systems ourselves, right? And to be honest, we're taking advantage of the fact that there are these mass-produced technologies that we can use and that we can bend and we can deconstruct, right? So it's not a break from Western technology.

Speaker3:
That's not the argument, right? It's a continuum, but it's like, how do we make it work for us? So: hyper-localize. Be in relation, right? Don't just parachute in and then leave, you know? Some large tech companies have done this in the past, where they're like, give us your language data, right? You need to feed us, because we have this great technology that's going to automate translation for you. And it's taken a little while, but, you know, communities have woken up and they're like, no. First of all, we're going to do all this labor for you, and you're going to own it. Right. Secondly, you're not even going to do it that well, because actually the way that you say backpack is different in Hawaii than it is on Maui than it is on the Big Island. Right, but your translation thing is not going to account for that. And that's just a trivial example, right? That's not talking about our cosmology, it's not talking about our genealogies, it's not talking about how we take care of the land, right, where things get really serious. So, you know, hyper-local, being in relationship, you're there.

Speaker3:
You've got to be there to work with the community, right? And then, what's the third thing? There's a third thing that I usually highlight, but I'm not remembering it right now. But the thing is, I don't think this is just good practice for Indigenous communities. You should be doing this for all the communities you work with, right? You should be engaged with their lives in some way, so you actually understand what they need. You know, right now we're living inside the dream that, frankly, highly socially inept nerds have about what they want the world to be. That's the technological dream that we're living in right now. And it's not a surprise at all to me that we've ended up with all these problems with bias. Right. Because that's the imaginary out of which this technology has been made. And part of that imaginary is: I'm a white dude, I've never had to worry about the police surveilling me, I've never had to worry about being misidentified as a criminal, I've never had to worry about going online and having somebody abuse me because of my gender or my race. Like, it just goes on and on and on. Right. And so these are the people making the technology. So it's not a surprise at all that their technology doesn't account for the ways that people actually behave in the world.

Speaker3:
And, you know, I realize I sort of have to say this, and I said this in an online thing the other day, because some people were having some of the typical reactions. As you know, I'm adopted. I grew up surrounded by white people. I love them, they're my family. I love the community I grew up in. But I've spent fifty-two years in primarily white environments, right? So I know something about white people in North America, and in Europe too, having lived in Europe a big chunk of the time. Right. And people are always like, ah, you're making these big stereotypes and generalizations and stuff like that. And I'm like, yeah, sure, yes, OK, not all white people, right, whatever. But, you know, I also worked in Silicon Valley for 10 years, and not on the technology side; I worked on the research side, a little bit on the product side, not that much, and I worked for venture capitalists. Right. Almost ninety-five percent white people, from all kinds of different places. And I can say with confidence that this is a very, very, very prevalent way of thinking about the world, thank

Speaker1:
you for bringing that up and for mentioning that, because, Dylan and I both being white people as well, we like to bring that into the conversation as much as possible. Because, I mean, our position, like you were saying with epistemology, drives everything that we do, the way that we think, the things that we experience, and the things that we create. So, Jason, we would love to talk with you about this for so much longer, but unfortunately we are out of time. So for anybody who is listening who would like to get in contact with you or your work, where is the best place for them to go?

Speaker3:
There are two places. So there's jasonlewis.org, and probably for this work, indigenous.

Speaker1:
Org. And we will be sure to include those links, and many more relevant to this conversation, as always, on our show notes page. But for now, Jason, thank you so much for coming on the show today and telling us all about these amazing projects.

Speaker3:
Thank you, Dylan and Jess. It's been a real pleasure. I love your show.

Keep doing it.

Speaker2:
We want to again thank Jason Edward Lewis for joining us today and for introducing us to Indigenous AI, which is something that I didn't know much of anything about before his presentation at NeurIPS. Jess, was this also a new area for you?

Speaker1:
Yeah, for the most part. I was originally introduced to Indigenous research practices this last year in one of my classes, so I was a little bit familiar with some of the concepts, especially the methodology that he outlined. But this was a totally new lens for me, and I was really pleasantly surprised by some of the stuff he brought up.

Speaker2:
Is there perhaps one thing that you would like to share that was a pleasant surprise?

Speaker1:
Is there perhaps... it's almost as if this is what we do in our outro! Yeah, that one thing for me, which was very difficult to come up with, by the way, because there were so many things I wanted to discuss with you; we had this giant list that we had to weed down into one small thing. The thing that stood out most to me was when Jason was talking about, I believe it was the app that translated things into different Hawaiian languages or dialects, or maybe it was just one language, and how companies were basically bastardizing the process, because these applications and the developers and designers of these applications don't really speak to the communities that are impacted by these decisions. I was really moved by that specific part of the conversation, because it reminded me of a lot of the participatory, co-design, user-centered, and human-centered work that I've been really passionate about in my research lately. And I just appreciated so much that Jason basically called out technology companies for not including the different stakeholders that have relevant expertise and insights to provide in the design process, and also the relevant stakeholders that are impacted by these designs. I just wanted to emphasize that point to make sure that everybody else takes that away too, because I think including everybody who is impacted or relevant in some way in the design of technology is essential, and is such a crucial first stepping stone to making more equitable technologies that cause a little bit less harm. What about you, Dylan?

Speaker2:
Yeah, absolutely. And I think that one thing Jason's work and Indigenous AI perspectives in general bring to the conversation is inviting us to connect into greater webs of connectivity, of how things relate to one another: not just solving things for solving's sake, but asking, OK, we have these systems out there, we have ourselves and our ancestors and this physical world in front of us, and also potentially a spiritual world around us. So how are those things connected to the technology that we're designing and deploying out into the world? Technology isn't this thing that's out there and separate; it's part of all of these things that are around us, however we might define them. And so for me, a major takeaway is that connectivity: what if we start from this relational place instead of any number of other places that we might start from? How might that change how we imagine this technology interacting with all aspects of our world, and not just a narrow interpretation of what those aspects might be?

Speaker1:
Yes, absolutely. And I think on this show, Dylan, maybe something that you and I can attempt to focus on a little in this season or upcoming seasons is trying to break down that Western, Eurocentric, empiricist view of science and technology. Because although we've attempted to bring in some of these other perspectives in our previous episodes, this episode was just a really important reminder for me, and probably for you too, I'm sure, to make sure that we are promoting all different kinds of thought: different ways of knowing, different epistemologies and ontologies, different ways of thinking about this world that we live in, and different potential futures that we could imagine for our technological society. And this was just a really hopeful glimpse into what one of those potential futures could be, at least from my perspective. I really appreciated that.

Speaker2:
And also the language that we use and that we feature on the show, like, you know, epistemology being the study of knowledge and ontology being the study of being. There are so many different ways for us to think about knowledge and think about being that aren't necessarily the academic ways or the capitalistic ways, or whatever ways we've been, I was going to say indoctrinated, and perhaps I do mean that, to use and think in. And we say that as two people currently within the academic system working on their PhDs. And so it's important, like you're saying, for us to check that. But for more information on today's show, please visit the episode page at radicalai.org.

Speaker1:
And again, welcome back to this new season of the Radical AI podcast. We are so excited to be here and to be behind the mic again, where we feel most comfortable, I might say. If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite pod catcher. Catch our regularly scheduled episodes the first Wednesday of every month; that's new. And keep your eyes out for bonus episodes; that's not new. Join our conversation on Twitter at @radicalaipod. And as always, Dylan, stay radical.
