Episode 24: Democratizing AI: Inclusivity, Accountability, & Collaboration with Anima Anandkumar



What are current attitudes towards AI Ethics from within the tech industry? How can we make computer science a more inclusive discipline for women? What does it mean to democratize AI? Why should we? How can we?

To answer these questions and more, we welcome Dr. Anima Anandkumar to the show. Anima holds dual positions in academia and industry. In academia, she is a professor in the Caltech Computing and Mathematical Sciences department. In industry, she is the director of machine learning research at NVIDIA, where she leads the research group that develops next-generation AI algorithms. Anima is also the youngest named chair professor at Caltech, where she co-leads the AI4Science initiative.

Follow Anima Anandkumar on Twitter @AnimaAnandkumar

Email Anima: arangelf@caltech.edu

Anima’s Website

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

Anima_mixdown.mp3 was automatically transcribed by Sonix. This transcript may contain errors.

Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence.

We are your hosts, Dylan and Jess. In this episode, we interview Dr. Anima Anandkumar. Anima holds dual positions in academia and industry. In academia, she is a professor in the Caltech Computing and Mathematical Sciences Department. In industry, she is the director of machine learning research at NVIDIA, where she is leading the research group that develops next generation A.I. algorithms. Anima is also the youngest named chair professor at Caltech, where she co-leads the AI4Science initiative.

In this interview, some of the topics that we explored include: what is the current attitude towards A.I. ethics from within the tech industry? How can A.I. engineers and humanities experts work together effectively? How can we make computer science a more inclusive discipline for women? What does it mean to democratize A.I.? Why should we? And how can we?

It was an absolute pleasure to be able to have Anima on this show for many reasons, but one of the main reasons, at least for me personally, being a woman in technology and coming from a computer science background, is just how much of a trailblazer Anima has been for so many women, and especially women of color, in this technological, computer science, machine learning space. And so the ability for us to not only have Anima on the show, but to hear her personal story and her family's history and the reasons and motivation for why she does what she does and the amazing work that she does was just a real gift for Dylan and I. So for that reason and many more, we are so excited to share this interview with Dr. Anima Anandkumar with all of you.

Anima, welcome to the show. Thank you so much for coming on today.

Thanks a lot. I've been following your podcast and it's an honor to be on this. Thank you.

Why don't we just get started today by talking a little bit about you as a person before we talk about you as a researcher. So could you start off by telling us a bit about what motivates you in life to do the work that you do?

Yeah, you know, that's always the core of every person, right? What motivates them is what leads to everything else in the world. And for me, it's curiosity, always being a curious person. I've always wanted to learn more, keep learning, keep growing. And yeah, when it comes to math and science, it's the order and structure that it provides for the universe around us. To me, it gives more meaning to my life, to do good for humanity through maths and sciences. That's what has always motivated me. And to be now in the midst of all this progress has been a wonderful journey.

Were you always interested in these topics, like as a child, or is this something that has kind of been in process over a lot of time?

Yeah, I've always been drawn to math and sciences ever since I can remember. You know, I remember as a kid always being fascinated with puzzles, with math problems. And I would go to my mom every day to give me new puzzles, and even my grandmother. So the women in my family have been mathematically amazing. You know, my mom was one of the first female engineers in the community. And in fact, she went on a hunger strike to get into engineering, because there were a lot of traditional norms, right, and people were worried about her marriageability if she became an engineer. And so she countered all that and overcame it. For me, that's always been an inspiration to further have more women in engineering and sciences. And yeah, that childhood experience has certainly shaped me into who I am today.

Wow. Could you maybe unpack that a little bit, in terms of, like, unmarriageability and, like, a woman being an engineer, and how that's impacted you and influenced you and your life before?

Yeah, certainly. I mean, right. This was many years ago. Right. This was the previous generation. And my mom hails from a traditional community. And at that time the worry was if a woman was very qualified, then, you know, who would marry her.

And honestly, that's still there today. Right? It's just in a different form, in a different extent, maybe.

So my grandparents were mostly concerned about her. So it was coming from a place of concern that, you know, what would happen to their daughter if she became an engineer and couldn't find a husband.

I get overwhelmed when I look at your resume, because I look at it and I'm just like, oh my God, this person has done so much and is doing so much, even in preparation for this interview and thinking about what we want to cover, because there are so many different options. And I'm wondering, as we get into that through line between your personal work and your family life and this work that you're doing out in the world, if you have something on your heart right now, or a project that is situating you right now, that you really just feel the energy around, that you'd like to chat about.

Yeah, for me, I'm very passionate about how A.I. is having an impact in this world today, both the good and the bad.

And I know in this podcast there's been a lot of coverage around A.I. ethics, around A.I. bias, how do we harness A.I. for good, for social good, A.I. for sciences. So it's really now almost a moment of truth for A.I.

Right, because, you know, there's been excitement, maybe there's even been hype over the last few years, but now it's like, OK, what can A.I. actually do? And we are still in the infancy.

It's still early days. And I think it's important to manage expectations and at the same time understand what are the barriers for the current A.I. methods. So I'm very passionate about building next generation algorithms and frameworks around that to enable that, because it's important to now think of a holistic picture. We can't be just in a corner, like the proverbial blind men and the elephant, right. We can't work that way with A.I. anymore. So to me it's like a convergence of so many things coming together to enable the impact of A.I. That's been the most exciting for me right now.

And it does seem like you do sit a little bit in that convergence of all the different sectors of the industry. You're the director of Machine Learning Research at NVIDIA, and also a professor at Caltech. And I mean, you're the youngest named chair professor at Caltech. So you have a lot of different places that you are situated within this A.I. field. And so I'm curious, from your perspective, what do you think the next generation of A.I. is? Where is it going?

Yeah, for me, I think it's so important to bring industry and academia together in a strong partnership to enable A.I. progress. Right. So when we look at the industry side, especially at NVIDIA, that's the heart of computing. You know, the modern day GPUs enabled parallelism; they enabled us to overcome the end of Moore's Law. And without that we would not have these big deep learning models that give us impressive performance. So computing is the heart of it, and that's in industry, along with the engineering required for it: how to build infrastructure, how to build the stack. And then we can ask what are the algorithms that can exploit this form of computing, this scale of computing. And for that, there are also a lot of answers on the academic side, because the foundations have been in academia. At Caltech, we've had the birth of NeurIPS; in fact, the NeurIPS conference started at Caltech, which so many people don't know. And the history of it can be traced back even to the convergence of computing and neurosciences together, the program called CNS. And that started with a course with Richard Feynman, John Hopfield and Carver Mead teaching together these diverse topics. So that interdisciplinary nature and those foundations in academia are critical to build the next generation algorithms. So we need the strong partnership of building new algorithms, thinking of different scientific domains and social sciences domains, with the expertise of academia and the engineering prowess of industry coming together.

We hear a fair amount in these interviews about how a lot of folks are looking for that interdisciplinarity; there's a goal of being more interdisciplinary in these spaces, including in industry. And then we hear a lot of stories of pushback as well, of barriers to the interdisciplinarity. And I'm wondering for yourself, from where you're sitting, where you see some of those barriers and how we might overcome them.

Certainly there are barriers, right. And that's why it's a challenge; otherwise, why would it be a research problem?

And to me, I think the question is how to dismantle those barriers. So at Caltech, I co-founded AI4Science, which is a campus wide initiative to bring together different domain scientists with A.I. experts and to work in an integrated manner. Right. So most of the time it's not just, OK, take this A.I. method or take this open source code, use it on the data in a black box way, and out come the answers. Maybe once in a while that happens, but that's rarely the case. So there are so many barriers you can think about. Right? One is the data itself. In many scientific domains, it's small, it's limited, it's noisy. But there's also a lot of domain knowledge. You know, the domain experts won't go out of business any time soon. They hold those intuitions that are so key to working with small scale data, and the current deep learning methods, right, don't know how to infuse both of those together very well. And so that's then a new research problem for A.I. itself: how to build different kinds of domain knowledge and structure into our current A.I. algorithms so they can seamlessly decide how to blend the two.

And that's one of my core research areas. And so what it needs is this close partnership, because the A.I. experts need to understand what are the core challenges in the scientific domain. Right. What is the domain expert bringing in? What kind of data is available? What are traditional solvers and methods able to do here? And then the expert has to decide, are the current methods enough, or do we need new ones? And where do we get started? And it's a long haul process. In so many cases it's not that the data is already there; we need to then collect new data. For instance, at Caltech, I work with Frances Arnold, who won the Nobel Prize a couple of years ago for her work in protein engineering. So now if you have to discover new, important proteins, right, it's then a continuous process. The machine learning should guide how to discover new ones. And that's true in most scientific domains. The experiments are expensive. So how do we direct them to do new experiments and collect better data? That's the virtuous cycle that we have to design good algorithms for.

So when I think of a scientist, I tend to think of, you know, the biologist or the chemist or the engineer, and it's not as often that I think of, like, a social scientist or someone who's in the humanities. And so I'm wondering, in your AI4Science initiative, or organization, or whatever the verbiage is there, are you also including social scientists and people who are part of the humanities in that group? And if you aren't, why not? But if you are, how does that collaboration work? Because A.I. is pretty far removed from a lot of what social scientists are learning in the social sciences and, unfortunately, super far removed from what a lot of computer scientists learn. So how do they actually work together and speak the same language on some of these projects?

Yeah, in my experience, actually, that's worked wonderfully well. So indeed, at Caltech, we include humanities and social sciences when I say AI4Science. Maybe Caltech is a special case, because it is a small community and everybody is mathematically minded to some extent and wants to collaborate and wants to use A.I., so that openness helps a lot. And so I've been working with Professor Michael Alvarez from the social sciences division, looking into all kinds of issues on social media. For instance, we've been studying the MeToo movement: how the movement started, how it evolved, what were the counter movements, what do the conversations look like, what are the topics being discussed, can we control trolling on social media. So it's been very fascinating to hear their perspective in terms of how they go about framing the problem, what are the questions they want to ask when it comes to the MeToo movement. And so that gives me a different perspective, because as a computer scientist I'm always very quantitative, and we need to be quantitative, but also make sure we ask the right questions and look at the social implications. And so that's been an ongoing collaboration that I'm very excited about.

Recently, we were on the line with our colleague, Dr. Jenn Wortman Vaughan, over at Microsoft Research, and we were talking about the distinction between some of these languages of responsible A.I. versus A.I. ethics.

And originally, when we started this podcast, we were trying to probe and interrogate ethics, and that term has evolved even in the months that we've been doing this podcast, and responsibility as well.

And I'm curious for you, where you're sitting again between industry and the academy and in so many different perspectives, how you think about those topics, and in terms of your identity as a computer scientist, how you go about thinking about ethics in general.

Indeed. I'm so hopeful about the developments over even the last year, or the last few months.

Right. In terms of the increased awareness around ethics and the importance of it. I think for a lot of computer scientists, there's this notion of don't ask, don't tell.

And, I'm not responsible for this, I just want to think about math, leave me alone. And still, you know, we have to counter that.

I think a lot of it comes from, you can trace it back to, the lack of a liberal and humanistic education along the way in college. So we only focus on technical education that's built on a military style regimen. Right. So that doesn't always encourage this kind of thinking about the impact on society and on humanity. And sometimes that's overwhelming. I mean, to think that if I design something that can, you know, adversely affect people's lives; many people want to avoid a harsh truth like that. And that's why I think it's so important for researchers who have been outspoken on this to continue, because I think in the long run there'll be more people on board, and we're already seeing that change. You know, people are now talking about it. For instance, NeurIPS now requires a broader impact statement. Right? So even through the process of writing that, the researchers are forced to confront what the impacts could be. And, you know, A.I. as an algorithm, right, that by itself cannot be good or bad, because that's just math. But the problem is the societal context in which it's deployed, and who has the power to use it, and what happens if there are wrong answers given, and who is in charge of handling that. I think that's the broader context that so many engineers are disconnected from. And that's why we have to reach out to policymakers. We have to reach out to politicians. We have to reach out to the general public. And the awareness of what A.I. can and cannot do is so important. And that's been increasing over the last few years.

So with your experience working with A.I. and machine learning, in industry especially, do you think that the people you work with, who are working on A.I. tools and specifically come from that computer science and engineering perspective, do you see that they are asking questions about societal impact and the unintended consequences of their technologies and their code? Or do you think that's something that is typically quieted and hushed, and, like you were saying before, you know, don't ask, don't tell, and it's kind of like this weird stigma? What are you seeing in your circles?

I mean, there has been a change for sure, right? Earlier, it was just not even in the realm of conversation, and especially as a minority person like me, right, I didn't always feel comfortable bringing it up, because otherwise I'm the only one talking about it and there's no response and it's just a quiet room.

But thankfully, that's no longer the case. In a way, we are forced to confront it because of some of the things in the news.

Right. I mean, with the Black Lives Matter movement, finally, with face recognition being used by law enforcement, the companies said we will stop selling that to law enforcement until there is regulation. So when you see aspects like that, seeing that tide change, now companies realize, OK, we can't just avoid this anymore. And suddenly there is a huge need for experts on this topic and how to navigate this landscape.

I'm curious if you'd be willing to say more about your experience, because I know you mentioned at the beginning of the interview your experience as a woman in the space, and then also your experience as a minority person in the space, and other identities as well; just what your experience has been with that. And then also, there are a lot of folks who listen to this program who are young women of color who are looking for advice, and I know a lot of folks who look up to you in particular for everything that you've been able to do in terms of breaking through some of those stigmas. And I'm wondering if you have pieces of advice, or if you'd just be willing to share more of your story around that.

Yeah, absolutely. And I'm proud to be a woman of color in A.I., in computer science, you know, and to be able to hopefully have more of us, more minorities, more people of color in these communities. And that's badly needed. And you know, I grew up in a family where my mom, being an engineer, encouraged me to look into maths and sciences in early childhood, and I loved it. So that became a great synergistic experience for me. And later, you know, when I went to engineering college in India, where there are so few women, in so many of the classes I was the only woman. And you know, at that point I guess it started hitting me: oh, this is so odd, right? And all the attention is on me many times.

If I skip a class, suddenly the professor is like, where are you? Right.

So it's a very different experience than a typical male going through engineering. And that's propagated. So, you know, there are not many women and women of color in many places I go to. And I would like to change that. And for me, I think the important thing is to look for allies, look for mentors, look for people who are open and willing to hear my experiences, right, and how I'm feeling in such a room. And I think that awareness has also greatly changed in the last few years, because before that, in fact, I almost tried to hide away that identity. I never wanted to talk about my experiences as a woman.

I didn't want to attend women specific events.

I skipped all of them because I was just too worried about being put in a box and being branded in a certain way. But now I own that and I see so many other women owning that. And I think that's been a very welcome change.

I think I can speak for both Dylan and I when I say this, but I'll say, especially from my perspective being a woman in tech, I really do look up to the efforts that you've made, Anima, in trying to build this inclusivity and making computer science more inclusive to women. And one of those actions that you've taken that really stuck out to me was when you fought for the renaming of the NeurIPS conference. And I was wondering if maybe for our listeners you can explain the story of what motivated you to do that, what went down during it and after it, and your feelings about all of that now?

Yeah, yeah. That was an eye opener. I never expected to be opening that Pandora's box, because to me, it was so simple.

You don't call a conference NIPS, right? It's juvenile. It's just silly.

But, you know, before deep learning just took off, I wasn't too bothered about it personally myself either, because it was a small community. I felt mostly comfortable. Right. Even though I was new to the community when I started my faculty career, people welcomed me. And when it is three hundred or four hundred people, it feels intimate and good. So it wasn't an issue back then, but it became a huge issue when all this tech culture started coming into the community and the community exploded. There's a lot of money coming in. And so there were parties that involved scantily clad women, rappers, that involved unlimited alcohol, that involved all kinds of really bad elements. Right. So it's just very unwelcoming for women in the community. And that's when I think the naming became a huge issue. In fact, one of the hedge funds wanted to gain notoriety, and so they printed a t-shirt with a joke on NIPS and started distributing it at the conference. So you can see how now the name is center stage and we can't ignore this. But I was surprised at the extent of pushback, because first they did a survey, and they did it in such a poor way, because they said, oh, the majority don't care about this.

Of course, the majority are men and it doesn't affect them. They may just have a laugh and say, this is silly. Why are we wasting our time? Right.

And so that survey even showed how the voices of minorities and women are just ignored, right? No one cares. And that's when it was important to rally around this on Twitter.

When I saw that the other women were getting ignored and that nothing was being done, and they said, this is settled, the decision is made,

I'm like, no, you cannot just do this. And so after I started tweeting, so many others joined. And in fact, we wrote a paper together based on our conversation on Twitter, with Elana Fertig, Daniela Witten and Jeff. So there were a lot of allies and supporters, which was great.

But, you know, it's always those few bad apples, right, on social media that make life very difficult. So the trolling: I was just shocked at the extent of trolling that occurred, personally, for me. And a lot of it moved from Twitter to Reddit. The Reddit threads kept getting very long, right, and the moderation was not forthcoming. They kept making all kinds of jokes about my looks, my sexuality, everything. It was just nasty to read. I even got threats, and other women who tried to challenge that got threatened. And so this is when I realized, oh my God, this can become dangerous very quickly. And I can now see how women who are public figures have such a difficult life online.

And so, yeah, that experience taught me a lot about how to better manage my presence online, you know, how to block people, how to mute them. I think all those controls were essential, and also, over time, how to get more allies. So once I got a lot more allies, then they would take the conversation forward. Right. So it wasn't overwhelming for me.

So I learnt a lot about the importance of social media, because we wouldn't have had the community of allies without that, we wouldn't have had the awareness; but also the downside of it, that I felt in danger, other women felt threatened, and it was just mentally and emotionally a very challenging experience. And so in the end, I think the whole purpose was not only about the name; the name was symbolic of the problems in the community. And so this saga rather showed the deep problems in the community, and now people got talking about that. And then there is the Diversity and Inclusion Committee now, which is taken very seriously, and we take the code of ethics and the code of conduct very seriously. So I think it did bring about a big change in the parties. Those are now history; they're replaced by more inclusive events, ensuring there is good security, good culture at these events, and good professionalism, because that's what we want at a professional event.

One thing you said that really struck me, which is true to my experience as well, is that sometimes when a group or community starts and it's, you know, only 300 or 400 people, then it's one thing, and then it becomes something else, especially with something like A.I. ethics.

Right. Like, people are trying to figure this thing out. Big companies are putting a lot of money into it, along with responsible A.I., and the scale is just advancing at such a quick pace.

And I'm wondering, from where you're sitting, especially as you're out at NVIDIA, you're in these spaces where there's a big scale, you're trying to do a lot, your decisions are impacting the world in a very direct way at a pretty large scale. And I'm wondering for you how that money and that scale impact, I guess, the work that you do, or how you even think about, like, responsibly scaling the work that you do.

Yeah, I think this is where having diverse teams is so important, because they bring different perspectives and experiences in terms of what the impacts can be, because for one person, it's impossible to visualize all possible ways that a technology like A.I. can be used.

And so I always think about what are the pros and cons in terms of the data we are using. Right. So unfortunately, a lot of standard data sets in the community are biased or imbalanced.

But then, on the other hand, that's where the benchmarks are. So if you don't use them, you cannot get research done. So now, if you use them, how do you still try to mitigate the effects of that? Do you put out a disclaimer? Or do you say you can now fine tune on a more balanced dataset? Right.
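(A quick aside for readers who want to see what that last option can look like in code. This is a minimal sketch with made-up labels, assuming NumPy is available; it illustrates the general technique of rebalancing, not Anima's actual workflow.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark labels: 95% class 0, 5% class 1 (imbalanced).
labels = rng.choice(2, size=10_000, p=[0.95, 0.05])

# Option 1: oversample the minority class to build a balanced fine-tuning set.
minority = np.flatnonzero(labels == 1)
majority = np.flatnonzero(labels == 0)
balanced_idx = np.concatenate([majority, rng.choice(minority, size=majority.size)])

# Option 2: keep all the data, but weight each example inversely to its
# class frequency so a training loss treats both classes equally.
class_freq = np.bincount(labels) / labels.size
weights = 1.0 / class_freq[labels]
weights /= weights.mean()  # normalize so the average weight is 1

print(f"balanced set size: {balanced_idx.size}, minority weight: {weights[labels == 1][0]:.1f}")
```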

So I try to think about solutions, because sometimes, you know, idealism is great, but if it's a utopia and it stops people from doing research, it won't get adopted.

So we have to find a middle ground to keep moving forward, and keep moving in the direction of better ethics.

And so that's why I also think about the incentive mechanisms in the community. Like, if researchers are incentivized to keep using imbalanced data sets and keep releasing models that are biased, and then companies are incentivized to just use them and make money, that's the problem. And so we need to also think about how to build the right incentives, either through regulation, or through public awareness, or the PR angle; like, you know, if companies release biased A.I., then there's a lot of bad PR, and we've seen that in the context of face recognition. So in those aspects, I think we need to build the community structure to incentivize people to do the right thing.

Yeah, I want to latch on to that building awareness piece and it's almost like a transparency piece. It sounds like something that I've seen you work on a little bit and a term that I've heard you use is democratizing A.I. and that seems to kind of go hand in hand with this transparency piece and this awareness word. So I'm wondering if you can maybe give us a definition of what democratizing AI is and then what your efforts to do that are right now.

I mean, to me, democratization is about access, representation, accountability, and transparency of A.I. Right?

So that means we have to understand how the A.I. was trained, what data went into it, what kind of algorithmic decisions were made, and what happens if the A.I. is wrong; what is the plan B that is in place? And so the model cards that Timnit Gebru, Margaret Mitchell and others have come up with are a great framework to enable that.

Sorry about that. So that's the first piece: understanding how the A.I. was trained, you know, how it is going to be deployed, and how it is going to be monitored and policed.
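(Another aside for readers: to make the model card idea concrete, here is a minimal, hypothetical sketch loosely following the structure of the Model Cards for Model Reporting paper by Margaret Mitchell, Timnit Gebru and colleagues. Every field value below is an invented placeholder, not a real model.)

```python
# A minimal, hypothetical model card. All values are illustrative placeholders.
model_card = {
    "model_details": {
        "name": "example-voice-recognizer",  # hypothetical model
        "version": "0.1",
        "type": "sequence-to-sequence speech model",
    },
    "intended_use": "Consumer dictation; not evaluated for law enforcement or medical use.",
    "training_data": "Public speech corpus; some accent groups are under-represented.",
    "evaluation_data": "Held-out recordings, disaggregated by accent, age, and gender.",
    "metrics": {
        "overall_word_error_rate": None,    # filled in at evaluation time
        "per_group_word_error_rate": None,  # the disaggregated numbers matter most
    },
    "ethical_considerations": "Higher error rates likely for under-represented accents.",
    "caveats_and_recommendations": "Plan B: surface low-confidence output for user correction.",
}
```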

And then the second piece is, what about the representation? Like, if, for instance, we have voice recognition and it doesn't recognize all the accents, then the minorities are having a worse experience than the majority.

But compare that to face recognition by law enforcement, where a bad recognition is, in fact, a life and death situation for many minorities.

So, you know, we also want to weigh which one is an inconvenience versus an adverse effect.

And based on that, build policy around making the changes in the...

OK, I can go again; sorry, something's in my throat.

Oh, yeah. So based on how the wrong decisions impact the community, we can then design regulation around that, and we can say what should be the standards that are met. And is there a way for people to contest these decisions and write to these companies and demand better services? And finally, we need better representation in terms of the teams themselves. When teams are diverse, only then can we have creative solutions and know the issues that can arise before the A.I. is deployed.

When you talk about regulation.

I guess I just want to hear more about what you think about regulation, and when you say it, if you're thinking about it more from the government perspective or more from industry self-regulating. And then, is it beyond just general guidelines? Like, is there some level of either punitive or more direct, I guess, incentive structure that you would recommend, especially when it comes to ethics and implementation?

So when it comes to regulation, it's not one size fits all. Right. So we have to understand what are the implications of a wrong decision by a current algorithm in which social context. And because a lot of innovation has happened because of open data sets and open algorithms, we have to keep that virtuous cycle going while also ensuring that the bad effects are minimized. And I think how we treat a self-driving car and a health care A.I. versus, say, a Siri app should be different. Right. And that's what makes it so tricky to know how to set these standards. I think the government can start from the sensitive applications, like law enforcement, autonomous driving, health care; we can start there and keep that much more rigid and transparent, and then work our way through the rest of the domains.

As we talk about governance and standardization globally, I have to ask you, because you have participated in what's called the Global Governance of AI Roundtable, something that I had never heard of before looking at your resume. So I'm wondering if you can explain to our listeners what that is, because it sounds really cool, what your position was on it, and how you were a part of that.

And then, I guess, what the goal of this group is.

Certainly. You know, the Global Governance of AI Roundtable was started in Dubai, and I was so fascinated to hear that Dubai has a minister for A.I. So there is a lot of forward thinking there in terms of how to bring A.I. to the region and how to rally the community around it and build better policies on governance. So as part of that, there was this event bringing in more than a hundred experts, not just in core A.I., but around policy, around governance, around regulation. So I got to meet people from all over the world thinking about these issues. I chaired the committee on mapping the progress of A.I. And so questions like, how do you decide if this is progress or not, I think are so critical, because there is so much hype in the media. So we need to have metrics to measure what progress means and how to, you know, look at the impact of A.I., because as we can see from the hype, not all the promises are delivered; but as experts, we knew that wouldn't be feasible anyway. Right. So how do we set the correct expectations, and especially make policymakers aware of the limitations of A.I.?

Because a policymaker could say, oh, I want an A.I. that's completely transparent and private and meets every kind of checklist.

And then computer scientists will be like, no way.

I mean, so that's why we need to kind of set those expectations and figure out how to enable tradeoffs. And that event helped me connect with many people in the area to do that.

As someone who is situated in machine learning, and really has some very deep ties with, you know, the algorithms themselves, I'm wondering how you think about bias and fairness in the work that you do, because those terms can have such different connotations depending on what context you might be in.

Certainly. And that's why it's such a tricky topic to think about bias and ethics. Right.

And I think in terms of the core algorithms, when we are working on A.I., I don't even think we should be claiming this is ethical or fair, because the societal context is so important. But as we remove that, when we are doing research, we just look at the math. Right. So to me, I'd rather think about it in the framework of good generalization. You know, does it generalize to the minority classes in the data set? It can be robust: if you perturb the data, it still works well. It gives you the right uncertainty, so, you know, when it's making mistakes, it's not making them with high confidence. I think these are all desirable properties of the algorithms.

And so once we build such robust and generalisable algorithms, then it's really up to the policymakers and everyone else to ensure that it's used in the right context and trained on the right data.
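(An aside for readers: here is a minimal sketch of what checking two of those desirable properties, per-class generalization and calibrated confidence, might look like. The model outputs are simulated with NumPy, so this illustrates the checks themselves, not any real system.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic evaluation set: class 2 is a small "minority" class.
labels = rng.choice(3, size=2000, p=[0.45, 0.45, 0.10])

# Hypothetical model scores, deliberately noisier on the minority class.
noise = np.where(labels == 2, 1.5, 0.5)[:, None]
logits = np.eye(3)[labels] * 2.0 + rng.normal(0, 1, (2000, 3)) * noise
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
preds = probs.argmax(axis=1)
confidence = probs.max(axis=1)

# Check 1: per-class accuracy. Does the model generalize to minority classes?
for c in range(3):
    mask = labels == c
    acc = (preds[mask] == labels[mask]).mean()
    print(f"class {c}: n={mask.sum():4d}  accuracy={acc:.3f}")

# Check 2: uncertainty sanity check. Mistakes should not come with high confidence.
wrong = preds != labels
print(f"mean confidence when correct: {confidence[~wrong].mean():.3f}")
print(f"mean confidence when wrong:   {confidence[wrong].mean():.3f}")
```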

So I ask this question understanding that you might not be able to answer it because of the way that large tech companies work. But I'm wondering, maybe not just with NVIDIA, but with your experience working with machine learning algorithms for so long, do you have a specific example that comes to mind of a model you were creating, a data set you were using, or something that you were making with machine learning or A.I., where you realized, oh, this might be bad, or this might be biased, this might be unfair, this might be unethical? And if that happened, what you did to go about trying to remedy some of that.

Yeah. So, you know, I think I can now talk about my experience at Amazon, where I worked before I came to NVIDIA.

And there, I think, you know, that was two or three years ago or so. So back then there was even less awareness about A.I. fairness, and the topic was just beginning to come up in conferences.

So in a way, when the face recognition tool was released then, and it was being sold to law enforcement, internally there were discussions, but it was very hard to convince the management in terms of removing such a capability or removing the tool. Because, you know, the argument was, oh, no, it's really up to law enforcement to decide how to use it. We are just enabling them. We are the tool makers. Just because we manufacture a knife doesn't mean we are dangerous. So this shirking of responsibility, if you want to call it that, I think that was there, and that was very hard to overcome. But I think, you know, once a movement becomes global, like Black Lives Matter, and that puts focus onto this issue, things change. So sometimes I think we have to be patient when we are doing activism inside companies and still existing in structures that have systemic racism and sexism. We have to be patient, because most people are allies. They want to help, but they always feel worried about taking big steps. And sometimes you need to rally the larger community to make that happen.

So this being the Radical AI podcast, every episode we ask our guests what radical A.I. means to them, or just what the term radical means to them, because part of this project is kind of codifying that. And so we're wondering what you think of as kind of your definition for radical A.I., and then if you situate yourself within that definition.

Just a quick pause for all of you listeners, before Anima's answer to Dylan's question here.

She references the use of tensors and how they have radically changed the field of machine learning. For those who don't know what tensors are, you can think of a tensor as a generalization of a matrix: a multi-dimensional array that machine learning algorithms can use to capture relationships in large quantities of data and learn from them. And Anima actually spearheaded the development of tensor algorithms in her seminal paper titled Tensor Decompositions for Learning Latent Variable Models, back in 2014. All right, let's get back to the interview.
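(If you want to see the idea in code, here is a minimal sketch using NumPy. The shapes are arbitrary and purely illustrative; the point is that a rank-one tensor is an outer product of vectors, and tensor decomposition methods recover such factors from data.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A matrix is a 2-way array of numbers; a tensor generalizes this to
# three or more axes. The shapes here are arbitrary illustrations.
matrix = rng.random((4, 5))      # 2-way: rows x columns
tensor = rng.random((4, 5, 6))   # 3-way: e.g., co-occurrences of triples

# The building block of CP decomposition: a rank-one tensor, formed as
# the outer product of three vectors.
a, b, c = rng.random(4), rng.random(5), rng.random(6)
rank_one = np.einsum("i,j,k->ijk", a, b, c)  # shape (4, 5, 6)

# A rank-R tensor is a sum of R such terms; decomposition methods, like
# those studied in the 2014 paper, recover the factors from data, which
# can reveal latent variables.
low_rank = sum(np.einsum("i,j,k->ijk", rng.random(4), rng.random(5), rng.random(6))
               for _ in range(3))  # a rank-3 tensor
print(tensor.shape, rank_one.shape, low_rank.shape)
```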

Yeah, I mean, radical is such an energizing word for me, right? Like, radical is sharply deviating from the norm; not just deviating, but sharply deviating.

And yeah. So for me, you know, when I came into the field, it was when I was beginning my faculty career, and because I knew there are so many challenging problems in the area, I could see it from a fresh lens, without preconceived notions, you know. So I came across the problem of unsupervised learning and how to discover hidden variables from data in an automated way. And for that challenging problem, I started to ask, OK, how do we, you know, think about using relationships in the data to extract this at scale? And through that I came across the use of tensors. You know, I had maybe looked at tensors before in my quantum mechanics class, but never thought about them much. And so this connection of thinking about correlations in data and the use of tensors was to me very radical. And at that time, machine learning didn't have the use of tensors at all. You know, now we have TensorFlow, tensor cores; everything is a data tensor. Right, so there is more awareness of how much tensors can do for machine learning, but to me it was really exciting to be in that radical place, to introduce tensors to machine learning. So, yeah, for me, being radical is bringing about a positive change. And when I look at even how, in my family, my mom was radical in being the first female engineer in the community, or how my grandfather built machines without having any engineering background, all of those inspire me to take risks in order to do better, to do good for society.

And as we come to the close of this interview, we ask our guests for a piece of advice that they might have, relevant to something that they said during the interview. And I want to focus in on this wanting to do good, because I truly believe, and maybe this is the optimist in me, but I truly believe that engineers, for the most part, do want to do good, or at the very least, they don't want to do bad. So I'm wondering, from your engineering and computer science perspective, for other computer scientists who are looking to not do bad, but also to do good in their work, in their job, and their life:

What advice would you have for them?

Yeah, I think the first important thing is education. Right. And we engineers many times don't have the tools to even think about what it means to do good, because technical education doesn't always have a full grasp of the humanities. So, you know, ethics courses are only now getting introduced in some universities. So it's almost like you have to be much more proactive in first gaining that awareness, and talk to the experts and think more deeply about it, because I think that's where this disconnect is, because many engineers are not even equipped to think about it.

And there's always a bias, because if someone is very smart and can do all this complex math, it's like, oh, how hard can this be? But no, it's hard, because we don't have the foundations to think about it. So I would advise to start from the basics: do a course on ethics, talk to the experts in the humanities, and only then can you think more creatively and deeply about this problem.

So Jess and I are both situated as Ph.D. students; Jess is in information science and I'm in religious studies. So we have the humanities and computer science kind of talking. And one of the series of research projects that we're working on is on computer science education, which you've spoken to a few times in this interview. And I'm wondering, because you're also situated in that education space, what do we do? Because even from the humanities, like, I'm very siloed and learning particular things, and I'm not learning some of these computer science things, but I have an ethical perspective that I could possibly bring to those conversations. And computer scientists obviously have a lot to bring to those conversations, but don't necessarily get that ethics training or the moral philosophy training that might be beneficial for building technologies down the line. And I'm wondering if you have thoughts on what we do about that?

And I think that's where Caltech being a small place has helped a lot, because, you know, for instance, Frederick Eberhardt in philosophy, you know, he works on causal inference.

He's teaching a course on ethics for A.I., and there is a lot of conversation with computer scientists; many computer scientists take that course and talk to him. So I think you need people just like that who can speak to both communities and have a common language. So I still believe some mathematical grounding in humanities is important to get that conversation started, and similarly for computer scientists, to try to have that foundation in the humanities. So we need to meet in the middle.

Well, Anima, thank you so much for sharing all of your work with us. And for our listeners who want to engage more deeply with your work or maybe get in touch with you, maybe even prospective Ph.D. students, what is the best way for them to do that?

Well, I'm on social media and on Twitter, for instance, and they can also reach out to me through my website.

Great. And we'll include all those links in the show notes as well. Thank you so much for coming on. And it's really been a pleasure.

Thanks a lot, Jess and Dylan. What you're doing is amazing. It's great to increase public awareness, and it's radical. Thank you.

We again want to thank Dr. Anima Anandkumar for coming on the show today and for a wonderful conversation. Jess, as we come out of that conversation, what is especially sticking out to you or resonating with you right now?

Well, I think the first thing on my mind is everything that Anima was talking about in terms of the stigma that engineers have against ethics.

And that is like such a big statement. So maybe I should back up for a second before I just say something that's such a bold claim like that. But I think, especially in my experience coming from a computer science background and getting my undergraduate degree in computer science, I know exactly what Anima was talking about here, and I've experienced this firsthand. So maybe I do have the right to say such a bold statement, although I shouldn't generalize it to all engineers. But I do think that in the field of engineering, there is this stigma against the word ethics.

And we talked about this with Jenn Wortman Vaughan a little bit in our recent episode with her, too. What is the problem with the word ethics, and what is the problem with the idea and the concept and the topic of ethics when it comes to engineering problems? I know for me personally, I have quite a few friends in engineering where every single time I bring up, oh, what are the social consequences of this algorithm, what about A.I. and machine learning changing society?

They just roll their eyes and groan and say like, oh, here's Jess again, talking about AI ethics. OK, here we go.

And I think that I can't be alone here. I'm sure there are a lot of engineers who are maybe in my position who feel the same way that I do and get constantly pushed down or getting constant pushback from their other engineering friends. And I'm sure maybe even people listening to the show right now are in the other boat and they're just engineers who want to do math problems and algorithms and development work. And they don't really want to have to think about the social impacts of their code. And honestly, I think that's totally understandable as well. I have quite a few friends who are in that position. And so I don't think there's necessarily anything wrong with just wanting to be a computer scientist and not a social scientist.

But it doesn't mean that it's not important for us to figure out ways for engineers to learn how they can incorporate ethical speculation and thinking, and these important concepts and discussions about the societal impacts of our technologies, into the education of computer scientists, so that maybe it isn't as much of an eye rolling conversation, or there isn't as much of a negative stigma or stereotype around bringing ethics into that technical space.

Yeah, I guess the only thing I'd push back on in what you said a little bit is that I think all of those problems also show up in social science spaces as well.

So one thing that I'm thinking about, like, look, I just don't want to idealize my own, you know, field of social science or of the humanities as, like, not taking the easy way out in the same way that we might be assuming, or accusing, or whatever, other folks of doing.

So here's what I'm thinking, basically: people, as Anima said, like to avoid hard things. Hard things are by definition hard and difficult. And if things are difficult, we might not necessarily want to do them, especially if it's not, like, in our faces. And this is something that, when I used to serve as a minister, I learned time and time again: there are certain things people just don't want to deal with. People don't want to deal with death. People don't want to deal with privilege. People don't want to deal with the hard, you know, realities of life.

It's much easier to just kind of be in a space in which we all think we're doing good for the world, and we don't have to examine or look in the mirror or whatever. And I think the engineers are in a very particular place right now because of the certain powers and privileges that they might have in terms of designing technology. But I think for all of us, especially if we're in a comfortable or privileged position, it's really hard to get out of that privileged position and say, oh, no, I actually want to look at the hard thing; like, oh, there's this hard thing out there, that's what I want to spend my time doing. Like, I get it, even as a social scientist: if there's a problem out there, if there's, like, an easier way to solve that problem, or a more efficient way, or even coming from, you know, an engineering family, it's like, I love solving puzzles. Right? Like, I love having an answer. And I can claim it's the answer.

And I think it's really hard for all of us to really throw ourselves into, no, actually, maybe we should look at the hard things; like, maybe we should throw ourselves into asking the hard questions as opposed to avoiding the hard things. I think it's a very human response, and maybe it's easier to avoid, especially because of the real potential harms that engineers can cause in designing technology. But I think, let's say, it's a case study for how we all deal with that tension of not necessarily wanting to ask those hard questions or look at those hard things, or the impacts that we're causing, based on wherever we're situated in this technological system.

Yeah, that's actually a really good point. I hadn't thought about that too much. Maybe it's less so something that is the fault of the engineers, and it's more so just a part of the human condition that we really don't like ambiguity when it comes to solving problems.

And I can definitely see that, which is not an excuse. Right. It's not an excuse for us to not ask the hard questions. It's just part of the description of it.

Definitely. Yeah. And I think the natural progression here is saying, well, OK, so we have this ambiguity. There are no best solutions to some of these problems that exist in society and that are perpetuated by technology. So what do we do with it? And I really appreciated Anima's advice here. And this is probably coming from a biased place for both you and I, Dylan, because this is something we're actively researching. But her call towards education and awareness building in the field of technology, and I guess I should say, maybe the discipline of technology, and incorporating ethics and humanities into that discipline in a way that is approachable and understandable, and maybe not quite so fear mongering or scary for those who weren't looking to get into that in the first place.

Yeah, it's kind of cool for me. So for folks who are listening to this when it comes out live, it's cool to listen to this and to our previous episode with Dr. Jenn Wortman Vaughan, that Jess just referenced earlier, coming out in kind of back to back weeks, because there are certain themes that have gone through both of them, including this question of avoiding hard things, and education, what we do with education, and then also this concept of language. So I really appreciated Anima's insight here into the renaming of the conference to NeurIPS and what needed to happen behind the scenes in order to make that happen. And behind all of that is this assertion, or maybe this ground, of: to what degree does language matter? To what degree does what we call things matter? And obviously, again, maybe not obviously, but I think, Jess, both you and I come down on the side of, no, actually, what we call things signifies some meaning of some kind. And therefore it matters to a large degree what we call things. Like, even when we call this podcast the Radical AI Podcast, we're signifying something, and we're not signifying other things, by the language that we're choosing. But that was just such a great case study for me. And a great reminder that even when we're unintentionally naming things in our field, or naming new technologies, or even naming robots, right, we are making some sort of claims about purpose and meaning, and some of the negative consequences that we don't even think about, or some of us don't always think about, I guess, are whether those naming conventions are creating welcoming spaces or harmful spaces for folks in our community.

Totally. I even saw this on Twitter the other day. I wish I remembered which scholar it was who said this, but they were asking if, instead of us using the language of a double blind study, instead of calling it that, to call it a mutually anonymous study, just because that language of the double blind study is not promoting inclusivity. And so I think you're totally right. I mean, we've probably talked this to death on our show at this point. Language matters, narratives matter, definitions matter. And the way that we tell the stories around all of those decisions, they matter as well.

And there's a reason why we talk it to death on the show: because it matters, right? Like, it's real. I mean, we're going to keep talking about it, and people are going to keep talking about it, until it stops mattering, because it matters to such a large degree, because it frames conversations that we don't even realize we're framing. And again, that's why one of our rallying cries is that stories and language matter.

Yes, so you can definitely expect to hear more conversations about why storytelling matters and why language matters and why narratives matter and why definitions matter and why all these things matter from Dylan and I. But for now, we are out of time. So we'll have to cut the conversation there.

For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Make sure to stay tuned for new episodes every Wednesday, and join our conversation on Twitter at @radicalaipod. And as always, stay radical.

Do you want to do any banter or anything? Are we going to banter at this recording? It's just been a while since we've bantered. You know, I feel like this is pretty serious, actually, all banter. This banter is the best.

Honestly, I can probably talk about some of the stuff we were just talking about in this outro for, like, much longer. Like, we should pick this stuff up at some point again.

But yeah, no, that was a great conversation with Anima, I think.

Well, Dylan, stay on the line, let's keep talking about why storytelling and language matters.

I'm not done yet, but we're out of time. Maybe we should record an episode that comes out every month at a regular time.

That's like a smaller episode, maybe like a mini type of, like a miniature episode.

Yeah, like a smaller than normal kind of episode. Yeah, like a miniature poodle.

But, like, I wish there was a word for that, and, like, I wish there was, like, some sort of combined word of, like, miniature and episode, that was one word. Have we made our point to make this world a better place at this point?

I think this is why language matters. At some point we'll come up with another mini. So if you listened this far, just stay tuned. Just keep listening. And this has been Dylan and Jess.
