Claire Leibowicz is a Program Lead directing the strategy and execution of projects in the Partnership on AI's AI and Media Integrity portfolio. Claire also oversees PAI's AI and Media Integrity Steering Committee. Emily Saltz is a Research Fellow at Partnership on AI for the PAI/First Draft Media Manipulation Research Fellowship. Prior to joining PAI, Emily was UX Lead for The News Provenance Project at The New York Times.
Follow Claire Leibowicz on Twitter @CLeibowicz
Follow Emily Saltz on Twitter @saltzshaker
Follow Partnership on AI on Twitter @PartnershipAI
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.
Relevant Resources Related to This Episode:
PAI & First Draft Manipulated Media Fellowship Launch
Manipulated Media Detection Requires More Than Tools: Community Insights on What’s Needed
PAI’s Deepfake Detection Challenge Kaggle Dataset
A Report on the Deepfake Detection Challenge
It Matters How Platforms Label Manipulated Media. Here are 12 Principles Designers Should Follow
5 Urgent Considerations for the Automated Categorization of Manipulated Media
Transcript
Welcome to Radical AI, a podcast about radical ideas, radical people, and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess. In this episode, we interview Claire Leibowicz and Emily Saltz, two representatives from the Partnership on AI's AI and Media Integrity team.
Claire Leibowicz is a Program Lead directing the strategy and execution of projects in the Partnership on AI's AI and Media Integrity portfolio. Claire also oversees PAI's AI and Media Integrity Steering Committee. Emily Saltz is a Research Fellow at the Partnership on AI for the PAI/First Draft Media Manipulation Research Fellowship. Prior to joining PAI, Emily was UX Lead for The News Provenance Project at The New York Times.
As we come up to the 2020 election in the United States, where Jess and I are based, there's been so much conversation just out in the zeitgeist about fake news, about media, about what we can trust.
And it was such a wonderful experience to be able to sit down with Emily and with Claire to discuss what is media integrity, what is media manipulation, and what do you, the listener, need to know about fake news? How should we regulate manipulated media and fake news? Who should have the authority to label it? Should we label it at all? And much, much more.
We are so excited to share this interview with Claire and Emily of the Partnership on AI with all of you.
We are on the line today with Claire Leibowicz and Emily Saltz from the Partnership on AI. Claire and Emily, welcome to the show. It's great to have both of you on. Let's go ahead and get started with both of your stories. So why don't we start off with Claire. Could you tell us a bit about what motivates you to do the work that you do?
A big question. Well, first, thank you for having us. Yeah. So I think, just by way of background, I actually come to the tech world from a cognitive science background, which may not seem completely intuitive. But, you know, having done research in social neuroscience and how people think about their group identity and how that functions in the brain, you'd actually be surprised how much that is brought to bear on questions of how people interact with each other online, and then how AI systems are involved in that. So a lot of the work that we do, and we'll talk about it, cares deeply about how people are ultimately implicated in a lot of the decisions we make about technology.
So I really wanted to think about, one, AI as this kind of interesting metaphor for human cognition, but also the reality that people interact with these systems.
So today we'll talk about how we work a lot on information integrity and media manipulation. And the million-dollar question, which my colleague Emily and I work on, is how people make sense of a lot of what we see online. So what really energizes me is centering a lot of our work on the people who are kind of dealing with the results of AI systems. So that's a little bit of background, but I'll pass it to Emily.
So I first kind of learned on a firsthand level a little bit about how AI systems work and how people interface with them.
As the third member of this startup called Pop Up Archive in Oakland, California, that worked with a lot of audio archives, media companies, and podcasters like yourself, actually using speech-to-text technology to kind of create a platform to search and tag speech. And I sort of saw from that process, firsthand, just how many human judgments went into it, like how people were reacting to automated transcripts. And so that really interested me in human-computer interaction in particular. So I ended up going to school for that, and after that, working a few UX research and design positions, still very interested in media, at places like Bloomberg and the New York Times R&D Lab. And yet I was really excited by the chance, at the Partnership on AI, to work in this kind of nonprofit space, bringing together a bunch of different organizations to explore these questions around media and AI.
And now, before we go too deep into the question of media manipulation, I think the general question is: what is the Partnership on AI? Because I think folks have heard about it. But I know, Claire, you've been there since the beginning. And so could you talk a little bit about the organization, and then maybe bridge into what is so interesting about media and media integrity to the Partnership on AI? Sure.
So the Partnership on AI, which we'll call PAI throughout this conversation so it will be easier for us: we're a nonprofit, multistakeholder organization devoted to what we like to say is the very broad mandate of creating responsible AI and developing and deploying that technology in a responsible way.
And we invite a lot of diverse voices into the process of technical governance of AI systems through, again, that design and deployment, and through these venues where we have dialogue and we collaborate in order to create best practices. So actually, I really like to think that our origin story, we were founded in early 2017, speaks volumes about how we do our work and what's in our DNA.
So the heads of AI research at some of the largest technology companies, so Facebook, Apple, Amazon, DeepMind, Google, IBM, and Microsoft, they grew up together in this kind of academic field where they were computer scientists publishing papers and going to conferences. And I think in the early twenty-first century there was this conviction that a lot of the impact of their work would affect people, not just a scientific community, and also that some of the challenges around responsible AI development and deployment implicate not just one of these companies or one of these industries. So they very quickly infused this nonprofit with independence, but also a really multidisciplinary, multistakeholder founding, where we have more nonprofit entities than not in our partnership, we have academics, we have people, really notably, from all over the world, and we ground a lot of our work in this notion of experiential expertise and on-the-ground encounters with the technology, so that we make sure it lands well. And I think, amidst that broad mandate, one question that's really ripe for this multistakeholder input, that we think warrants not only technologists and product managers who need to create deliverables quickly weighing in on the topic, but also mis- and disinformation researchers, journalists, fact checkers, even, you could say, human rights defenders, and that is all in our partnership, is this question of how we have high-quality information in the 21st century, in the age of AI, and we'll talk about how AI implicates that. But to us, this question of what is high-quality public discourse, who gets to decide, and then how do you ensure that the technology emboldens the spreading of that high-quality information? That's kind of the million-dollar question for us, and we believe it requires all of that input in order to actually be done responsibly.
So, Emily, I'm going to pass the mic over to you then, how do we have high quality information in public discourse in today's day and age?
Well, one thing just to add on to what Claire said is, I don't think it's necessarily a technical answer or solution to this. I mean, when you're talking about issues like mis- and disinformation, or malinformation, which is another phrase, Claire Wardle, who we work with at First Draft, sort of has this framework for different types of information issues.
And it's not just information that was intentionally created to deceive, but also things that people might share unknowingly or spread accidentally, not realizing that they're inaccurate, and it might be things that are leaked; that's the malinformation category. So I think one place to start is just realizing there are a lot of different things that we're talking about when we're talking about information in general, and they each have very different ways to address them.
And yeah, I think you need to kind of keep coming back to: who are the people creating and sharing these types of information, and why, and what are the best ways of addressing that, which might not be technical at all? It might come back to understanding the motivations and needs of actual people within communities and their use cases.
So I'm wondering if we can, I guess, give some examples or just unpack what we mean by media manipulation, because, kind of in the general lexicon, sometimes we hear about fake news, sometimes we hear about what's happening on our, like, social media. Like, I know The Social Dilemma just came out. I don't know if you saw that movie, but I know there are a lot of people who were just talking about this stuff, like what's happening with media and our data and all this stuff online.
And I'm wondering if you could just give, like, a 101 about what you all look at, and maybe some examples of what media manipulation is.
So, this is Claire, maybe I'll start broad and then pass it to Emily. And it's interesting, right? Because we have AI in our name, and there are many forms of manipulated media. At a very basic level, actually, let me take a step back. Manipulated media could be me editing or filtering an Instagram post. It could be me putting a caption on a tweet, right, of me retweeting this Radical AI podcast logo and then putting a caption that maybe says this podcast stinks. That changes kind of the connotation of that post. I would never say that; this podcast is amazing. But you understand that kind of the caption could manipulate the artifact or media. And in AI land, a lot of the popular notion of manipulated media is a deepfake, or an AI-generated piece of manipulated imagery. And to us, we think it's really important to think about deepfakes and their potential harms as technology gets more sophisticated, but also to get to a very basic level, Dylan, that, you know, adding a filter is manipulated media. So, too, are those other examples. And what makes it potentially dangerous versus an opportunity for artistic expression or the like, and how do we differentiate those things? So, of course, in a lot of our work, we want to prevent malicious manipulated media. Whether that is AI-generated or not sometimes doesn't matter. Of course, we care about the scale and susceptibility that might come from some of the AI tools that you can use. But ultimately, I think we really care about this notion of what makes it malicious manipulated media. What is it about that captioning that might change the sentiment or make the artifact deceptive? And to us, that's, again, a really behavioral question. And I'd love to talk maybe later about some of our work on deepfakes, and how a lot of that work that's been focused on AI-generated content has made it even more apparent that we need to think about this as part of a spectrum, and ultimately in terms of what the impact of that media is.
But I'll pass it to Emily.
Yeah, no, I agree with all of that, and I think it's really important, when you're talking about manipulated media, to disambiguate between the type of manipulation to an actual media asset, like, you know, whether someone's used Photoshop or a Snapchat filter or something, and what the claims associated with that piece of media are, what context it is appearing in, what the intent was, and what the impacts of that are. Because I think there's a lot of anxiety in public discourse about deepfakes and things like that, but a lot of the examples that you actually see that are using things like a GAN and machine-learning synthetic media are, like, academic examples or, you know, comedic examples, like swapping Steve Buscemi's face with someone at the Oscars.
So I mean, that's a very different kind of harm from something where the manipulation literally might just be changing the text associated with a photo, like a picture of a protest that was taken, you know, 10 years ago, to say that it's happening now. So, disambiguate the harms associated with something in a post in context, and all of the text and the people understanding it and how they're reacting to that, from what actually happened to the media abstractly.
And let's take a step back and come to this example that keeps coming up: deepfakes. Could one of you define what a deepfake is and maybe give an example of a harmful use of deepfakes? Sure.
I'll take a stab at that. So deepfake is the canonical term for AI-generated manipulated videos, predominantly. As an example, we actually haven't seen a really ubiquitous, malicious deepfake that hasn't been used just for illustrative purposes. So BuzzFeed News did this really phenomenal example where a comedian deepfaked Obama to actually warn people about the threat of deepfakes, which is very meta through the context of the video, but really important in terms of the technology and the harm. The bulk of the deepfake use cases we've seen to date are based in kind of nonconsensual sexual exploitation, so deepfake porn. That's a really malicious use case that we see today. And I think it was our colleagues at Deeptrace, they now have a new name for their organization, which is slipping my mind, but they published a report where, in 2019, 96 percent of deepfakes were pornographic in nature and not around political discourse. So there's this question of, you know, the AI-generated content is important in thinking about who might want to use that. What are the costs and benefits associated with having the technical sophistication to create a really robust fake? And right now it's not really accessible to the masses, but a particularly motivated pornographer, or someone trying to exploit someone, I think today has certain tools at their disposal to be able to do that. So, at its core, it's AI-generated synthetic media.
So I'm just another podcaster, and I edit the episodes, right? So, you know, I won't reveal how late in the week before episodes come out I edit them.
But basically, I'm manipulating media every week, and I don't think that what I'm doing is necessarily malicious.
But I could understand that there's a certain point where that would become either malicious or dangerous. I think, Claire, that was your word earlier. And I'm wondering, what are our metrics for that?
Like, when does manipulated media become dangerous?
I'll start by just saying, and then turn it to Emily, that a lot of the platforms really care deeply about being able to gauge whether the content is merely a benign podcast manipulation or maybe a butchering of Joe Biden's latest speech that makes it seem like he was saying something different than he was. Some of them have different heuristics for when they lead to an intervention, whether that intervention is labeling (we don't really see takedowns), but some have described likelihood of causing real-world harm as being one heuristic for how they judge that. And again, that's at the discretion of the platform because of complexities in how we treat platforms today. Others have really clear language about being deceptive to the average viewer, I think is what the Facebook policy says. But again, there's some ambiguity. What does it mean for your podcast editing to be deceptive to the average viewer? What is deceptive, and to what extent can you know your intention in actually editing that podcast?
But, Emily, see if you have anything to add. Yeah, that's a really difficult question, and who should decide that is one of those things that we grapple with. So to us, there's the values question of who should be at the table to try and adjudicate, and in which region of the world; there may be different notions of what's harmful in one part of the world from what's harmful in the United States. In terms of editing, who should be there making those decisions? And to what extent can automation play any role, whether as a triage mechanism that flags all manipulated content, which then gets analyzed by humans to gauge if it's deceptive or not? To what extent can automation pick up on some of these cues that are indicative of what we think of as being deceptive or harmful? But that's really hard, and impossible right now, I would say.
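For readers who want a concrete picture of the triage pattern Claire describes, here is a minimal, hypothetical sketch. Nothing in it comes from PAI or any platform's actual systems; the detector score, threshold, and function names are all illustrative assumptions. The point is only the division of labor: automation narrows the funnel, and humans judge deception and harm.

```python
# Hypothetical sketch (not PAI's or any platform's actual system) of the
# triage pattern described above: an automated detector only flags media
# that is likely manipulated; human reviewers then judge context and harm.
from dataclasses import dataclass

@dataclass
class MediaItem:
    item_id: str
    manipulation_score: float  # assumed output of some detector, 0.0 to 1.0

REVIEW_THRESHOLD = 0.7  # illustrative value, not a real policy threshold

def triage_for_human_review(items: list[MediaItem]) -> list[MediaItem]:
    """Return only the items worth sending to a human review queue.

    The detector never decides whether something is deceptive or harmful;
    it just narrows the funnel of content that reviewers look at.
    """
    return [item for item in items if item.manipulation_score >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    queue = triage_for_human_review([
        MediaItem("post-1", 0.92),  # likely edited, goes to reviewers
        MediaItem("post-2", 0.12),  # likely untouched, no action
    ])
    for item in queue:
        print(f"route {item.item_id} to human reviewers")
```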
No, I mean, I think, abstractly, you can look at the policies that Facebook and Twitter have crafted, which refer to things like severity of harm and likelihood to cause harm, sort of in concert with the manipulation. And at the extremes, with things like graphic and violent terrorist content or things related to violence, there is a lot of agreement.
But when it comes down to, specifically, I think, political material, reasonable people may disagree on a lot of these things, which is why I think it's so important to keep coming back to really grounded examples. And in a way, we're all kind of experiencing this in real time, as we see Twitter label a clip of Biden edited together with audio of an N.W.A song, but not labeling, you know, a similar kind of edited-together political clip that Biden posted.
And, you know, they have rationales for that. But some of the questions that we're really interested in are, from a user perspective, how are people perceiving these different harms within communities? Because it can be different depending on, you know, your political background, the people that you're talking to, what you feel that threat is to you as an individual. So that's something I think we want to really ask more about, and understand the affected and marginalized communities who are seeing media manipulated about them, or groups related to them.
Definitely. And let's assume, just for a minute, let's put on our optimistic hats and say that we've solved the malicious-intent problem, and now we are able to know if media is manipulated with malicious intent. But we want to tell the people who are consuming this media that it is malicious, or that it is mis- or disinformation in some way. And, Emily, I know you've done quite a bit of work on this at the Partnership on AI, and a report that you wrote recently explained 12 principles that designers should consider when labeling manipulated media online.
So could you tell us a bit about what you discovered through that research? That article we also did with help from people at First Draft: Victoria Kwan, Claire Wardle, and Claire on this call.
And yeah, it was really an effort to kind of synthesize a lot of the existing research on corrections, and where the gaps are when it comes to media in particular, to sort of see what kind of starting points we might be able to agree on. There are two ways of thinking about it. There's the initial determination of: is it even false information? Is it manipulated media that is harmful? Our principles are kind of sidestepping that point of classification a little bit and assuming you have a piece of media that everybody can agree is very harmful. So, starting from there: we know that even being exposed to an image that's been manipulated to show something else has a sort of continuing effect in memory, even if you've seen a correction. So one of the things we recommend is, if you can avoid showing that manipulated media in the first place, with something like an overlay, or provide friction to avoid people even engaging with it, that's probably the most valuable thing you can do, because corrections shown at the same time or after are going to be much less effective, since there's still that memory trace of seeing the initial thing. So that's one way of thinking about it.
And then there's sort of the: OK, you do end up seeing it, what happens then? What are ways of communicating that correction at that point? And in that case, you do want to make the correction as noticeable as possible and emphasize the accurate information over the falsehoods, so that, again, it's about what memory you want people to come away with. So there's this idea of the truth sandwich: maybe put the correction in between the two true statements. So things like that. Again, kind of considering what questions different people might have as they're looking at an example of manipulated media, and providing more context, links, and flexible paths of analysis to answer them; if there's an original piece of media that can kind of fill in the question in their mind of, oh, you know, this crowd shot that was manipulated or something.
Here's what the crowd really looked like. If you can fill something in and explain something in detail, those are some of the things that come to mind. But yes, we have 12 principles. And one of the things that we really stress at the end is that these really are meant to be starting points based on existing research. They're not a substitute for asking these questions with actual people in the context of their particular use cases. So if you're talking about someone on Facebook who's scrolling their feed, versus someone who's posting something, versus somebody who's in a closed WhatsApp group or something, those are all probably going to need slightly different interventions. And it's probably going to depend on the type of media you're talking about. So, yeah, they're really just some general guidance, but trying to stress: don't generalize too much.
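As an editorial aside, here is a minimal sketch of two of the ideas Emily summarizes: covering the manipulated asset with an overlay so people are not re-exposed to it, and wording corrections as a "truth sandwich." The function names, return shapes, and example strings are hypothetical, not drawn from the 12-principles report or any platform.

```python
# Hypothetical sketch of two labeling ideas discussed above; names and
# data shapes are illustrative, not taken from any platform or the report.

def render_media(is_harmfully_manipulated: bool, media_url: str) -> dict:
    """Prefer an overlay with friction over showing the manipulated asset inline."""
    if is_harmfully_manipulated:
        return {"display": "overlay", "reveal_on_click": True, "media": media_url}
    return {"display": "inline", "media": media_url}

def truth_sandwich(accurate_claim: str, falsehood: str) -> str:
    """Open and close with the accurate information so it dominates memory."""
    return (
        f"{accurate_claim} "
        f"A circulating post falsely claims {falsehood}. "
        f"Again, the verified information is: {accurate_claim}"
    )

if __name__ == "__main__":
    print(render_media(True, "https://example.com/crowd-photo.jpg"))
    print(truth_sandwich(
        "The photo shows a protest from 2010.",
        "that the photo was taken this week",
    ))
```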
Actually, just to underscore that point, and this is Claire: it's interesting, the impetus for this work, and the way you framed the question, Jess, is really interesting. "Assume you have perfect detection" of not only the artifact... and you're laughing, people can't see that, because it's kind of this lofty assumption. But I realized over the course of my now two years working on AI and media integrity work at PAI that every meeting that was ostensibly about technical detection, so, can we use AI to find manipulated artifacts, imagery, video, using media forensics, every question got back to: so once you do, or there's uncertainty, what do you tell people about it? How do you label it? What's the intervention? And again, that is a very behavioral question. And I think, from our vantage point, it is one that warrants not only consideration of first principles, like Emily put together, that apply to all platforms, right, so the notion that photos are sticky even if you correct for them: that's relevant to Facebook, that's relevant to Twitter, that's relevant to Google, the platforms in air quotes. But we also really want to think about what's distinct about how information is presented to a user, to Emily's point, in context on Twitter versus on Google; how might they vary? And rather than blanket-saying, you all, there's this prescription that will have perfect interpretation by all users, we also say it's really complicated.
There are things you can start doing, but also we need you to be more transparent and do more research on this and share that with civil society and academia, so that we can all be part of this creation of really good interventions. And also, is labeling the best intervention? That's a question mark. It's a popular one right now because it avoids the question of taking down content, and we think it's a really great starting place, and there should be rigorous research, which is why we're doing it with lots of experts. We work with First Draft very collaboratively on this. We work with our steering committee, which has industry and civil society, and those are oversimplifications of broad swaths of actors. So we kind of want to incrementally move towards both this complicating element of: all the platforms are quite different, how do we tease out what each one should do? And also being really adamant about general first principles, that we're observing, that everyone can start with.
Yeah. And to add on to that: since we wrote that, we have started some user research with people from around the country who have reported having seen labels actually in action, and had a lot of conversations with people, to really kind of understand, not just abstractly, what the range of attitudes is when it comes to how these things are being used right now. And, I mean, you can probably guess just from your own personal experiences, there is a really deep division between people who see labels as paternalistic (offensive is a word that came up a lot), spoon-feeding them information, very clearly politically biased, versus people who see labels as something platforms need to do more of, like there's so much junk out there that needs to just be taken down or removed. So we're trying to understand, to Claire's point, is labeling even the best intervention in all cases? How are people understanding labeling as a thing that's being done to media that they see, and what are their perceptions of who's doing it, in the case of Facebook or Twitter and the fact checkers associated with them? So these are all questions that we're trying to explore right now through research.
The name of your group is something that's really struck me. So it's the Partnership on AI, and then it's the AI and Media Integrity team. And this question of integrity is just, I think, really powerful. It's not a word that we hear as much as I would expect to, out in, like, the AI ethics space or the responsible tech space. And especially when we're talking about political actors, or even industry actors, it's sometimes levied against them that they're not necessarily acting out of integrity, you know, by certain groups at certain times.
And so I'm curious, for you all at the Partnership on AI who are looking at integrity, what integrity means to you in terms of AI. That is a... you know, you're right that in the current moment, our calibration of integrity is really... maybe we should think more deeply about what we mean, what it means to have integrity, and how we think about that. But I will first preface and say a lot of the tech platforms have integrity teams, and it is a word that, to us, should be front and center alongside responsible AI; integrity is part and parcel of that. And PAI as an institution, we were founded around certain tenets and thematic pillars that actually kind of incubated this work on media integrity, one of which is this area around the social and societal influences of AI, which cares deeply about how we think about human rights principles, values of privacy, values of democratic ideals, and how we think about that. So I think that's entrenched in how we think about integrity. But this question of morality being part of our work in technology, and if you think of morality as part of integrity, it comes up in every meeting.
We have people kind of interrogating what it means to have integrity online today, which maybe sounds like a non-answer. And we have certain values, right, that people can, you know, not be harmed on the Internet, but also that it's not just optimizing for non-harm, but being able to be bolstered in terms of these values of privacy and the right to exist fully under all of those values. But I think, as it relates, there's a lot of bleeding of the notion of political integrity into media integrity, and those are deeply intertwined today because of a lot of the speech and manipulations that might be harmful as they relate to certain facets of life today and public discourse. But I will say, on the terminology, not to be... really, Emily's the linguistic-background person on this call, but you'd be surprised to what extent debate over terminology like this actually becomes part and parcel of: how do we build responsible technology? What does it mean to be responsible? What does it mean to have integrity, and how do people interpret that? And then how do you operationalize that in technology? That is something that we still think about on a daily basis at PAI.
Yeah, let's actually go a little bit deeper on integrity, and let's relate it specifically back to this media and misinformation piece, because, Emily, you said something really interesting when you were describing the contention that people have with labeling and the different viewpoints that people have. And so, when we're talking about integrity, and media integrity being like a moral question, it's also a political question. And so when you're labeling manipulated media, and I'm sure this is something that's come up in both of your work, how do you know if you have integrity in what you're labeling as manipulated media? Right? Because, like you were saying, Emily, on the one side, some people are really going to appreciate that you have this label that the media has been manipulated. But then, on the other side, some people are going to say, well, your labeling is fake news, you know, your labeling is misinformation. So where do you draw the line, and how do you know what the real news is? I guess my question is: where is the truth anyway?
I mean, something that I knew I was going to end up using in this interview, a quote that I keep coming back to, is this: we think we're having a crisis of what's true, like we're in this post-truth world, but really it's a crisis of how we know what's true is true. An epistemological crisis; Cory Doctorow, I believe. And yeah, I think that's really the center of a lot of what we mean by integrity: trying to be open and exploring how people understand things as true. In the context of the labeling work that we're doing, we've actually found ourselves getting more and more into conversations about Facebook's third-party fact-checking network, the International Fact-Checking Network (IFCN), and how fact checkers actually apply media ratings right now. They've been expanding their ratings for media, and we're trying to think about the relationship between that process and how those terms end up surfacing to end users. And there are a lot of valid questions that you can raise about the methodologies of different fact checkers, and, I mean, we're not going to... there are a lot of other organizations that are doing work on that. But I think that's all part of this broader question of integrity: what are those processes for fact-checking, and then how do those feed into what people are seeing on platforms at the other end?
Yeah, more questions than answers; we always say that. But I think, more broadly, in our AI and Media Integrity area, this question of how, for example, those fact checks or notions of truth get automated is really important right now. Some of these existential crises have existed for thousands of years, over notions of what's truthful and not, and propaganda has existed for many moons, and people have been spreading conspiracies and lies for a long time. We all know they now have the speed and scale of the Internet, and, in the same vein, so do some of our defenses. So, the fact-checking protocols and how we label and intervene: the platforms have some incentive to automate that, to keep up with the funnel of content that comes through and that they can check. So we think it's really important to be having these conversations alongside the technologist who works on Facebook's AI integrity team, alongside the fact-checking partnerships individual, like Facebook's news partnerships integrity team. Right, they have different integrity teams, and all of those need to be swimming in the same direction. And we think making sure that we can cultivate not only the multistakeholder environment where the misinformation expert or the journalist in Kenya can actually talk about how his fact-checking process works and share that with an AI engineer who might later be tasked with automating some of these fact checks, we think that's really vital right now. Not to decide perfectly the answer to Emily's well-put conundrum of how we assess truth today, but just to make sure that we're all attentive to these complexities and try to ensure that we responsibly attend to them.
One of the things that we're really impressed by at the Partnership on AI is that partnership element, and, Claire, in your last answer, it was all about partnership.
And I think people are really scared of partnership, in part maybe for good reason, too.
I think partnership is really hard to do well or effectively. And I'm wondering if either of you would be willing to talk about what that partnership looks like on the ground, maybe on the steering committee for the media integrity group or beyond, but just for folks out there who are looking to partner, or talking about partnering but don't exactly know how to do it effectively, if you have any tips and tricks on how to get started. Sure.
So being multistakeholder is in our DNA. We spend just as much time grappling with kind of the intellectual complexities of the topic, which we've been talking about, I like to say, as the interpersonal and organizational complexities of doing this work really well. And we like to say that PAI can neatly gather these diverse stakeholders in a facilitated context where they're both comfortable presenting honestly and speaking their truth, while also not necessarily feeling like we're going to take one side or the other.
And I guess what I mean by that is, we have this way of saying that if everyone was happy all the time, all of our partners, we'd be succumbing to some type of least common denominator as it relates to our best-practice suggestions, because we have many disparate viewpoints in the partnership. We had a convening a few weeks back, virtually, of course, with almost 40 representatives from fact checking and from journalism, and when we say from journalism, there could be a writer, but there could also be a human rights investigations journalist who looks at video evidence to put forth a story, alongside platform representatives. And it's very often that first you need to make people see eye to eye and feel like they are all aligned towards some similar goal. I think the unifying principle is that most people who come to the table, our proverbial table, and have a seat at PAI all have some desire to create a better world through technology, and a belief that it's not just technology that will lead to that world, that there's lots of input that needs to happen. And what we need to do is help those people see eye to eye, understand the technology, understand the social dynamics. And there's a lot of foundation setting. There's a lot of debating about terminology at the beginning. But I think if you can do that in a really, I'm not going to say inspirational, but in a way that makes everyone see why that's necessary and why these challenges are so socio-technical, then it becomes really compelling to work with the person who has this other type of expertise that can really help us, even if you're one type of actor, move the needle on media integrity.
And I think it's just hard to imagine, right? When we think of information online, I think a lot of people's brains go to the platforms. But if you polled a subsection of people on the street, what do you think of when you think of information today? Some might say Twitter; hopefully some people would still say the book I read last night, or the newspaper. And I think that all those stakeholders need to be there, and more. But just to add, we have this steering committee that meets weekly. That's a third representatives from civil society, a third from media, and a third from industry. And again, that's an oversimplification, because from industry we had a computer vision scientist, but we also had a policy expert from a company that dealt in content moderation, who bring to bear very different lenses from the corporate perspective on this issue in that forum. And again, they're alongside a video and human rights expert who thinks about how to advocate for citizen journalists around the world. And that is just so illustrative: if you can have those people get to know each other, get to know how they understand the issue, and bring that to bear on, maybe, a machine learning challenge, as we did, or maybe input into Emily's research, that makes it really much more robust, because they all have different understandings. Doing that is half the battle, but also building trust, and really doing it in an evidence-based way, has been really central to making sure partnership doesn't become the scary
prospect that Dylan was describing it could be. And shifting the topic a little bit: Emily, Claire, you are both on the Radical AI podcast, as you know, and something that we like to ask our guests on this show, in order to help create this definition of what radical AI really is, is what you think radical AI really is. So, Emily, why don't we start with you? What do you think the word radical means as it relates to AI? And do you think that your work is considered radical?
I mean, I think to be radical is to reconsider, to, let's say, radically consider the whole picture of people involved in creating and affected by AI technologies. So it's not just the engineers or the people creating these systems; radical AI means talking to people who have had a deepfake pornographic image or video created of them. It means talking to people across different fields and understanding how they understand how these technologies work, and assessing how and why AI is created, in a critical way. And I like to hope that my work is radical in the sense that, in doing user research, I'm actually trying to capture specific moments of a lot of people encountering manipulated media and labels, and their reactions to that, really centering people in their interactions with automated labeling kinds of processes, and having that inform the things that platforms do.
Part of me joining PAI, and continually to this day when I show up to work, and even in this conversation, it becomes apparent to me that just the fundamental premise of how PAI does its work still feels quite radical to me. And again, hopefully people have gleaned this from the conversation, but this theory of change, that you need all of these equities there to make meaningful change, not only in the AI field but in a crosscutting field like how we have information online today, is still a radical premise. And while there are many universities that are trying to do multidisciplinary work, and I think within companies there are multidisciplinary teams, it's a radical notion to suggest that you need, and it's not even just people at the table in this kind of proverbial sense, but also meaningful, consistent, facilitated, evidence-based engagement amidst many people who don't typically talk to each other. And that continues to feel quite radical. And some days it feels really mundane that you're kind of doing this translational work between communities, but you can reimagine that as the most radical rendering of progress that is afforded by kind of this mission. And it does feel radical to suggest today that it should be expected, not just like a cherry on top, but expected, that the diversity and breadth of participation we have is just an everyday thing in the field. And I'm lucky that a lot of my colleagues, Emily and others we work with, and other partners, have this belief, and that we get to be kind of at this central convening point where we get to enable that.
That's a pretty radical thing to do, I like to think.
As I guess I mentioned earlier, you know, we're living in this post-Social Dilemma documentary world in which there's a lot of anxiety out there, especially around media and identity.
And so, as we move towards wrapping up this interview, I'm wondering if you all would have any pieces of advice for folks who are just super anxious right now, going into this election in the United States and all this divisiveness and uncertainty and all this stuff. Like, what should people know?
To what degree should people be scared? And is there any hope in all of this?
Well, Emily can chime in, too, but we both just made a face, which people can't see, of, I guess, earnest hope. But we talk a lot about empathy, as silly as that sounds, for people who disagree with you, whether on the role of platforms or politics online.
And I think that having some level of empathy and humility can be helpful for, I don't want to say coping, because that's strong language, but both for coping right now and also for helping break down some of the barriers that we see online.
But I think we need more empathy, both within the community that thinks about responsibility, but also among the people who
are citizens of the Internet today.
The advice I'd give would be to study the mis- and disinformation that's been going around.
Jane Lytvynenko at BuzzFeed, for example, has been posting these really great running lists of kind of like hoaxes and mis- and disinformation related to COVID and Black Lives Matter, and just getting familiar with the range of tactics helps. Also, First Draft has a neat texting program you can sign up for, and you can encourage your family to sign up for it.
It gives you tips day by day on the kinds of things to be aware of, because I think we can only expect this all to accelerate leading up to the election. And I think, as much as individuals can be aware of the dynamics, and aware of their own emotional responses, and of how certain politically motivated or profit-motivated actors might be trying to manipulate or capitalize on that emotional response, just being aware for yourself: am I sharing this because it's just instinctual and I agree with it and, you know, it confirms my identity, versus really slowing down and saying, what?
What can I find out about why this might have been created? Who created it? And yeah, just slowing down and studying and sharing that with everyone you know.
Yeah. We always say, or we've said, it's a combination of emotional literacy, like knowing how you feel in reaction to posts or people, and also media literacy, just as much.
Yeah. And also not making anyone feel bad about it, which, yeah, maybe is what you're getting at too, Claire, with the empathy. Like, I make these mistakes. We've talked to very renowned mis- and disinformation experts who've talked about times that they've shared it. So it's not like, because we study this stuff, we're immune. But just, yeah, I think trying to be aware.
It's the best we can do. Well, for all of our listeners, whoever wants to get in touch with Claire and Emily and the Partnership on AI, we will include all the links you might need in the show notes. But for now, Claire and Emily, thank you so much for coming on and sharing your expertise on this subject with all of us. Thank you so much. It's been fun.
We again want to thank Emily and Claire for a wonderful conversation about manipulated media. And, Jess, what do you think about manipulated media? So many things, Dylan.
Oh, my gosh. I'm having such a hard time with trying to discern what the difference between malicious and harmful media is versus creative and artistic manipulated media.
And I keep coming back to Claire's example of Instagram filters because I know a lot of people on like dating sites who have seen pictures of potential partners who have very much manipulated their media and they are totally not happy about it.
And I would argue that they probably think that's malicious or harmful intent. So I'm just having a really hard time figuring out how we can even decide what is malicious or harmful.
And there's like a moral element to that as well, because, with the definition we were working with in this interview, manipulated media, I actually don't know if there's a value judgment in it as it is. It seems like whether it causes harm, or whether it hurts people, is what makes it negative manipulated media or positive manipulated media; but manipulated media, as it is, is more of a description right now. And it sounds like that's where they're drawing a distinction between manipulated media and fake news, because with fake news there is that value judgment. And still, I wonder, like what you were saying, when we do this podcast, right, we edit it every time we put it out. Is that somehow dishonest? Like if we cut out, you know, even a phrase that we said or that a guest said (hypothetically, we would have never done this, right), but if we had said something that might have been offensive in that first take, or might have been just not exactly what we wanted to say, and so we did that take again, that's manipulated media. But is that negative, or is that, like, immoral? What do you think about that? Just punting the softball now.
I don't know.
That's what's so hard about this.
And that's why I am so tripped up about this concept of labeling, too, because if we don't even know when media is necessarily manipulated, and right now we're just talking about, like, intent of harm, right?
But also there's that whole discussion about what even is truth anyway. If we have no idea what true news or media is to base our assumptions off of, so that we can actually label something as true or fake, then how do we even approach this problem? It just seems like there are so many different sides of this issue that need to be confronted, and I don't know the answer to any of them.
So I'm still stuck on your sports metaphor from the beginning of that answer: I punted you the softball. I like that we're bringing it together just now. It's manipulated media. That's a no-no. But I think one distinction that you made that's interesting, too, is the impact of harm versus the intent of harm. You used the term intent of harm, and I don't know if manipulated media is always intending harm, but sometimes there's an impact of harm. I think it's almost more clear when there's an intent to do harm, versus when there's someone who is maybe just trying to make some money by changing their advertisement in a certain way. And then we get into this whole ad world, which is also about capitalism and what that drives us to do, because I think there are real incentives to manipulating media.
So when it comes to addressing manipulated media, where do people who want reform, like, where do we start? Do we start with what the incentives are, or do we start with more regulation of the media itself? Like, how do you start unraveling that?
And one question I also have for you, Jess, is what your thoughts are on that question of labeling and regulation. Like, who should be given the authority to regulate it?
Should it be a group? Like, should it be a government group? Should it be just, like, users? Where should that power lie?
And is there like a correct answer to that, though?
I've actually got pretty strong opinions about this. And it's ironic, because I was talking about this in an ethics class that I'm in. It's actually taught by one of my advisors, Casey, at the University of Colorado.
And we were talking about this literally yesterday, about different methods for helping give agency and power back to the people who are consuming media. And I genuinely don't think that labeling is a good solution if it's coming from the platforms that are hosting the media. I think there's just way too much at stake. If, for example, Facebook starts tagging posts as fake news, I think that so many people would get so upset and so frustrated, and they would not believe Facebook for a second. So, in my opinion, I think that this responsibility really does need to fall on the consumer, with a little bit of responsibility from the platform. So an example of that I've heard from the philosopher Regina Rini is: what if, for news articles, our like and retweet and share buttons actually were endorsement buttons, and there was an additional pop-up that came up when you wanted to endorse news to share it, that actually made you take responsibility for the impact of that news, and that you trusted it and read that source and that you were OK with disseminating it to your community? That would be such an effective way to lower fake news and misinformation, as opposed to just assuming that these fact-checking organizations are going to get it right every single time.
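As a purely illustrative aside, the endorsement-button idea described above could look something like the sketch below: a share only goes through after an explicit confirmation step in which the user vouches for the article. Every function name and prompt here is hypothetical; this is not a real platform feature.

```python
# Hypothetical sketch of the endorsement idea discussed above: resharing a
# news article requires explicitly vouching for it. Names and prompts are
# illustrative only.
from typing import Callable

def share_as_endorsement(url: str, confirm: Callable[[str], bool]) -> bool:
    """Only reshare if the user explicitly endorses the article.

    `confirm` is any callable that shows a prompt and returns True/False
    (a pop-up in a real interface; plain input() in this sketch).
    """
    prompt = (
        f"You are about to endorse {url} to your community.\n"
        "Have you read it, and do you vouch for its accuracy? [y/N] "
    )
    if confirm(prompt):
        print(f"Shared {url} as an endorsement.")
        return True
    print("Share cancelled.")
    return False

if __name__ == "__main__":
    share_as_endorsement(
        "https://example.com/news-story",
        lambda prompt: input(prompt).strip().lower() == "y",
    )
```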
Why would that be better, though? Because wouldn't that just be an echo chamber, where the more news that you consume from a certain source, the more it's going to shape your opinion about what news is right, and it's going to skew, right? And then you're just going to keep skewing it, and then it's going to skew your community, if you're the one endorsing it. Like, isn't it just kind of an echo chamber effect?
So it's not a perfect solution, definitely not. But I think that it's definitely a good start. I think that, as Emily and Claire were saying at the end of the interview, we all do need to be responsible for our media consumption. We need to have emotional literacy, media literacy, and empathy. We need to stop judging people for falling victim to fake news, because it happens to all of us, and it's happened to both of us for sure.
Not me. Absolutely. It happens all the time.
And so, we're recording this the night before we release this episode; for you, this is the day that we released it. So right now it's October 6th, and yesterday was October 5th, which was the day that the United States president, Donald Trump, got out of the hospital at Walter Reed, where he was treated for COVID, and then tweeted, almost immediately when he got back to the White House, that we just shouldn't even worry about COVID, no one should worry about COVID. And then the fallout from that was that Twitter and Facebook both flagged that tweet, saying, this isn't right, don't listen to this, this is some level of fake news; I forget the exact term that they used.
And for me, based on where I fall, I am happy that Twitter and Facebook had that lever to pull on that.
And there are a lot of people on Twitter who are very upset that they have that lever to pull. And now Donald Trump is asking to basically repeal, I forget which act it is, but the act that gives Facebook and Twitter the power to be able to regulate in that way.
And it's just kind of a mess right now, right? But I guess it gets to the heart of just how difficult this is, especially when we start talking about free speech and the freedom to be able to publish, you know, what you want, and it's what the United States is founded on, Jess. And to some degree, I'm like, I don't know about that argument.
And to another degree, I'm like, you know, I wouldn't necessarily want Twitter or Facebook cracking down on my post because, based on their ideas or their metrics, you know, I was causing harm, where I really don't feel like I was causing harm.
See, that's kind of what I'm getting out of this, though.
It's not really about who is the better fact checker, the consumer or the platform or the organization; it's actually about where we should place the power. And in my opinion, I think the power is best placed back in the hands of the people. And, I mean, this is a concept that we come back to continuously on our show, Radical AI. This is one of the core concepts of this community, and one of the themes that we really all have a strong opinion about. And so, if I have the choice to give the power to Facebook or Twitter to label this for me, I would rather have the choice to make that decision on my own.
I hear that, and I love the idea of power to the people in a theoretical sense. I think that's really wonderful, and I want that. And also, we're living in a time, and this is why I'm bringing this up, because we're on the cusp of this next election, where the people, quote unquote, in the United States are going to choose the direction that the country is going to go in. And then there are all these questions about the Electoral College and whose voice is actually going to be counted, and then, you know, the mail-in ballots. And is everyone's vote even going to be counted? Are we even going to know what's happening with the election until January? All this stuff is, like, up in the air.
And so I guess it's just a question for me of, like, yeah, it would be awesome to have a democratic way of figuring out what media is, and then also:
how do you determine that? Like, who represents the democracy then? Because, again, going back to 2016, you had a popular vote saying one thing, and then you had a majority, in a different sense, saying something else. Which is not to say anything partisan in any of this, but it is to say that there are different ways that we can figure out the quote-unquote will of the people.
And I think there are people who could legitimately argue that Facebook or Twitter knows better than I do about fake news, because they're looking at it constantly, whereas I'm not. I don't know if I buy that, but I think there are people who could make that argument, and I might be swayed. So I don't know.
I mean, obviously I'm all over the place, but I don't think we're going to solve media manipulation today or tomorrow or this year, maybe at some point in the future. But honestly, probably not in a way that everybody is going to agree with and be happy about. But hey, listeners, we'd like to hear from you. Would you rather have the power to make the decision about what is manipulated media in your news feeds on your own? Or would you rather the platform make the decision for you?
Let us know.
And, you know, regardless of where we fall on this, I at least am very happy that folks like Emily and Claire and the fine folks at PAI are looking into this, especially with the steering committee that they've created to look at media integrity.
And I guess, just for us to close on that concept of media integrity, I just think that's a wonderful lens to look at this through: what does integrity look like in these spaces?
For more information on today's show, please visit the episode page at radicalai.org. If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcast platform.
Make sure to join us every Wednesday for new episodes. Join our conversation on Twitter at @radicalaipod. And, as always, stay radical.
We just manipulated the media. Don't include that.
I absolutely am going to. I think we solved it, though, today.
I just think we really solved this entire manipulated media thing. I think fake news is obsolete now. Yeah, I think we did it. High five. I can't give you a high five over Zoom.