Facebook Ads, Propaganda, and Global Politics with Nayantara Ranganathan and Manuel Beltrán



What should you know about propaganda and political ads in the age of information? How do they impact democracy across the globe? To cover this important topic, we welcome to the show Nayantara Ranganathan and Manuel Beltrán.

Nayantara Ranganathan is a lawyer and researcher studying the politics and culture of digital technologies. At the Internet Democracy Project, she worked on applying feminist methods of research and practice to questions of data governance. Within her independent research, she is exploring how technology is remaking law and regulation in its own image.

Manuel Beltrán is an artist and activist. He researches and lectures on art, activism, social movements, post-digital culture and new media. As an activist, he was involved in the Indignados movement in Spain, the Gezi Park protests in Turkey and several forms of independent activism and cyber-activism in Europe and beyond.

Together, Nayantara and Manuel founded the Persuasion Lab, a project exploring new forms of political propaganda on social media. They are also both members of the Real Facebook Oversight Board.

Follow Nayantara Ranganathan on Twitter @neintara

Follow Manuel Beltrán on Twitter @beltrandroid

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at @radicalaipod.



Transcript


Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. As always, we are your hosts, Dylan and Jess, and we are on our third episode of our month-long series on technology and politics. In episode one, we talked about fake news and misinformation. In episode two, we talked about everything you need to know about voting and technology. And in this episode we break down propaganda and political ads and how they impact democracy around the globe.

To cover this topic, we brought two experts onto the show: Nayantara Ranganathan and Manuel Beltrán. Nayantara is a lawyer and a researcher studying the politics and culture of digital technologies. At the Internet Democracy Project, she worked on applying feminist methods of research and practice to questions of data governance. Within her independent research, she is exploring how technology is remaking law and regulation in its own image. Manuel is an artist and activist. He researches and lectures on art, activism, social movements, post-digital culture and new media. As an activist, he was involved in the Indignados movement in Spain, the Gezi Park protests in Turkey and several forms of independent activism and cyber-activism in Europe and beyond. Together, Nayantara and Manuel founded the Persuasion Lab, a project exploring new forms of political propaganda on social media. They're also both members of the Real Facebook Oversight Board, which we explore a bit in this interview.

So the reason we wanted to bring both Nayantara and Manuel on was actually because I ran across a tweet from Nayantara in the middle of September, a little over a month ago. In this tweet, she shared the Persuasion Lab's project and the infrastructure that they built, which is basically a giant data set of political ads and propaganda from Facebook and other online platforms covering 98 countries, on top of the thirty-nine countries they already had. And she wasn't just sharing this data; she presented it in a way that gave specific examples of how researchers can actually utilize this data set to battle political propaganda and to test the effectiveness of transparency, or the lack thereof, on these platforms. I was hooked instantly and incredibly compelled by this story and their project. So we just had to bring them on. So without further ado, let's hear it straight from them.

We are on the line today with Nayantara and Manuel of the Persuasion Lab. I'm wondering if you could actually start us by talking about the Persuasion Lab, how that came to be and what you all do and what you're looking at.

The Persuasion Lab is more of a container for the work that we've been doing around examining propaganda on social media for the past years. In the most recent iteration of it, we have been collecting political advertisements from Facebook and other social media companies like Snapchat and Twitter, as well as companies like Google, to essentially have a parallel archive that is outside the stewardship of these platforms. And the idea is that, especially with Facebook, which is the platform that we started with, it was quite difficult to get, let's say, a proper hold of data that was meaningful.

So, for example, while there is the Ad Library, which is kind of their flagship tool of transparency, that is still a very two-dimensional kind of tool, in the sense that the entry point into the data set is either keyword searches or starting with different pages and then looking at the ads that they have run. So basically, all the parameters of data that the dataset contains cannot be accessed in a more open or three-dimensional way.

So that was just, I guess, one of the reasons why we thought it was important to have a different model of access to this information.

Right, so as my colleague says, the lab is a very small team; we are just two. We do several things. We try to understand how new infrastructures of propaganda on social media work, and we try to explore their intrinsically opaque dynamics. There is a lot of investigation work in trying to understand and make sense of how this is functioning. We collaborate with a lot of different journalists in a lot of different places, trying to then analyze, understand and obtain meaning out of the data. And we also do what we could perhaps call interventions, in which we are trying, on one hand, to change and shape the terms of the discussion: how are we discussing propaganda online nowadays, how do we define our terminology, our frameworks of understanding, and so on. And we are trying to create practical interventions as well, such as the archive that we run through this infrastructure, which on a daily basis, every 24 hours, collects new political advertisements from social media, as an intervention to try to take away the monopoly of knowledge and control that these platforms have over all the data on political advertisement.
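For a concrete sense of what collecting political advertisements every 24 hours can involve, here is a minimal sketch of a daily collection job against Facebook's public Ad Library API. This is not the Persuasion Lab's actual code: the endpoint version, the field list, the query parameters, the country codes and the fetch_ads helper are illustrative assumptions based on Facebook's published documentation, which changes over time.

```python
import json
import time
from datetime import date

import requests

ACCESS_TOKEN = "YOUR_TOKEN"  # placeholder: a valid Ad Library API token is assumed
BASE_URL = "https://graph.facebook.com/v15.0/ads_archive"
FIELDS = ",".join([
    "id", "page_name", "ad_delivery_start_time",
    "impressions", "spend",
    "demographic_distribution", "region_distribution",
])


def fetch_ads(country_code, search_term):
    """Page through the Ad Library archive for one country and query, returning all rows."""
    params = {
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": f'["{country_code}"]',
        "search_terms": search_term,
        "fields": FIELDS,
        "limit": 250,
    }
    rows, url = [], BASE_URL
    while url:
        resp = requests.get(url, params=params, timeout=60)
        resp.raise_for_status()
        payload = resp.json()
        rows.extend(payload.get("data", []))
        # The "next" link already embeds the query, so drop our params after page one.
        url = payload.get("paging", {}).get("next")
        params = {}
        time.sleep(1)  # crude pause to stay clear of rate limits
    return rows


if __name__ == "__main__":
    # A real archive would iterate over many countries and much broader queries;
    # this simply snapshots one illustrative query per country into a dated file.
    snapshot_day = date.today().isoformat()
    for cc in ["IN", "MM", "US"]:
        ads = fetch_ads(cc, "election")
        with open(f"ads_{cc}_{snapshot_day}.json", "w") as fh:
            json.dump(ads, fh)
```

Run once a day, for example from cron, a job like this accumulates dated snapshot files that exist independently of whatever the platform later shows or withholds.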

So let's maybe start at the beginning here and just unpack what political ads and propaganda even are. Are political ads the same thing as propaganda? Are they different, or do they influence each other? What do you both think?

I think, as a starting point, we work with political ads as a framework, as a classification and as a topic of discussion only because that is kind of the way that Facebook organizes paid posts on their platform.

So I think this is a question that comes up a lot: when you collect data, how do you decide which ads are political? And then we explain that it's Facebook that does the classification, and this is extremely problematic in many different ways.

Propaganda, at least the way that I use it, is not a derogatory term, but more any kind of influence that any person might have.

And therefore the questions become: how do these algorithms work, what are the axes along which influence operates, can we even think of influence as a unitary thing to begin with, and so on.

And maybe, thinking of political ads as a classification: if you had to think of the form of that classification, the first thing that comes up is the wording, the language of political ads. At the lab we problematize both the political part of it and the advertising part of it, and also the whole galaxy of terminology around the transparency framework. So, for example, what does it mean that non-paid content is commonly called organic? Because it's far from organic; I mean, it's far from this, I guess, elemental, natural quality of reaching whom it needs to.

But actually it is very specifically engineered in particular ways.

That is an example of, I guess, the ways that the systems of classification are extremely convoluted and pointing in directions that might not necessarily be useful. I think if we take as the starting point that propaganda is something unavoidable in society, something that forms part of all kinds of larger forms of organization of human society, the question here is how a new technology, in this case advertisement, microtargeting and optimization, changes the power dynamics of the regulation, the creation and the content of this propaganda. And I think we are witnessing quite a few shifts. So traditionally, if I simplify it, we have been thinking about advertisement from the perspective of: this is a company trying to sell me something, or this is someone trying to convince me to vote for candidate A or for candidate B. But now what we encounter in this data set of so-called political advertisements is that a lot of other actors that were not traditionally creating advertisements are doing so, such as government agencies or governments making public announcements: a government announcing the release of a new law in the form of a paid Facebook advertisement, or lots of NGOs investing large amounts of money to bring emergency response toward the pandemic, or even hospitals that are just announcing health safety measures through this infrastructure of paid advertisement. Which makes us wonder whether still calling those advertisements is, let's say, precise, or if perhaps we are witnessing a shift in which many other forms of public discourse that before did not belong to advertisements are being subsumed into them. So this whole generic term of continuing to call them advertisements is perhaps something to critically revisit, and perhaps a different term, maybe something along the lines of paid political discourse, or paid public discourse, or paid discourse, could hint toward something more precise.

Speaking about this, for people who maybe know almost nothing about this, and especially about the potential harms or damage that can be done through propaganda or political ads: could you say a little bit about those potential harms and why this work is so important?

I think there are a few differences in the experience of propaganda, or political discourse at large, between the times before microtargeting and nowadays. For example, we like to use this example: if the four of us were sitting in a public square right now, looking at a billboard, a physical paper billboard with a political advertisement, then regardless of whether we like it or not, regardless of whether it's fake news or real information or whatever, the four of us would be part of the same shared collective experience of our political reality. That enables us to practice democracy, in the sense that we can have a discussion. This aspect of public discussion is something that we see as intrinsically necessary for our democracy. But now imagine the scenario where, instead of the public square, the four of us walk away and each of us opens our Facebook feed on our phones, and each of us receives different political advertisements. The moment we come back to the same space and try to have a conversation, the four of us have been subject to different perceptions of our political reality that have been specifically and intentionally targeted in different ways to each of us, to cause the largest engagement, the largest likelihood of being persuaded by these ads. And this in itself, I think, poses a very dangerous threat to the very fabric of our practice of democracy, in the sense of how we are able to discuss politics in society.

And if I can add to that and put it in a different way: I think there's a lot of debate around, you know, what is the likelihood of being persuaded by microtargeting? To what extent is it a hyped-up kind of thing, and to what extent is it actually effective? I think regardless of whether it is effective, or whether you're just getting ads for something that you already bought, what is important is to understand that the shared semblance, like collectively being able to understand what the information environment is like at any particular point in time, has become really difficult, even with an intentional mind to do so.

Could you both maybe give us a lens as to how this political microtargeting on online platforms is changing public discourse and democracy globally? What have you both experienced?

I don't know, I would like to situate myself very much in India, and so I don't know about the vast expanse of the rest of the world. But we did look at elections in India in twenty nineteen, and that's where we started working on ad.watch and eventually the Persuasion Lab.

So what we saw, what we were able to see in the data set, in the political ads data sets, let's say, was very clearly different well-produced ads being dubbed in different languages according to different states: very typical, obvious, unsurprising kinds of things. But what we found was a deep sense of dissatisfaction with the information that we had in the dataset, because what the data set shows is actually the aftermath of targeting.

So, for example, where you have the demographic information, that is, how many men or women were targeted, how many people in these different age ranges were targeted: all of this information is telling us what happened in the aftermath of an advertiser buying ads, with Facebook then deciding, according to its own optimization, to deliver it or not to particular people, and then the demographic profiles and region profiles of those people. So, let's say, the intentions of the advertisers, or the kinds of information used by Facebook in delivering these ads, all of this is still extremely undisclosed.

So I think from the beginning, what the political ads data shows points you in the direction of the content of the ads, as well as some of the more traditional ways of understanding targeting. So I guess with television and so on, I think it was fair to understand targeting on the basis of region: you know, you have different-language ads in different places, or, for example, you would place a particular kind of ad if it was a children's show or if it was a late-night show, that kind of thing.

But the same kinds of devices, or market segments, are being used in presenting the political ads data. What information the advertisers are providing, the intentions of advertisers, or the way that the platform itself delivers ads is far more sophisticated. And by Facebook's own admission, for example, there are close to two million data points that are used in deciding which advertisement goes to whom.

So I guess that was the main takeaway, at least for me, from the Indian elections: we were seeing all of this money going into advertising, and in the absence of good electoral finance transparency in the country in any case, Facebook was not helping make any difference either, in terms of throwing light on what was happening with the political parties or within its own platform.

In the beginning, we started finding a lot of problems. Let's say we found a lot of violations of electoral law in India, but we experienced the lack of mechanisms for the authorities, in this case the Election Commission of India, to bring that into accountability, or to be able to even understand the data that we were providing. So as much as we have regulations put in place across the world for the regulation of political ads on radio, on TV, or even in so-called electronic media, there is very little to nothing, depending on the place, when it comes to online advertisements. And when it comes to microtargeted advertisements, it's even closer to zero.

But I think in general, regarding the question of what is the influence of social media on democracy, this question has very much been given up to a majoritarian, U.S.-centric conversation about the role of Facebook with foreign influence, with bots, with networks of harassment, or you name it. But those are problems that have been occurring in the rest of the world all along. There are problems happening right now with elections such as in Bihar or Myanmar, in parallel with the US elections, that are not receiving even one percent of the attention in terms of public discourse, nor in terms of actions.

And I think here it comes down to a question of what kind of governance model we want to have in the platform: first of all, whether we want to have a platform like this or not, but as long as we have it, as an emergency response, what kind of governance model we can have. And for example, with a lot of the discussion on nationalizing Facebook: what would be the role of governance over Facebook, being a US company, toward the elections in Myanmar, or toward the elections in Cuba?

How would that operate with countries with which the United States is geopolitically at odds? And I think we can observe the extension of the geopolitical influence of the United States materialized, for example, in their takedowns of networks: when they are announcing takedowns of Russian bots, or Iranian bots, or a North Korean influence campaign, or Cuban bots, it tends to be with countries in a way that very much resonates with US foreign policy. So it could become even more problematic than it is now in this scenario in which US lawmakers hold Facebook to account to be a patriotic company that protects United States interests, while it has many more users outside of the United States and influence in many more elections in the rest of the world than in the United States. But those are unfortunately not the terms of the discussion that is ongoing now.

I was wondering if we could talk a bit about oversight in general, and oversight of Facebook and Facebook's ads particularly. So Facebook has its own oversight board, but then there is this new group that came about, which you both are involved with, called the Real Facebook Oversight Board. And I was wondering if you both would be willing to talk about why that came to be and what its function is.

So Facebook launched this initiative of a Facebook Oversight Board that is supposed to be an independent organization that seeks to oversee some parts of Facebook. But it's a very conceptual body: first of all, it is Facebook's own initiative, it is paid for with Facebook's money, and the reach of what it can do is extremely limited, essentially to content moderation. And we have seen this happening with governments with totalitarian tendencies, in which they create fake judiciary bodies or oversight boards to give a certain sense of accountability to the regime while depriving it of any kind of meaningful agency, power and so on. So I think that Facebook's own Oversight Board is very much a tool to legitimize, to make acceptable and to normalize the status quo of how this company operates, rather than to bring any meaningful change at all to how the platform works, while also distracting attention: you know, we already have a process to follow with complaints and so on through this board.

OK, so about a month ago, together with around 23 civil rights leaders, activists, scholars and general critics who have been mobilizing around this for a long time, we launched this Real Facebook Oversight Board, which you can see as a campaign to try to put into the spotlight the problematics of Facebook's Oversight Board, and also as having a very pragmatic, practical and urgent goal: the campaign is being performed as an emergency response to the situation in the United States, to try to lessen the degree of harm that this platform is provoking now in the run-up to the election and in what might happen after the election.

While we're on the topic of Facebook here, and using it as a bit of an infamous example, Facebook is of course symbolic of many other social media platforms out there that are struggling with political ads and propaganda right now. I want to talk a little bit about transparency and what its role is in all of this, because it seems like there is a big problem with transparency when it comes to knowing what's even classified as a political ad, or knowing who is getting which political ads over other political ads. So what is the role of transparency, or the lack thereof, in all of this?

I think transparency has been such an important promise in response to so many things: in response to how hate speech policy is operationalized, in response to political ads, and so on. But unfortunately, since it is Facebook itself that has come up with the transparency framework, as a self-regulatory mechanism, it has ended up entirely subverting the promise that transparency was supposed to carry.

On the one hand, with the data that is released by Facebook and the other platforms, all of the platforms that, for example, have what they call political ads, there is a certain ceiling that has emerged. So, for example, Twitter bans political ads; Facebook gives a certain level of impunity, or a certain privilege, to political speech, where they do not fact-check political speech. But regardless of whether your approach is a ban or giving it special privileges, all of them agree on political ads as a category, and that category, viewed through political ads transparency, kind of hides the entire iceberg of advertisements that do not fall under the realm of political ads. And even within the data set, if you start to look beyond this big, heavy word of transparency and into the different parameters that Facebook has made known, it is extremely limited, and mostly just what the advertisers are in any case submitting: you know, not their targeting criteria, but, for example, the visual of the ad, the data that comes with it, the caption that comes with it. These are some of the kinds of parameters, and it is entirely unsatisfactory. But if you look also at the language in which a lot of these parameters are described, one begins to think that

it is entirely in service of the platforms' business models, the way that these frameworks have come to be. So, for example:

there is the question of what an ad is. Is it just the visual? If there is some, you know, JSON targeting data, lines of code that come along with it, is that part of what one imagines an ad is? Or, if what is important is the impact of Facebook's advertising network, then also looking at the people that are receiving ads: the reason why I receive a single particular ad would be entirely different from the reason why you receive the same ad.

Mine could be because I'm tagged as someone under 30 in Bangalore, and yours could be because you go to a particular college. So do we look at the ads that we have received as the same ad, or is there value in looking at that as two separate cases of advertising? What these two separate cases get counted as is impressions: one ad has a hundred million impressions, is how it is presented. But what if we were to think of that as a hundred million ads? That kind of visualizes the profiling behind this infrastructure, the targeting, the unique ways of targeting each individual, whether it works or not: that there are a hundred million ways in which one particular ad moves.

And so, to get back to the question, what I'm trying to say is that transparency is architected by the language that these platforms use, by the aesthetics of dashboards and APIs and all of these quantitative, affective kinds of tools that make you feel like, oh, there's so much stuff: here's a visualization, here's a sticker, here's a graph, and so on. So there are these aesthetics, there's language, there are processes and mechanisms, all of which make their way also into laws and governance models.

So the framework evolved by the marketing departments of the biggest advertising company becomes also the way that we want to change things within the laws. And it's not even just the case for social media: if you look at government initiatives of transparency, there are a lot of APIs, a lot of dashboards and portals and whatnot. But I think it's part of a larger kind of promise, and of how that promise is being delivered through a particular aesthetic.

One thing I really appreciate about the Persuasion Lab is that you're doing a lot of advocacy, you're doing this work analyzing ads, and then you're also bringing art into it. And I'm wondering if you could talk a bit about the exhibitions that you've put on and the role that you see art playing in the work that you do.

I think that very much correlates with what Nayantara was explaining about language, perhaps. I think we are trying to make sense of extremely complex and extremely opaque infrastructures, and it's really difficult, I think, for people to imagine that, or, you know, to question these foundations of our beliefs about what we think of certain technologies. And we found that in the case of the Persuasion Lab, let's say, it was a framework that worked for us to, first of all, establish ourselves as a lab that is also running a lot of projects in an artistic context.

So our infrastructure for the collection and liberation of advertisements was first deployed in an art exhibition in Austria, and this is what now has a life of its own online, and people can download the data just from the web. But in the exhibitions we try to also create a situation, create the conditions in which a different kind of discussion can happen, in which we can also imagine a space for questioning these platforms outside of the realms and the terms in which these companies normally want to engage with criticism, and also to point out all of these tactics of dashboards, of a bit of transparency, that these companies are selling. Our octopus that you can see in our exhibition is the physical representation of the first infrastructure. It was also, let's say, an aesthetic intervention, trying to reveal how difficult it was to collect this data; that is not something that you can perhaps experience in the dashboards on our website, but you can perhaps experience it in a different way when you come into the situation of an exhibition.

Something that both Dylan and I find very interesting about the Persuasion Lab, and some of the language that both of you have been using in this interview, is the word liberation. So I'm curious what both of you mean when you say the word liberate and what that means in the context of political propaganda online.

That's a fantastic question. As you were asking about the US before, I have one example from the United States that I like to compare with the work that we are doing.

There is something called the PACER system. In order to access the law in that system, to access documents and papers of the law in digital form, you need to pay every time you download the data. And of course that is extremely problematic: when you think of a democracy in which you have to pay to access the law, you enter into a lot of problematics of who can access these documents and who can't. Maybe I'm just a student who wants to inform myself about how the law works, or I'm just curious; it is something that every citizen in a democracy should have access to. I compare that with the importance of having transparency as we understand it, real transparency data on how political propaganda works online. That is something that I think we as a whole society should understand deserves an equal level of importance, as something that belongs in the public domain. Facebook has created, for example, a few initiatives in which they gave some researchers access to bits of the data, not all of it. I think that is still extremely problematic, when this transparency data is only available to a few, or when this data is only available to a technically skilled few who can access it through a complex system of verification, maybe, or through code.

So it became our idea: OK, what if we liberate all of this data in the form of an archive? It is something that, let's say by the optics of it, Facebook has a difficult time censoring. But just a few days ago the NYU Ad Observatory received a cease-and-desist letter to stop their initiative of collecting ads as well. And I think this just redoubles our commitment to the effort of liberating this, not just as something that we distribute publicly, but as something that is being taken away from the public, something which is intentionally made difficult to access, which intentionally does not allow for systematic collection. So in a moment in which there are so many barriers, I think it's not only a question of distributing this freely, but of really liberating it from the prison of Facebook.

And I think it's also about the integrity of the data and the teeth that it has, and whether that changes once it is out of Facebook's systems. So an API: what it allows you to do is fashion a particular kind of query that is specific, that retrieves information from their system and kind of presents it to you.

But what we do kind of liberates it also from that relationship of putting your hand in and picking out a particular thing, so that you can look at it from all sides and understand it better as an object. And yes, sometimes when people are curious about the project and want to work with the data, they think of the data set as a finished or complete thing, whereas it is not. It's never a God's-eye view of political advertisement, but always carries a particular vantage point: the time of collection, the particular query that we had to input. I mean, we like to do a maximal kind of query so that we get as much information as possible and archive it. But the idea is that even with all of these imperfections, and even with the fact that ultimately this is data that Facebook releases, so I imagine there are many degrees of, let's say, covering themselves that happen before it is released, even then, I think there have been cases of ads disappearing from the library, things like that.

And I think that any teeth this data could have are sharpened once it's outside the control of Facebook. So I think liberation points toward that idea as well: keenly understanding that, sure, Facebook is now making this data known, but it is still very much on its own terms, under its own control.

Exactly. And I think it's also not just a way for other people to access the data and study it and see the content, but also hopefully an intervention to hold Facebook's archive accountable, as happened before, during the latest elections in the United Kingdom, when thousands of advertisements disappeared from Facebook. At that moment we did not have this public infrastructure in place, but in such a moment we could compare: OK, did the data Facebook was giving yesterday change from what they give today? So, just to put the emphasis on this: it is not only about the content, but also about trying to problematize the whole framework in itself by having this separate archive.
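Comparing what the archive returned yesterday with what it returns today is one concrete way an independent copy gives the data teeth. Here is a minimal sketch, assuming the dated snapshot files and the id field from the earlier collection sketch (both illustrative), that lists ads present in one day's snapshot but missing from the next:

```python
import json


def ad_ids(path):
    """Return the set of ad identifiers present in one dated snapshot file."""
    with open(path) as fh:
        return {ad["id"] for ad in json.load(fh) if "id" in ad}


# Illustrative filenames following the dated naming used in the collection sketch.
yesterday = ad_ids("ads_IN_2020-10-19.json")
today = ad_ids("ads_IN_2020-10-20.json")

disappeared = yesterday - today
print(f"{len(disappeared)} ads present yesterday are missing from today's snapshot")
for ad_id in sorted(disappeared):
    print(ad_id)
```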

As we move towards closing the interview, I'm wondering if you have any advice for users of these systems, so users of Facebook who are seeing ads: what can we do?

What can people do or what should people do about these ads that they're seeing on a daily basis? And then any other closing thoughts that you might have that you want to share?

So, as we have been discussing for quite a while, the conversation, the language, the terms that we use to discuss these problematics shape our whole concept of these platforms, even the concept of what my role is. Am I a citizen of this platform, or am I just a user? Oh, I have no power, what can I do? I think a lot of the problems that we have been facing, not just with political propaganda online but in general, with the open source movement, with recycling, with the climate and many other things, is that a lot of the responsibility has been put onto the individual. You know, Coca-Cola is making advertisements asking me to recycle their bottles while they are polluting so much more. And I think the current status quo when it comes to political organization around technology starts from an extremely precarious situation, in which we see ourselves as individual users of these platforms, in which what you can do as an individual gets narrowed down to using one service instead of another, or using Brave instead of Google Chrome.

You know, this kind of question of individual choices: at the end of the day, it's going to be very few people who have the privilege, and it really is about privilege, to use those alternative platforms, on the basis that they have a computer powerful enough to run them, or they have the time to learn about this, or they have the expertise, or their networks are on that platform, and so on. Meanwhile, Facebook has arrived at a point where so many people depend on this platform, like their whole life depends on this platform, to be in touch with family members, to perform their work, to study with Facebook groups. The level of dependency on the platform has reached a point at which I don't see it as a viable political proposal to say we should all delete our Facebook accounts, because we would be very few, and it would be unfair to then put the blame on the ones who are still on this platform, when they are victims as well.

So I think maybe one of the problems is that we are not yet able to properly imagine how we would like to organize our technologies in society. I see a proportion of 99 percent criticism towards Facebook and one percent discourse about alternatives, about how we could organize all of these problematics differently. And maybe, to come back to that:

In previous projects with the Institute of Human Obsolescence and others, we were trying to work with the idea of: what if, instead of seeing ourselves as individual users of a platform, we see ourselves as the workers of this platform? Let's say the data that we are producing for these platforms is so crucial that maybe we are performing labor for them. And what kinds of forms of political organization could emerge from a different understanding of what our relationship with this company is? What kind of different political proposal could we articulate if we claim that an advertisement is not just an advertisement, but something different? So I think this comes back, really, to concepts of how we see ourselves, how we imagine ourselves, how we name these companies.

And I think in our work we are trying to question all of those aspects.

And for those who are looking to question, to reimagine and to liberate themselves, if they want to look more at both of your work and the work of the Persuasion Lab, where is the best place for them to go?

We try to keep our website organized as a good repository of what we do, so I think that is kind of a good place to begin.

I just want to thank Jess and Dylan for giving us a bit of advertisement space within this podcast, about advertisement. So the perfect place to start will be ad.watch; that is our website. You will find plenty of resources to get yourself lost in, from texts to understand more about us, to interfaces in which you can browse by country. You can browse by themes, such as advertisements that speak about the climate crisis or advertisements about the COVID-19 pandemic, and you can, of course, download all the datasets that we liberate every 24 hours. And we have an activity section in which you can see the different investigations that we have published, the different reports, the listings of all of our exhibitions; all of that is there.

And we'll be transparent and promise that this is not a political advertisement, but we just really love the work that's going on here. So thank you so much, Manuel and Nayantara, for coming on the show today.

So, Jess: propaganda in the age of information. There is just so much going on in this conversation and I'm not entirely sure where to begin, but I'm brought back to something that Manuel brought up in the interview, the difference between this image of a billboard that multiple people, all four of us, are looking at, where we're all seeing the same thing, versus propaganda now, say, in a Facebook ad, where everything is curated on an individual, micro basis. And it's kind of terrifying to me when I think about it.

And I love what the Persuasion Lab is doing, especially using art as resistance in all of this. But it's just a lot to think about how propaganda has changed so much in such a relatively short amount of time, even thinking back from, like, the 70s and 80s to today, and how we don't even necessarily know when to call propaganda propaganda, when to call it just political ads, and how to think about all of this in the first place, especially in terms of ethics.

Yeah, I really appreciated that Manuel gave the new terminology for what we should be calling political ads, which is paid public discourse. I've never heard that before, and it really stood out to me. And I think the other thing I'm sitting with right now is something I've actually been kind of struggling with a little bit.

I feel like the bubble was popped for me in this conversation. I feel like I've been sitting in this US-centric bubble because of the election that's going on, and I have not been paying any attention to what's going on globally. And I feel weird about it, because I genuinely did not know that Myanmar and Cuba were having elections right now, because I am so enveloped in this bubble. I mean, I'm definitely to blame for the majority of it, but I'm wondering also if part of that has to do with the fact that the only things I see online are political ads and propaganda about the US election.

And so I'm wondering if there are other people that are feeling like they're existing in this bubble that was created for us and based off of inferences that are made about us that we didn't actually ask to be in in the first place.

Absolutely. And it's tied to the platforms that we're using; it's tied to Facebook. Something that really stood out to me was when our guests today were talking about where Facebook is based, right, and the fact that it's based in the US and can be tied to US lawmakers so, so easily now, and also possibly in the future, and how much that impacts how information is shared, to the point where we might not even know who is creating or curating, or what information is real or right or whatever, because it's still filtered through this somehow really nationalistic lens because of where that company was founded and where it's based. And now it's impacting people all over the world. And I think Manuel, again, was making this point about how people are dependent on it now.

And I've seen it firsthand in traveling, and it's true. I mean, it's a platform, maybe even a luxury, for some folks in the world, and for other folks it's how they find jobs, how they find goods, how they get an education. Facebook has a lot of utility that isn't just all negative or all political or all whatever. And yet the politics are so wrapped up in everything, and so are the economics and capitalism and all of that.

And again, that's what makes me feel so overwhelmed by all of it, because as a user, and it's why I asked that question at the end, I feel so out of control in this.

Yeah. I mean, Facebook doesn't just have utility, it has power, right? The people who create Facebook and design Facebook and code Facebook and work for Facebook have a ton of power over the entire world, including what they choose to even categorize as political versus not political in different countries, which is crazy to me, that that's different depending on the countries where ads are being displayed.

And that is super powerful and that's also super dangerous if that power isn't wielded well. So I'm sitting with a lot of discomfort in this conversation.

I'm trying not to be a Facebook hater, but I'm just feeling really weird towards the company right now after this conversation.

Yeah, it's kind of wild to think about their answer. This was towards the beginning of the interview, when I think we had asked them how they determine, for the work that they're doing, which ads qualify as political.

And they said, no, it's actually Facebook that's doing that qualification.

And then they're interacting with those ads that Facebook has said are political or not. And like you're saying, categorization is an act of power, and that's exactly what's happening. So it's not necessarily to say there's a bunch of poorly intentioned or ill-intentioned people at Facebook who are trying to do evil or wield power in ways that are explicitly abusive. But it is to say that all of those decisions about what gets seen, what doesn't get seen, who gets to see what, even in the algorithm, have real consequences in how all of that unfolds.

And I think it's just so important that there is this other group that has named themselves, you know, the Real Facebook Oversight Board; I hope I got the order of those names right. But it's so important that those people exist, and it's really wonderful, and maybe we're biased because we have had some of those people on the show before. Ruha Benjamin is part of that, and Safiya Noble is someone else who we've been in conversation with, even though we haven't had her on the show as an individual guest.

But I think it's really important that the people who are looking at these practices for companies like this are not just the people that are chosen by the company itself.

Yeah, I mean, that's how I feel like oversight boards should be in general, right? If you make an internal oversight board, I feel like that's a juxtaposition. How can you provide oversight from the inside when you still have all the biases of the people who work on the inside, and you still have, if it's Facebook, for example, Facebook's best interests at heart? Oversight should come from people who have no vested interest, who are willing to shoot straight with the people who are doing harm. And that's why I just absolutely love what the Real Facebook Oversight Board is doing. I think it's a great project, a great start, and hopefully it'll help promote greater transparency from not just Facebook but all social media companies, especially when it comes to politics.

As a quick tangent, it makes me think about the IRB in our academic world, which stands for the Institutional Review Board and is an internal review board. We were arguing about this the other day, and now I'm second-guessing myself; I want to say it's institutional. Either way, it's within the institution, which is supposed to be regulating itself to some degree. And there seem to be some issues with that, like, how did that get to be the gold standard? Maybe there's a great answer to that question, I don't know.

But it does seem like if a group is regulating itself without any sort of external certification or anything like that, then there are at least some gaps for either misuse or outright harm. Because as an entity, or as an individual, right, I don't always see all the gaps in my own ethical decision making. Sometimes I need people to call me out because they see things in a different way, maybe in a more nuanced or holistic way.

Yeah. So maybe I'll internally oversee this episode and say that we are out of time for today, and we have talked long enough about political ads and propaganda. But be sure to check out the show notes to get all the links and information that you need to download that data set from Manuel and Nayantara, because it is amazing, and especially if you're a researcher in this space, we would highly recommend you go and check them out.

For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate and review the show on iTunes or your favorite pod catcher. Catch our new episodes every week on Wednesdays, and join our conversation on Twitter at @radicalaipod. And before we say our closing line, just a note.

If you are listening to this on the day it comes out, or a few days after that, so that'd be Wednesday, October 20th: please note that our internship application process is closing on November 1st. So if you know someone who would be a great fit for that, please send our information their way, and we'll add that to the show notes as well.

And just in case you are more of an auditory learner than a visual learner, that is radicalaipod.org slash internship.

That's not right, it's just radicalai.org. Did I say the wrong thing? You said radicalaipod dot org. Oh, wow, it's been a long day.

That's our Twitter. So again, now that you're thoroughly confused: for the general episode page and other things to do with us, you can go to radicalai.org. If you want information on the internship, you can go to radicalai.org slash internship. And if you're interested in following us on Twitter, you can find us at @radicalaipod. That's how you say it; I was confused. Reach out, talk to us, tell us how we can fix it.

And as always, stay radical. Solid work, we were really in sync on that.
