Improving Social Media: Content Moderation & Democracy

All Tech is Human Series #8: with Sarah T. Roberts and Murtaza Shaikh


Details from the original livestream event put on by All Tech is Human



Key Takeaways & Some Action Items

Platforms and Accountability

  • What are the rules and policies that large tech companies and social media companies have towards hate speech and misinformation? Ask yourself the following:

    • How do these rules get written? Who writes them?

    • How do these rules get enforced? Who regulates them?

Legal Enforcement

  • Social media companies, as private companies, have First Amendment rights on their platforms; their users do not (a common misunderstanding, since the First Amendment constrains the government, not private companies)

  • American norms and laws, as interpreted through the interests of Silicon Valley, are now being pushed globally onto communities and legal systems whose own models may conflict with them (e.g., around content moderation)

Content Moderation

  • What is it? “Brand management and liability mitigation for platforms who are maintaining a B2B relationship with their actual clients and customers, who are other corporate entities, advertisers, data miners, etc.” (said by Sarah)

    • Users do feel significant protections and effects, but we have to understand that thinking of users is secondary (though presented under the guise of “free expression all the time”) to the platforms’ primary concern of brand management and liability mitigation

  • What is a “public sphere”?

    • Places that are “public” and are legally enforced as “public” space

      • Content (e.g., hate speech) can be regulated in public spaces; however, people often think of social media as a “public space” when legally it is not

    • Look at social media as faux “public spheres”; these are increasing

    • Meanwhile, real “public spheres” are decreasing

  • Can there ever be universal content moderation?

    • This is likely not possible (said by Murtaza)

      • It would have to be implemented across “state lines” and legal boundaries

      • Content moderation depends heavily on online context, which makes regulation very difficult

      • There is a lot of tension in places like the UK (and other countries) between what is harmful but legal and what is harmful and illegal

Hate Speech

  • Does not have a globally agreed-upon definition

  • Usually defined along the lines of “a direct attack based on protected characteristics” (said by Murtaza)

    • Examples of hazy issues: hate speech based on linguistic identity, immigration status, etc.

  • One element of hate speech: incitement of violence

    • This is what happened to get Donald Trump kicked off of Twitter

    • Twitter had a policy against hate speech that incites violence

    • This is a tricky scenario, because content moderation often cannot respond until after the damage has been done (discrimination, harm, or actual violence)

Section 230

  • Created by and applicable for the US

  • Shields social media companies from liability for the content that is posted on their platforms

    • Allows for free speech

    • Also allows social media companies to come up with their own policies on content moderation

  • Revising Section 230 is currently being discussed by both Republicans and Democrats

    • Both sides have claimed harm from Section 230

How can we make social media less extreme and harmful?

  • See social media as a magnifying glass on society, not necessarily a mirror (said by Sarah)

  • Use this to see where there are harms that exist offline as well

  • And use this to “light a fire” on things that should be changed

  • “Show your work”

    • Many of these companies are not legally required to do content moderation, but from a PR perspective, to build user trust, and to account for human welfare, “showing your work” (doing the work and being transparent about it) is essential




Do you have any more action items or takeaways that you’d like to share that are related to this topic? Any resources you’d like to include in this list? We’d love to hear from you! Feel free to leave your thoughts in a comment below.
