
Social media platforms say transparency would make moderation harder; the opposite is true

First published by the Electronic Frontier Foundation

Imagine if your boss made up hundreds of petty rules and refused to divulge them, but each week your salary was locked in based on how many of those rules you broke. When you’re an online creator and your “boss” is a giant social media platform, that’s exactly how your compensation works.

Algospeak is a new English dialect that has emerged from social media users' desperate attempts to "please the algorithm": that is, to avoid the words and phrases that cause social media platforms' algorithms to suppress or block their communication.

Algospeak is practiced by all types of social media users, from individuals speaking to their friends to science communicators and activists hoping to reach a wider audience. But the most ardent practitioners of algospeak are social media creators, who rely, directly or indirectly, on social media for a living.

For these creators, accidentally crossing an invisible linguistic fence erected by a social media company can mean the difference between paying their rent and not. When you work on a video for days or weeks or even years, and then the "algorithm" decides not to show it to anyone (not even the people who explicitly follow you or subscribe to your feed), that has real consequences.

Social media platforms claim they have the right to set their own internal rules and declare certain topics or behaviors prohibited. They also say that by automating recommendations, they help their users find the best videos and other posts.

They are not wrong. In the United States, for example, the First Amendment protects the right of platforms to moderate the content they host. Beyond the law, every conversational space has its own norms and rules. These rules define a community, and part of freedom of expression is the right of a community to freely decide how its members will speak to each other. Additionally, social media, like all human systems, has its share of predators and parasites: scammers, trolls and spammers, which is why users want tools to help them filter out the noise so that they can get to the good stuff.

But legal issues aside, the argument is much less compelling when it's the tech giants making it. Their moderation policies aren't "community standards" – they're a single set of policies that attempts to uniformly regulate the speech of billions of people in over 100 countries, speaking more than 1,000 languages. Not only is that an absurd task, but the big platforms are also pretty bad at it, falling well below the mark on speech, transparency, due process and human rights.

Algospeak is the latest in a long line of tactics created by users of online services to avoid the wrath of automated moderation tools. In the early days of online chat, AOL users used creative spellings to circumvent the service's profanity filters, creating an arms race with lots of collateral damage. For example, Vietnamese AOL users could not talk about friends named "Phuc" in the company's chat rooms.
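The dynamic is easy to see in a toy version of such a filter: plain substring matching catches innocent names while creative misspellings slip right past it, so the blocklist keeps growing and the collateral damage grows with it. Here is a minimal sketch; the blocklist is deliberately tame and hypothetical, not AOL's actual filter.

```typescript
// Minimal sketch of a naive substring blocklist, the kind of crude filter that
// produces this sort of collateral damage. The entries are deliberately tame
// and hypothetical; real chat filters targeted profanities.
const BLOCKLIST = ["ass", "hell"];

function isBlocked(message: string): boolean {
  const lower = message.toLowerCase();
  // Plain substring matching: no word boundaries, no sense of names or context.
  return BLOCKLIST.some((term) => lower.includes(term));
}

console.log(isBlocked("Meet my friend Cassandra")); // true: an innocent name is caught
console.log(isBlocked("What the h3ll"));            // false: a creative misspelling slips through
```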


But while there have always been creative workarounds to online moderation, algospeak and the moderation algorithms that spawned it represent a new phase in the dispute over automated moderation: one in which moderation lands as an attack on the creators who help these platforms thrive.

The Online Creators' Association (OCA) has taken to TikTok to call on the company to explain its moderation policies. As OCA co-founder Cecelia Gray told the Washington Post's Taylor Lorenz: "People need to tone down their own language to avoid offending these all-seeing, all-knowing TikTok gods."

For the creators of TikTok, the judgments of the service’s recommendation algorithm are extremely important. TikTok users’ feeds don’t necessarily feature new works from the creators they follow. This means that you, as a TikTok user, cannot subscribe to a creator and be sure that their new videos will automatically come to your attention. Instead, TikTok treats the fact that you’ve explicitly subscribed to a creator’s feed as just a suggestion, one of many signals incorporated into its ranking system.
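To make that concrete, here is a hypothetical sketch of what "a follow is just one signal among many" might look like inside a ranking function. The signal names and weights are invented for illustration; TikTok doesn't publish its real ones, which is precisely the problem.

```typescript
// Hypothetical ranking sketch: the follow relationship is just one weighted
// signal among many, and an opaque moderation penalty can erase everything.
// Signal names and weights are invented; the platform's real model is undisclosed.
interface CandidateVideo {
  id: string;
  creatorFollowedByViewer: boolean;
  predictedWatchTime: number;   // seconds the model expects the viewer to watch
  predictedEngagement: number;  // 0..1 likelihood of like/share/comment
  moderationPenalty: number;    // 0..1 downgrade applied by undisclosed content rules
}

function score(v: CandidateVideo): number {
  return (
    0.1 * (v.creatorFollowedByViewer ? 1 : 0) +   // the follow barely moves the needle
    0.5 * Math.min(v.predictedWatchTime / 60, 1) +
    0.4 * v.predictedEngagement -
    1.0 * v.moderationPenalty                      // one tripped rule can bury the video
  );
}

// A followed creator's video can still lose to a stranger's video.
const followedCreatorsVideo: CandidateVideo = {
  id: "a",
  creatorFollowedByViewer: true,
  predictedWatchTime: 20,
  predictedEngagement: 0.2,
  moderationPenalty: 0.5, // tripped some invisible rule
};
const strangersVideo: CandidateVideo = {
  id: "b",
  creatorFollowedByViewer: false,
  predictedWatchTime: 55,
  predictedEngagement: 0.6,
  moderationPenalty: 0,
};
console.log(score(followedCreatorsVideo) < score(strangersVideo)); // true
```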

For creators on TikTok — and creators on other platforms where there's no guarantee your followers will actually see your videos — understanding "the algorithm" is the difference between getting paid for your work and not.

But these platforms will not explain how their algorithms work: which words or phrases trigger a downgrade. As Lorenz writes, "TikTok creators have created shared Google Docs with lists of hundreds of words they believe the app's moderation systems deem problematic. Other users keep running lists of terms they say have gotten videos suppressed, trying to reverse-engineer the system." (The website Zuck Got Me For chronicles innocuous content that Instagram's filters have blocked without explanation.)
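In practice, that reverse engineering boils down to checking a draft against a crowd-sourced list of suspected trigger terms before posting and swapping in workarounds. Here is a hypothetical sketch of that workflow; the terms and substitutions below are widely reported algospeak examples, not any platform's confirmed blocklist.

```typescript
// Hypothetical sketch of the algospeak workflow: scan a draft caption for
// suspected trigger terms and swap in workarounds. The entries below are
// widely reported algospeak substitutions, not a confirmed blocklist.
const SUSPECTED_TRIGGERS: Record<string, string> = {
  dead: "unalive",
  kill: "unalive",
  sex: "seggs",
};

function algospeakify(caption: string): { caption: string; flagged: string[] } {
  const flagged: string[] = [];
  let rewritten = caption;
  for (const [term, substitute] of Object.entries(SUSPECTED_TRIGGERS)) {
    const pattern = new RegExp(`\\b${term}\\b`, "gi");
    if (caption.match(pattern)) {
      flagged.push(term);
      rewritten = rewritten.replace(pattern, substitute);
    }
  }
  return { caption: rewritten, flagged };
}

console.log(algospeakify("My grandmother is dead"));
// { caption: "My grandmother is unalive", flagged: ["dead"] }
```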

The people who create the materials that make platforms like YouTube, Facebook, Twitter, Snap, Instagram, and TikTok valuable have devised many ways to turn attention into groceries and rent money, and they have convinced billions of platform users to sign up to receive their creations as they are uploaded. But those followers can only pay attention to those creations if the algorithm decides to include them in their feeds, which means creators can only eat and pay rent if they please the algorithm.

Unfortunately, the platforms refuse to disclose how their recommendation systems work. They say revealing the criteria by which the system decides when to promote or bury a work would allow spammers and scammers to abuse the system.

Frankly, that’s a weird argument. In information security practice, “security through obscurity” is considered a fool’s errand. The gold standard for a security system is one that works even if your adversary understands it. Content moderation is the only major area where “if I told you how it worked, it would stop working” is treated as a reasonable proposition.

It’s particularly vexing for creators, who won’t be paid for their creative work when an algorithmic failure buries it: for them, “I can’t tell you how the system works or you might cheat” is like your boss saying, “I can’t tell you what your job is, or you might trick me into thinking you’re a good employee.”

That’s where Tracking Exposed comes in: Tracking Exposed is a small collective of European engineers and designers who systematically probe social media algorithms to replace the folk theories that inform algospeak with hard data on what the platforms boost and bury.

Tracking Exposed asks users to install browser plugins that anonymously analyze the recommendation systems behind Facebook, Amazon, TikTok, YouTube and Pornhub (because sex work is work). This data is combined with data gleaned from automated testing of these systems, with the goal of understanding how the ranking system tries to match inferred user tastes to the materials creators make, and of making that process legible to all users.
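To give a sense of what "anonymously analyze" can mean in practice, here is a hypothetical sketch of the kind of record such a plugin might collect: what was recommended, in what slot, with anything identifying the viewer stripped out before the record leaves the browser. This is an illustration, not Tracking Exposed's actual data schema.

```typescript
// Hypothetical sketch of an observation a recommendation-auditing plugin might
// record. This is an illustration, not Tracking Exposed's actual data schema.
interface RecommendationObservation {
  platform: "youtube" | "tiktok" | "facebook" | "amazon" | "pornhub";
  observedAt: string;        // ISO timestamp
  sessionPseudonym: string;  // random ID, not tied to the viewer's account
  slot: number;              // position in the feed or sidebar
  itemId: string;            // the recommended video or product
}

// What the plugin sees in the page may carry identifying fields.
interface RawCapture extends RecommendationObservation {
  userId?: string;
  cookieId?: string;
}

// Drop anything that could identify the viewer before the record is shared.
function anonymize(raw: RawCapture): RecommendationObservation {
  return {
    platform: raw.platform,
    observedAt: raw.observedAt,
    sessionPseudonym: raw.sessionPseudonym,
    slot: raw.slot,
    itemId: raw.itemId,
  };
}
```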

But understanding how these recommender systems work is just the beginning. The next step – letting users modify the recommender system – is where things really get interesting.

YouChoose is another Tracking Exposed plugin: it replaces your browser’s YouTube recommendations with recommendations from many services across the internet, selected according to criteria you choose (hence the name).
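Conceptually, that is a re-ranker under the viewer's control instead of the platform's: pool candidates from several sources, then order them by criteria the viewer picks. Here is a hypothetical sketch of the idea; the source names and criteria are invented, and this is not YouChoose's actual code or API.

```typescript
// Hypothetical sketch of user-controlled re-ranking: pool candidate videos
// from several sources and order them by viewer-chosen criteria, rather than
// by the platform's opaque engagement score. Sources and criteria are invented.
interface Recommendation {
  title: string;
  source: "platform" | "independent-curator" | "creator-rss";
  publishedAt: Date;
  fromFollowedCreator: boolean;
}

type Criterion = (a: Recommendation, b: Recommendation) => number;

const CRITERIA: Record<string, Criterion> = {
  newestFirst: (a, b) => b.publishedAt.getTime() - a.publishedAt.getTime(),
  followedCreatorsFirst: (a, b) =>
    Number(b.fromFollowedCreator) - Number(a.fromFollowedCreator),
};

function rerank(candidates: Recommendation[], chosen: string[]): Recommendation[] {
  // Apply the viewer's criteria in order of priority.
  return [...candidates].sort((a, b) => {
    for (const name of chosen) {
      const cmp = CRITERIA[name](a, b);
      if (cmp !== 0) return cmp;
    }
    return 0;
  });
}

// Usage: rerank(candidates, ["followedCreatorsFirst", "newestFirst"]);
```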

The Tracking Exposed suite of tools is a great example of adversarial interoperability (AKA “Competitive Compatibility” or “comcom”). Empowering users and creators to understand and reconfigure the recommender systems that produce their feed – or feed their families – is a deeply empowering vision.

The benefits of probing and analyzing recommender systems don’t stop at helping creators and their audiences. Tracking Exposed’s other prominent work includes a study of how TikTok promotes pro-war content and demotes anti-war content in Russia, and an analysis quantifying the role that political disinformation on Facebook played in the outcome of the 2021 elections in the Netherlands.

Platforms tell us that they need internal rules to make their chat spaces thrive, and that’s absolutely true. But then they hide those rules and punish users who break them. Remember when OCA co-founder Cecelia Gray said her members have to tone down their own language to avoid offending “these all-seeing, all-knowing TikTok gods”?

They are not gods, even if they act like them. These companies should make their policies legible to the public and to creators by adopting the Santa Clara Principles.

But creators and audiences shouldn’t wait for these godlike corporations to descend from the sky and deign to explain themselves to the poor mortals who use their platforms. Comcom tools like Tracking Exposed allow us to demand an explanation from the gods, and extract that explanation ourselves if the gods refuse.

