
Moderation of social media content is not neutral, says USF researcher


After a mob of pro-Trump agitators stormed the US Capitol last week, forcing a delay in the certification of the presidential electoral vote, Twitter banned President Trump’s account, and other platforms quickly followed suit, citing fears that his posts could incite further violence. Parler, a Twitter alternative known for its more permissive content rules, was removed from Apple’s App Store and Google Play, and Amazon halted the platform’s hosting services, essentially taking it offline. Ali Alkhatib, a research fellow at the Center for Applied Data Ethics at the University of San Francisco, studies ethical issues in technology such as unfair bias, surveillance, and misinformation. Drawing on a background in both social science and computer science, he examines how these platforms craft and enforce rules from a social rather than a technical perspective.

“All the technical systems that we build are things to support, or I guess – I hate to use that word – to disrupt the way people live their lives,” Alkhatib said. “It’s not to say that technology is a bad thing or anything like that, but that it exists in society and it affects society in these ways.”

The Capitol Riot made it painfully clear that online speech can and does have real-life consequences. But that has always been true, Alkhatib said, and to pretend otherwise is a luxury only the privileged can afford.

“The view that what happens on the internet is all sandboxed, like games or other playful, inconsequential things, is the kind of thing you say when you’re insulated from the reality of the consequences,” he said.

Alkhatib said the technology, rules and algorithms that govern online platforms — including safeguards against abuse or violence — are not neutral.

“What ends up happening is that groups of people who already have power end up using these bureaucratic institutions and structures to perpetuate violence,” he said, comparing the enforcement of platform rules to real-life law enforcement, with the same racial and social biases.

“Unless you build this specifically with marginalized and vulnerable groups, it’s hard to build a system like this that does anything other than further oppress people who are already under the thumb of various other structures and various other bureaucracies and powers,” he said.

He also said he hopes users of these platforms will feel more comfortable demanding the changes they want to see and advocating for improvements, much as residents might ask a city to fix a pothole or another quality-of-life issue.

“We don’t need degrees in computer science or philosophy or anything like that to be able to say, you know, ‘Twitter shouldn’t work that way,’” Alkhatib said. “It’s not working for us and it’s hurting us. And the fact that I am a human being is enough to give me the right to have a say in how my life is mediated and managed.”
