
Researchers say social media platforms’ freedom to moderate content has significant tradeoffs

Leading academics on the intersection of social media and democracy say that while content moderation by social media platforms can be beneficial, it has drawbacks: platforms may remove valuable content, and there is little transparency about how moderation decisions are made.

Tuesday’s event, hosted by the Stanford Constitutional Law Center, took place amid debates over the power of social media platforms to regulate online speech and recent accusations that Instagram intentionally deleted posts related to the potential expulsion of Palestinians from the Sheikh Jarrah neighborhood of East Jerusalem.

Eric Goldman, professor of law and co-director of the High Tech Law Institute at Santa Clara University School of Law, said the discretion of social media platforms, meaning their ability to terminate accounts and remove content, is a positive feature that should not be taken away.

Many governments, in an effort to limit platforms’ discretion, are going directly to companies to demand the removal of content, according to David Kaye, clinical professor of law at the University of California, Irvine. Kaye pointed to India, where the government has demanded the removal of Twitter and Facebook accounts belonging to critics of the government and its response to COVID-19.

According to Goldman, however, platforms’ freedom to apply their own editorial policies on what to remove or restrict is important because, without this discretion, platforms’ servers would be inundated with “horrible content,” which he defined as antisocial content that harms others and creates barriers to community participation and formation. “Horrible content” is not necessarily illegal, he added.

He added that a loss of this discretion would hurt platforms’ business models, as companies would not pay for ads and users would not pay subscription fees to access platforms full of disorganized and “horrible” content.

Goldman also argued that “horrible content” has implications beyond the digital realm. Platforms that allow all content to remain online set “a rough standard for how we expect to interact with each other in all of our lives and in all of our society,” he said. If a standard were set allowing anything to be posted online, Goldman argued, people would take this to mean that any behavior is acceptable in the offline world as well, even if it is harmful to society.

While Goldman favors platform discretion, Evelyn Douek, lecturer on law and S.J.D. candidate at Harvard Law School, warned of its tradeoffs. While she said she is also wary of an intense regulatory model in which the state controls social media platforms, she cautioned that platforms operating in secret are dangerous. She pointed to social media platforms that she says are currently suppressing “very valuable human rights evidence” in Palestine without transparency about how decisions are made and what material is lost.

Kaye added that while he is concerned about governments getting involved in editorial content decisions, he also sees a significant problem in the lack of transparency in platforms’ decision-making processes.

“Given the impact that platforms have on public life and on public institutions, there is a real argument for regulation to encourage more transparency,” he said.

Kaye suggested exploring alternative oversight models beyond government regulation and self-regulation. Specifically, he recommended multi-stakeholder and cross-sector models that can provide insight and a form of non-governmental oversight of business. These “can very well provide a channel of grievance, a channel of transparency on how businesses behave and respond to some important public issues,” he said.

While content moderation and platform discretion are difficult issues with complex proposed solutions, Douek stressed the importance of ongoing conversations on the topics.

“We’re never going to solve the content moderation problem. We’re never going to get to a point where we have a great set of house rules,” she said. “What we need to do is find a way to keep discussing this and discussing it more productively for the rest of the time, and having a common language and framework to do this is a great start.”

