Regulating social media content

We need to regulate moderation of social media content, but we can’t simply eliminate Section 230(c).

The dangerous facilitation of online disinformation and misinformation is now a defining feature of social media platforms. Facebook, Twitter, TikTok, YouTube and other platforms have faced controversy for fueling the rapid proliferation of political and Covid-19-related misinformation, climate denial and hate speech. While many countries have adopted or are working to adopt legal frameworks to regulate the content moderation practices of social media companies, the United States has maintained a laissez-faire status quo, relying on platforms to act independently in accordance with their community guidelines. But as real-world consequences multiply, self-regulation is no longer a tenable solution. The United States should explore opportunities for government regulation to strengthen and standardize content moderation practices.

Popular support for stronger moderation of social media content emerged notably in 2018, after a rapid rise in rumors and misinformation on Facebook sparked attacks on ethnic minorities in Myanmar and Sri Lanka. Public criticism of Facebook’s role in catalyzing real-world violence initially pressured the company to adopt stronger policies for removing misinformation and hate speech from its platform. More recently in the United States, the proliferation of misinformation and misrepresentation regarding the 2020 presidential election and Covid-19 vaccinations has produced significant offline consequences. The January 6 Capitol insurrection – in which a violent mob of Donald Trump supporters seeking to delegitimize the 2020 election results clashed with police, trapped lawmakers and vandalized the United States Capitol building – was widely attributed to conspiracy theories, false claims of voter fraud, and inflammatory content that spread out of control on Facebook prior to the attack. Similarly, Covid-19 conspiracy theories, anti-Asian hate speech and vaccine falsehoods circulating on platforms like Facebook, TikTok and Twitter have undermined vaccine uptake and fueled anti-Asian harassment and hate crimes both online and offline.

Social media companies have taken critical voluntary measures to crack down on the spread of such content. Facebook implemented “emergency” measures to suppress misinformation ahead of the 2020 presidential election and stepped up its moderation efforts to curb escalating calls for violent protests over the election outcome. To combat misinformation related to Covid-19, Twitter and Facebook not only banned President Trump but also tightened their content moderation practices: adding fact-checking and warning labels, adopting more extensive health-related measures, connecting users with credible scientific information, and enforcing their community guidelines more rigorously. Yet these measures are inconsistently applied and riddled with loopholes that allow misinformation to keep spreading. And while Twitter and Facebook have defended their moderation practices to lawmakers, those practices failed to avert the Capitol insurrection or to effectively quell disinformation about Covid-19.

“It’s hard to be guardians of the internet.”

The shortcomings of social media companies underscore the urgent need for a government framework that regulates content moderation practices. Broad popular support for government regulation has materialized in the wake of the pandemic and has continued to grow. Yet particularly in the United States, where objectionable speech is legally protected, government regulation of content moderation faces an almost intractable tension that legislation around the world has failed to reconcile without controversy. How should “removable” or “forbidden” speech be defined and codified without infringing on freedom of expression? And who (or what entity) should be responsible for determining whether a specific piece of content falls within those limits?

Part of what makes regulation so difficult is the nature of social media platforms and the content itself. Social media companies differ from traditional media companies in that the content on their platforms is user-generated. Because their platforms merely facilitate the dissemination of user-generated content, social media companies do not engage in the same editorial oversight that publishers are required to exercise. As a result, Section 230 of the US Communications Decency Act generously shields social media companies from legal liability for the content users post on their platforms. Because social media companies are not legally responsible for their users’ speech, they have no legal obligation to moderate content.

Section 230’s intermediary liability protections have made it the primary battleground of the US regulatory debate over social media content moderation. Lawmakers have introduced numerous proposals aimed at regulating online content moderation, many of which seek to erode or eliminate Section 230’s liability protections for intermediaries. But while Section 230(c) gives social media companies legal immunity from liability for their users’ content, it also gives them legal immunity for moderating content. Under the second provision of Section 230(c), social media companies are shielded from liability for engaging in content moderation in accordance with their community standards and terms of service. Indeed, social media companies relied on these protections after the Capitol insurrection and amid the Covid-19 pandemic, as noted above. As a result, eliminating Section 230 protections would not only undermine innovation and free speech but could also further deter social media companies from moderating content.

While Congress has held several hearings with executives from major tech and social media companies to investigate avenues for regulation, those hearings have mostly revealed lawmakers’ lack of understanding of the industry. Moreover, even as social media companies have defended their moderation practices, the technical difficulties of effectively moderating content have forced them to outsource the task to algorithms and third parties. It is hard to be the “guardians of the internet.”

Given these hurdles, US lawmakers, social media companies, and information environment experts must make a concerted effort to collaborate and think creatively about how to balance free speech protections with the public interest in limiting dangerous information. Federal courts have established, and continue to establish, exceptions to the Section 230 liability shield, which can begin to delineate the types of content that social media companies must meaningfully moderate. Additionally, lawmakers could develop transparency and digital privacy standards that all social media platforms must meet to qualify for Section 230(c) protection. Ultimately, online speech can no longer be governed by the regulatory frameworks of a non-digital age – our laws must adapt to our increasingly virtual world.

