Don’t leave regulation in the hands of social media platforms. The government needs to reassess its role

Jurisdictions around the world are grappling with the issue of regulating user content on social media. India, too, is changing its approach. Public consultations on the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 took place recently. A key point of attention has been the design of a grievance redress mechanism to resolve user complaints against actions taken by social media platforms.

Why is this important? Unlike their initial avatars, today’s social media platforms actively moderate and curate the content they host. They do this by removing offending content, restricting access to it in particular jurisdictions, and suspending or terminating user accounts. Platforms moderate content to comply with statutory mandates as well as their own terms of service, and to increase user engagement.


An effective grievance redress mechanism

The exercise of these powers in the case of high-profile accounts, such as that of former US President Donald Trump or of celebrities like Kangana Ranaut, has regularly made headlines. But lay users who face the consequences of such powers often go unnoticed and unheard. According to their respective transparency reports, Facebook and Instagram removed 3.20 million posts, while Google removed around 60,000 URLs. As “arbiters of speech”, platforms are therefore in a position to violate a person’s freedom of speech and expression.

To protect users from improper takedowns and account suspensions by social media platforms, the need arose to institute effective grievance redress mechanisms (GRMs). In India, prior to May 2021, the GRMs of social media platforms, where they existed at all, were designed according to the terms of service of the respective platform. There was no standardization, in terms of resolution or timelines, in the design of these GRMs. Filing a complaint typically involved filling out an online form, which would often elicit nothing more than an automated response.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (or IT Rules) changed this by bringing uniformity. Social media platforms must now designate a ‘grievance officer’ before whom a user can lodge a complaint. The grievance officer is required to acknowledge receipt of the complaint within 24 hours and resolve it within 15 days. If dissatisfied with the officer’s decision, the user can appeal only to the High Courts or the Supreme Court.

However, invoking the courts’ writ jurisdiction is a time-consuming and costly process, and not all users can afford it. With this in mind, it was important to create an appellate body that requires fewer resources to approach.

The government’s motivation behind creating this appellate committee also seems to stem from other factors. According to the government, it created this tier because “currently there is no appeal mechanism provided by intermediaries nor a credible self-regulatory mechanism in place”. During the public consultations, the government clarified that the proposed Grievance Appellate Committee was only an interim measure, one it had reluctantly adopted because social media platforms had failed to design such a mechanism themselves. Indeed, the government and social media platforms appear to have converged on a self-regulatory approach as the optimal design for an appeals mechanism, even if its eventual structure remains unclear.

On the face of it, this insistence on self-regulation appears to be a step in the right direction. It helps dispel fears of censorship that stem from the government exercising control over online speech through the Grievance Appellate Committee.


Concerns about a self-regulatory model

However, letting social media platforms control the regulatory process is not in the interest of users. Speech, by nature, is contextual. What may offend one person may appear legitimate to another. Indeed, the same person may find speech objectionable in one circumstance but not in another. Courts themselves have arrived at diametrically opposite determinations with respect to the same speech.

Because the determination of offensiveness is so subjective, the process must be objective to ensure fairness. A self-regulatory model erodes such objectivity for several reasons.

First, social media platforms have not been models of objectivity in deciding what content to host or remove. Their political biases have become visible through their decisions to amplify or restrict certain types of content. For example, while Twitter is generally considered more favorable to liberal/leftist views, Facebook has been accused of favoring right-wing positions. An internal appeal mechanism is likely to toe the organization’s line, carrying and reinforcing the same biases when deciding whether speech should be allowed.

Second, even if a number of social media platforms come together to form a common appellate tier instead of individual appeal mechanisms, the members of that tier will not have functional independence. As long as the appointment, tenure, and terms of service of members are controlled by social media platforms, members will be wary of making decisions that may harm the platforms.

Third, a self-regulatory approach to adjudicating speech is likely to be riddled with trust issues. Take the case of Facebook. The platform’s solution to ensuring transparency and impartiality in its content moderation decisions was to form the Oversight Board. Facebook created a $130 million irrevocable trust to maintain the Board’s independence, and the Board has overruled many of Facebook’s content moderation decisions. But the Board now faces harsh criticism that its existence has not materially improved Facebook’s content moderation practices. Facebook, on its part, has complained that it could not follow the Board’s recommendations and has sought to “improve the referral process”. This is emblematic of the tenuous foundations on which any self-regulatory mechanism under the umbrella of a social media platform rests.

These concerns are amplified if, at a later stage, social media platforms become subject to penalties for wrongfully suspending or terminating a post or user account. Platforms can hardly be expected to design self-regulatory mechanisms that invite accountability and penalties for their own decisions.


The way forward

Given all of this, it is important to ask whether processes that will significantly impact freedom of speech and expression should be left to a self-regulatory mechanism. This is all the more so because the legitimacy of declaring speech offensive or not rests on the fairness of the process itself. Since a self-regulatory approach would lack such fairness, it is important that the government reassess its role.

It should design an appellate tier that is free from undue interference by either the government or social media platforms. To do so, it should start by moving away from constituting the Grievance Appellate Committee through an executive notification and instead give it statutory status.

The government has repeatedly said that the Information Technology Act, 2000 is long overdue for an overhaul and that it will announce a ‘Digital India Act’. That may be the right place to plan a robust design for these appeal mechanisms, instead of the current one-foot-in, one-foot-out approach.

Trishee Goyal is a Research Fellow at the Applied Law and Technology Research Centre of the Vidhi Centre for Legal Policy. She tweets @TrisheeGoyal. Views are personal.

(Edited by Prashant)

