UK attempts to regulate social media platforms


Editor’s Note: Guest columnist Kate Jones replaces Emily Taylor this week.

Efforts to regulate social media platforms are gaining momentum in the UK. In May, the British government published its draft Online Safety Bill, which will be scrutinized this autumn by a joint committee of MPs and members of the House of Lords chaired by MP Damian Collins. Collins led Parliament’s inquiry into the Cambridge Analytica scandal in 2018 and is a leading UK voice on disinformation and digital regulation. Meanwhile, the House of Commons Sub-Committee on Online Harms and Disinformation will also examine the bill.

These inquiries come hard on the heels of a House of Lords committee report on freedom of expression in the digital age, released last month, and a Law Commission report recommending the modernization of the UK’s communications offences.

With new regulation of social media a political priority, the Online Safety Bill gives the UK an opportunity to demonstrate the strength of its independent regulatory approach, even as the European Union develops its parallel Digital Services Act. A wide range of industry and civil society voices are engaged in the UK digital regulation debate. Yet despite the heated conversation, the challenges of social media regulation remain far from resolved.

The 133-page bill would create a skeleton regulatory framework that would be fleshed out over time with both secondary legislation and regulatory codes of practice. This framework approach allows for flexibility and incremental development, but has drawn criticism for the extent of discretion it leaves to Ofcom, the UK’s independent communications regulator. Likewise, the bill arguably empowers the Secretary of State for Digital, Culture, Media and Sport to wield considerable political influence over the scope of free speech with little parliamentary scrutiny.

The bill would impose “duties of care” on around 24,000 social media and internet search platforms operating in the UK, ranging from Facebook to local chat groups, to assess and manage the risk of illegal content appearing on their services, and the risk that content harmful to children will appear on their services if children are likely to access them. The language of “duty of care” means that platforms must adopt adequate processes, rather than achieve flawless results. The largest platforms must also assess the risks associated with “lawful but harmful” content for adults that may appear on their services, and must specify in clear and accessible terms of use how they treat such content. Platforms must also make it easy for users to report content they consider illegal or harmful, and must operate a complaints procedure for users. There is an exception for journalistic content.

Many years after the online harm debate began, tensions between freedom of expression and protection from harm remain familiar but unresolved. Perhaps nowhere are these tensions more visible than on the question of whether platforms should have legal obligations regarding content that is lawful but harmful to adults, a thorny subject that the EU’s draft legislation, unlike the UK’s, declines to address. On the one hand, the British bill’s inclusion of “lawful but harmful” content has been widely criticized as legitimizing censorship and restricting freedom of expression, in violation of human rights law. On the other hand, there are serious concerns about speech and content online that do not meet the threshold of illegality but can and do cause harm. These include online expressions of racism, misogyny and abuse, as seen starkly in England following the European Football Championship, and disinformation with major consequences for security and democracy, as currently amply illustrated by falsehoods about COVID-19 vaccines.


Here, a new approach may be emerging, as voices calling for more attention to the spread of harmful material, rather than its mere existence, grow louder. The House of Lords free speech committee report proposed that much of the bill’s “lawful but harmful” content provisions be replaced with a new design-based duty, requiring platforms to take steps to ensure that “their design choices, such as reward mechanisms, choice architecture, and content curation algorithms, mitigate the risk of encouraging and amplifying uncivil content.” The committee recommends that the biggest platforms allow users to make their own choices about what types of content they see and from whom. Richard Wingfield of Global Partners Digital told me: “If content ranking processes were more transparent and freely available, social media companies might be required to be open to integrating alternative algorithms, developed by third parties, which users could choose to moderate and rank what they see online.”

These proposals are long overdue. It is not the existence of abuse and misinformation that is new in the digital age, but their viral transmission. For too long, a heady combination of commercial incentives and a lack of transparency, accountability and user control has driven the exponential expansion of the reach of shocking, emotive and divisive content. These design-based proposals risk resistance from the platforms; even the Facebook Oversight Board could not obtain information from Facebook about its algorithms. But they begin to address society’s real concern with lawful but harmful content: not that it is said, but that it spreads.

Arguably, the bill should address not only platform design but also, as the EU’s European Democracy Action Plan does, the deliberate use of manipulative techniques, such as disinformation, by those who misuse social media platforms to distort public and political opinion or deliberately sow social division.

If the UK government can take any comfort from the multitude of criticisms of the bill, it is that they have come in equal measure from all sides of the online harms debate. The structure of the bill is complex and, for many, its provisions are too vague, in particular its definition of harm. Some fear its skeletal framework will make implementation impossible to anticipate and entirely dependent on Ofcom’s codes of practice. Others see this incremental approach as positive, allowing significant regulatory evolution over time. For platforms, its provisions may be too onerous; others may consider that it gives platforms too much power to control online discourse. For free speech advocates, the bill’s imposition on platforms of a duty merely to “consider,” or take into account, the importance of protecting users’ right to free speech is inadequate, offering no bulwark against the bill’s encroachments on freedom of expression or the risk of a chilling effect from over-implementation. Privacy advocates argue that, despite the duty to consider privacy, the bill would legitimize far greater scrutiny of personal communications, including encrypted messaging, than current practice.

Omissions from the bill also raise objections. It does not cover online advertising fraud, despite the recommendations of a parliamentary committee. It does not give Ofcom or social media platforms the power to tackle urgent threats to public safety. And it does not directly address the complex issue of anonymity. The media, already threatened by the social media business model, doubt that the bill’s protections for journalistic content are strong enough.

Social media regulation is vital, because government, not commercial interests, is the democratic guardian of the public interest. The Online Safety Bill is a forerunner in proposing a risk-based regulatory model for tackling online harms, as opposed to regulatory approaches that trample human rights and media freedoms by prohibiting perceived harms such as “fake news”. Despite the criticisms, the bill – with its creation of obligations for social media, and of transparency and accountability to a strong and independent regulator – is a hugely positive development. Now is the time to reconsider aspects that could infringe human rights, in particular the clauses on lawful but harmful content, and to replace them with new provisions that attack the heart of the problem of online harm.

Kate Jones is an Associate Fellow in the International Law Programme at Chatham House, a Senior Fellow at Oxford Information Labs and a Fellow at the Oxford Human Rights Hub. Previously, she spent many years as a lawyer and diplomat in the UK Foreign and Commonwealth Office, serving in London, Geneva and Strasbourg. She also directed the Diplomatic Studies programme at the University of Oxford. Follow her on Twitter @KateJones77.
