
Digital Rights Watch defends social media platforms' efforts to remove terrorist content


Digital Rights Watch has defended social media platforms and their efforts to remove abhorrent violent content online, telling an Australian parliamentary committee that businesses shouldn't be expected to stay on top of all of that content at all times.

“I don’t know what we gain by putting the kind of pressure on companies to basically remove all this content all the time,” said Digital Rights Watch director of programs and partnerships Lucie Krahulcova, appearing before the Australian Joint Parliamentary Committee on Law Enforcement.

“I don’t think it removes the problems of society, it does not remove the socio-economic problems, and it certainly does not remove the violence that is happening in the world.”

The comments were made in reference to Australia's Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (AVM Act), which is currently being reviewed by the joint committee.

The AVM Act requires content service providers to remove abhorrent violent content and notify police if their services are being used to share such content, or risk being fined up to 10% of their annual global turnover. It also gives the eSafety Commissioner the power to issue notices to content service providers ordering them to remove specific abhorrent violent content.

Krahulcova also told the committee that there is a difference between acting quickly to remove terrorist content that is broadcast live and removing violent content documenting an ongoing crisis.

She warned that imposing excessive penalties on companies for violent material posted on their social media platforms risked those platforms scrubbing decades of footage from activists and journalists in order to avoid regulatory backlash. In explaining this concern, she pointed to YouTube's automated tools removing videos documenting war crimes and violence in Syria.

Digital Rights Watch was not the only organisation appearing before the joint committee, with representatives from the industry body Communications Alliance, Meta, Twitter, and Snap also in attendance on Wednesday.

Meta APAC vice president of public policy Simon Milner told the committee he supported the idea of all companies being held accountable for addressing harmful content on their online platforms, which could include an obligation for companies to use proactive detection technology.

While Milner said he welcomed the government considering legislating these kinds of measures, he acknowledged that not all companies have the capacity to implement them. He cautioned, however, that all businesses should still have the means to measure the prevalence of abhorrent violent material on their online platforms and to deal with that content.

Milner also said the need for companies to proactively monitor abhorrent violent content may change once the Online Safety Act comes into force early next year, as that law gives the eSafety Commissioner more powers to hold online services accountable and creates new schemes that compel the removal or blocking of "harmful" or abhorrent violent content through take-down powers.

"This is going to include codes and frameworks that will encourage companies to take some of these corrective actions with respect to certain categories of material," Milner told the committee.

A coalition of organisations – including those that appeared before the committee on Wednesday – met a fortnight ago to recommend various amendments to Australia's abhorrent violent material laws.

The organisations jointly proposed various changes to the AVM Act, ranging from additional review processes to clarified definitions and reduced penalties.

Chief among the legislative amendments recommended by the coalition is greater clarity around when the law's obligations to remove abhorrent violent material are triggered. The coalition said many organisations are confused because guidance from the attorney-general's department has indicated that there is no obligation to proactively monitor for abhorrent violent content, and that there is only an obligation to remove such content once it is found.
