
Social media platforms fail to tackle abuse of scientists


Anthony Fauci has been targeted by pandemic-related misinformation online. Credit: J. Scott Applewhite/Getty

Social media sites such as Facebook and Twitter are not doing enough to tackle online abuse and misinformation targeting scientists, a study by international campaign group Avaaz suggests.

The analysis, published on January 19, examined misinformation posted about three high-profile scientists. It found that although all of the posts had been debunked by fact-checkers, the online platforms took no action on half of them.

“Two years into the pandemic, despite making significant policy changes, the platforms, and Facebook in particular, are still failing to take meaningful action,” says Luca Nicotra, a campaign manager at Avaaz who is based in Madrid.

Scientists under attack

Online threats targeting scientists have become a major issue during the COVID-19 pandemic. A Nature survey last year found that many scientists who spoke publicly about the disease had their credibility or reputation attacked, or were threatened with violence. About 15% had received death threats.

Nicotra and his colleagues examined pandemic-related misinformation targeting three prominent scientists: Anthony Fauci, director of the US National Institute of Allergy and Infectious Diseases in Bethesda, Maryland; German virologist Christian Drosten; and Belgian virologist Marc Van Ranst. They checked posts on five social media sites – Facebook, YouTube, Twitter, Instagram and Telegram.

Between January and June 2021, the authors identified 85 posts on the platforms that contained misinformation targeting scientists and their institutions, and which had been debunked by several fact-checking organizations. At the end of July 2021, when the study ended, 49% of the posts were still online and had not been removed or labeled with a warning about the fact-checkers’ findings. The posts had collectively racked up nearly 1.9 million interactions.

Not labeling debunked misinformation is a problem, Nicotra says, because unlabeled posts get significantly more engagement than labeled ones. Labeling is a “very effective strategy” for combating misinformation, he says, “especially if users who have already interacted with the content are also informed.”

Much of Avaaz’s report focuses on Facebook because the size of the platform allows for better statistical analysis, but also because other sites generally don’t provide access to the necessary data and tools.

“We know enough to say that the same problem exists on others, and it might even be worse,” Nicotra says. “But the lack of transparency makes our job more difficult.”

Problematic posts

A spokesperson for Meta, the parent company of Facebook and Instagram, based in Menlo Park, California, said the company has strict rules on misinformation about COVID-19 and vaccines, and does not allow death threats against anyone on its platforms. It has “removed more than 24 million pieces of content for violating these policies since the start of the pandemic, including content mentioned in this report,” the spokesperson said. “We have also added warning labels to more than 195 million additional pieces of COVID-19 content that do not violate our policies but are still problematic. We will continue to take action against any content that violates our rules.”

But Nicotra says the platforms are still missing a large number of problematic posts, especially outside the US and Europe and in languages other than English. In 2020, Facebook allocated just 13% of its budget for developing misinformation-detection algorithms to regions outside the United States, according to documents released by whistleblower Frances Haugen, a former product manager at the company.

Another problem is that the algorithms that govern social media are designed to keep people engaged and therefore tend to highlight content that is controversial or emotionally charged, Nicotra explains. He says new regulations, such as the European Union’s Digital Services Act – which requires companies to assess and take action to reduce the risk of harm to society from their products – could force changes to algorithms.

No quick fix

“These are underlying issues with social media platforms that we are now seeing surge with COVID, and that will potentially re-emerge with other crises,” says Heidi Tworek, a historian who studies health communications at the University of British Columbia in Vancouver, Canada.

While tweaks to algorithms and better enforcement of companies’ own terms of service are helpful, says Tworek, there’s no silver bullet that will solve the problems of online harassment and misinformation.

Some organizations have started working on ways to support scientists facing online harassment. In December 2021, the Australian Science Media Centre in Adelaide hosted a webinar that provided practical advice for scientists on how to protect themselves, including how to control privacy settings, and where and how to report abuse. The webinar also highlighted the need for institutions to provide support. “It’s an area that has often been overlooked, but they have a responsibility to take care of their employees,” says Lyndal Byford, director of news and partnerships at the centre. The UK Science Media Centre (SMC) plans to hold a similar event on February 24.

Fiona Fox, chief executive of the SMC in London, hopes such efforts will help researchers feel safer talking about their work in public. “We can’t let this stop scientists from engaging with the media,” she says. “The public interest lies in good science communication.”
