
How Social Media Platforms Can Stop the Spread of COVID-19 Misinformation

COVID-19 has taken the world by storm. As the first pandemic our generation has experienced, it has brought a great deal of confusion and misinformation.

Ever since talk of a COVID-19 vaccine started, myths and misinformation have popped up alongside it. And with social media conversations spreading rapidly through the accounts of real people, it can be hard to tell the difference between fact and fiction.

Here’s what social media platforms can do to fix the problem.

Social media is an extension of our lives, so information shared online can spread like wildfire. When misinformation about COVID-19 circulates, it creates uncertainty, skepticism, and mistrust, which can lead people to reject proven public health measures.

Misinformation takes hold more easily when the public is not well informed about COVID-19 and vaccines, and lacks trust in authoritative sources such as governments, scientists, and medical professionals.

In June 2020, BBC News reported that social media companies were not doing enough to curb the spread of fake news regarding COVID-19 and the COVID-19 vaccine.

Facebook has since stated that it is taking action by sharing and promoting authoritative information about COVID-19 vaccines, and by removing false claims on the subject.

In a report published in March 2021, the Center for Countering Digital Hate found that of the anti-vaccine content shared on Facebook and Twitter more than 812,000 times between February 1 and March 16, 2021, 65% came from just 12 accounts.


Read more: Report finds most misinformation about COVID-19 was spread by these 12 influencers

These include prominent public figures such as Children’s Health Defense President Robert F. Kennedy Jr. and entrepreneur Joseph Mercola.


Social media companies have used various initiatives to combat the spread of COVID-19 misinformation. That’s why you’ll see a lot of pandemic-related information when you open your social media apps or browse your feed.

The home section of the YouTube app, for example, features an article about COVID-19 from the World Health Organization. When you click Learn more, it takes you to a resource hub that includes myths, tips, and Q&As.

YouTube is also removing videos that violate its COVID-19 policy.

When you search for the hashtag or topic COVID-19 on Twitter, you’ll find a resource portal at the top and options to learn more about COVID-19 and vaccine information in your country.

You may also spot a warning under posts with misleading COVID-19 information. Any tweet from a user promoting 5G conspiracy theories has a blue exclamation mark with a message from Twitter about getting the facts about COVID-19 and links to a story debunking the claim.

Facebook has banned more than a billion fake accounts, deleted 12 million posts spreading misinformation about COVID-19 vaccines, and says it has hired fact-checkers around the world.

Read more: Is Facebook doing enough to fight misinformation?

In addition to weeding out repeat offenders, social media platforms can take the following steps to reduce the spread of misinformation online.

1. Tap into celebrities and religious and community leaders

Many people are suspicious of authority figures, like the government. Social media companies can partner with celebrities and religious and community leaders to post videos online featuring factual and demystifying information.

They can also share their personal experiences around vaccination.

2. Establish shared online standards on social media platforms

Social media platforms already use algorithms to detect fake news. However, they could go further by agreeing on a common standard of conduct for how flagged content is treated across platforms, as the sketch below illustrates.
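
To make the idea of a shared standard concrete, here is a minimal, hypothetical Python sketch of how a common rulebook of flagged claims and enforcement thresholds might be applied identically on any platform. The claim list, threshold values, and function names are invented for illustration; real systems rely on large machine-learning models and human fact-checkers, not keyword rules.

```python
# A toy illustration only: every claim, threshold, and name below is
# hypothetical, not any platform's actual detection system.

SHARED_STANDARD = {
    "flagged_claims": ["microchip in vaccine", "5g causes covid"],
    "review_threshold": 0.5,   # send to human review at or above this score
    "label_threshold": 1.0,    # attach a warning label at or above this score
}

def score_post(text: str) -> float:
    """Return a crude 0-1 score based on how many flagged claims appear."""
    text = text.lower()
    hits = sum(claim in text for claim in SHARED_STANDARD["flagged_claims"])
    return min(1.0, hits / 2)

def triage(text: str) -> str:
    """Apply the same shared thresholds, whatever the platform."""
    score = score_post(text)
    if score >= SHARED_STANDARD["label_threshold"]:
        return "label-and-limit-reach"
    if score >= SHARED_STANDARD["review_threshold"]:
        return "send-to-human-review"
    return "no-action"

print(triage("They put a microchip in vaccine doses!"))  # send-to-human-review
```

The point of the sketch is the shared rulebook: if every platform consumed the same standard, the same post would be treated the same way everywhere, instead of being removed on one site and amplified on another.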

3. Establish a clear threshold for enforcement action

A low threshold, such as two warnings, would allow for graduated enforcement before outright removal, for example by restricting a page's ability to go live or post video content without moderator review.

This would still allow users to exercise their right to free speech while the type of information they upload is moderated, as the sketch below shows.
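
As a rough illustration of how such a graduated threshold could work, here is a short, hypothetical Python sketch. The threshold value, account fields, and restriction names are assumptions for this example, not any platform's actual policy engine.

```python
# Hypothetical sketch of the two-warning threshold described above.
from dataclasses import dataclass, field

WARNING_THRESHOLD = 2  # the "low threshold" from the text

@dataclass
class Account:
    name: str
    warnings: int = 0
    restrictions: set = field(default_factory=set)

def record_violation(account: Account) -> str:
    """Warn first; restrict risky features once the threshold is reached."""
    account.warnings += 1
    if account.warnings < WARNING_THRESHOLD:
        return "warning-issued"
    # A graduated step short of removal: the account can still post, but
    # live video and new uploads now require moderator review first.
    account.restrictions.update({"live-video", "unreviewed-uploads"})
    return "features-restricted-pending-review"

page = Account("example_page")
print(record_violation(page))  # warning-issued
print(record_violation(page))  # features-restricted-pending-review
```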

4. Display corrective messages to users exposed to misinformation

Social media platforms could show corrective posts from experts and trusted sources to users who have been exposed to content that violates their COVID-19 policies.

5. Add warning screens to unreliable sources

Platforms should also consider adding a warning screen in front of third-party websites or unreliable sources harboring misinformation about COVID-19.

Related: The best fact-checking sites for finding unbiased truth

6. Ban private and secret anti-vaccine Facebook groups

Facebook could ban private groups that traffic primarily in vaccine misinformation, and prevent groups that require a Facebook disclaimer from existing as private or secret groups, since anti-vaccine activists rely on the confidentiality of such groups to spread dangerous misinformation.

As highlighted by the COVID-19 pandemic, the role played by social media in spreading misinformation can have serious social and public health consequences. Mass vaccination remains the most effective strategy for achieving long-term control of the pandemic.


To improve COVID-19 awareness and vaccine uptake, social media companies have a key role to play in ensuring that correct information is presented to users on their platforms while combating misinformation.

