Media literacy

Why using Facebook should require a media literacy test

We don’t allow people to start driving motor vehicles until they have taken a driver training course and then passed a test, for a very good reason – vehicles are dangerous to drivers, passengers, and pedestrians. Social networks, and the deceptive and harmful content they disseminate, are also dangerous to society, which is why a certain level of media literacy – and testing – should be a condition of their use.

Social media companies like Facebook and Twitter would surely oppose such an idea, calling it heavy-handed and extreme. But that objection willfully misunderstands the enormity of the threat that disinformation poses to democratic societies.

The Capitol riot gave us a glimpse of the kind of America that disinformation has helped create, and it illustrates why that disinformation is so dangerous. On January 6, the nation witnessed an unprecedented attack on our seat of government, one that left seven dead and lawmakers fearing for their lives. The rioters who caused this chaos planned their march on Capitol Hill on social media, including in Facebook groups, and were driven to violence by months of disinformation and conspiracy theories about a presidential election they believed had been “stolen” from Donald Trump.

While major social networks have made significant investments in the fight against disinformation, it may prove impossible to remove all, or even most, of it. That is why it is time to shift the focus from trying to scrub disinformation away to giving people the tools to recognize and reject it.

Media literacy should certainly be taught in schools, but this kind of training should also be available where people actually encounter disinformation: on social media. Large social networks that spread news and information should require users to take a short media literacy course and then pass a quiz before logging on. If necessary, social networks should be compelled to do so by force of law.

Moderation is difficult

So far, we have relied on the major social networks to protect their users from misinformation. They use AI to find deceptive content and then remove it, label it, or reduce its spread. The law even shields social networks from legal action over the content moderation decisions they make.

But relying on social media to control disinformation is clearly not enough.

First of all, the tech companies that run social media often have a financial incentive to let disinformation stand. The content delivery algorithms they use promote hyper-partisan, often half-true or outright false content because it consistently gets the most engagement in the form of likes, shares, and comments from users. That creates ad views. It’s good for business.

Second, large social networks are locked into a never-ending expansion of censorship as propagandists and conspiracy theorists find new ways to spread bogus content. Facebook and other companies (like Parler) have learned that taking a purist approach to free speech – that is, allowing any speech that is not illegal under U.S. law – is not practical in digital spaces. Censoring certain types of content is responsible and good. In its latest concession, Facebook announced Monday that it would ban any posts containing debunked theories about vaccines (including those for COVID-19), such as the claim that they cause autism. But it is impossible even for well-meaning censors to keep up with the endless ingenuity of the purveyors of disinformation.

There are logistical and technical reasons for this. Facebook relies on 15,000 content moderators (most of them contractors) to police the posts of its 2.7 billion users worldwide. It is increasingly turning to AI models to find and moderate harmful or bogus posts, but the company itself admits that these models cannot understand certain types of harmful speech, such as that found in memes or video.

This is why it may be better to help consumers of social content detect and reject disinformation and refrain from disseminating it.

“I recommended that the platforms provide media literacy training directly, on their sites,” says Paul Barrett, a disinformation and content moderation researcher and deputy director of the Stern Center for Business and Human Rights at New York University (NYU). “There is also the question of whether to have a media literacy button on the site, staring you in the face, so that a user can access media literacy materials at any time.”

A quick introduction

Social media users, young and old alike, desperately need tools to recognize both misinformation (false content spread innocently, out of ignorance of the facts) and disinformation (false content knowingly spread for political or financial gain), including the skills to find out who created a piece of content and to analyze why.

These are core elements of media literacy, which also involves the ability to cross-check information against additional sources, assess the credibility of authors and sources, recognize the presence or absence of high journalistic standards, and create and/or share media in a way that reflects its credibility, according to the United Nations Educational, Scientific and Cultural Organization (UNESCO).

Packaging a toolkit of basic media literacy skills — perhaps specific to “information literacy” — and presenting it directly on social media sites serves two purposes. It gives social media users handy tools to analyze what they see, and it also alerts them that they are likely to encounter biased or misleading information on the other side of the login screen.

This matters because social networks not only make deceptive or bogus content available, they deliver it in a way that can disarm a user’s bullshit detector. The algorithms used by Facebook and YouTube favor content that is likely to elicit an emotional, often partisan, response from the user. So if a member of Party A sees a report about a shameful act committed by a Party B leader, they may believe it and then share it without noticing that the ultimate source of the information is Party A. And often the creators of such content bend (or completely break) the truth to maximize the emotional or partisan response.

It works very well on social media: a 2018 Massachusetts Institute of Technology study of Twitter content found that lies were 70% more likely to be retweeted than the truth, and that lies reached 1,500 people about six times faster.

But media literacy training also works. The Rand Corporation reviewed the available research on the effectiveness of media literacy and found ample evidence, across numerous studies, that research subjects became less likely to fall for bogus content after varying amounts of media literacy training. Other organizations, including the American Academy of Pediatrics, the Centers for Disease Control and Prevention, and the European Commission, have come to similar conclusions and have strongly recommended media literacy training in schools.

Facebook has already taken steps to embrace media literacy. It has partnered with the Poynter Institute to develop media literacy training tools for children, millennials, and the elderly. The company also donated $1 million to the News Literacy Project, which teaches students to carefully examine an article’s source, make and critique judgments about news, detect and dissect viral rumors, and recognize confirmation bias. Facebook also hosts a “media literacy library” on its site.

But all of it is voluntary. Requiring training and a quiz as a condition of access to the site is something else. “The platforms would be very reluctant to do this because they would be concerned about turning off users and reducing engagement,” says NYU’s Barrett.

If social networks don’t act voluntarily, they could be compelled to require media literacy training by a regulatory body like the Federal Trade Commission. From a regulatory standpoint, this might be easier to accomplish than moving Congress to require media literacy education in public schools. It could also be a more targeted way to mitigate the real risks posed by Facebook than other proposals, such as breaking up the company or removing its liability shield for lawsuits arising from user content.

Americans first became aware of disinformation when the Russians weaponized Facebook to interfere in the 2016 election. But while Robert Mueller’s report proved that the Russians were spreading disinformation, the causal line between that disinformation and actual voting decisions remained unclear. For many Americans, January 6 made the threat that disinformation poses to our democracy real.

As misinformation on social media causes more direct, tangible harm, it will become even clearer that people need help fine-tuning their bullshit detectors before they go online.

