Media platforms

Social media platforms face misinformation

Big Tech companies, and especially social media platforms, are facing a situation in which content moderation and the removal of misinformation are of the utmost importance. The military conflict in Ukraine is the latest case in which digital platforms may have to play an important role.

Companies are better equipped to deal with misinformation and disinformation than they were a few years ago, during the 2016 US election or the Brexit referendum. However, managing content moderation in a military conflict like the one in Ukraine is easier said than done.

There are strong arguments for removing content that misleads people about the current situation and for preventing the publication of false pretexts used to justify an invasion. On the other hand, especially in these circumstances, people must also have unrestricted access to these channels so they can document and report what is happening and communicate with their loved ones.

Perhaps the closest parallel occurred last year in Myanmar, where, after a military coup, YouTube took down channels run by the military in the hope of preventing further incitement to violence. Ultimately, however, the takedowns also created additional challenges for humanitarian and legal efforts to bring the perpetrators to justice.

Content moderation, and whether to hold digital platforms responsible for content posted on them, has been debated in Europe, the United States and elsewhere for the past few years, and the first legislative proposals in this space are now emerging.

In the United States, Senators Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) in February introduced bipartisan legislation, dubbed the Kids Online Safety Act, aimed at holding social media platforms liable for the harm they cause to children.

The proposed bill would also require tech companies to provide regular assessments of how their algorithms, design features and targeted advertising systems might contribute to harming minors. Companies would also have to offer minors the option to opt out of algorithmic recommendations.

If this proposal becomes law, it will represent a change from the immunity Big Tech companies have enjoyed since the passage of Section 230 of the Communications Decency Act, which essentially shields internet companies from liability for harmful content published by their users.

Read more: US lawmakers propose bill to mandate platform content moderation

Europe recently approved the Digital Services Act (DSA), which will hold Big Tech companies liable for illegal content posted on their platforms; they will be required to put mechanisms in place to ensure such content is removed in a timely manner. Even content deemed legal but harmful will have to be removed promptly.

Read more: European Parliament approves Digital Services Act targeting big tech

The UK is also proposing new legislation, the Online Safety Bill, with requirements similar to those of the European DSA, but it adds new criminal offenses to ensure businesses do their best to remove harmful content.

Google, Meta, Twitter and others won’t have it easy in Ukraine, but this will be the latest opportunity to prove they are prepared to deal with misinformation.
