Social media platforms accept new European rules on online publications

New EU code of practice on disinformation aims to prevent advertising from appearing alongside messages deemed intentionally false or misleading

Mark Zuckerberg speaks at a technology conference in Paris on Thursday as the company he leads, Meta Platforms Inc, agrees to comply with tougher European standards.

Facebook owner Meta Platforms Inc., Twitter Inc. and other social media platforms have agreed to abide by the European Union’s tougher standards for policing online posts, offering insight into the type of rules big tech companies will face under an upcoming digital content law.

The EU code of conduct on disinformation, unveiled on Thursday, replaces an earlier set of voluntary guidelines for dealing with content online that officials consider to be deliberately false or misleading.

Officials said they intend to make parts of the new code mandatory for major platforms under the new law, the Digital Services Act.

Under the new code – accepted by a range of companies, including ByteDance Ltd.’s TikTok and Alphabet Inc.’s Google – social media platforms will need to take steps to prevent advertising from appearing alongside what policymakers describe as intentionally false or misleading information.

Platforms will also need to provide users with more tools to identify such content online.

Platforms that volunteered to comply with the new code before parts of it become mandatory must submit an initial report early next year explaining how they have implemented it.

The new code of practice is part of a wider EU effort to limit the power of big tech companies, from how they handle user data to how they treat competitors to what they do with potentially dangerous content.

Earlier this year, the EU passed a separate law called the Digital Markets Act, which imposes fairness obligations on a handful of big tech platforms, backed by the ability to impose hefty fines.

Digital content has been the focus of particular attention – and debate. Europe is seeking to play a leading role in addressing what policymakers see as a deluge of misinformation on topics ranging from Covid-19 to the war in Ukraine, which they say can be amplified via social media platforms.

But defining harmful lies and deciding what to do about them is difficult.

Some EU officials expressed concern in 2021 over then-US President Donald Trump’s suspension from platforms such as Twitter and Facebook.

Officials say these issues are addressed in the new Digital Services Act, which requires companies to have strong appeal mechanisms to challenge content removals.

Representatives of several major tech companies said they welcomed the new EU standards.

Meta, which also operates Instagram, said the company would continue to use research and technology to combat the spread of misinformation. Twitter said it remained committed to addressing the issue, including through the new EU code.

Google called the code an important tool in combating deliberately false and misleading information.

“The global pandemic and the war in Ukraine have shown that people need accurate information more than ever, and we remain committed to making the Code of Practice a success,” a Google spokesperson said.

A TikTok representative said the company helped draft the new code and will continue its efforts “to combat misinformation and promote authentic online experiences for our communities.”

EU officials said the code of standards would be linked to the bloc’s new digital services law, which EU lawmakers and member states agreed earlier this year.

The law, which could come into effect for the biggest online platforms as early as next year, sets new rules for removing illegal content. It will also require the biggest social media platforms to carry out risk assessments on content that regulators consider potentially harmful.

The largest platforms, defined as those with more than 45 million users in the EU, that repeatedly breach the code and fail to properly manage risk could face fines of up to 6% of their worldwide annual turnover once the new law comes into force, officials said.

“We now have very important commitments to reduce the impact of online disinformation and much more robust tools to measure how these are implemented in the EU, across all countries and all its languages,” said Vera Jourova, EU Vice-President for Values and Transparency.

One of the aims of the new code of practice is to limit the financial incentives for deliberately spreading false information by making it harder for those spreading the material to profit from related advertising revenue, EU officials said.

Companies must also show what they are doing to combat fake accounts and provide users with tools to recognize and report deliberately false or misleading information.

A new task force comprising civil society groups and regulators will act as a watchdog and assess companies’ compliance.
