Social media platforms step up the fight against hateful content

Facebook, YouTube and Twitter have agreed to adopt a common set of definitions for hate speech and other harmful content and to work together to monitor industry improvement efforts.

This decision follows 15 months of intensive talks within the Global Alliance for Responsible Media, a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) that brings together advertisers, agencies and platforms.

The first changes are expected to be introduced this month and have been well received by senior executives at major advertisers. “This is an important step in the process of restoring trust online,” said Luis Di Como, executive vice president, Global Media, Unilever.

Four key action areas have been identified, designed to enhance consumer and advertiser safety, with individual implementation timelines agreed with each platform for each area.

1. Adoption of GARM Common Definitions for Harmful Content

Definitions of harmful content currently vary from platform to platform, making it difficult for brand owners to make informed decisions about where to place their ads and to hold platforms to account.

The common definitions establish a shared baseline for harmful content. They have been developed to add depth and breadth around specific types of harm, such as hate speech and acts of aggression and intimidation.

All platforms will now adopt these definitions as part of their advertising content standards and apply and label them consistently.

2. Development of GARM reporting standards on harmful content

Having a harmonized reporting framework is an essential step in ensuring that harmful content policies are effectively enforced. All parties have now agreed to harmonize measures on issues of consumer safety, advertiser safety and platform effectiveness in tackling harmful content.

Over the next two months, the alliance will continue to harmonize metrics and reporting formats, with the system due to launch in the second half of next year.

3. Commitment to Independent Oversight of Operations, Integrations and Reporting

An independent perspective on how individual participants categorize, remove and report harmful content will support better enforcement and build trust. The goal is to have all major platforms fully audited or in the process of being audited by the end of the year.

4. Commitment to developing and deploying tools to better manage advertising adjacency

Advertisers must have visibility and control to ensure their advertising does not appear next to harmful or unsuitable content, along with the ability to take corrective action quickly when necessary.

Platforms that have not yet implemented an adjacency solution will provide a development roadmap in Q4 2020. Platforms will deliver solutions through their own systems, through third-party vendors, or a combination of the two. In addition to Facebook, YouTube and Twitter, TikTok, Pinterest and Snap have made firm commitments to provide development plans for similar controls by the end of the year.

Given the increased polarization of content across channels, the WFA believes the standards should apply to all media, not just digital platforms, and encourages its members to apply the same adjacency criteria to all their media spending decisions, regardless of medium.

Sourced from WFA