
Markey-Matsui Bill Would Force Transparency Of Social Media Content Algorithms

Rep. Doris Matsui, D-Calif., left, and Sen. Ed Markey, D-Mass.

Bill Clark | CQ-Roll Call, Inc. | Getty Images; Jemal Countess | Getty Images

A new federal bill seeks to demystify the way social media platforms determine which posts users see, without touching a law that has become a lightning rod in Congress.

The Algorithmic Justice and Online Platform Transparency Act of 2021 – announced Thursday by Sen. Ed Markey, D-Mass., and Rep. Doris Matsui, D-Calif. – seeks to expose and correct social injustices that are exacerbated by algorithmic amplification online.

In this context, “algorithms” refers to the parts of the software that sites like Facebook, Twitter and Google use to determine which content and ads to show each user.
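For illustration, here is a minimal sketch of what an engagement-based feed-ranking algorithm of this kind could look like. Everything in it – the Post fields, the weights, the age decay – is a hypothetical assumption for the sake of the example, not any platform’s actual system.

    # Hypothetical sketch of an engagement-based feed-ranking algorithm.
    # The Post fields, weights and scoring formula are illustrative
    # assumptions, not any platform's actual ranking system.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        shares: int
        comments: int
        hours_old: float

    def engagement_score(post: Post) -> float:
        """Score a post by weighted engagement, decayed by age."""
        raw = post.likes + 3 * post.comments + 5 * post.shares
        return raw / (1 + post.hours_old)  # newer posts rank higher

    def rank_feed(posts: list[Post]) -> list[Post]:
        """Order posts by descending score -- what a user sees first."""
        return sorted(posts, key=engagement_score, reverse=True)

The inputs and weights in a system like this are the kind of information the bill’s transparency provisions would require platforms to describe to users and regulators.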

The bill would prohibit platforms from using algorithms that discriminate based on protected characteristics such as race and gender, empower the Federal Trade Commission to review platforms’ algorithmic processes, and create a new interagency task force to investigate discrimination in algorithms.

The platforms would also have to explain to users how they use algorithms and what information those algorithms rely on.

“It’s time to open the hood of Big Tech, enact tough bans on harmful algorithms, and prioritize justice for communities that have long been discriminated against as we work on platform accountability,” Markey said in a statement.

However, an industry group backed by companies such as Amazon, Facebook, Google and Twitter has warned that exposing the platforms’ processes could be risky.

“No one wants technology to exacerbate racial inequalities or rob people of opportunities,” Chamber of Progress founder and CEO Adam Kovacevich said in a statement. “One approach would be to expand our existing civil rights and anti-discrimination laws covering housing, employment and credit. There is some danger that completely lifting the hood on tech algorithms could provide a roadmap for hackers, Russian trolls and conspiracy theorists.”

Researchers and government agencies have previously accused platforms of using discriminatory algorithms. In 2019, for example, the Department of Housing and Urban Development charged Facebook with violating housing discrimination laws through its ad targeting. Soon after, researchers at Northeastern University, the University of Southern California and the nonprofit Upturn found that Facebook’s ad-delivery algorithm could discriminate based on race and gender, even when that was not what advertisers intended.

Facebook said at the time that it was opposed to “discrimination in any form” and highlighted changes it had made to its ad targeting tools to address some of the concerns.

Leaving Section 230 untouched

The new bill is a notable take on technology reform, in part because of what it doesn’t do: tamper with the highly controversial legal shield that protects businesses from liability for what users post online.

Section 230 of the Communications Decency Act, a 1996 law, says online platforms are not liable for their users’ speech and allows platforms to moderate their services essentially as they see fit. In recent years, both Democrats and Republicans have criticized the shield as too broad.

But amending Section 230 is no easy task. Democrats and Republicans disagree about its problems and how to solve them. Progressives advocate removing liability protection from platforms that fail to moderate certain types of content, fearing the proliferation of hate speech. Conservatives say the law should limit what platforms are allowed to moderate, claiming that platforms remove posts expressing conservative views, which the companies deny.

Many legal scholars have warned of the unintended harm that could come from scaling back Section 230. Platforms could, for example, end up restricting speech far more than intended.

Progressive digital rights group Fight for the Future sees the new bill as a responsible way to tackle the harms caused by Big Tech companies “without poking holes in Section 230,” according to a statement.

Although introduced by two Democrats, the bill addresses a key principle Republicans put forward earlier this year for how they want to approach tech reform. In an April memo, Republican staff on the House Energy and Commerce Committee urged an emphasis on transparency around content moderation practices. Markey and Matsui’s bill would require online platforms to publish annual public reports on their content moderation practices.

WATCH: The messy business of content moderation on Facebook, Twitter, YouTube

