Social media platforms like Twitter must be held liable for negligence
The opinions expressed in the opinion columns are those of the author.
Shortly after taking the reins of Twitter, Elon Musk started massive layoffs, including of employees responsible for U.S. elections and content moderation. His timing could not have been worse, casting immense doubt on the social media platform's ability to remove content created by malicious actors.
During last week’s midterm elections, Twitter faced a proliferation of election-related misinformation – and its response was unacceptably inadequate.
Robust content moderation is essential to prevent interference in and manipulation of elections, both foreign and domestic. And maintaining that moderation seems to depend entirely on mitigating the negative effects of Section 230 of the Communications Decency Act, a law that shields technology companies from liability for illegal user-generated content and thereby permits such negligence.
While the desire for immediate reform exists on both sides of the aisle, the federal government should start by requiring the companies that run social media platforms to disclose certain data to researchers – or face the loss of their legal protections – to reduce the margin of negligence.
The outright repeal of Section 230 could create a host of problems, and many reasonable critiques of previously proposed reforms exist. Instead, more controlled changes to federal regulations would ensure companies don’t risk disastrous outcomes in a hastily constructed attempt to maximize profits. Exploring new legislation is also necessary because the Supreme Court is considering challenges that could upset Section 230 as we know it.
A content moderation team with limited capacity cannot locate and label content that falsifies or misrepresents the positions or character of candidates – a domain where Twitter is already struggling. The 15 people who still have access to the platform’s moderation tools cannot reasonably be expected to police the hundreds of millions of tweets sent every day.
Intelligence agencies and tech companies detected Russian efforts both in 2020 and as recently as this month that closely resemble the information warfare conducted via social media platforms, including Twitter, during the 2016 presidential election.
And election deniers are increasingly using social media platforms to spread conspiracy theories and sow doubt in legitimate electoral processes by pushing stories of stolen elections.
Fraudulent campaigns have also been known to spread incorrect voting information through more difficult-to-track means, such as automated calls and text messages. Musk continues to run the serious risk of allowing Twitter to become another compromised sphere of influence.
The imperative that Twitter ensure rigorous content moderation extends beyond election integrity – content moderation is widely used to prevent and eliminate hate speech and other illegal or disturbing content.
After Election Day, Twitter also launched a paid verification service, which it paused a day later because it had made it more difficult for users to discern the authenticity of the sources they rely on. If Twitter relaunches this service without seriously reassessing its approach, the same problem will arise with political information: candidates, elected officials and journalists impersonated at critical moments in elections.
While Musk argues that widely available verification brings “power to the people,” it will hurt average users more than influential ones.
Verification offers few tangible benefits to the account that holds it. However, it does help unverified users navigate the flood of information they see on Twitter by ensuring accounts actually represent the entities they claim to be.
Removing this certainty will cause profound damage to the electoral process. For example, a user impersonating a news agency could post fake election calls, which could dampen turnout later in the day if voters believe a race’s outcome is already decided.
What Twitter executives must come to terms with is that open communication is not the service social media platforms provide – organized communication is.
Creating a social media platform where users can exchange ideas and express their feelings about those ideas is a basic task. Ensuring that the platform is genuine and cannot be abused is the real challenge.
Technology companies and policymakers share a mutual responsibility in promoting truly free speech that protects the unfettered exercise of the right to vote and ensures the safety of all users. Data transparency is the first step in enabling lawmakers to deal with the unintended consequences of Section 230.
Dhruvak Mirani is a freshman majoring in computer science, government, and politics. Mirani can be reached at [email protected].