
Social media content moderator sues TikTok over PTSD

A social media content moderator is suing TikTok, the popular video app, over psychological trauma she says she developed during 12-hour shifts moderating an endless stream of graphic videos.

Candie Frazier works for Telus International, a Canadian outsourcing company that provides moderation services to social media applications such as TikTok. In December, Frazier filed a lawsuit in the U.S. District Court for the Central District of California alleging that TikTok and its parent company ByteDance fail to provide sufficient support for the psychological well-being of their contract moderators, whose job is to remove violent, graphic, and otherwise inappropriate content from the platform.

TikTok’s popularity has exploded in the wake of pandemic lockdowns, especially among millennials and Gen Z. As of September, TikTok reported one billion monthly users.

In her complaint, Frazier explains that moderators are required to watch “three to ten videos at the same time,” with only 25 seconds to review each. The complaint says the videos include violent content such as “animal cruelty, torture, suicides, child abuse, murder, beheadings and other graphic content.”

As a result, Frazier developed symptoms of PTSD, including anxiety, depression, trouble sleeping, and “horrible nightmares.” The complaint reads: “She often lies awake at night trying to fall asleep, replaying videos that she has seen in her head. She has severe and debilitating panic attacks.”

According to Frazier, moderators are allowed only one 15-minute break in the first four hours of the workday, with an additional 15-minute break every two hours thereafter. Further, Frazier alleges that ByteDance “heavily punishes” any time taken away from video moderation, despite the emotional toll many workers experience throughout the day.

Hilary McQuaide, a spokesperson for TikTok, told The Verge:

Our Safety team partners with third-party firms on the critical work of helping to protect the TikTok platform and community, and we continue to expand on a range of wellness services so that moderators feel supported mentally and emotionally.

James Vincent, “TikTok sued by former content moderator for allegedly failing to protect her mental health,” at The Verge

The lawsuit calls for TikTok to provide more frequent breaks as well as more visual and audio tools (such as blur and mute options) that moderators can use to protect themselves from the content they review.

Psychological trauma is nothing new to content moderation

TikTok is not unique among social media platforms in facing these issues. Facebook, Google, and YouTube moderators have reported similar problems. In 2020, content moderators were awarded $52 million in a settlement against Facebook for psychological trauma.

Casey Newton at The Verge has collected the stories of social media content moderators over the past few years, sharing their experiences and raising awareness about the dark underside of social media operations. In one chilling article, Newton reported that Facebook moderators face roughly fifty-fifty odds of developing mental health issues as a result of their work.

In a 2019 article titled “The Trauma Floor,” Newton documented the panic attacks, anxiety, and depression experienced by these workers. A lack of support and empathy from leadership has created a toxic work environment, in which many employees turn to dark humor, alcohol, marijuana, and even sex during working hours to cope with the violence, abuse, and hatred they regularly review.

According to these former moderators, Google, YouTube, and Facebook did not disclose during the hiring and training processes how regularly they would be required to moderate disturbing content.

“You still see death, every day,” a former Facebook content moderator told Newton in a short YouTube documentary. “You see the pain and the suffering. And it just makes you angry, because they don’t do anything. The stuff that gets deleted, it goes back up there anyway.”

Is AI the solution?

If the human psyche is too fragile to handle the volume of graphic content posted daily on the Internet, what is the solution? A Wild West–style internet in which violent photos, videos, and graphics circulate freely? Or could artificial intelligence effectively replace these workers?

Social media apps have started using more artificial intelligence algorithms to automatically remove inappropriate content without human oversight. The technology, however, is not perfect, requiring humans to continue doing the work where AI fails.

Facebook’s use of AI to moderate its platforms has come under intense scrutiny in the past, with critics noting that artificial intelligence lacks the human ability to judge the context of many online communications. Especially with topics like misinformation, bullying, and harassment, it can be nearly impossible for a computer to know what it’s looking at.

Facebook’s Chris Palow, a software engineer on the company’s interaction integrity team, agreed that AI has its limits, but told reporters that the technology can still play a role in removing unwanted content. “The system is all about marrying AI and human reviewers to make fewer total errors,” Palow said. “AI will never be perfect.”

When asked what percentage of posts the company’s machine learning systems classify incorrectly, Palow did not give a direct answer, but noted that Facebook lets automated systems work without human supervision only when they are as accurate as human reviewers. “The bar for automated action is very high,” he said. Nonetheless, Facebook keeps adding more AI to the moderation mix.

James Vincent, “Facebook is now using AI to sort content for faster moderation,” at The Verge
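
Palow’s description of “marrying AI and human reviewers,” with a very high bar for fully automated action, amounts to confidence-threshold routing: the machine acts alone only when it is very sure, and anything uncertain lands in a human moderator’s queue. Below is a minimal sketch of that idea in Python; the classifier, the threshold values, and the queue names are all hypothetical and do not describe Facebook’s (or any platform’s) actual system.

```python
# Minimal, hypothetical sketch of human-in-the-loop moderation routing.
# Nothing here reflects any platform's real classifier or thresholds.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> float:
    """Return a made-up confidence score (0.0-1.0) that the post violates policy.
    A stand-in for a real trained classifier."""
    banned_terms = ("gore", "beheading")  # toy heuristic, purely illustrative
    return 0.99 if any(term in post.text.lower() for term in banned_terms) else 0.10


AUTO_REMOVE_THRESHOLD = 0.98  # "the bar for automated action is very high"


def route(post: Post) -> str:
    """Decide what happens to a post: remove automatically, escalate, or leave it up."""
    score = classify(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"           # machine acts alone only at very high confidence
    if score >= 0.50:
        return "human_review_queue"    # uncertain cases go to a human moderator
    return "leave_up"


if __name__ == "__main__":
    for p in (Post("1", "cute cat compilation"), Post("2", "graphic gore footage")):
        print(p.post_id, route(p))
```

The design choice the quote points to is where the threshold sits: lowering it shifts work away from human reviewers but raises the machine’s error rate, which is exactly the trade-off critics worry about.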

In the meantime, the Internet remains as free from graphic material as it can be in an imperfect world, due to the work of people, not machines. “But the risk to human lives is real,” Newton writes, “and it’s not going to go away.”


Further reading:

AI is not ready to moderate content! In the face of COVID-19 quarantines for human moderators, some are turning to AI to keep the bad stuff off social media. Large social media companies have long wanted to replace human content moderators with AI. The COVID-19 quarantines have only intensified this discussion. (Brendan Dixon)

Facebook moderators are not who we think they are. Companies offer appalling working conditions in part because they believe AI will take over soon. And if that doesn’t – and maybe can’t – happen, what’s the backup plan? Lawsuits?

Yes, there are ghosts in the machine. And one of them is you. You power the AI every time you prove your humanity to the CAPTCHA challenges that pervade the web. AI systems are not an alien brain evolving among us. (Brendan Dixon)

