
Social media content ‘likely’ contributed to Molly Russell’s death


Content from social media sites including Instagram and Pinterest is ‘likely’ to have contributed to the death of British teenager Molly Russell, who took her own life after viewing thousands of posts about suicide, depression and self-harm, a coroner said on Friday.

Delivering his findings nearly five years after Russell died in November 2017, aged 14, senior coroner Andrew Walker said she died “of an act of self-harm while suffering from depression and the negative effects of online content”.

The finding is a blow for social media platforms as authorities around the world grapple with how to make the internet safe for children, and it will put renewed pressure on companies that build apps used by young people.

Although not a trial, the inquest put social media in the dock, with executives from Instagram owner Meta and from Pinterest questioned in an English court for the first time about the potential harm to a generation of young people growing up online.

It also puts pressure on the UK government at a time when it is expected to water down the already long-delayed online safety rules that will govern how tech sites are policed.

The pressure increased further on Friday night when the Prince of Wales tweeted about the case, in a rare intervention in such a matter.

“No parent should ever have to endure what Ian Russell and his family have been through,” Prince William wrote, referring to Russell’s father. “They were incredibly brave. Online safety for our children and young people should be a prerequisite, not an afterthought.”

In the last six months of her life, Russell liked, saved or shared 2,100 posts about depression, suicide or self-harm on Instagram, and never went more than 12 days without engaging with this harmful content on the platform.

Ian Russell told the inquest that social media had “helped kill my daughter”.

“You can see what your child is doing in the [offline] world much more easily,” he said. “You can see if they come back from the corner shop . . . smell the alcohol on their breath . . . The effects of the digital world are invisible.”

Molly Russell © PA

Ian Russell, father of Molly Russell © Joshua Bratt/PA

According to Ofcom, the UK communications regulator, the majority of children under 13 now have a profile on at least one social media site, even though 13 is the minimum age. Russell had a secret Twitter account where she documented her true state of mind and asked celebrities for help.

Walker said on Friday that it was “likely that the material viewed by Molly, already suffering from a depressive illness and vulnerable due to her age, adversely affected her mental health and contributed to her death in a more than minimal way”.

Platform design

Over the past year, social media companies have come under growing pressure as society has become increasingly concerned about how the design of their platforms could affect vulnerable minds.

Instagram and Pinterest are visual apps known for showcasing bright, aspirational imagery, where individuals post idealized and often edited photos.

Last year, Frances Haugen, a former product manager at Meta-owned Facebook, leaked a trove of internal documents showing how algorithms can lead people down psychological rabbit holes. In particular, Instagram’s internal research suggested the app could negatively affect the wellbeing of teenage girls – findings which Instagram said were skewed.

A few weeks later, Instagram announced that it was suspending plans to introduce Instagram Kids, a product for under-13s.

On Thursday, Walker said he was concerned that children and adults weren’t separated on Instagram and that children’s accounts weren’t linked to an adult’s.

Algorithms, the computer rules that determine the order of posts social media users see, were at the center of the Russell case. Pinterest emailed depression-related content to her, and Instagram suggested accounts for her to follow that referenced suicide and self-harm.

Russell may at times have binged on harmful videos, images and clips, “some of which were selected and provided without Molly asking for them,” Walker said.

Engagement is often a key metric in the design of such algorithms: content that users are likely to comment on, like or share gets promoted. Meta has described its previous recommendation systems as “content agnostic”, but its technology now aims to proactively identify harmful content and to avoid recommending any self-harm-related material that is permitted to remain on the platform.

Elizabeth Lagone, Meta’s head of health and wellbeing policy © Beresford Hodge/PA

Judson Hoffman, Global Head of Community Operations at Pinterest © James Manning/PA

Meta and Pinterest both apologized to Molly’s family during the inquest for allowing her to view content that violated their policies in 2017. Both said they had since updated their technology and content policies.

Josh Simons, a former Meta AI researcher who now studies technology and democracy at Harvard University, said what happened to Russell “isn’t just about the platforms’ responsibility to police harmful content”.

“It’s about the algorithms that push content and decide what our kids see and hear every day – what drives those algorithms, how they’re designed, and who can control them,” he said.

Moderation efforts

Since 2019, Instagram has banned all graphic images of self-harm or suicide, having previously removed only images that encouraged it, and stepped up automated technology that detects such content and flags it to human reviewers. Meta said the company took action on 11.3 million posts related to suicide and self-harm between April and June 2022 on Instagram and Facebook.

Some self-harm and suicide content, such as images of healed self-harm scars, is permitted because individuals use Instagram to seek out supportive online communities.

“Clumsy and ill-informed approaches to social media moderation risk removing content that, while sensitive on the surface, enables important social conversations that might not be able to take place elsewhere,” said Ysabel Gerrard, senior lecturer at the University of Sheffield and an unpaid adviser on Meta’s suicide and self-harm advisory committee.

“Although people attribute social media with [negatively] impacting their mental health, there are also a lot of people who say it helped save theirs,” she added.

Meta said this type of content was nuanced, making it difficult for both AI systems and human reviewers to detect.

It has 15,000 moderators around the world, covering financial scams and political misinformation as well as self-harm. Last year, the company announced that it would hire 10,000 people dedicated to building the Metaverse, its virtual world.

“It seems unlikely that Facebook will have adequate resources on the technical product side or on the human reviewer side to handle the issue if that number is roughly the same as the number they plan to devote to the metaverse,” Haugen told the Financial Times.

Meta and Pinterest both admit that their moderation will never catch everything. On a site such as Instagram, with more than a billion users, failing to identify even 1 per cent of harmful posts can mean millions are left up.

Users can also thwart algorithms by misspelling words, mixing them with numbers, and using code words.

Within minutes of scrolling through Instagram, using terms previously reported to the company, the FT was able to find self-harm content that violated Instagram’s policies. It was subsequently deleted.

The inquest concludes as the government amends the Online Safety Bill, legislation that will require tech platforms to tackle harmful content on the internet. In its current form, companies are expected to have to comply with age verification standards, as well as carry out comprehensive risk assessments and independent algorithm audits.

Last year, the UK introduced the Children’s Code, also known as the Age Appropriate Design Code, which sets stricter restrictions for companies that process children’s data. The legislation inspired similar regulation in California, Europe, Canada and Australia.

Baroness Beeban Kidron, who proposed the code, said: “There is a version of the technology that puts the welfare and safety of children before the bottom line . . . It is not ambitious to insist that the wellbeing of children comes before growth; it is simply the price of doing business.”

Anyone in the UK affected by the issues raised in this article can contact the Samaritans free of charge on 116 123.
