
Everything in Moderation: Artificial Intelligence and Social Media Content Analysis | Pillsbury – Internet and Social Media Law Blog

Interactive online platforms are now an integral part of our daily lives. While user-generated content, freed from traditional editorial constraints, has boosted dynamic online communication, improved business processes, and expanded access to information, it has also raised complex questions about how to moderate harmful online content. As the volume of user-generated content continues to grow, it has become increasingly difficult for internet and social media companies to keep pace with the need to moderate the information posted on their platforms. Content moderation measures supported by artificial intelligence (AI) have become important tools for addressing this challenge.

Whether you run a social media platform or an e-commerce site, reducing harmful content is essential to the user experience. Such harmful content can include everything from messages promoting violence to child abuse. Indeed, the range and scope of potentially harmful content has proven too broad for human moderators to review comprehensively. AI systems, designed to mirror the way humans think and process information, may be able to improve the speed and accuracy of this process. AI technology can use large data sets to teach machines to identify patterns or make predictions about certain inputs. Ultimately, this capability allows computers to recognize and filter out certain words or images more efficiently than humans can process that information. As an added benefit, it reduces, and could potentially eliminate, the need for human moderators to be directly exposed to harmful content.
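By way of illustration only, and not as a description of any platform's actual system, the pattern-recognition idea described above can be sketched as a toy text classifier: a model is trained on a small labeled sample of posts and then used to score new posts for review. The data, labels, and flagging threshold below are hypothetical.

```python
# Illustrative-only sketch of AI-assisted content flagging (hypothetical data and threshold).
# A real moderation pipeline would use far larger datasets, more sophisticated models,
# and human review of borderline cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny, made-up labeled sample: 1 = potentially harmful, 0 = benign.
posts = [
    "I will hurt you if you show up",                # harmful
    "great recipe, thanks for sharing",              # benign
    "everyone from that group should be attacked",   # harmful
    "looking forward to the concert tonight",        # benign
]
labels = [1, 0, 1, 0]

# Learn word patterns associated with each label.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# Score new, unseen posts and flag those above a (hypothetical) threshold for review.
new_posts = ["you should be attacked", "thanks for the concert tips"]
scores = model.predict_proba(vectorizer.transform(new_posts))[:, 1]
for post, score in zip(new_posts, scores):
    decision = "flag for review" if score > 0.5 else "allow"
    print(f"{score:.2f}  {decision}  {post}")
```

Even in this toy form, the sketch hints at the limitations discussed below: the model's judgments are only as good as the data it was trained on, and any skew in that data carries through to the posts it flags.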

While AI systems hold promise, they are not without challenges. By some estimates, 2.5 quintillion bytes of data are created each day. So while AI offers a way to process large amounts of data more efficiently, the volume of content involved is now so vast that it has become essential for AI models to operate with both speed and precision. And to achieve optimal accuracy, an AI model must not only be trained on accurate data and images, but must also be able to appreciate the nuances of the content being examined, such as distinguishing satire from disinformation. Additionally, questions have been raised as to whether these models remove the inevitable biases of human content moderators, or whether the AI models themselves actually introduce, reinforce, or amplify biases against certain types of users. One study, for example, found that AI models trained to detect hate speech online were 1.5 times more likely to identify tweets as offensive or hateful when they were written by African-American users.

This tension demonstrates the difficult balance between designing models to address human inefficiencies and errors in content moderation while ensuring that new systemic problems are not introduced by the models themselves. In fact, U.S. policymakers have conducted numerous hearings and brought forward legislative proposals to address concerns about biases within AI systems and the unintentional discrimination that could result from their use.

AI systems undeniably provide enhanced capabilities for online platforms to moderate user-generated content effectively, but they present their own set of challenges that must be considered as these systems are designed and deployed as moderation tools.
