Social Media Content Moderation
Conclusion

Social media content moderation, as a part of media governance, has become an important issue worth further exploration, especially as social media platforms have become embedded in people's daily lives. The First Amendment to the Constitution of the United States protects freedom of speech. In addition, Section 230 of the Communications Decency Act provides a safe harbor for social media companies, shielding them from legal liability for content created and disseminated on their platforms by their users, while still allowing platforms to conduct content moderation and restrict access to certain content in good faith. Together, these laws have made the question of how social media platforms ought to conduct content moderation more complex and controversial.
This project also reviewed how platforms recommend content through algorithmic recommendation systems: these systems analyze users' previous activity on the platform and surface content that matches users' existing understanding of and attitudes toward certain issues, in order to keep users on the platform longer and maximize profits. Furthermore, the popularization of social media broke traditional media's monopoly as gatekeepers over the editing and dissemination of news, making it much easier for unverified information to spread on social media platforms. With the widespread use of social media and the rise of nationalism and even populism, society has gradually entered a post-truth era. This project also distinguished among post-truth, lies, and bullshit. It is therefore both important and necessary for social media platforms to conduct content moderation to stop the dissemination of misinformation.
This project conducted content analysis of Oversight Board cases published on the Meta Transparency Center website from January to September 2024. These cases offer a rare, case-level view of how Meta moderates controversial content with potentially significant social influence on Facebook, Instagram, and Threads. The project also collected data from the X Global Transparency Report covering January to June 2024, analyzed it, and produced visualizations showing both the overall landscape of content moderation on X and the breakdown of moderation actions by policy area during that six-month period. In addition, the project conducted text analysis of the 2023 Reddit Transparency Report and visualized its data to show how Reddit moderates content on its platform, since Reddit functions very differently from the platforms mentioned above. Furthermore, five news articles and interviews with content moderators, and with researchers who study content moderation at the major social media companies, were analyzed to explore the working conditions of content moderators and how the job affects their daily lives. This project found that the psychological and wellness support offered to moderators is neither sufficient nor effective, and in most cases fails to relieve the long-term psychological trauma the work leaves in moderators' lives. Meanwhile, artificial intelligence built on machine learning algorithms has been proposed as a solution for future content moderation, but many practitioners and researchers in the field doubt that artificial intelligence tools can completely replace human moderators in the foreseeable future.
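As an illustration of the kind of aggregation behind the visualizations described above, the sketch below shows one way a breakdown of moderation actions by policy area could be produced with pandas and matplotlib. The file name and column names (x_transparency_h1_2024.csv, policy_area, actions) are hypothetical stand-ins and do not reflect the actual structure of the X Global Transparency Report data.

# A minimal sketch of aggregating and charting moderation actions by
# policy area; the CSV file and its columns are assumed, not real.
import pandas as pd
import matplotlib.pyplot as plt

# Load a hypothetical export of the transparency report data.
df = pd.read_csv("x_transparency_h1_2024.csv")

# Sum moderation actions per policy area over the reporting period.
by_policy = (
    df.groupby("policy_area")["actions"]
    .sum()
    .sort_values(ascending=False)
)

# Horizontal bar chart of the breakdown of actions by policy area.
by_policy.plot(kind="barh", title="Moderation actions by policy area, Jan-Jun 2024")
plt.xlabel("Number of actions")
plt.tight_layout()
plt.savefig("actions_by_policy_area.png")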
This project also has limitations that are worth exploring in future research. First, future research could compare the laws and policies on social media governance, especially regulations on content moderation, between the United States and other countries with similarly mature democratic systems, for example by examining Germany's Network Enforcement Act. Second, future researchers could examine how the design and enforcement of content moderation policies can help these corporations strike a balance between stopping the dissemination of misinformation and keeping users engaged on their platforms to generate profits. Finally, future research could explore how legislation and policy-making could better regulate social media giants, requiring them to provide more effective psychological and wellness support and better working conditions for content moderators.
The picture above was generated with an embedded artificial intelligence tool on the Wix platform.