Collecting and Analyzing Data from the X Global Transparency Report, January to June 2024
- Nibin Zheng
- Dec 15, 2024
- 5 min read
This research collected data on how the X platform (formerly known as Twitter) conducted content moderation from January to June 2024. The moderation measures the platform took include suspending accounts that created and disseminated content violating the X Rules, and removing posts or adding user-informed labels to posts that violated the X Rules and related policies. I then created data visualizations to show clearly and intuitively, from a macro perspective, how X conducted content moderation across different policy areas, such as abuse and harassment, child safety, and hateful conduct.

The X Global Transparency Report provides an overall view of how the X platform conducts content moderation across different policy areas. Specifically, the report breaks down content moderation actions by policy area and by the specific measure taken.
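The breakdowns described throughout this post (what share of actions in each policy area were suspensions, removals, or labels) can be reproduced with a simple pivot-and-normalize step. Below is a minimal sketch in pandas; the rows and counts are hypothetical placeholders, not figures from the report.

```python
import pandas as pd

# Hypothetical sample rows; real counts would come from the report itself.
actions = pd.DataFrame([
    {"policy_area": "Child Safety", "action": "account_suspension", "count": 100},
    {"policy_area": "Child Safety", "action": "content_removal", "count": 20},
    {"policy_area": "Abuse & Harassment", "action": "label", "count": 300},
    {"policy_area": "Abuse & Harassment", "action": "account_suspension", "count": 50},
])

# Pivot into a policy-area x action table, filling absent combinations with 0.
table = actions.pivot_table(index="policy_area", columns="action",
                            values="count", fill_value=0)

# Divide each row by its total to get each action's share within its policy area.
shares = table.div(table.sum(axis=1), axis=0)
print(shares.round(2))
```

The same table, normalized by columns instead of rows, would give the automated-versus-human split per action type discussed in the sections below.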
Conducting Content Moderation for Protecting Child Safety
The X platform has zero tolerance for any content involving child sexual exploitation and is committed to removing any content involving physical child abuse. It also cooperates with the National Center for Missing and Exploited Children (NCMEC) to help ensure child safety. As shown in the data visualization below, X usually suspended accounts that violated its child safety policies; some of these accounts were also reported to NCMEC, while in only a small proportion of cases was the violating content merely removed. Most content reported to NCMEC was actioned by human moderators, whereas most account suspensions were actioned by automated moderation. Content removed to ensure child safety was likewise mostly actioned by human moderators.




Conducting Content Moderation for Stopping Abuse, Harassment and Hateful Conduct
The X platform prohibits any direct attack based on race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease, aiming to provide a respectful community and a safe space for every user. It also prohibits any content that targets other people with abuse or harassment, or that encourages others to do so. Most content that violated the policies against abuse, harassment, and hateful conduct received user-informed labels. More than 70% of these labels were added by automated moderation, while most account suspensions and content removals under these policies were carried out by human moderators.




Conducting Content Moderation for Preventing Spam and Platform Manipulation
The X platform does not allow spam or any type of platform manipulation. They "define Platform Manipulation as using X to engage in bulk, aggressive, or deceptive activity that misleads others and/or disrupt their experience". Most accounts that disseminated content violating the policies against platform manipulation and spam were suspended, and most of these suspensions were actioned by the automated moderation system. Some potentially violating content instead received user-informed labels, most of which were also added by the automated moderation system.



Conducting Content Moderation for Stopping Violent and Hateful Entities
The X platform does not allow any threats or promotion of terrorism and violent extremism. Violent entities, including "terrorist organizations, extremist groups, perpetrators of violent attacks or individuals promoting attacks", are not allowed to use the X platform. The vast majority of accounts that broke these rules were suspended, while for a small share of them only the violating content was removed.



Conducting Content Moderation on Violent Content
The X platform may remove or reduce the visibility of Violent Content to ensure users' safety "and prevent the normalization or glorification of violent actions". In addition, they "do not allow sharing Violent Content in highly visible places such as profile photos, banners or bio". The most common moderation measure for Violent Content was removing the violating content. The vast majority of account suspensions under this policy were actioned by human moderators, and all of the user-informed labels applied to remove or reduce the visibility of Violent Content were likewise added by human moderators.




Conducting Content Moderation for Supporting Mental Health and Preventing Harm
The X platform does not allow any content that "promotes or encourages suicide or self-harm". In most cases X removed the violating content, while some accounts were suspended for disseminating it. Most moderation actions on content that "promotes or encourages suicide or self-harm" were taken by human moderators.



Conducting Content Moderation on Misleading and Deceptive Identities
The X platform does "not tolerate the misappropriation of identities or the use of fake identities to deceive others", and its policies are designed "to prevent any form of identity fraud or misleading behavior". Most accounts that violated these policies were suspended, and more than two thirds of these suspensions were actioned by automated moderation. By contrast, all content removals under these policies were actioned by human moderators.



Conducting Content Moderation Based on Non-consensual Nudity Policy
The X platform strictly prohibits "sharing explicit sexual images or videos of someone without their consent", regardless of "whether the content is AI-generated or organically created". The most common moderation action under this policy was removing the violating content, though some accounts were also banned for violating it. The vast majority of both the account suspensions and the content removals under this policy were actioned by human moderators.



Conducting Content Moderation for Protecting Privacy
The X platform considers "protecting privacy" to be "a priority". Users are not allowed to "threaten, publish, or share other people's private information or media without their express authorization and consent", and X enforces "strict measures to safeguard personal privacy and prevent unauthorized disclosures". In most cases X removed the violating content, while a small number of accounts were suspended for violating these policies. Most of these moderation actions were carried out by human moderators.



Conducting Content Moderation for Stopping Illegal or Regulated Goods/Services
The X platform works to stop the use of its service "for any unlawful purpose or in furtherance of illegal activities", including "selling, buying, or facilitating transactions in illegal goods or services, as well as certain types of regulated goods or services". To enforce these policies, X suspended accounts and removed content, and most of these moderation actions were carried out by human moderators.



Responses to Government, Legal, and Law Enforcement Requests
The X platform has established guidelines for managing "government, legal and law enforcement requests", including "clear procedures for law enforcement seeking account information and content removal". X received many requests to disclose account information from the United States, Japan, the European Union, the United Kingdom, and other regions; the disclosure rate for government requests for account information was higher than 50% in the United States, Japan, and the European Union. In addition, X received many removal requests from Turkey, Japan, South Korea, and the European Union, and had a relatively high action rate on removal requests in these regions.
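The disclosure rate and action rate above are simple ratios: requests where X took the requested action divided by requests received. A minimal sketch of that arithmetic, using made-up counts rather than the report's actual figures:

```python
# Hypothetical request counts per region; the real numbers are in the report.
requests = {
    "United States": {"received": 2000, "disclosed": 1400},
    "Japan": {"received": 1500, "disclosed": 900},
}

# Disclosure rate = requests where some information was disclosed / requests received.
rates = {region: r["disclosed"] / r["received"] for region, r in requests.items()}
print({region: round(rate, 2) for region, rate in rates.items()})
```

The same ratio applied to removal requests (actioned / received) gives the action rate discussed for Turkey, Japan, South Korea, and the European Union.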






Processing Copyright and Trademark Notices
The X platform attaches great importance to intellectual property rights, "including both copyright and trademark protections". Accordingly, it handled many copyright notices, copyright counter-notices, and trademark notices from January to June 2024.


