Amid growing concerns that X has become less safe under billionaire Elon Musk, the platform formerly known as Twitter has sought to assure advertisers and critics that it still monitors for harassment, hate speech and other objectionable content.
Between January and June, X suspended 5.3 million accounts and removed or labeled 10.7 million posts for violating its rules banning the posting of child sexual exploitation, harassment and other harmful content, the company said in a 15-page transparency report to be released on Wednesday. X said it received more than 224 million user reports in the first half of this year.
This is the first time X has released a formal global transparency report since Musk completed his acquisition of Twitter in 2022. The company said last year that it was reviewing how it approached transparency reporting, but still released data on the number of accounts and amount of content it removed.
Safety issues have long plagued the social media platform, which has been criticized by advocacy groups and regulators for not doing enough to curb harmful content. Those concerns have intensified since Musk bought Twitter and laid off more than 6,000 of the company's employees.
The release of X’s transparency report comes as the company intensifies its battle with regulators as advertisers plan to cut spending on the platform next year. Earlier this year, X CEO Linda Yaccarino told the U.S. Congress that the company was restructuring its trust and safety team and building a trust and safety center in Austin, Texas.
Musk, who last year said advertisers who boycotted his platform should “fuck off,” has since softened his rhetoric, saying at this year’s Cannes Lions International Festival of Creativity that “advertisers have the right to advertise next to content that aligns with their brand.”
When Musk bought Twitter, some of the changes he made raised alarm among safety experts. Twitter reinstated previously suspended accounts, including those belonging to white supremacists, stopped enforcing policies against coronavirus misinformation and abruptly disbanded its Trust and Safety Council, an advisory group that included human rights activists, child safety groups and other experts.
X has also faced criticism that it has become less transparent under Musk's leadership. The once publicly traded company went private after Musk bought it for $44 billion.
The change meant the social media platform no longer released quarterly user numbers and revenue figures, and last year X began charging for access to its data, making it harder for researchers to study the platform.
Concerns about X’s lack of moderation also threaten the company’s advertising business: In September, the World Bank suspended paid advertising on the platform after its ads appeared beneath racist posts. Nearly 25% of advertisers plan to cut spending on X next year, and only 4% believe the platform’s ads are brand safe, according to a survey by market research firm Kantar.
Some of the main issues users reported on X involved posts that allegedly violated the platform’s rules on harassment, violent content and hateful conduct, the platform’s transparency report showed.
Musk has described himself as a “free speech absolutist” and has said in posts on X that his approach to enforcing the platform’s rules is to limit the reach of potentially offensive posts rather than remove them. He also sued California last year over a state law aimed at making social networks’ content moderation practices more transparent, citing free speech concerns.
According to X’s transparency report, about 2.8 million accounts were suspended for violating the platform’s rules banning child sexual exploitation, making up more than half of the 5.3 million accounts removed.
But the report also showed that in some cases X resorted to labeling users’ content rather than removing it or suspending accounts.
X made heavy use of automated technology to apply 5.4 million labels to content reported for abusive, harassing or hateful conduct, and removed approximately 2.2 million pieces of content for violating these rules.
The platform’s rules prohibit media depicting hateful imagery, such as the Nazi swastika, in live videos, account bios, profile photos or header images, and require that such media be marked as sensitive elsewhere. This week, X also made changes to its blocking feature, allowing blocked users to see posts but not respond to them.
X also suspended about 464 million accounts for violating the platform’s rules against manipulation and spam. Musk had vowed to “defeat the spam bots” on Twitter before taking over the platform. The company’s report also included a metric called the “post violation rate,” which measures how often users encounter content that violates the site’s rules.
Meanwhile, X continues to face legal challenges in several countries, including Brazil, where the country’s Supreme Court blocked the site after Musk failed to comply with a court order to suspend certain accounts accused of posting hate speech. The company complied with the court’s demands this week and is seeking to have the block lifted. The company also reports content moderation data to regulators in Europe, India and elsewhere.
The report also included the number of requests X received from governments and law enforcement agencies. The company received 18,737 government requests for user account information, and disclosed the information in about 53% of cases.
Twitter began publishing the number of government requests for user information and for content removal in 2012. The company’s first transparency report, which also included data on copyright takedown notices, came after Google began publishing similar data in 2010.
Since revelations in 2013 that the National Security Agency had accessed user data collected by major tech companies including Apple, Google and Facebook, online platforms have increasingly been willing to disclose more information about requests they receive from government and law enforcement agencies.