X continues to suck at moderating hate speech, according to a new report::A new report from the Center for Countering Digital Hate (CCDH) suggests X is failing to remove posts that violate its own community rules regarding misinformation and hate speech.
This is the best summary I could come up with:
According to the CCDH, the reported posts, which largely promoted bigotry and incited violence against Muslims, Palestinians, and Jewish people, were collected from 101 separate X accounts.
Just one of those accounts was suspended for its actions, and the posts that remained live had accrued a combined 24,043,693 views by the time the report was published.
X filed a lawsuit against the CCDH in July of this year, claiming the organization “unlawfully” scraped X data to create “flawed” studies about the platform.
In a statement to The Verge, X’s head of business operations, Joe Benarroch, said the company was made aware of the CCDH’s report yesterday. He directed X users to a new blog post detailing the “proactive measures” the platform has taken to maintain safety during the ongoing Israel-Hamas war, including removing 3,000 accounts tied to violent entities in the region and taking action against over 325,000 pieces of content that violated its terms of service.
X claims that by measuring only account suspensions, the CCDH has not accurately represented its moderation efforts, and urged the organization to “engage with X first” to ensure the safety of the X community.
After publication, Benarroch questioned the methodology of the CCDH’s study, claiming the organization counts a post as “actioned” only after the account behind it has been suspended.
The original article contains 476 words, the summary contains 224 words. Saved 53%. I’m a bot and I’m open source!