MENLO PARK, Calif. (Reuters) - The number of posts on Facebook showing graphic violence rose in the first three months of the year from a quarter earlier, possibly driven by the war in Syria, the social network said on Tuesday, in its first public release of such data.
Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimate of 16 to 19 late last year.
The company removed or placed a warning screen in front of 3.4 million pieces of graphically violent content in the first quarter, nearly triple the 1.2 million a quarter earlier, according to the report.
Facebook does not fully know why people are posting more graphic violence but believes continued fighting in Syria may have been one reason, said Alex Schultz, Facebook’s vice president of data analytics.
“Whenever a war starts, there’s a big spike in graphic violence,” Schultz told reporters at Facebook’s headquarters.
Syria’s civil war erupted in 2011. It continued this year with fighting between rebels and Syrian President Bashar al-Assad’s army. This month, Israel attacked Iran’s military infrastructure in Syria.
Facebook, the world’s largest social media firm, has never previously released detailed data about the kinds of posts it takes down for violating its rules.
Facebook only recently developed the metrics as a way to measure its progress, and would probably change them over time, said Guy Rosen, its vice president of product management.
“These kinds of metrics can help our teams understand what’s actually happening to 2-plus billion people,” he said.
The company has a policy of removing content that glorifies the suffering of others. In general, it leaves up graphic violence with a warning screen if it is posted for another purpose.
Facebook also prohibits hate speech and said it took action against 2.5 million pieces of content in the first quarter, up 56 percent from a quarter earlier. It said the rise was due to improvements in detection.
The company said in the first quarter it took action on 837 million pieces of content for spam, 21 million pieces of content for adult nudity or sexual activity and 1.9 million for promoting terrorism. It said it disabled 583 million fake accounts.
Reporting by David Ingram; Editing by Clarence Fernandez