
Facebook’s first content moderation report finds terrorism posts up 73 percent this year


Facebook took enforcement action on 1.9 million posts related to terrorism by Al Qaeda and ISIS in the first quarter of this year, the company said, up from 1.1 million posts in the last quarter of 2017. The increased enforcement, which typically results in posts being removed and accounts being suspended or banned from Facebook, came from improvements in machine learning that allowed the company to find more terrorism-related photos, whether they were newly uploaded or had been on Facebook for longer.

Facebook found 99.5 percent of terrorism-related posts before they were flagged by users, it said. In the previous quarter, 97 percent of posts were found by the company on its own.

Facebook made the data available as part of its first-ever Community Standards Enforcement Report, which documents content moderation actions taken by the company between October and March. Other findings in the report include:

Graphic violence. Posts that included graphic violence represented 0.22 percent to 0.27 percent of views, up from 0.16 to 0.19 percent in the previous quarter. The company took action on 3.4 million posts, up from 1.2 million in the previous quarter. It said violent posts appeared to have risen in conjunction with the intensifying conflict in Syria.

Nudity and sex. Posts with nudity or sexual activity represented 0.07 to 0.09 percent of views, up from 0.06 to 0.08 percent in the previous quarter. The company took action on 21 million posts, about the same as the previous quarter.

Hate speech. Facebook took action on 2.5 million posts for violating hate speech rules, up 56 percent from the previous quarter. Users reported 62 percent of hate speech posts before Facebook took action on them.

Spam. Facebook took action on 837 million spam posts, up 15 percent from the previous quarter. The company says it detected nearly 100 percent of spam posts before users could report them.

Fake accounts. Of Facebook's monthly users, 3 to 4 percent are fake accounts, the company said. It removed 583 million fake accounts in the first quarter of the year, down from 694 million in the previous quarter.

The data, which the company plans to issue at least twice a year, is a move toward "holding ourselves accountable," Facebook said in its report. "This guide explains our methodology so the public can understand the benefits and limitations of the numbers we share, as well as how we expect these numbers to change as we refine our methodologies. We're committed to doing better, and communicating more openly about our efforts to do so, going forward."

The company is still working to develop accurate metrics that describe how often hate speech is seen on the platform, said Guy Rosen, a vice president of product management, in an interview with reporters. The company's machine-learning systems have trouble identifying hate speech because computers have trouble understanding the context around speech.

"There's a lot of really tricky cases," Rosen said. "Is a slur being used to attack someone? Is it being used self-referentially? Or is it a completely innocuous term when it's used in a different context?" The final decisions on hate speech are made by human moderators, he added.

Still, people post millions of unambiguously hateful posts to Facebook. In March, the United Nations said Facebook was responsible for spreading hatred of the Rohingya minority in Myanmar. Facebook's lack of moderators who speak the local language has hampered its efforts to reduce the spread of hate speech.
"We definitely have to do more to make sure we pay attention to those," Rosen said, noting that the company had recently hired more moderators in the area.

The enforcement report arrives a month after Facebook made its community standards public for the first time. The standards document what is and isn't allowed on Facebook, and serve as a guide for Facebook's global army of content moderators.

Facebook is releasing its enforcement report at a time when the company is under increasing pressure to reduce hate speech, violence, and misinformation on its platform. Under pressure from Congress, Facebook has said it will double its safety and security team to 20,000 people this year.
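The report leans on two simple ratios: prevalence, the share of content views that included violating material, and the proactive rate, the share of actioned posts Facebook's systems flagged before any user report. Below is a minimal Python sketch of how those ratios work, using the terrorism figures above; the per-category view counts are not published, so the prevalence inputs here are hypothetical placeholders.

# Rough sketch of the two headline metrics in the enforcement report.
# The formulas are plain ratios; the inputs below are illustrative,
# not figures taken directly from the report.

def proactive_rate(flagged_by_systems: int, total_actioned: int) -> float:
    """Share of actioned posts found by Facebook before any user report."""
    return flagged_by_systems / total_actioned

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of all content views that included violating material."""
    return violating_views / total_views

# Terrorist propaganda, Q1: 1.9 million posts actioned, ~99.5% found proactively.
actioned = 1_900_000
found_by_systems = 1_890_500          # implied by the 99.5 percent figure
print(f"proactive rate: {proactive_rate(found_by_systems, actioned):.1%}")

# Hypothetical prevalence: 25 violating views per 10,000 content views
# would land in the 0.22-0.27 percent band reported for graphic violence.
print(f"prevalence: {prevalence(25, 10_000):.2%}")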

Facebook has already removed 583 million fake accounts this year


It published a report on its community guideline enforcement efforts. Last month, Facebook published its internal community enforcement guidelines for the first time, and today the company has provided some numbers to show what that enforcement really looks like. In a new report that will be published quarterly, Facebook breaks down its enforcement efforts across six main areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. The report details how much of that content was seen by Facebook users, how much of it was removed and how much of it was taken down before any Facebook users reported it.

Spam and fake accounts were the most prevalent, and in the first quarter of this year, Facebook removed 837 million pieces of spam and 583 million fake accounts. Additionally, the company acted on 21 million pieces of nudity and sexual activity, 3.5 million posts that displayed violent content, 2.5 million examples of hate speech and 1.9 million pieces of terrorist content.

In some cases, Facebook's automated systems did a good job finding and flagging content before users could report it. Its systems spotted nearly 100 percent of spam and terrorist propaganda, nearly 99 percent of fake accounts and around 96 percent of posts with adult nudity and sexual activity. For graphic violence, Facebook's technology accounted for 86 percent of the reports. However, when it came to hate speech, the company's technology only flagged around 38 percent of the posts it took action on, and Facebook notes it has more work to do there.

"As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse," Facebook's VP of product management, Guy Rosen, said in a post. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important."

Throughout the report, Facebook shares how the most recent quarter's numbers compare to those of the quarter before it, and where there are significant changes, it notes why that might be the case. For example, with terrorist propaganda, Facebook says its increased removal rate is due to improvements in photo detection technology that can spot both old and newly posted content.

"This is a great first step," the Electronic Frontier Foundation's Jillian York told the Guardian. "However, we don't have a sense of how many incorrect takedowns happen -- how many appeals result in content being restored. We'd also like to see better messaging to users when an action has been taken on their account, so they know the specific violation."

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too," wrote Rosen. "This is the same data we use to measure our progress internally -- and you can now see it to judge our progress for yourselves. We look forward to your feedback."