
Facebook says it took down 583 million fake accounts in Q1 2018


As Facebook continues to grapple with spam, hate speech, and other undesirable content, the company is shedding more light on just how much content it is taking down or flagging each day. Facebook today published its first-ever Community Standards Enforcement Report, detailing what kind of action it took on content displaying graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, and spam. Among the most noteworthy numbers: Facebook said that it took down 583 million fake accounts in the three months spanning Q1 2018, down from 694 million in Q4 2017. That doesn't include what Facebook says are millions of fake accounts that the company catches before they can finish registering.

The report comes just a few weeks after Facebook published, for the first time, detailed internal guidelines for how it enforces content takedowns. The numbers give users a better idea of the sheer volume of fake accounts Facebook is dealing with. The company has pledged in recent months to use facial recognition technology (which it also uses to suggest which Facebook friends to tag in photos) to catch fake accounts that might be using another person's photo as their profile picture. But a recent report from the Washington Post found that Facebook's facial recognition technology may be of limited use for detecting fake accounts, as the tool doesn't yet scan a photo against all of the images posted by the site's 2.2 billion users.

Facebook also gave a breakdown of how much other undesirable content it removed during Q1 2018, as well as how much of it was flagged by its systems versus reported by users. The numbers show that Facebook is still predominantly relying on other people to catch hate speech, something CEO Mark Zuckerberg has spoken about before, saying that it's much harder to build an AI system that can determine what hate speech is than to build a system that can detect a nipple. Facebook defines hate speech as a direct attack on people based on protected characteristics: race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease.

The problem is that, as Facebook's VP of product management Guy Rosen wrote in the blog post announcing today's report, AI systems are still years away from becoming effective enough to be relied upon to catch most bad content. But hate speech is a problem for Facebook today, as the company's struggle to stem the flow of fake news and content meant to encourage violence against Muslims in Myanmar has shown. And the company's failure to properly catch hate speech could push users off the platform before it is able to develop an AI solution.

Facebook says it will continue to provide updated numbers every six months. The report published today spans from October 2017 to March 2018, with a breakdown comparing how much content the company took action on in various categories in Q4 2017 and Q1 2018.
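Because the report repeatedly distinguishes between content Facebook's automated systems flagged on their own and content users reported first, it may help to see the simple ratio behind that distinction. The sketch below is only an illustration of such a "proactive rate" metric, using made-up records and field names rather than anything from Facebook's actual tooling.

```python
# Illustrative sketch of a "proactive rate": the share of actioned posts
# that automated systems flagged before any user report.
# Records and field names are hypothetical, for illustration only.

actions = [
    {"category": "hate_speech", "detected_by": "user_report"},
    {"category": "hate_speech", "detected_by": "automated"},
    {"category": "spam", "detected_by": "automated"},
    {"category": "spam", "detected_by": "automated"},
]

def proactive_rate(actions, category):
    """Fraction of actioned posts in `category` first flagged by automation."""
    relevant = [a for a in actions if a["category"] == category]
    if not relevant:
        return None
    automated = sum(1 for a in relevant if a["detected_by"] == "automated")
    return automated / len(relevant)

print(proactive_rate(actions, "spam"))         # 1.0 -> caught entirely by automation
print(proactive_rate(actions, "hate_speech"))  # 0.5 -> half relied on user reports
```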

Facebook’s first content moderation report finds terrorism posts up 73 percent this year


Facebook took enforcement action on 1.9 million posts related to terrorism by Al Qaeda and ISIS in the first quarter of this year, the company said, up from 1.1 million posts in the last quarter of 2017. The increased enforcement, which typically results in posts being removed and accounts being suspended or banned from Facebook, resulted from improvements in machine learning that allowed the company to find more terrorism-related photos, whether they were newly uploaded or had been on Facebook for longer. Facebook found 99.5 percent of terrorism-related posts before they were flagged by users, it said; in the previous quarter, 97 percent of posts were found by the company on its own.

Facebook made the data available as part of its first-ever Community Standards Enforcement Report, which documents content moderation actions taken by the company between October and March. Other findings in the report include:

- Graphic violence. Posts that included graphic violence represented 0.22 percent to 0.27 percent of views, up from 0.16 to 0.19 percent in the previous quarter. The company took action on 3.4 million posts, up from 1.2 million in the previous quarter. It said violent posts appeared to have risen in conjunction with the intensifying conflict in Syria.
- Nudity and sex. Posts with nudity or sexual activity represented 0.07 to 0.09 percent of views, up from 0.06 to 0.08 percent in the previous quarter. The company took action on 21 million posts, about the same as in the previous quarter.
- Hate speech. Facebook took action on 2.5 million posts for violating hate speech rules, up 56 percent from the previous quarter. Users reported 62 percent of hate speech posts before Facebook took action on them.
- Spam. Facebook took action on 837 million spam posts, up 15 percent from the previous quarter. The company says it detected nearly 100 percent of spam posts before users could report them.
- Fake accounts. Of Facebook's monthly users, 3 to 4 percent are fake accounts, the company said. It removed 583 million fake accounts in the first quarter of the year, down from 694 million in the previous quarter.

The data, which the company plans to issue at least twice a year, is a move toward "holding ourselves accountable," Facebook said in its report. "This guide explains our methodology so the public can understand the benefits and limitations of the numbers we share, as well as how we expect these numbers to change as we refine our methodologies. We're committed to doing better, and communicating more openly about our efforts to do so, going forward."

The company is still working to develop accurate metrics that describe how often hate speech is seen on the platform, said Guy Rosen, a vice president of product management, in an interview with reporters. The company's machine-learning systems have trouble identifying hate speech because computers have trouble understanding the context around speech. "There's a lot of really tricky cases," Rosen said. "Is a slur being used to attack someone? Is it being used self-referentially? Or is it a completely innocuous term when it's used in a different context?" The final decisions on hate speech are made by human moderators, he added.

Still, people post millions of unambiguously hateful posts to Facebook. In March, the United Nations said Facebook was responsible for spreading hatred of the Rohingya minority in Myanmar. Facebook's lack of moderators who speak the local language has hampered its effort to reduce the spread of hate speech.
"We definitely have to do more to make sure we pay attention to those," Rosen said, noting that the company had recently hired more moderators in the area.

The enforcement report arrives a month after Facebook made its community standards public for the first time. The standards document what is and isn't allowed on Facebook and serve as a guide for Facebook's global army of content moderators. Facebook is releasing its enforcement report at a time when the company is under increasing pressure to reduce hate speech, violence, and misinformation on its platform. Under pressure from Congress, Facebook has said it will double its safety and security team to 20,000 people this year.
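As a quick check on the headline figure, the quarter-over-quarter increase can be recomputed directly from the two enforcement totals quoted above. The short snippet below does that arithmetic; the variable names and rounding are ours, not Facebook's.

```python
# Recompute the quarter-over-quarter change in terrorism-related
# enforcement reported above: 1.1 million posts actioned in Q4 2017
# vs. 1.9 million posts actioned in Q1 2018.
q4_2017_actions = 1_100_000
q1_2018_actions = 1_900_000

pct_increase = (q1_2018_actions - q4_2017_actions) / q4_2017_actions * 100
print(f"{pct_increase:.0f}% increase")  # prints "73% increase"
```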