
ID: 107651

URL: https://techcrunch.com/2018/11/08/facebook-removed-14-million-pieces-of-terrorist-content-this-year-and-the-numbers-are-rising/

Date: 2018-11-08

Facebook removed 14 million pieces of terrorist content this year, and the numbers are rising

Facebook must exert constant vigilance to prevent its platform from being taken over by ne'er-do-wells, but how exactly it does that is only really known to itself. Today, however, the company has graced us with a bit of data on what tools it's using and what results they're getting — for instance, more than 14 million pieces of terrorist content removed so far this year.

More than half of that 14 million was old content posted before 2018, some of which had been sitting around for years. But as Facebook points out, that content may very well have gone unviewed the whole time. It's hard to imagine a terrorist recruitment post going unreported for 970 days (the median age for that content in Q1) if it was seeing any kind of traffic.

Perhaps more importantly, the amount of newly uploaded content being removed (with, to Facebook's credit, a quickly shrinking delay) appears to be growing steadily: 1.2 million items were removed in Q1, 2.2 million in Q2 and 2.3 million in Q3. Removals prompted by user reports are growing as well, though they are much smaller in number — around 16,000 in Q3. Indeed, 99 percent of this content, Facebook proudly reports, is removed proactively.

Something worth noting: Facebook is careful to avoid positive or additive verbs when talking about this content. It won't say, for instance, that terrorists posted 2.3 million pieces of content, but rather that this was the number of takedowns, or of pieces of content "surfaced." This phrasing is more conservative and technically correct, since the company can really only be sure of its own actions, but it also serves to soften the fact that terrorists are posting hundreds of thousands of items monthly.

The numbers are hard to contextualize. Is this a lot or a little? Both, really. The amount of content posted to Facebook is so vast that almost any number looks small next to it, even a scary one like 14 million pieces of terrorist propaganda.

It is impressive, however, to hear that Facebook has greatly expanded the scope of its automated detection tools: "Our experiments to algorithmically identify violating text posts (what we refer to as 'language understanding') now work across 19 languages."

And it fixed a bug that was massively slowing down content removal: "In Q2 2018, the median time on platform for newly uploaded content surfaced with our standard tools was about 14 hours, a significant increase from Q1 2018, when the median time was less than 1 minute. The increase was prompted by multiple factors, including fixing a bug that prevented us from removing some content that violated our policies, and rolling out new detection and enforcement systems." The Q3 number is two minutes. It's a work in progress.

No doubt we all wish the company had applied this level of rigor somewhat earlier, but it's good to know that the work is being done. Notable is that a great deal of this machinery is focused not simply on removing content, but on putting it in front of the constantly growing moderation team. So the most important bit is still, thankfully and heroically, done by people.
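For what it's worth, the "median time on platform" figures Facebook cites (under a minute in Q1, roughly 14 hours in Q2, about two minutes in Q3) are simply the median gap between when a piece of content was uploaded and when it was acted on. A minimal Python sketch of how such a metric is computed is below; the record layout and sample timestamps are invented for the example and are not Facebook's data.

```python
from datetime import datetime
from statistics import median

# Hypothetical takedown records as (uploaded_at, actioned_at) pairs.
# The layout and sample timestamps are illustrative, not Facebook's data.
takedowns = [
    (datetime(2018, 7, 1, 9, 0),  datetime(2018, 7, 1, 9, 2)),   # caught within two minutes
    (datetime(2018, 7, 2, 14, 0), datetime(2018, 7, 3, 4, 0)),   # caught after about 14 hours
    (datetime(2015, 12, 1),       datetime(2018, 8, 1)),         # years-old content swept up later
]

def median_time_on_platform_hours(records):
    """Median number of hours a removed item spent on the platform before takedown."""
    durations = [(actioned - uploaded).total_seconds() / 3600
                 for uploaded, actioned in records]
    return median(durations)

print(f"median time on platform: {median_time_on_platform_hours(takedowns):.1f} hours")
```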



ID: 107734

URL: https://venturebeat.com/2018/11/08/facebook-claims-it-deleted-3-million-pieces-of-isis-and-al-qaeda-propaganda-in-q3-2018/

Date: 2018-11-08

Facebook claims it deleted 3 million pieces of ISIS and Al Qaeda propaganda in Q3 2018

In congressional hearings over the past year, Facebook executives including CEO Mark Zuckerberg have cited Facebook's success in using artificial intelligence and machine learning to take down terrorist-related content as an example of how the company hopes to use technology to proactively take down other types of content that violate its policies, like hate speech. Now, in a blog post today, the company shed some light on the new tools it has been using.

In the post, attributed to Facebook vice president of global policy management Monika Bickert, Facebook said that it took down 9.4 million pieces of terrorist-related content in Q2 2018 and 3 million pieces in Q3. That's compared to 1.9 million pieces of content removed in Q1.

It's important to note that Facebook defines terrorist-related content in this report as content related to ISIS, Al Qaeda and their affiliates, and the report doesn't address takedown efforts regarding content from other hate groups. Facebook's own internal guidelines define a terrorist organization more broadly, describing it as any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.

The increase in the amount of content Facebook took down from Q1 to Q2 might seem concerning at first, but Facebook said that's because it was taking more action on older content during Q2. For the past three quarters, Facebook said it has proactively found and removed 99 percent of terrorist-related content, though the amount of content surfaced by user reports continues to rise — from 10,000 pieces in Q1 to 16,000 in Q3. Facebook's post includes more statistics on how much terrorist-related content it has removed in recent quarters, and how old that content was.

More importantly, Facebook also gave some new details on what tools it's using and how it decides when to take action. Facebook says it now uses machine learning to give posts a score indicating how likely it is that the post signals support for the Islamic State group (aka ISIS), Al Qaeda or other affiliated groups. Facebook's team of reviewers then prioritizes posts with the highest scores, and if a score is high enough, Facebook will sometimes remove the content before human reviewers can look at it.

Facebook also said that it recently started using audio- and text-hashing techniques — previously it used only image- and video-hashing — to detect terrorist content. It's also now experimenting with algorithms to identify posts whose text violates its terrorism policies, across 19 languages.

Facebook hasn't said what other types of content it may soon use these systems to detect, though it acknowledges that terrorists come in many ideological stripes. But it's clear that if Facebook is using machine learning to determine whether or not a post expresses support for a certain group, those same systems could likely be trained in the future to spot support for other well-known hate groups, such as white nationalists.

It's also worth noting that even though Facebook views the decrease in the amount of time terrorist-related content spends on the platform as a success, the company itself acknowledges that that's not the best metric. "Our analysis indicates that time-to-take-action is a less meaningful measure of harm than metrics that focus more explicitly on exposure content actually receives," Bickert wrote. "Focusing narrowly on the wrong metrics may disincentives [sic] or prevent us from doing our most effective work for the community."
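To make the review workflow Bickert describes a bit more concrete (score each post, route the highest-scoring ones to reviewers first, and remove the most confident matches automatically), here is a minimal sketch in Python. The scoring function, the threshold value and the queue structure are all assumptions made for illustration; none of this reflects Facebook's actual models or code.

```python
AUTO_REMOVE_THRESHOLD = 0.98  # illustrative cutoff, not a figure Facebook has published

def score_post(post_text: str) -> float:
    """Stand-in for the ML classifier: returns a support-likelihood score in [0, 1].

    A real system would run a trained model here; this placeholder just flags a
    couple of obvious phrases so the sketch stays self-contained.
    """
    suspicious = ("join isis", "support al qaeda")
    return 0.99 if any(phrase in post_text.lower() for phrase in suspicious) else 0.05

def triage(posts):
    """Auto-remove high-confidence posts; queue the rest for review, riskiest first."""
    removed, for_review = [], []
    for post in posts:
        score = score_post(post)
        (removed if score >= AUTO_REMOVE_THRESHOLD else for_review).append((score, post))
    for_review.sort(key=lambda pair: pair[0], reverse=True)  # reviewers see the highest scores first
    return [post for _, post in removed], [post for _, post in for_review]

auto_removed, review_queue = triage(["Join ISIS today", "look at my cat", "nice weather"])
print(auto_removed)   # ['Join ISIS today']
print(review_queue)   # ['look at my cat', 'nice weather']
```

The threshold is the key trade-off in a setup like this: set it too low and legitimate posts get removed without any human looking at them, set it too high and everything lands in the review queue. That matches the post's framing, in which automatic removal happens only when the score is high enough and everything else is prioritized for human review.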
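The post also mentions that Facebook now applies hashing to audio and text, in addition to the image and video hashing it already used. In broad strokes, hash matching means fingerprinting content that has already been removed and checking new uploads against those fingerprints. The sketch below illustrates the idea for text only, using an exact hash over normalized text; production systems generally rely on perceptual or locality-sensitive hashes so that near-duplicates also match, and nothing here reflects Facebook's actual implementation.

```python
import hashlib

def text_fingerprint(text: str) -> str:
    """Collapse whitespace and lowercase before hashing, so trivially re-edited copies still match."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Fingerprints of previously removed posts; this single entry is invented for the example.
KNOWN_BAD_FINGERPRINTS = {text_fingerprint("example of previously removed propaganda text")}

def matches_known_content(text: str) -> bool:
    """True if the post is a lightly edited copy of content already taken down."""
    return text_fingerprint(text) in KNOWN_BAD_FINGERPRINTS

print(matches_known_content("Example of   previously removed PROPAGANDA text"))  # True
print(matches_known_content("an unrelated post"))                                # False
```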