
Facebook’s AI makes little headway in the fight against hate speech


Facebook today published its annual transparency report, and for the first time included the number of items removed in each category that violated its content standards. While the company seems to be very proficient at removing nudity and terrorist propaganda, it's lagging behind when it comes to hate speech. Of the six categories mentioned in the report, the share of hate speech posts Facebook's algorithms caught before users reported them was the lowest:

"For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 — 38 percent of which was flagged by our technology."

Compare that percentage with the share of posts proactively purged for violent content (86 percent), nudity and sexual content (96 percent), and spam (nearly 100 percent). But that's not to say the relatively low number is due to a failing on Facebook's part. The problem with trying to proactively scour Facebook for hate speech is that the company's AI can only understand so much at the moment. How do you get an AI to understand the nuances of offensive and derogatory language when many humans struggle with the concept?

Guy Rosen, Facebook's Vice President of Product Management, pointed out the difficulties of determining context:

"It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue."

If a Facebook user makes a post speaking about their experience being called a slur in public, using the word in order to make a greater impact, does their post constitute hate speech? Even if we were all to agree that it doesn't, how does one get an AI to understand the nuance?
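The figures quoted above imply a simple breakdown of who caught what. A quick back-of-the-envelope calculation, using only the 2.5 million and 38 percent numbers from the report:

```python
# Breakdown of the Q1 2018 hate speech removals quoted in the report:
# 2.5 million pieces removed, 38 percent flagged by Facebook's technology
# before users reported them.
removed_total = 2_500_000
proactive_rate = 0.38

flagged_by_ai = int(removed_total * proactive_rate)
reported_by_users = removed_total - flagged_by_ai

print(flagged_by_ai)       # 950000 pieces caught proactively
print(reported_by_users)   # 1550000 pieces surfaced only after user reports
```

In other words, well over a million and a half pieces of hate speech reached users before anyone acted on them.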
And what about words which are offensive in one language but not another? Or homographs? Or, or, or — the caveats go on and on. When it's being asked to read that kind of subtlety, it shouldn't be a surprise that Facebook's AI has thus far had a success rate of only 38 percent. Facebook is attempting to keep false positives to a minimum by having each case reviewed by moderators. The company addressed the issue during its F8 conference:

"Understanding the context of speech often requires human eyes – is something hateful, or is it being shared to condemn hate speech or raise awareness about it? … Our teams then review the content so what's OK stays up, for example someone describing hate they encountered to raise awareness of the problem."

Mark Zuckerberg waxed poetic during his Congressional testimony about Facebook's plans to use AI to wipe hate speech off its platform:

"I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate."

Given that estimate, it would be absurd to expect the technology to be as accurate today as Zuckerberg hopes it will eventually become. We'll have to check Facebook's transparency reports over the next couple of years to see how the company is progressing.
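The context problem described above can be made concrete with a toy sketch. This is not Facebook's system — just a naive keyword filter with a placeholder blocklist term — but it shows exactly the failure mode Rosen describes: a post that reports abuse trips the same filter as the abuse itself.

```python
# Toy illustration (NOT Facebook's actual moderation system): a naive
# keyword filter flags any post containing a blocklisted term, so a post
# that *reports* being targeted by a slur is flagged just like an attack.

BLOCKLIST = {"slur"}  # placeholder standing in for actual slurs

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted word, ignoring context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

attack = "You are a slur."                                       # genuine attack
awareness = "Someone called me a slur today. This has to stop."  # self-report

print(naive_flag(attack))     # True  -- correctly flagged
print(naive_flag(awareness))  # True  -- false positive: context ignored
```

Distinguishing the two posts requires understanding who is saying what about whom, which is why the flagged cases still go to human moderators.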

Facebook says it took down 583 million fake accounts in Q1 2018


As Facebook continues to grapple with spam, hate speech, and other undesirable content, the company is shedding more light on just how much content it is taking down or flagging each day. Facebook today published its first-ever Community Standards Enforcement Report, detailing what kind of action it took on content displaying graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, and spam.

Among the most noteworthy numbers: Facebook said that it took down 583 million fake accounts in Q1 2018, down from 694 million in Q4 2017. That doesn't include what Facebook says are millions of fake accounts that the company catches before they can finish registering. The report comes just a few weeks after Facebook published, for the first time, detailed internal guidelines for how it enforces content takedowns.

The numbers give users a better idea of the sheer volume of fake accounts Facebook is dealing with. The company has pledged in recent months to use facial recognition technology — which it also uses to suggest which Facebook friends to tag in photos — to catch fake accounts that might be using another person's photo as their profile picture. But a recent report from the Washington Post found that Facebook's facial recognition technology may be limited when it comes to detecting fake accounts, as the tool doesn't yet scan a photo against the images posted by all 2.2 billion of the site's users.

Facebook also gave a breakdown of how much other undesirable content it removed during Q1 2018, as well as how much of it was flagged by its systems or reported by users. The numbers show that Facebook is still predominantly relying on other people to catch hate speech — which CEO Mark Zuckerberg has spoken about before, saying that it's much harder to build an AI system that can determine what hate speech is than to build a system that can detect a nipple.
Facebook defines hate speech as a direct attack on people based on protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. The problem is that, as Facebook's VP of product management Guy Rosen wrote in the blog post announcing today's report, AI systems are still years away from becoming effective enough to be relied upon to catch most bad content. But hate speech is a problem for Facebook today, as the company's struggle to stem the flow of fake news and content meant to encourage violence against Muslims in Myanmar has shown. And the company's failure to properly catch hate speech could push users off the platform before it is able to develop an AI solution.

Facebook says it will continue to provide updated numbers every six months. The report published today spans October 2017 to March 2018, with a breakdown comparing how much content the company took action on in various categories in Q4 2017 and Q1 2018.