Facebook’s AI makes little headway in the fight against hate speech


Facebook today published its latest transparency report, and for the first time included the number of items removed in each category that violated its content standards. While the company seems to be very proficient at removing nudity and terrorist propaganda, it's lagging behind when it comes to hate speech. Of the six categories mentioned in the report, hate speech had the lowest share of posts caught by Facebook's algorithms before users reported them:

"For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 — 38 percent of which was flagged by our technology."

Compare that percentage with the share of posts proactively purged for violent content (86 percent), nudity and sexual content (96 percent), and spam (nearly 100 percent). But that's not to say the relatively low number is due to a defect on Facebook's part. The problem with trying to proactively scour Facebook for hate speech is that the company's AI can only understand so much at the moment. How do you get an AI to understand the nuances of offensive and derogatory language when many humans struggle with the concept?

Guy Rosen, Facebook's Vice President of Product Management, pointed out the difficulty of determining context:

"It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue."

If a Facebook user makes a post about their experience of being called a slur in public, using the word itself for greater impact, does the post constitute hate speech? Even if we were all to agree that it doesn't, how does one get an AI to understand the nuance? And what about words that are offensive in one language but not another? Or homographs? The caveats go on and on. When it's being asked to read that kind of subtlety, it shouldn't be a surprise that Facebook's AI has thus far had a success rate of only 38 percent.

Facebook is attempting to keep false positives to a minimum by having each case reviewed by moderators. The company addressed the issue during its F8 conference:

"Understanding the context of speech often requires human eyes – is something hateful, or is it being shared to condemn hate speech or raise awareness about it? … Our teams then review the content so what's OK stays up, for example someone describing hate they encountered to raise awareness of the problem."

Mark Zuckerberg waxed poetic during his Congressional testimony about Facebook's plans to use AI to wipe hate speech off its platform:

"I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate."

Given that estimate, it'd be absurd to expect the technology to be as accurate now as Zuckerberg hopes it will eventually be. We'll have to check Facebook's transparency reports over the next couple of years to see how the company is progressing.
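Rosen's point about context is easy to demonstrate. Consider a deliberately naive keyword filter (a toy sketch of our own, not Facebook's classifier; "<slur>" stands in for an actual offensive term): it flags a victim's account of abuse just as readily as the abuse itself.

    # Toy illustration of why word matching alone can't judge context.
    # "<slur>" is a placeholder for an offensive term; the filter and
    # the example posts are our own invention, not Facebook's system.

    BLOCKLIST = {"<slur>"}

    def naive_filter(post: str) -> bool:
        """Flag a post if it contains any blocklisted term."""
        text = post.lower()
        return any(term in text for term in BLOCKLIST)

    attack = "people like you are nothing but a <slur>"
    awareness = "a stranger called me a <slur> on the bus today"

    print(naive_filter(attack))     # True  -- correctly flagged
    print(naive_filter(awareness))  # True  -- false positive: same word,
                                    #          opposite intent

Both posts trip the filter, but only one is hate speech; telling them apart requires understanding intent, which is exactly the judgment Facebook currently leaves to its human review teams.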

Facebook has already removed 583 million fake accounts this year


Facebook published a report on its community guideline enforcement efforts. Last month, Facebook published its internal community enforcement guidelines for the first time, and today the company provided some numbers to show what that enforcement actually looks like. In a new report that will be published quarterly, Facebook breaks down its enforcement efforts across six main areas -- graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. The report details how much of that content was seen by Facebook users, how much of it was removed and how much was taken down before any Facebook users reported it.

Spam and fake accounts were the most prevalent: in the first quarter of this year, Facebook removed 837 million pieces of spam and 583 million fake accounts. Additionally, the company acted on 21 million pieces of nudity and sexual activity, 3.5 million posts that displayed violent content, 2.5 million examples of hate speech and 1.9 million pieces of terrorist content.

In some cases, Facebook's automated systems did a good job of finding and flagging content before users could report it. Its systems spotted nearly 100 percent of spam and terrorist propaganda, nearly 99 percent of fake accounts and around 96 percent of posts with adult nudity and sexual activity. For graphic violence, Facebook's technology flagged 86 percent of the content the company acted on. However, when it came to hate speech, the company's technology flagged only around 38 percent of the posts it took action on, and Facebook notes it has more work to do there.

"As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse," Facebook's VP of product management, Guy Rosen, said in a post. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important."

Throughout the report, Facebook shares how the most recent quarter's numbers compare with those of the quarter before, and where there are significant changes, it notes why that might be the case. For example, with terrorist propaganda, Facebook says its increased removal rate is due to improvements in photo-detection technology that can spot both old and newly posted content.

"This is a great first step," the Electronic Frontier Foundation's Jillian York told the Guardian. "However, we don't have a sense of how many incorrect takedowns happen -- or how many appeals result in content being restored. We'd also like to see better messaging to users when an action has been taken on their account, so they know the specific violation."

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too," wrote Rosen. "This is the same data we use to measure our progress internally -- and you can now see it to judge our progress for yourselves. We look forward to your feedback."
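All of those percentages describe the same underlying metric, which the report calls the proactive rate: the share of actioned content that Facebook's systems flagged before any user reported it. Here is a minimal sketch in Python using the Q1 2018 figures above; the variable names are our own, and the rates given in the report only as "nearly 100 percent" or "nearly 99 percent" are approximated as 0.99.

    # Sketch of the "proactive rate" metric: the share of actioned
    # content flagged by Facebook's systems before any user report.
    # Counts are the Q1 2018 figures cited above; this is our own
    # illustration, not Facebook's code.

    actioned = {          # pieces of content acted on in Q1 2018
        "spam": 837_000_000,
        "fake accounts": 583_000_000,
        "nudity and sexual activity": 21_000_000,
        "graphic violence": 3_500_000,
        "hate speech": 2_500_000,
        "terrorist propaganda": 1_900_000,
    }

    proactive_rate = {    # share flagged by Facebook's tech first
        "spam": 0.99,                 # "nearly 100 percent" (approx.)
        "fake accounts": 0.99,        # "nearly 99 percent" (approx.)
        "nudity and sexual activity": 0.96,
        "graphic violence": 0.86,
        "hate speech": 0.38,
        "terrorist propaganda": 0.99, # "nearly 100 percent" (approx.)
    }

    for category, total in actioned.items():
        flagged_first = int(total * proactive_rate[category])
        reported_first = total - flagged_first
        print(f"{category}: ~{flagged_first:,} flagged proactively, "
              f"~{reported_first:,} reported by users first")

The last line of output for hate speech makes Rosen's caveat concrete: of the 2.5 million pieces removed, roughly 950,000 were caught by Facebook's systems first, meaning about three in five had to be reported by a user before any action was taken.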