Facebook defends itself against report it allowed hate speech for financial gain

Facebook has denied allegations by a U.K. news outlet that it gave preferential treatment to some pages that promote hate speech because of financial interest, saying that creating a safe environment for its users remains a top priority.

The social media giant Tuesday defended itself against a TV report on Channel 4 in the United Kingdom that shows Facebook appearing to let far-right activists spread hate speech by not blocking their pages because they brought considerable revenue into the company. According to the report, Facebook instead assigned them internal moderators, allowing them to continue to publish.

“It has been suggested that turning a blind eye to bad content is in our commercial interests. This is not true,” Monika Bickert, vice president of global policy management at Facebook, said in a blog post.

Facebook content reviewers in Essen, Germany. (Source: Facebook)

“Creating a safe environment where people from all over the world can share and connect is core to Facebook’s long-term success,” she asserted. “If our services aren’t safe, people won’t share and over time would stop using them. Nor do advertisers want their brands associated with disturbing or problematic content.”

Bickert’s comments are a response to a report in which Channel 4 Dispatches sent an undercover reporter to work as a content moderator at CPL, a Dublin-based Facebook contractor. CPL is one of several companies Facebook employs to review content around the clock and across time zones and report violations of the company’s Community Standards and user policies, to ensure content on Facebook remains in line with them.

[See also: For Facebook’s Stamos, conflicts over breaches and disclosure a theme]

At CPL, the reporter discovered that while Facebook usually deletes pages found to have five or more pieces of content that violate the site’s rules, it showed more leniency with pages from leading far-right activists. The report cited as examples Tommy Robinson, born Stephen Yaxley-Lennon, a controversial and polarizing far-right, pro-Brexit activist, and Britain First, a pro-Brexit political party.

In the documentary, a moderator in the Dublin office told the Dispatches reporter that Robinson’s and Britain First’s pages were left up even though they repeatedly broke Facebook’s rules, because “they have a lot of followers so they’re generating a lot of revenue for Facebook.”

Eventually, Facebook did ban Britain First’s Facebook page in March, but not until nearly six weeks after the group was no longer registered as a political party and a week after its leaders were jailed for committing a number of hate crimes against Muslims.

Robinson is also in jail, serving a 13-month sentence for contempt of court, and no longer has an active Facebook page. He also was permanently banned from Twitter. Currently, both the page names Britain First and Tommy Robinson are being used as satirical pages to promote more liberal political agendas.

An image from the Britain First website decrying Facebook’s decision to ban the page earlier this year. (Source: Britain First website)

Bickert expressed gratitude to the journalists who brought attention to the situation and said the company is currently investigating what happened in the case of Robinson and Britain First “so we can prevent these issues from happening again.”

After the Channel 4 report, the company also required all trainers in Dublin to complete a re-training session and plans to do the same globally, Bickert said. Facebook also reviewed the policy questions and enforcement actions that the reporter raised and has corrected the errors that were found, she said.

Facebook is taking a number of steps to deal with problematic content on its pages “more effectively,” Bickert said. The company is currently working to double the number of people on its safety and security teams from 10,000 to 20,000, including more than 7,500 content reviewers.

The company also is employing new technology to expedite reporting procedures so questionable content such as hate speech, nudity or graphic violence is detected and removed more quickly than ever before, Bickert said. Facebook also has developed software internally to remove content such as terrorist propaganda or child-sexual-abuse images before they’ve even been reported.