Before Senate Facebook, Twitter Defend Efforts to Stop Fake News

Facebook and Twitter executives defended recent efforts to stop the use of their platforms by Russia, Iran and other countries to influence U.S. elections.

In testimony before the U.S. Senate on Wednesday, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey defended their companies’ recent efforts to thwart influence campaigns by Russia and other U.S. adversaries and promised future efforts to silence accounts that could be considered “threats to democracy.”

Speaking before the Senate Intelligence Committee, the two executives acknowledged mistakes their respective companies made during the critical 2016 presidential election cycle and said their firms had greatly improved the methods they use to spot influence campaigns.

[See also: Facebook defends itself against report it allowed hate speech for financial gain]

Their appearance was one of several hearings the Senate Intelligence Committee has held to investigate Russia’s use of social media to suppress voters and sway election results. Google, which was also invited to testify at the hearing, did not send a representative.

Speaking for Facebook, Sandberg acknowledged that thousands of ads that ran on its site in 2015 and 2016 were part of Russian information operations and were designed to foment discord around a range of issues. She also accepted blame before the committee for allowing Facebook to be used as a platform in the debacle.

[You might also like: Podcast Episode 91: Fighting Fake News with or without Facebook and what’s with all the Cryptojacking?]

Our bad

“We were too slow to spot this and too slow to act,” Sandberg said in her opening statement to lawmakers. “That’s on us. The interference was completely unacceptable. It violated the values of our company and of the country we love.”

In response, she said the company has doubled the number of people working on safety and security to try to ensure it spots future influence campaigns. The company now has more than 20,000 employees on its safety and security team.

Facebook is also using machine learning and artificial intelligence technology to help the company identify what it calls “coordinated inauthentic behavior” and the spread of misinformation or “fake news.” Company actions include removing fake accounts and boosting ad transparency, Sandberg said.

Twitter CEO Dorsey said his firm is also taking steps to curb the use of its platform by bad actors who deploy bots or false accounts to spread misinformation or interfere with democratic processes, noting that the platform’s health depends on the integrity of the service Twitter provides to its users.

“Twitter continues to engage in intensive efforts to identify and combat state-sponsored hostile attempts to abuse social media for manipulative and divisive purposes,” he said in his opening statement. “We now possess a deeper understanding of both the scope and tactics used by malicious actors to manipulate our platform and sow division across Twitter more broadly. … Our work on this issue is not done, nor will it ever be.”

Twitter also is working on new technologies for its platform to improve how quickly it can detect patterns of behavior that lead to what Dorsey called “malicious automation.” The company is using machine learning and deep learning to recognize patterns and link automated behavior to other accounts showing similar patterns. That approach is far more effective than trying to judge each account individually as fake or genuine, he said during his testimony.

Lawmaker scrutiny

Lawmakers of both parties were receptive to Sandberg and Dorsey, saving their ire for Larry Page, chief executive of Google’s parent company Alphabet, who was invited but did not appear next to Sandberg and Dorsey.

“I want to express my outrage that your counterpart at Google is not at the table as well,” Sen. Susan Collins (R-Maine) said.

Still, senators from both parties pushed the social media executives for more transparency. User privacy and political bias were recurrent themes of the questioning, as was the companies’ responsibility, and even potential liability, for criminal events stemming from activity and commentary on their platforms.

Sen. Collins suggested to Dorsey that it’s not enough for Twitter merely to remove pages connected to Russian accounts linked to election interference. The company also should let those accounts’ followers know that they have been exposed to conspiracies, false claims and other misinformation, she said.

A sheepish Dorsey acknowledged that there is definitely more the company can do to let users know they may have been duped. “We simply haven’t done enough,” he said. “We didn’t have enough communication going out in terms of what was seen and tweeted. We do believe transparency is where we need to do the most work and improvement.”

When faced with the same question, Sandberg said that Facebook already has improved in this area, informing users when appropriate that an account or event they followed was fake. She cited an election event in Washington that was promoted on Facebook by an inauthentic account: when Facebook deleted the event and the account, it also notified users who had marked themselves “Going” or “Interested” in the event of the account’s inauthenticity.

Facebook also has more third-party auditors than ever looking into false claims and fake news on the site, Sandberg said. When it finds such content, the platform now includes links to stories the auditors believe contain more accurate information, giving users an opportunity to be better informed, she said.

Liability, partnership and regulation

The executives also faced questions about whether they believe their companies should be liable for crimes or even deaths that occur because of the spread of information, false or otherwise, on their platforms, and whether regulation should be enacted to enforce this type of policy.

Sen. Joe Manchin (D-West Virginia) went slightly off topic and queried Sandberg and Dorsey about the illicit sale of opioids on their platforms, asking them point blank if they feel they should be held liable for deaths that occur because of these illegal drug sales. Both companies attended an FDA-sponsored Opioid Summit held in June to combat the growing opioid addiction crisis in the United States, spurred by the distribution of prescription painkillers.

Appearing surprised, both executives hastily insisted that neither of their companies supports illegal activity like this, and that they are using the same technological and reporting tactics they apply to inauthentic content, hate speech and fake news to take down any accounts using Facebook or Twitter to sell drugs.

Committee members suggested that regulation is on the table as an option to help the social-media companies manage user privacy and any future interference by foreign states in U.S. democratic processes. However, they seemed inclined, like the executives, to work together as partners and see how collaboration can achieve the desired results.

One senator, Angus King (I-Maine), noted that there is a fine line between protecting democracy and censorship, and encouraged lawmakers and executives alike to try to strike the right balance between the two.

“We have to be sure that we’re not censoring but that we’re providing customers and users the context from what they are seeing [on Facebook and Twitter],” he said. “I’d hate to see your platforms political in the sense that you’re censoring one side or the other.”

Dorsey said he welcomed more input from lawmakers on ways to “increase the health of the digital public square we’re helping to build,” and asked for “a more regular cadence of meetings” to help the social media companies improve efforts to get the balance right.

“As we are building a digital public square, we do believe expectations follow that, and freedom of speech is the default,” he said. “We need to understand when that default interferes with other fundamentally human rights such as physical security or privacy. I do believe that context does matter.”