Facebook’s terrible, horrible, no good, very bad year continued, with the social media company on the defensive yet again over partnerships that granted high-tech companies extensive access to user data.
Personal user data that Facebook gave to business partners–including Amazon, Apple, Microsoft, Netflix, Spotify and Yandex–was actually meant to help users perform certain activities on Facebook that expanded their experience as well as the scope of the platform, said Konstantinos Papamiltiadis, director of developer platforms and programs at Facebook, in a blog post.
“First, people could access their Facebook accounts or specific Facebook features on devices and platforms built by other companies like Apple, Amazon, Blackberry and Yahoo. These are known as integration partners,” he wrote. “Second, people could have more social experiences–like seeing recommendations from their Facebook friends–on other popular apps and websites, like Netflix, The New York Times, Pandora and Spotify.”
Papamiltiadis’ post is a response to a New York Times article published Tuesday outlining how records generated in 2017 show that Facebook made so-called “special arrangements” with these and other partners to expand the scope of the service and what users themselves could access.
In the process, the company made questionable calls about sharing personal information of its 2.2 billion users, using control of that data to promote the platform itself and earn more advertising revenue.
With advertising the primary way Facebook makes money, “it makes business sense for them to provide advertisers with personal data that allows ads to be more targeted and effective,” said Franklyn Jones, CMO of Cequence Security. However, the security of that personal data then becomes an issue if the company isn’t careful, he said.
“It’s also a slippery slope that can quickly lead to personal data finding its way to the Dark Web, then being exploited for both data breaches and automated bot attacks,” he said.
A 2011 agreement with the Federal Trade Commission that prohibited the social network from sharing user data without explicit permission was aimed at protecting personal data from misuse. However, the Times suggests that Facebook may have been in violation of this agreement with its partnership deals, something Papamiltiadis denies in his post.
“We’ve been public about these features and partnerships over the years because we wanted people to actually use them–and many people did,” he wrote. “They were discussed, reviewed, and scrutinized by a wide variety of journalists and privacy advocates.”
Moreover, many of the data-sharing features highlighted by the Times’ article are now gone from the platform, Papamiltiadis added. These included the ability for Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, and for Netflix and Spotify to read Facebook users’ private messages.
One person who isn’t buying the company’s defense is former Facebook Chief Information Security Officer Alex Stamos–who recently left the company over its ties to Cambridge Analytica, a consulting firm that worked on behalf of Republican presidential candidate Donald Trump. Stamos used Twitter to immediately criticize Facebook for its response.
“This isn’t a good response to the NY Times story, because it makes the same mistake of blending all kinds of different integrations and models into a bunch of prose and it is very hard to match up the responses to the times’ claims,” Stamos tweeted.
The heart of the issue is transparency: Stamos contends that Facebook is being vague about how it uses data, shedding little light on specific instances of data-sharing and not actually responding point by point to the paper’s claims.
His comment on the latest debacle reflects Stamos’ long-time stance on data-breach transparency and his clash with colleagues over Facebook’s handling of the Cambridge Analytica situation. Stamos wanted the company to disclose that incident much sooner than it actually did, though he initially defended Facebook’s actions publicly.
In that situation, data obtained from more than 50 million Facebook users was given to behavioral research firm Strategic Communication Laboratories–a clear violation of the social network’s terms of service. Cambridge Analytica is Strategic Communication’s data-analytics arm and was involved in both the U.S. presidential election and the U.K. Brexit referendum.
Data’s power demands transparency
The incident is just the latest in which Facebook has been made to feel the consequences of the sketchy data-sharing policies it has maintained for years. Earlier this year, executives including founder and CEO Mark Zuckerberg and COO Sheryl Sandberg spent hours testifying before Senate committees about privacy issues and use of data on the social network–an often painstaking process, as most lawmakers understand little about how Facebook even works.
The financial penalties also threaten to start adding up. In October, Facebook was fined 500,000 British pounds (about US$630,000) by the Information Commissioner’s Office for its role in the Cambridge Analytica breach. The company also faces potential fines in the billions in Europe under the recently enacted General Data Protection Regulation (GDPR), stemming from a data breach caused by a code vulnerability revealed in October. The GDPR–which went into effect in May–requires businesses operating in Europe to notify authorities within 72 hours of confirming a data breach. Penalties are based on company revenue, which could spell billions in fines for tech giants like Facebook.