Whose fault is it anyway?
If you're wondering why you just received this email, it's because a long time ago you signed up for my newsletter -- "networked" -- and I've just done a terrible job of updating it (I sent my last edition in August 👀). I promise to (almost definitely) write these more often.
A common complaint with news stories is that the headline is misleading and fails to match up to the article text itself. In the case of the latest Wall Street Journal exposé on Facebook ("You Give Apps Sensitive Personal Information. Then They Tell Facebook."), I actually think the reverse is true: the headline accurately points the finger at sites and mobile apps that share intimate, private data with Facebook -- including users' ovulation cycles! -- while the article rather misleadingly suggests this is Facebook's fault. Then again, if you zoom out far enough, it may not be so misleading after all. This, I think, makes the article a perfect microcosm of our society's Facebook dilemma.
The third paragraph states:
The social-media giant collects intensely personal information from many popular smartphone apps just seconds after users enter it, even if the user has no connection to Facebook, according to testing done by The Wall Street Journal.
Well, yeah. But the word "collects" is doing a lot of work here. The Facebook feature in question is called "app events" and is documented on a public-facing developer site: "Facebook App Events allows you to track these events to view analytics, measure ad performance, and build audiences for ad targeting." (Side note: if the phrase "public-facing developer site" sounds a lot like "but it was mentioned in the Terms of Service!" -- well, that's because it is similar. In other words, neither developer documentation nor Terms of Service are particularly reasonable ways to inform the general public of what you're up to.)
In short, app events are remarkably similar to Google Analytics events in that they are a catch-all way for businesses to log their key customer actions and data. As stated in the developer docs, once it's been sent to Facebook, this data can then be used for analytics, ad targeting, or both. Importantly, this product allows businesses to define their own custom events: I could create an app today and send an event to Facebook called "Murderers" every time someone clicks a button. Facebook's own developer docs lean into this flexibility with a colorful naming example of their own.
What data gets sent, and what it's called, is up to the individual app.
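To make that concrete, here's a rough sketch of what this looks like from the app developer's side -- mine, not lifted from Facebook's docs -- assuming the Facebook Android SDK's AppEventsLogger and reusing my hypothetical "Murderers" event from above. It's a handful of lines, and the event is on its way to Facebook:

    import android.content.Context
    import com.facebook.appevents.AppEventsLogger

    // Hypothetical click handler in a third-party app (not Facebook's code).
    fun onSuspiciousButtonClicked(context: Context) {
        // The logger is tied to the app's Facebook App ID, configured elsewhere in the app.
        val logger = AppEventsLogger.newLogger(context)

        // The event name is an arbitrary string chosen entirely by the app developer;
        // the SDK attaches its own app and device identifiers before sending it along.
        logger.logEvent("Murderers")
    }

That same arbitrary string then surfaces on the advertising side, where it can be used to build custom audiences -- which is exactly the "view analytics, measure ad performance, and build audiences" flow the developer docs describe.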
According to Mark Zuckerberg, Facebook's CEO, over 90 million businesses are on Facebook, so in a sense it was only a matter of time before one of them created wildly inappropriate digital audiences that their users wouldn't have approved. Antonio García Martínez, an ex-Facebook product manager and frequent defender of the company, argues that Facebook "is basically a bean counter here," meaning that all it's doing is aggregating a bunch of almost context-less data points with no stake in, or control over, their meaning.
Zuckerberg has recently decried critical press coverage of Facebook as "bullshit," and it's likely he'd feel similarly about this piece, which describes a seemingly gross violation of common-sense data privacy expectations and heavily implies the blame should be placed on Facebook, not the apps sending it data.
And Martínez would concur: "FB was in no way involved with the data collection, nor do they store the data in usable form." But I think he's wrong here, even in a narrowly technical sense: if the FBI were running an investigation on a suspect and requested Facebook records associated with his account, Facebook would almost certainly be able to produce the app events tied to the suspect. (Just like Uber did with customer data in the Jussie Smollett case.) If law enforcement surveillance is not a "usable form" of private data, I'm not sure what is.
In fact, Facebook is better-positioned to tie quasi-anonymized app events data to real-life identities (like names and addresses) than just about any other digital advertising company in the world, precisely because it's a social network: the whole point of the service, from a user's standpoint, is to share (annoying, envy-inducing) details of your personal life (and those of your cat).
Moreover, even if the data were sufficiently anonymized as to be made untraceable to an individual, this is cold comfort to consumers who still have virtually no means of figuring out how much of their private data is being shared, with whom, in what form, how often, and for what use.
This is why I disagree with Antonio's claim (belied by the evidence) that "the hard reality is that most people don't care enough about their data to be even minimally inconvenienced to save it." I think the more accurate take belongs to reporter Kyle Whitmire, who described users' frustration thusly: "The problem at the heart of our apps, our devices and the Internet in general is that there is no way to say 'No, you can’t have my data' and still participate in society."
In other words, in a distressingly complex online data ecosystem bookended by the twin behemoths Facebook and Google, Internet users are faced with the impossible choice of either A) becoming PhD-level experts in the online data economy or B) extracting themselves entirely from the most comprehensive social platform ever created. (And yet even then, as the Journal piece points out, non-Facebook users are still targeted within highly invasive online advertising audiences.) To a lesser extent, the businesses sending this private data to Facebook face an analogous dilemma: they can either keep up with their rivals in an arms race to the bottom by making use of the most cutting-edge and privacy-erasing targeting capabilities, or they can take a principled stand and lose their competitive advantage.
If only there were a way to prevent such a race to the bottom.