There ain't no rules in platform land
Just about every social platform has been heavily criticized over the past several months for their approaches to dealing with hateful, false, and derogatory speech online. (Perhaps Pinterest alone has emerged unscathed.) The contours of the debate can be roughly characterized as follows: from one angle, the social platforms stand for untrammeled and unencumbered free speech and they recoil at the prospect (from both an ethical and financial standpoint) of having to moderate what's acceptable behavior on their platforms.
Their critics, meanwhile, argue that the social platforms are already editors: they have built inscrutable algorithms that determine who sees what, their recommendation engines send vulnerable users down rabbit holes of radicalization and deep into echo chambers, and furthermore they already ban -- and have done so, for years -- certain kinds of objectionable content (such as child pornography) from their platforms, all of which suggests that a posture of nuanced moderation is in fact possible. Perhaps most importantly, the critics point out, the First Amendment protects against censorship by the government, not banishment by private-sector companies.
Even so, my knee-jerk reaction was to sympathize with the platforms, whose entire business model to date has been predicated on the notion that they are indeed platforms, not publishers or media companies, and whose continued allure to shareholders rests in part on the premise that large-scale moderation is neither their responsibility nor in their users' best interest.
But the past few months have made me wobbly, and the past few days have only made that wobble worse.
There is an internally consistent logic to the absolutist argument Facebook and others were making. Or, there was, right up until the moment last Sunday when Apple pulled Alex Jones' podcasts from their app and several social platforms immediately followed suit by dumping some (not all!) of Alex Jones' content from their services, brushing aside months of carefully parsed language around freedom of expression.
Oddly, only Twitter -- whose content policy is a mosaic of unenforced guidelines constructed under duress following various bouts of public criticism and whose account-banning threshold seems designed solely to ensure that it could never inadvertently ensnare the president of the United States -- refused to play along this time. CEO Jack Dorsey left Alex Jones' Twitter presence largely intact and sniped at his rivals for "taking one-off actions to make [them] feel good in the short term."
This was at once surprising and not: surprising because it was a rare nod to first principles from a company historically devoid of them, and unsurprising because, as usual, they got the facts wrong. (Dorsey's unequivocal declaration that Alex Jones had not violated Twitter's policies was comically, and immediately, proved false, eliciting an equally predictable comms disaster. Fortunately for Twitter, no one takes their comms seriously anymore, least of all their former comms chief.)
But while the critics are probably right to argue, as Kevin Roose did, that "slippery-slope fears about mass censorship by social media platforms are probably overblown," this doesn't make those fears totally illegitimate either. Dorsey was right to emphasize the primacy of rules in decision-making: indeed, the lack of clearly stated and understood rules at Facebook, YouTube, Twitter, and elsewhere has contributed greatly to the present confusion as to what content is acceptable to post on their platforms.
The slipshod manner in which Facebook and YouTube began "de-platforming" Alex Jones increases the long-term risk of bad outcomes envisioned by slippery-slope pessimists, even if the short-term impact -- banning a mentally unstable grifter -- clearly benefits everyone but Alex Jones. This is because of why they banned him: not due to anything new he said, but because Apple moved first. It was, in other words, an act of enormous corporate cowardice motivated primarily by political pressure. This is exactly the type of decision structure you do not want huge companies making.
Indeed, even all of the above would not a crisis make if Facebook and YouTube were small potatoes in a competitive digital attention economy. But they're not. Facebook boasts over 2.2 billion users per month, and YouTube has nearly as many. Ninety-four percent of Americans between the ages of 18 and 24 self-report as YouTube users, and over a billion hours of its videos are watched every day. As of last year, two-thirds of U.S. adults consumed news from social media sites (although this figure appears to have since declined), and nearly half reported getting news from Facebook specifically.
Given this scale, any individual policy decision by any one of these market-dominating companies is bound to have significant ripple effects. (Facebook, for example, has singlehandedly decimated or financially ruined multiple companies via policy diktat, with nary a batted eye.) But even more disruptive than one quasi-monopolist making an abrupt policy decision on a whim is multiple ones doing so simultaneously.
If Apple, Facebook, Google, and Twitter were to follow through on their halfhearted Alex Jones ban, the vast majority of his audience would effectively disappear overnight. We can lament the legal and regulatory failures that allowed these platforms to amass such unprecedented control over our public dialogue, but as of right now, this is the world we have.
It's undeniable that banning Alex Jones is the ultimate victimless crime. But we may someday find that the guardrails preventing banishment of a less detestable character are flimsier than we'd imagined. This is not a wholesale argument against banning unsavory characters from tech platforms, but it is a reason to hope their future policy decisions demonstrate more consistency than the ones that ousted Alex Jones.