Size matters
Tech platforms' content policies are a mishmash of hastily drafted reactions to real-world events. This might not matter if the platforms weren't so damn big.
On Tuesday, Substack’s leadership team waltzed into a melee.
That may be a tad overdramatic. But their December 22nd newsletter, “Substack’s view of content moderation,” authored by co-founders Chris Best, Hamish McKenzie, and Jairaj Sethi, vaults their popular newsletter platform directly into the center of some of the most contentious debates around the moderation of user-generated content.
In the piece, the founding trio make the case that Substack differs substantially from advertising-based social platforms like Facebook, which algorithmically surface content to users and whose business models therefore rest on the resulting lowest-common-denominator engagement.
This environment is contrasted (unfavorably, of course) with Substack, where users are in complete control of the content they consume and where Substack’s financial incentives are tied to paid subscriptions: “Our entire business depends on holding writers’ trust, which is exemplified by how easy it is for a writer to leave the platform.”
This leads to their conclusion:
With that in mind, we commit to keeping Substack wide open as a platform, accepting of views from across the political spectrum. We will resist public pressure to suppress voices that loud objectors deem unacceptable…
Ultimately, we think the best content moderators are the people who control the communities on Substack: the writers themselves…
As the meta platform, we cannot presume to understand the particularities of any given community or to know what’s best for it. We think it’s better that the publisher, or a trusted member of that community, sets the tone and maintains the desired standard, and we will continue to build tools to help them to do that. Such an approach allows for more understanding and nuance than moderation via blunt enforcement from a global administrator.
At its core, the thesis advanced by Best, McKenzie, and Sethi lays bare the stark realpolitik at the heart of many tech platforms. Each of them employs a different set of philosophical justifications, but the underlying objective is usually the same: as little moderation as they can get away with.
A more honest argument, albeit a lower-brow one, might be: content moderation is expensive. (Just ask a newspaper! Or Facebook, for that matter.) But while this is a perfectly reasonable case to make to a venture capitalist, there’s little reason for this stance to win the hearts and minds of anyone else.
Indeed, it might help if tech founders betrayed more than a passing familiarity with their own content policies. In a podcast interview with The Verge’s Nilay Patel earlier this month, Chris Best airily described Substack’s content policy as banning “pornography, super illegal stuff. There’s a short list of really tightly construed things.” And who runs this moderation team? “Right now, it’s the founders.”
In another portion of the interview, Patel dug into the eligibility criteria for Substack Defender, a legal support program for writers confronting thorny legal questions or threats based on their pieces. When Best defined eligible candidates as “people that are obviously doing totally legit journalism,” Patel pounced:
I would only push back on you there because we live in the time that we live in, and say that the phrase “obviously legit journalism” is wildly up for debate.
Sure.
So what is your criteria for obviously legit journalism?
I mean, in this case, it’s really we know when we see it, and in practice it’s been, “I’m a local journalist, that’s writing something about the corruption of a local politician or a local business person that’s of clear public interest,” and is well-supported and this kind of thing.
So do you have editors who are looking at the work and evaluating it? Who’s making the decision?
In the earliest iterations of the program, it’s basically some legal counsel that we have, and us, the founders.
This nonchalance does not, to put it kindly, inspire confidence. But it does put Chris Best in very good company. Take a look at Facebook, Twitter, and YouTube. These companies aren’t just user-generated content hubs: they’re a global network of digital town squares.
And yet most of what we see is this trio constantly fighting the last war, careening reactively from one event to the next and instituting revamped policies to address them, often long after the damage has already been done. (In several cases, the platforms’ whipsaw decisions, such as banning Alex Jones en masse or sanctioning far-right provocateurs on the same day, smack of herd instinct more than principle.)
For example, earlier this month Facebook announced it would remove COVID-19 vaccine disinformation, well after hoaxes and conspiracies had amassed a gigantic following on the platform, a surge that coincided with a marked drop in Americans’ stated willingness to take the vaccine once it became available.
Facebook also announced in October that it would remove Holocaust denialism from its platform, two years after Mark Zuckerberg had explicitly called out that content as a category he would not ban. In a brief post accompanying the policy reversal, Zuckerberg cited “data showing an increase in anti-Semitic violence” as an impetus for his about-face.
In September, Facebook instituted a ban on new political ads for the week leading up to the U.S. presidential election, while confirming that new ads could be created immediately afterward. The very next month, it reversed course, announcing an indefinite ban on all U.S. political ads post-election as well.
Twitter, whose content policy I described two years ago as “a mosaic of unenforced guidelines constructed under duress following various bouts of public criticism and whose account-banning threshold seems designed solely to ensure that it could never inadvertently ensnare the president of the United States,” has improved but still earns much of that description.
In October 2019, it announced a loosening of its content policies for tweets from world leaders “if there is a clear public interest value to keeping the Tweet on the service.” In February of this year, it declared that users “may not deceptively share synthetic or manipulated media that are likely to cause harm.” A month later, it expanded the “definition of harm to address content that goes directly against guidance from authoritative sources of global and local public health information.” In May, it said it would “provide additional explanations or clarifications in situations where the risks of harm associated with a Tweet are less severe but where people may still be confused or misled by the content.”
In September, Twitter announced revisions to its Civic Integrity Policy: “Starting next week, we will label or remove false or misleading information intended to undermine public confidence in an election or other civic process.” A month later, it revised the policy yet again: “Starting next week, when people attempt to Retweet one of these Tweets with a misleading information label, they will see a prompt pointing them to credible information about the topic before they are able to amplify it.”
Twitter’s patchwork approach to content moderation was (inadvertently) best summed up in this month’s blog post update to its hateful conduct policy:
In July 2019, we expanded our rules against hateful conduct to include language that dehumanizes others on the basis of religion or caste. In March 2020, we expanded the rule to include language that dehumanizes on the basis of age, disability, or disease. Today, we are further expanding our hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, or national origin.
On YouTube, meanwhile, a study found that more than one-quarter of the most-viewed coronavirus videos by late March were inaccurate or misleading, to the tune of over 62 million views. In May, YouTube published a "COVID-19 medical misinformation policy" outlining a broad range of prohibited video content. By September, Business Insider was reporting that the platform was accidentally deleting channels and videos that debunked coronavirus misinformation.
YouTube’s approach to hate speech and white supremacists hasn’t been much better. In June 2019 — the year after Data & Society called YouTube “the single most important hub by which an extensive network of far-right influencers profit from broadcasting propaganda to young viewers” — YouTube adopted a new set of policies “to tackle hate.” But it wasn’t until over a year later that the video platform finally banned prominent neo-Nazis and white supremacists like David Duke, Richard Spencer, and Stefan Molyneux.
Like Substack, all of these platforms had previously gone to great lengths to defend their hands-off approach as fundamental to their ethos. (“Hands-off” is a content moderation policy all the same, and one that has reliably benefited nefarious actors of a specific ideological cohort.)
But amid mounting public pressure, each of these three has folded to varying degrees, exposing their prior stances as little better than aspiring to be the global town square without having to pay to police it. It remains to be seen whether the same fate awaits Substack. (It should be noted that Reddit, which seems to have noticed these community dysfunctions far earlier than its peers, has also fared better — more recently, at least — in keeping the worst offenders off its platform.)
Each of these three oligopolistic town squares nevertheless remains authoritarian in its own way: who holds the megaphone is determined by an opaque algorithm that narrowcasts to specific individuals via feeds or recommendation engines, rather than by the long-running democratic tussle between the (ever-changing, implicit, but very real) “mainstream” and the fringe actors pushing hard against the stultifying strictures of the Overton window.
The real town square is, in short, our culture. It’s messy and loud and often angry. But we’ve never before confronted a situation where a small coterie (perchance a cabal) of private companies plays such a domineering role in determining who occupies the fringe, who remains acceptably conventional, and — perhaps most importantly — who is exposed to which messages. This makes the tech giants’ rulebooks enormously consequential — and especially maddening when they’re perceived to be revised capriciously and without due process.
For now, Substack isn’t sufficiently large or monopolistic for its content decisions to merit widespread alarm. It’s built on top of an open protocol (email), and both its writers and readers have plenty of alternative options: TinyLetter, Lede, MailChimp, WordPress, Medium, and so on. The reason no one cares which idiosyncratic rules govern the local bookstore’s inventory decisions, but we all scream bloody murder about Amazon’s, is size.
And so this is ultimately a competition problem, not one of censorship or freedom of speech. At the very beginning of this year, pondering Facebook’s terrifying power to shape election narratives, I ventured that:
Part of the public's lingering dissatisfaction with Facebook's every policy move, then, is best understood as a non-verbalized expression of despair at our options: perhaps the problem is not whether Facebook chose A or B, but that it's in a position to make these decisions at all.
The main thing that’s changed between January, when I wrote that, and today is that the government is actually doing something about Facebook’s size. Whether it’s too late to prevent the boundaries of American online discourse from being defined by the whims of social platform executives like Joel Kaplan is an open question. But if Substack is lucky enough to reach a size that inspires similar public anxiety, it too will learn that neither the absence of content policies nor a revolving door of them is a panacea for resolving the conflicts inherent to democratic societies.