"If we see a bad thing in the world"
In the absence of a consensus content framework, ad-hoc self-regulatory regimes have sprouted amidst disinformation and political violence. But just who are the referees?
In August 2019, Cloudflare, a web infrastructure firm that protects sites from cyberattacks, effectively took the notorious imageboard 8chan offline after a shooting in El Paso, Texas, that was inspired by posts on the site. In a thoughtful blog post accompanying the move, Cloudflare CEO Matthew Prince wrestled openly with the scope and arbitrariness of his company’s decision:
We continue to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often…
Cloudflare's mission is to help build a better Internet. At some level firing 8chan as a customer is easy. They are uniquely lawless and that lawlessness has contributed to multiple horrific tragedies. Enough is enough.
What's hard is defining the policy that we can enforce transparently and consistently going forward. We, and other technology companies like us that enable the great parts of the Internet, have an obligation to help propose solutions to deal with the parts we're not proud of. That's our obligation and we're committed to it.
In a New York Times interview, Prince elaborated further:
Removing 8chan was not a straightforward decision, Mr. Prince said, in part because Cloudflare does not host or promote any of the site’s content. Most people would agree, he said, that a newspaper publisher should be responsible for the stories in the paper. But what about the person who operates the printing press, or the ink supplier? Should that person be responsible, too?
“It’s dangerous for infrastructure companies to be making what are editorial decisions,” he said. “The deeper you get into the technology stack, the harder it becomes to make those decisions.”
Ultimately, Mr. Prince said, he decided that 8chan was too centrally organized around hate, and more willing to ignore laws against violent incitement in order to avoid moderating its platform. The realization, along with the multiple mass murders that the authorities have connected to 8chan, tipped the scale in favor of a ban.
“If we see a bad thing in the world and we can help get in front of it, we have some obligation to do that,” he said.
I’ve been thinking about this episode over the past four days, ever since a deranged mob of Donald Trump fans and conspiracy theorists stormed into the U.S. Capitol building in a riot that left five people dead and resulted in a host of rapid content policy decisions by major tech companies.
Cloudflare feels like it belongs to a corporate category that’s far removed from the content moderation debates we’ve litigated incessantly over the past five-ish years. Those arguments have primarily centered on well-known consumer-facing platforms like Facebook, YouTube, and Twitter. Cloudflare is a B2B company that’s effectively the digital equivalent of core infrastructure. Who knew it had a content moderation policy?
But that’s just the thing: it basically doesn’t. This is what Prince was getting at in his blog post and Times interview. “If we see a bad thing in the world” is not much of a ruleset, or at least not a confidence-inspiring one.
But it wasn’t much different from what we saw from the tech platforms in the wake of Wednesday’s madness in Washington, D.C. Twitter first suspended Trump’s account for 12 hours, then banned him permanently. (In a fittingly parochial coda to his small-bore presidency, Trump spent much of Friday evening playing a farcical game of whack-a-mole with Twitter’s content team, successively attempting to tweet from @POTUS, @TeamTrump, and other accounts, only for Twitter to shut down each one just as quickly.)
Mark Zuckerberg announced that Trump’s Facebook and Instagram accounts would be blocked “indefinitely and for at least the next two weeks” (whatever that means).
YouTube removed a video Trump recorded after the riot in which he made false claims about the election, but let his account — whose featured video is a 46-minute diatribe in which he repeatedly makes false claims about the election — stay on the platform.
Twitch deactivated Trump’s channel until at least the inauguration. Discord banned a well-known server populated by hardcore Trump fans. Reddit banned the subreddit r/DonaldTrump. Snapchat blocked Trump’s account. Apple initially told Parler (Twitter for conservatives with bad French pronunciation) it had 24 hours to come up with an acceptable moderation policy, then — when that didn’t happen — banned its app (currently the top-ranked one) from its App Store. And Google beat them to the punch, suspending Parler from the Play Store outright.
But enough about the specifics: god knows I’ve written at length about the social platforms’ content decisions. The general story is that there was a flurry of content moderation decisions over the past several days. (As Evelyn Douek aptly summarized, “these were more editorial and business decisions taken under fire than the neutral application of prior guidelines.”) The activity and the decisions broke new ground, sure, but the ones I mentioned above took place within the familiar, relatively well-defined, and constrained arena of consumer tech platforms.
Ripple effects
What was more interesting, though, was the fact that the shocking Capitol riot broke the content moderation universe wide open. Even e-commerce got into the mix: Shopify banned Trump’s online store. PayPal turned off an account “raising funds for Trump supporters who traveled to Washington, DC.” Payment processor Tipalti essentially strong-armed streaming site DLive into suspending channels linked to the mob. Email marketer Campaign Monitor suspended Trump’s account.
An Amazon employee activist group called for Amazon Web Services, the company’s cloud computing arm, to fire Parler as a client.
Which it then did. AWS’s Trust & Safety Team explained its reasoning in an email to Parler:
Recently, we’ve seen a steady increase in this violent content on your website, all of which violates our terms. It’s clear that Parler does not have an effective process to comply with the AWS terms of service. It also seems that Parler is still trying to determine its position on content moderation. You remove some violent content when contacted by us or others, but not always with urgency. Your CEO recently stated publicly that he doesn’t “feel responsible for any of this, and neither should the platform.” This morning, you shared that you have a plan to more proactively moderate violent content, but plan to do so manually with volunteers. It’s our view that this nascent plan to use volunteers to promptly identify and remove dangerous content will not work in light of the rapidly growing number of violent posts. This is further demonstrated by the fact that you still have not taken down much of the content that we’ve sent you.
(If your first thought was, “wait, is this an AWS email to Parler or every tech journalist’s emails to Facebook?” you’re not alone.)
Matthew Prince wondered in 2019: “But what about the person who operates the printing press, or the ink supplier? Should that person be responsible, too?”
With Shopify, PayPal, Tipalti, Campaign Monitor, and AWS taking punitive actions against Trump-aligned actors, it’s clear the answer to that question is increasingly yes. Content moderation has expanded into the infrastructure stack. (As Jillian York of the Electronic Frontier Foundation has observed, these companies have actually kicked out a wide array of clients before, but coverage of these actions pales in comparison to the press’s obsessive quest to chronicle every perceived slight suffered by white conservatives.)
With both B2C social platforms and B2B infrastructure companies joining efforts to stop the spread of violence and disinformation, it’s an open question how far into the “stack” the chain of responsibility might grow.
Accountability leaps from the tech stack to distribution and financing
Recent clues point to a potentially broad expansion. CNN’s “Reliable Sources” newsletter reported this week on a new angle (emphasis in original):
"Fox and Newsmax, both delivered to my home by your company, are complicit," NJ state Assemblyman Paul Moriarty texted a Comcast executive on Thursday. "What are you going to do???"
"You feed this garbage, lies and all," Moriarty added to the executive, according to a screen grab of the texts he provided me. Moriarty was referring to the fact that Comcast's cable brand, Xfinity, provides a platform to right-wing cable networks that have for weeks been disseminating disinformation about the November election results to audiences of millions.
Moriarty has a point. We regularly discuss what the Big Tech companies have done to poison the public conversation by providing large platforms to bad-faith actors who lie, mislead, and promote conspiracy theories. But what about TV companies that provide platforms to networks such as Newsmax, One America News -- and, yes, Fox News?
Somehow, these companies have escaped scrutiny and entirely dodged this conversation. That should not be the case anymore. After Wednesday's incident of domestic terrorism on Capitol Hill, it is time TV carriers face questions for lending their platforms to dishonest companies that profit off of disinformation and conspiracy theories.
Even further afield, a slew of recent journalism (some of which was published prior to Wednesday’s riot) has raised the specter of holding companies publicly responsible for their alliance with conservative groups and causes, especially as large swathes of the institutional Republican Party are populated by extremists and conspiracy theorists.
In the wake of the insurrection, The New York Times’ business columnist Andrew Ross Sorkin penned a piece titled “What About the Enablers?” that asked:
Yet in this moment when our democracy is under siege, important questions must be asked about business leaders who enabled Mr. Trump and, in turn, share some degree of responsibility for the disgraceful acts that took place in Washington yesterday.
And there were many enablers — educated, smart, articulate, often wealthy people who were willing to ignore Mr. Trump’s threat to democracy in the name of economic growth, lower taxes, lighter regulations, or simply access and proximity to power.
At a time when business leaders tout their “values” and “social responsibility,” how should those enablers — and the institutions they run — be considered after all this?
In a follow-up piece published yesterday, Sorkin interviewed former Goldman Sachs CEO Lloyd Blankfein, a prominent Trump critic, who pulled no punches in describing the motivations of his fellow titans of high finance:
So people did know what they were doing. They did it because of their self-interest. Think of another historical example: How about those plutocrats in early 1930s Germany who liked the fact that Hitler was rearming and industrializing, spending money and getting them out of recession and driving the economy forward through his stimulus spending on war material? I don’t want to go too far with that, but just to show you how I’m thinking about it.
So, yes, they supported him. And I think that support is not undone by some one-minute-to-midnight deathbed confession that, “Oh my God, this was too much for me.”
This message was perhaps lost on Trump megadonor Dan Eberhart, who (extremely belatedly) told the Financial Times: “I’m done. I don’t want my mom to think I’m involved with this.” (Hi, Dan’s Mom. He very much was.) These “one-minute-to-midnight” about-faces are widespread, FT reported:
Companies were realising that they needed “to stop supporting those who enabled the slow and steady rot” in US democracy, said Aron Cramer, chief of BSR, a group which advises companies on their social responsibilities. That, he added, would mean cutting campaign contributions to those such as Republican senators Ted Cruz and Josh Hawley who encouraged what the Business Roundtable called “the fiction of a fraudulent 2020 presidential election”…
In a straw poll of 33 chief executives this week, Yale School of Management professor Jeffrey Sonnenfeld found unanimous support for the idea that companies should warn their lobbyists that they would no longer fund “election result deniers”. “There’s not a major chief executive who’s a Trump supporter now,” Prof Sonnenfeld declared.
(The political newsletter “Popular Info” found that “over the last six years, 20 prominent corporations — including AT&T, Comcast, Deloitte, Amazon, Microsoft, and Pfizer — have donated at least $16 million to the Republican lawmakers seeking to undermine the democratic process.”)
Even before the riot, (#resistance heroes / endless grifters) The Lincoln Project produced a new ad calling out AT&T, Citibank, UPS, and Charles Schwab by name for funding politicians like Josh Hawley, Ted Cruz, Rand Paul, and Tom Cotton who lied about election fraud or irregularities. The ad concludes: “Is this what you support, corporate America? Are these your values?”
This is the same question posed by The New Republic’s Osita Nwanevu in a piece published the day of the riot:
Well, from here on out, companies that profess a respect for the essential tenets of our democracy should have to answer for donations to the Republican Party and its associated groups and institutions. So, too, should companies that mashed out pro forma statements in support of Black Lives Matter last year. Coca-Cola, for example, whose CEO declared it would be putting its “resources and energy toward helping end the cycle of systemic racism” in June should probably stop putting its resources toward the reelection of Republican state legislators—including in states like Georgia where it is based and where the Republican Party has been particularly dogged in its efforts to disenfranchise African Americans. If they don’t, people who’d like those efforts to end should probably stop giving Coca-Cola their money or labor.
The business publication Forbes took things a step further on Thursday. Its chief content officer, Randall Lane, fired a preemptive shot across the bow, warning corporations and media outlets not to hire any of the Trump administration’s most prominent mouthpieces, like Kellyanne Conway and Sean Spicer:
Let it be known to the business world: Hire any of Trump’s fellow fabulists above, and Forbes will assume that everything your company or firm talks about is a lie. We’re going to scrutinize, double-check, investigate with the same skepticism we’d approach a Trump tweet. Want to ensure the world’s biggest business media brand approaches you as a potential funnel of disinformation? Then hire away.
If you have to ask if you’re complicit, you probably are
I include all of these examples because the impact of Wednesday’s mob action seems to have broken through an implicit “accountability barrier” in a way that many, many past calls for moderation and responsibility didn’t. The field of potential content arbiters is dynamic and up for grabs and, more importantly, seems to be much larger now (again, with all of Jillian York’s caveats that this moderation was happening “elsewhere” already).
For instance, most people — prior to this week, at least — would agree that an ISP should bear no responsibility for the content that passes through its pipes, unlike, say, The New York Times, which is presumed to control virtually all of the content that appears across its numerous platforms, including, to some degree, even its comment sections (and certainly its editorial pages).
But another more basic question might be: if an ISP shouldn’t be responsible for third-party content, but Facebook and YouTube and Twitter should, what about all the other intermediaries?
All of the hateful speech and vaccine hoaxes and election-related disinformation posted on Facebook, YouTube, Twitter, Parler, and other user-generated content sites first had to travel through broadband infrastructure owned by companies like Comcast, Spectrum, and Verizon. A DNS provider had to map the domain to an IP address. The site’s images, JavaScript, and CSS may well have been served by a CDN. The server might have been hosted by a cloud provider like AWS. If there was payment processing, it was handled by a company like Stripe. If it was a mobile app, it had to be approved by Google or Apple. If it was a website, a browser had to render the content on the screen. (Content moderation in the browser, you ask? It might be a bad idea, but there’s no question it’s technically possible — unlike, say, ISP-level moderation of encrypted packets.) There are, it turns out, plenty of intermediaries between content producers and consumers. (Even Slack. Maybe.) Any one of them could, at least in theory, establish a content moderation policy.
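To make that “plenty of intermediaries” point concrete, here is a minimal sketch, written by me for illustration and using only Python’s standard library, that surfaces two of those layers for a single page load: the DNS provider that maps a domain to an IP address, and the response headers that often name the CDN or cloud host sitting in front of the origin server. The domain is a placeholder, not any site discussed above.

```python
# A minimal sketch of how many hands touch a single page load.
# Uses only the Python standard library; the domain is a stand-in,
# not any site discussed above.
import socket
import urllib.request

DOMAIN = "example.com"  # hypothetical placeholder

# 1. Before any content moves, a DNS provider has to map the domain to an
#    IP address: one intermediary that could, in principle, refuse service.
addresses = {info[4][0] for info in socket.getaddrinfo(DOMAIN, 443)}
print(f"{DOMAIN} resolves to: {sorted(addresses)}")

# 2. The HTTP response headers often name the CDN or cloud host sitting in
#    front of the origin server: another intermediary in the chain.
with urllib.request.urlopen(f"https://{DOMAIN}", timeout=10) as response:
    for header in ("Server", "Via", "X-Cache", "CF-Ray"):
        value = response.headers.get(header)
        if value:
            print(f"{header}: {value}")
```

None of this is how moderation actually happens today, of course; the point is simply that every one of these layers is a chokepoint where a policy could, in principle, be enforced.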
As the events of this past week make clear, where the line falls isn’t simply an academic thought exercise. There are an infinite number of ways to skin this cat. One possible framework for assigning content moderation responsibility could turn on whether a firm’s users comprise a “community” of some sort. This would largely exclude B2B services like cloud providers and payment processors, and even some B2C companies like Internet service providers. (Perhaps an exception can be made for Comcast users, who do form a large and aggrieved community of dangerously irate people.) The consumer-facing social platforms, meanwhile, would remain on the hook. The usual caveats would still apply: any company dealing in financial transactions, for example, is also legally restricted from facilitating certain types of illegal commerce.
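For what it’s worth, the “community” test is simple enough to state as a toy rule. The sketch below is nothing more than my restatement of the paragraph above (the firm categories and their classifications come straight from it), not a framework anyone has formalized.

```python
# A toy restatement of the "community" test described above: firms whose
# users form a community of some sort stay on the hook for moderation;
# B2B plumbing and pure pipes largely do not. Categories mirror the prose.
FORMS_A_COMMUNITY = {
    "consumer social platform": True,    # Facebook, YouTube, Twitter, Parler
    "cloud provider": False,             # AWS and similar B2B services
    "payment processor": False,          # Stripe, PayPal (still bound by financial law)
    "internet service provider": False,  # Comcast, Spectrum, Verizon
}

def expected_to_moderate(firm_type: str) -> bool:
    """Under this simplistic framework, is the firm responsible for moderation?"""
    return FORMS_A_COMMUNITY.get(firm_type, False)

for firm_type in FORMS_A_COMMUNITY:
    status = "on the hook" if expected_to_moderate(firm_type) else "exempt"
    print(f"{firm_type}: {status}")
```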
But a likelier scenario, and the one that appears to be emerging after Wednesday’s disturbing spectacle, is simply a larger, more chaotic, but still very much ad-hoc amalgamation of disparate content policies forged in the wake of increasingly horrifying behavior.