(Disclaimer: this newsletter represents my own views only, and not those of the advertising industry at large or my employer.)
What is brand safety?
When people in ad tech talk about it, they generally mean “efforts to keep brand messaging (that is, ads) from appearing next to content that's inappropriate and thus damaging to the brand's preferred image.”1
Why is this damaging, though? What are the main risks of a brand showing an ad next to something inappropriate online?
Based on numerous conversations with brand safety experts over the years, two of the most commonly perceived risks of digital ads showing up next to inappropriate content are:
The end user who sees the ad will be upset at its placement next to potentially unsafe content
Journalists may discover the inappropriate placement and write about it for news publications, damaging the brand’s public image
Both of these perceptions of risk are flawed in ways that have profound implications for the practice of keeping brands as safe as possible online.
Let’s take that first perceived risk. Keep in mind that digital content consumption — much more so than on TV — is entirely self-directed. As a consumer of linear TV, the only control you have over content is the ability to change the channel, and even on cable TV the number of channels is finite.
Online, you have a theoretically almost infinite number of choices about what to look at or do. This limitless buffet means that a user who has visited a website or app almost certainly did so with a specific intent to view the content now on their screen. Which raises the question: why would they be upset that an ad appeared there?
Intuitively, in most cases the Morally Offended User theory doesn’t really add up: a user casting judgment on an advertiser for appearing next to a questionable web page necessarily implies self-judgment too. After all, the only reason he saw the ad appear in an inappropriate context is because he decided to look at that content in the first place. Now, there are still cases where — regardless of any end user’s state of mind — a brand may decide that’s not a place they want to show up anyway.2 But that’s a different argument than claiming that the user himself would perceive the brand negatively for appearing next to content he self-selected.
This is not to say there are no risks to a user’s brand perception based on subpar ad placement. But some of the research routinely bandied about in support of this risk profile doesn’t quite say what its boosters think it does. Even the 2017 CMO Council report “How Brands Annoy Fans,” which was primarily focused on brand safety risks, found that the negative ad experiences cited by the most consumers were “obnoxious or intrusive ads,” “discriminatory or hateful ads,” “irritating or annoying ads,” and “ads that are racist or stereotype people.” That is, the problem was the ads, not the adjacent content.
The press release for a joint 2019 report by the Trustworthy Accountability Group (TAG) and the Brand Safety Institute (BSI) was headlined: “More than 80% of Consumers Say They Would Reduce/Stop Purchases of Products That Advertised Near Extreme or Dangerous Content.” But none of the eight actual survey questions directly asked whether an ad that the user herself viewed next to offensive content would negatively impact her views of the brand. This leaves open the possibility of conflating the first risk (users’ firsthand negative ad experiences) with the second (critical media coverage).
The severity of this first risk falls into further doubt based on the (limited) published research into this area over the past several years.
For example, in 2017 Australian research and analytics agency Nature published its conclusions regarding ads appearing next to undesirable content on YouTube:
Critically, there was no statistically significant difference in results between the three groups, providing evidence that an ad followed by undesirable content performs no less favourably than others. In fact, the ad was just as engaging and had the same brand impact (on positivity and consideration likelihood) for those who saw it when followed by the undesirable video clip, as it was for those who saw the ad on its own, or followed by a more innocuous video. Similarly, we saw no significant differences in brand perceptions between the three groups after exposure…
Our suspicion is that the brands that have pulled their advertising from YouTube have not only done so for ‘in principle’ reasons, but also to avoid potential negative washback on their brand. Our small experiment on this big issue suggests the latter concern is largely unfounded and could be a storm in a teacup.
Similarly, a 2018 study3 of brand safety effects on pre-roll video ads found…none:
There was no brand safety effect on the brand's reputation, measured by brand attitude. These results suggest that the content seen after a pre-roll ad does not interfere with processing that ad, even controversial—violent, sexually arousing, or extremist—content following the ad.
…This study suggests brand safety is not an issue for pre-roll ad effectiveness.
A 2020 Twitter study (see below chart) found that:
…on our platform, adjacency between ads and divisive content has not yet been found to affect brand favorability…
Results showed no evidence of any effect on brand favorability when ads were adjacent to any of the studied categories of divisive content, regardless of the distance of the adjacency.
And in what appears to be a reference to more recent data from this same research project, three months ago Twitter’s head of global brand safety strategy reiterated one of its central findings in an AdAge piece:
Even when brands appeared adjacent to content in the categories of political or sensitive news, EyeSee saw no impact on brand favorability or consideration. [A]nd while this research may go against some conventional wisdom, it is encouraging.
Of course, one cannot simply take the public statements of a platform like Twitter (or research funded by other ad tech giants or news publishers) at face value about the safety of their own user-generated content: they are, after all, financially invested in proving it. Nevertheless, even outside of platform-sponsored research, consumer-facing surveys have revealed user preferences to be almost charmingly inscrutable and idiosyncratic, rather than didactically moralizing: as just one example, an AdColony EMEA report found that more users viewed an advertiser positively than negatively when their ads appeared next to coronavirus-related content.
In short, while consumers generally respond to questionnaires with conventional viewpoints on how advertisers should avoid brand risk, there is vanishingly little hard evidence that these stances necessarily equate to reduced value for the advertisers attempting to reach them.
This leads us to the second perceived risk: the threat of negative news coverage of the inappropriate ad placements. The logic underpinning this one is no less dubious than that of the first risk, although for a different reason. The canonical example — cited in countless articles — of a brand safety breach leading to bad press is when advertisers like Procter & Gamble, Toyota, and Anheuser-Busch inadvertently ran YouTube ads directly in front of ISIS videos.
It’s important to note that this did lead to a flood of negative press coverage — which, in turn, led to major brands temporarily fleeing YouTube. However, much of the negative tenor of media coverage was directed at Google (YouTube), rather than the advertisers.
But that caveat isn’t the main reason to be skeptical about how dangerous this risk is. It’s that the type of brand safety breach most likely to trigger a negative reaction from the end user, or critical news coverage from journalists, is when an ad appears (spatially or chronologically) adjacent to actual unsafe content — just like those ads in front of ISIS videos. This is precisely the type of scenario brand safety technology should be designed to avoid.
The map is not the territory
And yet much of the brand safety conversation today rests on the unspoken assumption that journalistic coverage of an unsafe entity or event — say, a newspaper covering a murder trial (and thus using flagged words like “murder” and “death”) — is equivalent, from a brand risk standpoint, to the underlying event itself: a video of the murder, for example. This verdict owes more to the limitations of current brand safety technology (which far too often relies on simplistic keyword matching) than it does to actual evidence of safety breaches. It’s also a textbook case of the questionable cause fallacy: conflating the symptom for the disease. It reminds me of this classic Donald Trump tweet:
The underlying problem was the large number of people infected with COVID-19. Reducing the number of positive reported cases by testing fewer people would do nothing to alter that.
Brand safety concerns fall prey to this same fallacy (albeit with less catastrophic consequences). For the most part, the risk isn’t that a brand’s image may become associated with news organizations covering crime, or terrorism, or hate speech. Rather, a risk occurs if a brand becomes associated with underlying crime, terrorism, or hate speech itself — like those ads that appeared directly before ISIS videos. But many brand safety technologies treat these two vastly different threat models as if they are interchangeable.
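The conflation can be made concrete with a minimal sketch of keyword-based filtering (the blocklist and sample pages below are invented for illustration, not any vendor's actual implementation). Note that it flags a news report about a murder trial and a page hosting actual violent content identically:

```python
# A minimal sketch of keyword-based brand safety filtering.
# The blocklist and sample pages are hypothetical, chosen to
# illustrate the failure mode, not any vendor's real system.

BLOCKLIST = {"murder", "death", "terrorism", "shooting"}

def is_flagged(page_text: str) -> bool:
    """Flag a page if it contains any blocklisted keyword."""
    words = {w.strip(".,!?\"'").lower() for w in page_text.split()}
    return not BLOCKLIST.isdisjoint(words)

news_report = "Jury hears closing arguments in the murder trial."
unsafe_page = "Graphic video: watch the murder as it happened."

# Keyword matching cannot tell coverage from the thing covered:
print(is_flagged(news_report))  # True
print(is_flagged(unsafe_page))  # True
```

Any system that reduces safety to the presence of words like "murder" will, by construction, assign the symptom and the disease the same score.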
While high-profile mishaps like the YouTube ISIS calamity certainly generate negative press attention, I struggle to recall a single prominent example of a brand getting pilloried for appearing on a high-quality news publisher’s site next to an article about something broadly controversial. (You’re far likelier to see this as a topic of amused discussion in r/adops than you are among ‘normies.’) Even the activist group Sleeping Giants, the scourge of the ad tech industry whose work largely centers on public-shaming brands for advertising on Breitbart.com, does so because Breitbart itself — which famously had a story tag on its site labeled “Black Crime” — engages in hateful rhetoric, not because it reports on it.
Moreover, even when an unsafe ad placement does occur, the financial consequences are often negligible. A 2018 GumGum report on “The New Brand Safety Crisis” acknowledged that just six percent of ad tech industry respondents reported revenue loss of over $10,000 due to a brand safety incident.
Despite all of this evidence, existing brand safety solutions seem to have a limited ability to keep advertisers away from actual unsafe content, even while they routinely block news coverage of it. And the consequences for journalistic organizations are deeply concerning.
This is all precisely backwards.
For a variety of reasons I’ve covered earlier, third-party brand safety products on the “walled garden” social platforms — Facebook, YouTube, Twitter, and so on — have, for most of these platforms’ existence, ranged somewhere between highly limited and nonexistent. This is unfortunate, since user-generated content (UGC) giants like these — contra the highly curated environment of, say, The New York Times — are precisely where actual brand reputational risks are the highest:
Meanwhile, the decreasingly lucrative open web is far easier for brand safety companies to crawl, measure, and potentially block ad placements on. This results in a perverse scenario where high-quality news publishers that invest heavily in covering important social, political, and cultural stories — like the coronavirus, police brutality, women’s health, LGBTQ issues, and so on — are penalized by brand safety vendors, while ad-bloated puff piece content farms4 thrive (so long as they don’t mention the word “Trump”). As Mantis general manager Benjamin Pheloung explained:
“Almost everything about the Black Lives Matter movement is going to fall afoul of some kind of blocking. George Floyd's murder will be caught by segments blocking violence. Concerns about gatherings spreading coronavirus will be blocked by segments looking for Covid-19. Perhaps most perniciously, segments blocking hate speech will also be triggered when discussing racism..."
“News publishers create the densest pages out there. Sites with lists, carousels of images and the like become the default home for open marketplace programmatic spend.”
Again, this is completely backwards. Leaving aside the potential deluge of free PR that could accrue to any brand taking a vocal stance in favor of supporting journalism, it’s also just smart business to do so.
As more and more high-quality news organizations migrate to a subscription-first business model, news is becoming a highly lucrative self-selected advertising audience: unlike the internet’s early years, when most news sites offered some or all content for free online, a nytimes.com reader today is likelier than ever to be paying for the content. This signifies relative affluence (newspaper subscriptions are effectively a luxury good) and high educational attainment, a perfect duo for big-spending brand advertisers interested in associating themselves with journalistic titles favored by the moneyed set:
Indeed, many of these brands are already deep-pocketed benefactors of the newspaper’s lower-class cousin, TV news. It defies reason that top brands like Verizon, BMW, and Wayfair that run ads on CNN — a news channel that once devoted weeks of (often absurd) wall-to-wall coverage to the grim disappearance of Malaysia Airlines Flight 370 and its 239 passengers — should balk at, say, The Washington Post’s careful, meticulous online coverage of the El Chapo trial in Brooklyn.
Or to use an even more extreme example, brands like Pfizer, Procter & Gamble, Amazon, and Kraft Heinz regularly advertise on FOX News, a TV channel that routinely broadcasts disinformation and conspiracy theories on everything from the COVID-19 vaccines to election fraud. In what universe would it make sense to keep these same brands “safe” from coverage of the Black Lives Matter movement in The New York Times?
As Bedir Aydemir, head of audience and data, commercial at News UK, wryly noted:
“The fact is no marketer or agency has got sacked for blocking too many words, but they certainly could if their ad is seen on dodgy content, so they just load on more. It basically means they are blocking ‘news.’ Where on earth do you think your ads are appearing if you are blocking thousands of words? I can only imagine it must be recipe sites.”5
Indeed, one senior advertising executive for a major national news publisher once told me: “Our site is brand safe almost by definition.” And he was basically right. As he then pointed out, there is a yawning chasm between, on the one hand, contextual classification technology designed to ascertain the broad, directional topic of a news article — say, tagging a piece as relating to COVID-19 or basketball — and, on the other, repurposing that methodology to render a far more difficult and subjective decision about its safety for a brand.
The reductionist use of technology that commoditizes a storied institution like The Wall Street Journal into the same (or worse!) risk tier as “sites with lists, carousels of images and the like” is a worrying trend.
Part of the problem here stems from foundational methodological beliefs — canonicalized by industry trade groups — that underpin digital brand safety conventional wisdom. One of them is that brand safety is best measured at the page or content level, rather than as part of a holistic assessment of a site or domain overall.
This is sensible for generalized contextual classification: as a brand, I might want to know what percentage of my impressions ran on articles about healthcare, and I may even be OK with a 5-10% error in either direction. But it’s considerably less obvious how well this works for determining safety, a concept that’s as much about the publisher’s credentials and long-term trust-building as it is about the ephemerality of a single ad impression appearing on an individual page.
Indeed, a 2018 Edelman study on brand messaging asked respondents which attributes they considered most important when deciding whether to trust information they saw on social media:
That list looks a lot like…a newspaper. And it aligns perfectly with other research that reached similar conclusions, like the 2017 CMO Council report which found that “64% [of consumers] respond better to ads delivered from a trusted news site than those that appear on social media, search or fake news channels.”
In other words, it matters less whether an advertiser’s message runs in the sports section or in real estate than whether it appeared in The New York Times versus, say, The Daily Mail. It’s the institutional gravitas of the Times as a brand that lends instant credibility to any marketer’s messaging associated with it. Atomizing this trust down to the article level misses the forest for the trees, and it’s not how risk actually works in the real world.
So, how does risk work?
First, it’s important to recognize that high-quality journalism doesn’t scale (that’s another shameless link to my prior writing on this): there’s not an infinite supply of trustworthy publishers out there to run ads against. When it comes to buying ads online, there’s huge scale, brand-safe inventory, and low cost: pick two.
Second, moving away from a model that equates terrorist content to coverage of terrorism because they share keywords, and towards one that incorporates publisher-level analysis as part of a broader safety solution, will help reorient brand advertisers towards substantive brand safety. The question immediately shifts from “Which news articles are safe to run on?” to “Which sites are (high-quality) news sites?”
That is, we need to start identifying which sites count as news — including, importantly, local news! — and which sites don’t. To do this we may be able to (at least partially) rely on membership lists of news industry trade groups that have ethics and/or standards requirements for joining. Alternatively, the advertising trade group Interactive Advertising Bureau (IAB) launched a project to curate an allow list of news sites (based on IAB Canada’s earlier efforts). The Local Media Consortium hosts the “Local News Advertising Inclusion List” on its site.
Whatever form it eventually takes, and regardless of the dissatisfaction it may cause avowed proponents of relying solely on page-level brand safety tech, lists like these can help stem the tide of journalism demonetization, by replacing the myopic brand safety controls that preceded them. They can even be coupled with measures of quality that analyze attributes, such as page loading time and ad clutter, in order to restrict brand messaging to inventory that boasts the best user experience.
That still leaves the question of what to do everywhere else. Page-level brand safety classification can continue to play a dominant role for the programmatic long tail, where publisher branding is virtually nonexistent, managing allow lists is infeasible, and adhering to brand safety standards is just about the only differentiating factor for much of the content. And page-level filtering can still be useful on high-end publisher sites when used to avoid custom topics specific to a brand’s sensitivities (e.g. an airline wishing to avoid articles about plane crashes). But as a general rule, a hybrid approach — with publisher quality signals dominating the top tier and page-level classification handling the remainder — is promising.
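A hybrid approach along these lines could be sketched roughly as follows, assuming a hypothetical allow list and blocklist (the domains, keywords, and function names are illustrative placeholders, not any actual industry standard):

```python
# Sketch of a hybrid brand safety decision: publisher-level trust
# dominates for known high-quality news domains, with page-level
# keyword checks reserved for the long tail and for brand-specific
# custom topics. All lists here are hypothetical placeholders.

NEWS_ALLOW_LIST = {"nytimes.com", "washingtonpost.com", "wsj.com"}
GENERIC_BLOCKLIST = {"extremist", "graphic violence"}

def allow_placement(domain, page_text, custom_avoid=frozenset()):
    text = page_text.lower()
    # Brand-specific sensitivities apply everywhere (e.g. an airline
    # avoiding articles about plane crashes).
    if any(topic in text for topic in custom_avoid):
        return False
    # Vetted news publishers are trusted at the site level.
    if domain in NEWS_ALLOW_LIST:
        return True
    # Long tail: fall back to page-level keyword filtering.
    return not any(term in text for term in GENERIC_BLOCKLIST)

# A vetted publisher's crime coverage is allowed...
print(allow_placement("nytimes.com", "Coverage of the murder trial"))
# ...unless it hits a brand's custom avoidance topic.
print(allow_placement("nytimes.com", "Investigating the plane crash",
                      custom_avoid={"plane crash"}))
```

The design point is the ordering: publisher identity answers the safety question for vetted sites, and keyword filtering is demoted to the unbranded long tail, rather than being applied uniformly to everything.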
As to the social platforms, major challenges remain. The platform whose early brand safety crises set the stage for the last few years is YouTube, an environment that current brand safety methodologies are almost singularly ill-designed to address. Brand safety technology applied to YouTube, unless it incorporates accurate, sophisticated object- and scene-level recognition that even today’s most vaunted technology struggles to achieve at scale, relies in part on sparsely populated metadata like the video title, description, tags, and (in some cases) the audio transcription.
Solutions built on such foundations are blunt indeed: for example, they will over-block music videos, since the lyrics are often posted in the description, increasing the odds of a brand safety flag. And they will fail to block just about any unsafe videos that don’t loudly proclaim themselves as such in a keyword-friendly way (and why would they?). Just as with news sites, this technology is unequipped to understand the larger controversy around a publisher like PewDiePie, whom brands may justifiably want to avoid entirely even if many of his individual videos lack any offensive content.
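A minimal sketch of metadata-only classification shows exactly this bluntness (the keyword list and video metadata are invented examples, not real platform data): a false positive on a music video whose description contains lyrics, and a false negative on an unsafe video whose metadata says nothing incriminating.

```python
# Sketch of metadata-only video classification. The metadata dicts
# are invented examples; real platform metadata and any vendor's
# actual scoring logic would differ.

KEYWORDS = {"kill", "gun", "blood"}

def flag_video(metadata: dict) -> bool:
    """Flag a video if any keyword appears in its sparse metadata."""
    text = " ".join(
        metadata.get(field, "")
        for field in ("title", "description", "tags")
    ).lower()
    return any(kw in text for kw in KEYWORDS)

music_video = {
    "title": "Official Music Video",
    "description": "Full lyrics: ...load my gun and aim...",
}
unsafe_video = {
    "title": "Compilation #42",
    "description": "Subscribe for more!",
}

print(flag_video(music_video))   # True: lyrics trigger a false positive
print(flag_video(unsafe_video))  # False: nothing keyword-friendly to catch
```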
Worthwhile brand safety measurement and blocking solutions on the social platforms must be highly customizable and likely platform-specific. Ad adjacency on YouTube videos raises few of the privacy concerns that apply to Facebook’s News Feed. Risk detection on text-heavy Twitter will necessarily operate using different models than ones that would work for the image-centric Pinterest. Ads on TikTok and Snapchat conspicuously don’t share screen space with organic content the way ads and editorial do on the pages of any news site. In other words, there’s a lot of work left to do here.
This all may sound like a radical departure from the brand safety zeitgeist. But the evidence in favor of the status quo is worryingly thin, and it’s incumbent on brand safety and verification vendors to follow the data. One need not be a cynic to observe that a cookie-obsessed ad tech industry that spent a decade arguing that environment doesn’t matter — “why pay a $15 CPM on ESPN.com when you can target the same users on the long tail for $1 or $2?” — has abruptly reversed course in the post-GDPR/-CCPA era and now warns of contextual dangers around every corner.
The question for marketers, then, is whether these dangers primarily threaten their brand reputations or, rather, the business models of the ad tech providers selling to them.
This is me shamelessly self-quoting from a prior newsletter about brand safety.
Pornhub is an obvious example.
It should be noted that research for this study was “sponsored by a consortium of companies that includes Google (owned by Alphabet like YouTube) but also Google's competitors, such as television networks and Facebook.”
For more on this phenomenon, read Ryan Barwick’s excellent deep dive into “made for advertising” sites.
Turns out even recipe sites aren’t safe: