More thoughts on brand safety
(Necessary disclaimer: this newsletter represents my own views and not those of the advertising industry at large or my employer.)
Following up on my newsletter from last week, I wanted to get into some other aspects of brand safety I find fascinating. But first, I'd be remiss if I didn't mention that an acquaintance just shared this study from a few months ago, which evaluated the interaction between pre-roll ads (that is, video ads that play immediately before organic video content) and the videos that follow them:
> This study suggests brand safety is not an issue for pre-roll ad effectiveness. Video content had no interference effects on ads seen before program content. A brand’s reputation might suffer negative effects from pre-roll advertising in other ways, however. Journalists could report that the brand has (accidentally) supported extremist groups with pre-roll ad income. **This would be a brand scandal effect via the media rather than an effect on the (few) consumers who saw the ad before the extremist content.**
(Emphasis mine.) In other words, for pre-roll ads at least, a brand's reputation is likelier to be harmed not by the direct judgment of the users who saw the ad, but by indirect effects via media coverage. This corroborates the notion that brand safety is in large part about managing the press and the public, not about avoiding the anger of the specific individuals who happened to be exposed to a brand safety issue.
That said, one brand safety risk I'd neglected to mention in my last newsletter -- my brother pointed it out, and it relates to the above -- is the risk of directly funding offensive content. If a brand's ads appear on, say, Breitbart, then the brand safety risk stems not only from the negative association, but also from the fact that the advertiser has helped monetize hateful content. (Due to the wonders of the modern advertising ecosystem, it's actually quite common for an advertiser to have mediocre-to-low visibility into where their ads are running online. You're welcome.) I would ultimately place this in the same bucket as the media coverage risk, but it's worth calling out separately.
* * *
The main thing I want to get to today, though, is the difficulty of measuring brand safety effectively on the so-called "walled gardens" (e.g. Facebook, Instagram, Snapchat, etc.) as a third-party verification vendor. This is especially timely given the big splash made last week at Cannes by the introduction of the "Global Alliance for Responsible Media," a new consortium of industry heavies who have come together to solve all of our brand safety problems and definitely aren't just announcing something as an excuse to expense a trip to the French Riviera.
But I digress. Brand safety measurement is particularly tricky in walled garden environments. Unlike the open web, where virtually any technology provider can crawl a page and classify its content against broadly accepted brand safety categories, the walled gardens are (true to their moniker) especially reluctant to release content data for third-party analysis.
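(To make the open-web side concrete, here's a deliberately simplistic sketch of the crawl-and-classify step a vendor can run against any public page. The keyword list and labels are made up for illustration; real classifiers are far more sophisticated, but the point is the access pattern -- anyone can fetch the page.)

```python
# Deliberately simplistic sketch of open-web page classification.
# The keyword list is illustrative only; the point is that any vendor
# can fetch and analyze a public page without the publisher's help.
import urllib.request

UNSAFE_KEYWORDS = {"terrorism", "hate speech", "mass shooting"}  # illustrative

def classify_page(url: str) -> str:
    """Fetch a public page and assign a crude brand safety label."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    text = html.lower()
    return "unsafe" if any(kw in text for kw in UNSAFE_KEYWORDS) else "safe"

# classify_page("https://www.example.com/")  # works on any crawlable public URL
```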
The platforms maintain this isolation in a few key ways.
First, they require users to log in before accessing most (or all) principal content on the platform. Anytime you do this on a site -- whether it be a social network or a newspaper paywall -- you're making it much harder for a brand safety vendor to crawl the page and determine whether it's safe or not. Now, in the case of a newspaper paywall, this is probably not a dealbreaker: a news site is likely to welcome a brand safety vendor (especially since the alternative is often to be blacklisted entirely by cautious advertisers looking to avoid Trump-related content), so they can probably work something out (by whitelisting the crawler IPs, granting the vendor a dummy login, or something similar).
This wouldn't work on Facebook, though. The problem with Facebook, which doesn't apply to a news site, is that there is no public-facing identifier for the discrete content surrounding an ad. What, for example, is the persistent, public-facing identifier for a New York Times article? The page URL! As a brand safety vendor, I can store a classification against the URL https://www.nytimes.com/2019/06/25/us/politics/border-funding-vote.html, and from then on any ad that appears on that page will be automatically measured as brand-safe or brand-unsafe depending on the content of the page itself.
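(Here's a minimal sketch of what that looks like in practice -- hypothetical data and function names, but the mechanism is just a lookup keyed by URL:)

```python
# Minimal sketch: on the open web, the page URL is a persistent public key,
# so a vendor can classify it once and score every later impression against it.
# (Hypothetical labels; any real taxonomy is much richer than safe/unsafe.)

page_classifications = {
    "https://www.nytimes.com/2019/06/25/us/politics/border-funding-vote.html": "safe",
    "https://example.com/some-extremist-screed": "unsafe",
}

def measure_impression(page_url: str) -> str:
    """Score an ad impression using only the URL it was served on."""
    return page_classifications.get(page_url, "unclassified")

print(measure_impression(
    "https://www.nytimes.com/2019/06/25/us/politics/border-funding-vote.html"
))  # -> safe
```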
Try running that mental exercise with Facebook. It doesn't work because there's no context permanence. (Here, and in all of the below walled garden examples, I'm referring to standalone ads that appear adjacent to the principal content on that platform -- in Facebook's case, the News Feed. Obviously, Instant Articles ads, in-stream video ads, and other auxiliary products on this and other platforms often work differently.)
Unlike a news article, the context in which an ad appears in the Facebook News Feed is completely personalized and ephemeral: there's a virtually infinite array of combinations of News Feed material that may surround your ad at any time. So it's not enough to know whether individual Facebook posts are safe: you have to know which of those specific pieces of content your ad appeared next to.
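(A rough sketch of what a vendor would need instead: a per-impression record of the specific posts rendered around the ad, with hypothetical field names. Only the platform can actually observe this.)

```python
# Rough sketch (hypothetical field names): with no stable URL to key on,
# measurement would need a per-impression log of the posts that actually
# surrounded the ad -- data only the platform itself can observe.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FeedImpression:
    ad_id: str
    adjacent_post_ids: List[str] = field(default_factory=list)  # different for every user, every refresh

def impression_is_safe(imp: FeedImpression, post_labels: Dict[str, str]) -> bool:
    """Safe only if every post rendered around the ad was classified as safe."""
    return all(post_labels.get(pid) == "safe" for pid in imp.adjacent_post_ids)

# Example: the same ad gets a different, ephemeral context each time it serves.
labels = {"post-1": "safe", "post-2": "unsafe", "post-3": "safe"}
print(impression_is_safe(FeedImpression("ad-123", ["post-1", "post-3"]), labels))  # True
print(impression_is_safe(FeedImpression("ad-123", ["post-2", "post-3"]), labels))  # False
```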
That brings us to the second way walled gardens restrict third-party access: they generally don't provide information to third-party vendors about the context that surrounds ads. This is partly a legitimate privacy constraint: unlike, say, a Bloomberg article, which is meant to be publicly consumed by as many people as possible, a Facebook post (unless it's set to public sharing) carries with it some expectation (however misplaced) of privacy. So Facebook can't exactly swing open the doors to just anyone looking to analyze users' posts. (There are also UX and security concerns around running third-party code inside their extremely popular apps and web sites.)
But it's also a matter of strategic self-protection. Facebook and similar platforms want as much of their inventory as possible to remain monetizable -- that is, safe for ads to appear next to. Third-party brand safety scrutiny could threaten this -- and has, on many occasions -- by alerting marketers to the crazy things appearing next to their ads. (Generally, the advertisers come crawling back eventually. But that may say as much about the market share controlled by Facebook and Google as it does about marketers' concerns.)
So what do you do if you're a brand safety vendor and want to analyze the context surrounding an ad? Therein lies the transparency catch-22: any brand safety measurement solution that purports to analyze, say, Facebook News Feed, Instagram Feed, or Snapchat Stories is necessarily going to have to source much (if not all) of its context data from the platforms themselves.
But how, and from where, are the platforms providing this data? As mentioned above, they're not going to send the raw content from private posts to external parties, which means they would have to process and classify the data before transmitting a "cleaned-up" version externally.
This necessarily implies that they could -- and in fact must, for their own reputations' sake -- conduct some sort of brand safety or contextual analysis themselves. Almost by definition, therefore, if one of these platforms is passing content data to verification companies for brand safety measurement, it can remove (and likely already has removed) all of its self-perceived "unsafe" inventory from the ad market beforehand. (Without naming names, one of the platforms I've spoken with actually told me this is how a brand safety integration with them would work.) Thus, under this methodology, the measurement of unsafe inventory will invariably be at or near zero percent, a situation that provides vanishingly little value to advertisers and agencies, and is simply not believable to boot.
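(A toy illustration of that selection effect, with made-up numbers: if the platform drops everything it already considers unsafe before sharing impression data, the vendor can only ever score what's left.)

```python
# Toy illustration (made-up numbers, not real platform data) of why the
# vendor-measured unsafe rate ends up at or near zero by construction.
platform_labels = ["safe"] * 950 + ["unsafe"] * 50   # the platform's own classifications

# Step 1: the platform removes everything it deems unsafe before the ad market
# (and the verification vendor) ever sees it.
shared_with_vendor = [label for label in platform_labels if label == "safe"]

# Step 2: the vendor measures the unsafe rate on the data it was given.
unsafe_rate = shared_with_vendor.count("unsafe") / len(shared_with_vendor)
print(f"Measured unsafe rate: {unsafe_rate:.1%}")    # -> 0.0%
```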
This is why you see vendors' carefully-worded announcements (I won't link to them here) about brand safety integrations with walled gardens that tiptoe around some of these limitations -- by highlighting, for example, measurement of other ad types that don't face the same constraints.
There are some nuances as you go down the platform list. Below I've compiled a quick-and-dirty (and, I think, mostly accurate? let me know if something is amiss) table illustrating various characteristics of some key walled gardens:
For example, just about all of the platforms require users to log in to view most content. YouTube, however, does not. It's also the only one of the platforms listed below that features context permanence: that is, a public-facing ID (in this case, a page URL or a video ID) will persistently identify a unit of content over time.
In general, you can't use an API to access most or all content on platforms like Facebook, Instagram, LinkedIn, or Snapchat, where posts are frequently private. But on YouTube, Twitter, and Pinterest, you can.
Finally, the definition of ad adjacency itself varies. In an open web context, ad adjacency is nearly always spatial: that is, an ad impression is said to be unsafe if it appeared on the same page as offensive content, i.e. it was spatially adjacent to the bad content. On YouTube and Snapchat, however, adjacency is largely chronological: by definition, a Snapchat Stories ad cannot appear onscreen next to unsafe content because the ad takes up the entire screen. It may, however, appear chronologically before or after something unsafe, as with YouTube ads. (This distinction has not gone unnoticed by advertisers, per AdAge: "Snapchat has also gotten a pass from many advertisers because ads run between videos instead of on them, and look less connected to the content as a result.")
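(To make the distinction concrete, here's a hedged sketch -- hypothetical function names, not any platform's actual logic -- of how an adjacency check differs under the two definitions:)

```python
# Hypothetical sketch contrasting the two adjacency definitions described above.
from typing import List, Optional

def spatially_unsafe(onscreen_labels: List[str]) -> bool:
    """Open web / feed: the impression is unsafe if anything sharing the page
    or screen with the ad is unsafe."""
    return "unsafe" in onscreen_labels

def chronologically_unsafe(prev_label: Optional[str], next_label: Optional[str]) -> bool:
    """Full-screen or in-stream formats (Stories, pre/mid-roll): the ad has the
    screen to itself, so what matters is what ran immediately before or after."""
    return prev_label == "unsafe" or next_label == "unsafe"

# A Stories-style ad is never spatially adjacent to anything...
print(spatially_unsafe([]))                       # -> False
# ...but it can still run right after (or before) something unsafe.
print(chronologically_unsafe("unsafe", "safe"))   # -> True
```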
I'll stop there because this has gotten much too long. To summarize, there isn't yet a clear-cut solution to the problem of how to achieve truly independent brand safety measurement of ads appearing on platforms that feature context impermanence. And given how much of the advertising market has already migrated to these platforms, it's possible that many enterprise marketers have simply calculated there aren't any better alternatives, so the reward is worth the risk.