Why DSPs need to open up, or shut up
I'll come right out and say it: Many demand-side platforms (DSPs) and ad networks that claim to use "sophisticated algorithmic optimization" to determine the value of impressions actually do no such thing. Until now, they've been able to throw around these fancy words in their marketing presentations while hiding behind the opaque veil of so-called "black-box IP," when in fact the box was empty. Hopefully, that's about to change.
To be clear, what I mean by "sophisticated algorithmic optimization" is a robust machine-learning modeling approach that predicts the value of each impression based on the myriad media and user variables associated with it. "Always bid $X if an impression meets certain targeting criteria" is not an algorithm, and neither is a human exporting data to an Excel spreadsheet to sort performance by a few variables.
It's easy to understand that bidding the right price for a real-time bidding (RTB) impression is of critical importance. Bid too high, and you risk paying too much, failing to meet the advertiser's performance goals. Bid too low, and you risk not winning the inventory at all, under-pacing against the advertiser's spending goals. But it may be less obvious why it's so important to implement an algorithmic machine-learning approach. Consider the following simple formula:
CPM bid = Goal Value x Action Rate x 1,000
It says the cost per thousand (CPM) you should bid to achieve the desired campaign goal is equal to the advertiser's stated goal value multiplied by the predicted "action rate." If the advertiser's goal is engagement-oriented, such as a cost-per-click (CPC) goal, the action rate is a click-through rate; if the goal is direct-response oriented, such as a cost-per-action (CPA) goal, the action rate is a response rate. The objective of a machine-learning algorithm is to accurately predict the action rate for every impression, based on all the associated data.
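To make that arithmetic concrete, here's a minimal sketch in Python. The goal values and predicted rates below are hypothetical numbers chosen purely for illustration:

```python
def cpm_bid(goal_value: float, predicted_action_rate: float) -> float:
    """CPM bid = goal value x predicted action rate x 1,000."""
    return goal_value * predicted_action_rate * 1000.0

# A $2.00 CPA goal with a predicted response rate of 0.1% (1 action per 1,000 impressions):
print(cpm_bid(2.00, 0.001))  # -> 2.0, i.e., bid a $2.00 CPM

# A $0.50 CPC goal with a predicted click-through rate of 0.4%:
print(cpm_bid(0.50, 0.004))  # -> 2.0, the same $2.00 CPM
```

But here's the challenge: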
- The data is enormous — There are now tens of billions of RTB impressions available daily. Each one carries dozens, if not hundreds, of non-personally identifiable data points describing both the media (e.g., publisher, content category, ad size, time of day, day of week) and the user (including first-party advertiser data, third-party data from various data providers, and non-cookie-based IP data). Moreover, each of those variables can take on many different values. For example, "designated market area (DMA)" and "site" are two variables that come with each impression; there are hundreds of possible DMAs and hundreds of thousands of possible sites. On top of that, the variables aren't consistent across exchanges (e.g., one exchange may pass content category data for its sites, while another may not). Only an algorithmic approach can digest the sheer amount of data in question and properly "normalize" it across different RTB sources; the first sketch after this list shows one common way to keep such high-cardinality data tractable.
- The data is dynamic — If you've ever purchased RTB inventory to achieve a campaign objective, you know (frustratingly so) that what works one day doesn't necessarily work the next. And that's not just due to fluctuating consumer behavior, week-to-week variations in product sales, and factors like creative burnout. It's also due to the highly variable interaction of supply and demand on the exchanges. For example, RTB supply moves into and out of the exchanges inversely with publishers' guaranteed sales. And don't forget about competing demand: The same impressions may be valued very differently by a tax advertiser on April 10 vs. May 10. The variation in each advertiser's demand affects all bidders on the exchanges. This ever-changing environment means you don't just need to come up with an answer once, but every day, for every advertiser, and for every creative that advertiser may be running. Only an algorithmic approach can do that at this scale.
- The data is non-linear — In plain English, that means you can't just look at individual variables in isolation; you need to look at how they interact. For example, one might conclude from a simple spreadsheet analysis that "Mondays are good for this campaign" in general, when in fact Mondays are bad for that campaign on certain publishers. But those publishers do perform well on Mondays from 1-3 p.m. But not in certain ad sizes. And so on. Failing to account for interactions between variables is often why buyers find it hard to reproduce results: they're looking at a one-dimensional view of the data and missing the critical relationships between variables. No human, no matter how bright or how much time they have, can begin to scratch the surface of the mathematical complexity of all these inter-relationships; the second sketch after this list shows how interaction ("cross") features capture exactly these effects.
- The data is real-time — The required round-trip time between an RTB bid request going out and a bid response being received is typically about 0.05 to 0.1 seconds. If you don't respond within that interval, you "time out" and lose the impression. So, needless to say, the algorithm in question has to be fast, which has implications for both the modeling and the coding approach. That in turn speaks to the underlying bidding infrastructure on which the algorithm sits: the queries per second (QPS) that the DSP technology can support, and the timeout rate that measures how often it fails to respond within the necessary window. It's not just the algorithm that has to be fast, but the entire system within which it resides (see the last sketch after this list).
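On the "enormous" point, one widely used way to keep hundreds of thousands of categorical values tractable is the hashing trick: every key=value pair is mapped into a fixed number of buckets, so the model's size stays bounded and exchange-to-exchange differences in available fields are absorbed naturally. The sketch below is illustrative only; the field names and bucket count are assumptions, not any particular DSP's implementation.

```python
import zlib

NUM_BUCKETS = 2 ** 20  # model size stays fixed no matter how many sites or DMAs exist

def hash_features(impression: dict) -> list[int]:
    """Map categorical key=value pairs to bucket indices (the "hashing trick").

    Missing fields are simply skipped, which tolerates exchanges that pass
    different variables (e.g., one sends content category, another doesn't).
    """
    indices = []
    for key, value in impression.items():
        if value is None:  # this exchange doesn't pass the field
            continue
        indices.append(zlib.crc32(f"{key}={value}".encode()) % NUM_BUCKETS)
    return sorted(indices)

# Two exchanges describing impressions differently (hypothetical fields):
print(hash_features({"site": "example.com", "dma": "501", "category": "news"}))
print(hash_features({"site": "other.example", "dma": "803", "category": None}))
```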
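On the "non-linear" point, a standard remedy within that same representation is to add interaction ("cross") features, conjunctions such as day_of_week x site, so even a simple model can learn that Mondays help on some publishers and hurt on others. Again a sketch with made-up field names, not a production recipe:

```python
import zlib
from itertools import combinations

NUM_BUCKETS = 2 ** 20

def cross_features(impression: dict, max_order: int = 2) -> list[int]:
    """Hash single variables plus their pairwise (and higher) conjunctions.

    A per-variable view can only say "Mondays are good"; the conjunction
    day_of_week=mon & site=... can say on which publishers that holds.
    """
    tokens = sorted(f"{k}={v}" for k, v in impression.items() if v is not None)
    indices = [zlib.crc32(t.encode()) % NUM_BUCKETS for t in tokens]
    for order in range(2, max_order + 1):
        for combo in combinations(tokens, order):
            indices.append(zlib.crc32("&".join(combo).encode()) % NUM_BUCKETS)
    return indices

imp = {"day_of_week": "mon", "site": "example.com", "hour": "14", "size": "300x250"}
print(len(cross_features(imp)))  # 4 single features + 6 pairwise crosses = 10
```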
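And on the "real-time" point: because a late answer is worth exactly nothing, a bidder has to budget its work against the deadline and prefer a fast no-bid to a timeout. A toy illustration follows; the 80 ms budget, the headroom reserve, and the stub model are all assumptions:

```python
import time

TIMEOUT_BUDGET_MS = 80.0  # round trip is roughly 50-100 ms; the rest is network

def predict_action_rate(features: list[str]) -> float:
    # Stub for a real model. In production this would be a fast lookup over
    # precomputed (e.g., hashed-feature) weights, not arbitrary computation.
    return 0.001

def handle_bid_request(request: dict, goal_value: float):
    """Return a CPM bid, or None (no-bid) if the deadline can't be met."""
    start = time.monotonic()
    features = [f"{k}={v}" for k, v in request.items()]  # cheap extraction
    elapsed_ms = (time.monotonic() - start) * 1000.0
    if TIMEOUT_BUDGET_MS - elapsed_ms < 20.0:  # keep headroom for the response path
        return None  # a fast no-bid beats a timed-out bid
    return goal_value * predict_action_rate(features) * 1000.0

print(handle_bid_request({"site": "example.com", "dma": "501"}, goal_value=2.00))
```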
The above challenges make building an algorithm for RTB a uniquely complex problem. In fact, I would argue there are no "off the shelf" techniques well-suited to the particular needs of RTB. Not only do you need a custom algorithm that can handle billions of daily impressions, with possibly hundreds of variables per impression and thousands of values per variable, but that algorithm must also normalize across disparate data sources, dynamically adjust to new, rapidly varying data on a daily basis, and account for non-linear relationships between variables. Above all, your RTB algorithm needs to provide transparent insight into what it's doing.