6 ways to put DSPs to the test
If you're a marketer or an agency that controls display dollars, you've likely been choking on the vapor that has filled the demand-side platform (DSP) space over the last few years. It's not just the sheer number of entrants claiming to be DSPs (be they pure plays, repositioned networks, or growth-minded media companies), but the cacophony of claims that each is the "first," "best," or "only" DSP that does this or that.
In truth, most of these contenders sound great in the first meeting. Their slides use all the right buzzwords, like "audience," "optimization," "real-time," and "insights." But in many cases, they are writing PowerPoint checks that their technology can't cash (that is, if they really have any technology at all).
So what happens if you can't separate the reality from the hype? (And who could?) Those who use DSPs the way they use networks, i.e., as outsourced media execution, simply conclude either that the DSP performed (ideally better than any other "line item" on the media plan) and keep sending it a monthly IO, or that it didn't perform and move on to other outlets for the budget. And that's fine for some.
But many advertisers and agencies evaluating a DSP want to make digital media execution an internal core competency, not outsource it to a network-like entity. For them, choosing a DSP means choosing a flexible and robust technology platform on which to build that competency and turn it into a competitive advantage. If you're in this category, then finding out your DSP doesn't deliver as promised — after months of internal and external DSP cheerleading, contract redlines, team training, and technological integration — can be catastrophic.
So how do you separate fact from fiction? You ignore what they say and put them to the test — against each other. The notion of DSP "bake-offs" isn't new, but it does bring with it a set of challenges, notably direct-bid competition and ensuring a level playing field. Having participated in many dozens of such tests, I'd like to propose some best practices for head-to-head DSP testing that in our experience help ensure a fair fight, clean results, and most importantly, no regrets.
1) Pre-vet candidates to narrow the list. For practical reasons such as finite budgets and sheer management complexity, you can only test so many DSPs. As with any such test, much of the work comes up front, in ensuring you've got the right candidates to begin with, which is why this part of the recipe is the longest. In our experience, most tests involve two or three partners, because most savvy advertisers and agencies have already done their homework to understand which DSPs are even worth testing, in terms of:
- Breadth of supply, data, and ecosystem integration — Not just the number and scope of ad exchange and SSP integrations, but the ability to bring your own seat and to integrate premium display and guaranteed buys, video, rich media, social, and other formats. Just as important as supply is data, in particular the ability to seamlessly integrate and globally manage any and all first-party data as well as third-party data sources, offline as well as online. A DSP should also provide simple access to other value-added capabilities, like dynamic creative, ad verification, and brand studies. For a platform, broad integration into the larger landscape is key.
- Technical infrastructure — Often overlooked, but of top importance in choosing a platform, is the robustness of the underlying technology. Ask about global infrastructure: how many bidders do they run, and where are they located? Ask what QPS (queries-per-second) levels they can support, and ask to see the proof, because this is a critical scaling factor. Ask to see their pixel response times from neutral third-party monitors like Gomez, because slow pixels can drag down client pages (a simple way to spot-check this yourself is sketched after this list). Ask whether they maintain their own user database and what their user-match rate is. Ask how much data they process daily. Put them in a room with your statisticians and ask them how their bidding algorithm works (hint: many major DSPs actually don't have one). Ask about their APIs. Really understand the technology you may be building your business on.
- Performance and service — These are the aspects of a DSP that will most strongly impact the day-to-day business, but ironically they can be much harder to assess from the outside in than the breadth of partner integrations and technology infrastructure mentioned above. Of course, that's why you'll be conducting the bake-off, but to help decide on the candidates, seek out references that can confirm or deny the rumors. How did they perform vs. competitors? At what scale? What was the service like? See if you get the same answers from industry contacts who were not on the DSP's reference list. And talk not just to the DSP's customers, but to their partners (the exchanges, data providers, etc.).
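To make the pixel-latency question from the infrastructure checklist concrete, here is a minimal Python sketch of how a buyer might independently spot-check pixel response times rather than relying on vendor-reported numbers. The pixel URLs are hypothetical placeholders, and a single-vantage-point test like this is only a sanity check; a neutral monitoring service gives a far more geographically representative picture.

```python
import statistics
import time
import urllib.request

# Hypothetical pixel endpoints for the DSPs under evaluation;
# substitute the actual match/retargeting pixel URLs each vendor provides.
PIXELS = {
    "dsp_a": "https://pixel.dsp-a.example.com/match?id=test",
    "dsp_b": "https://pixel.dsp-b.example.com/match?id=test",
}

SAMPLES = 50  # requests per pixel; more samples give steadier percentiles


def sample_latency(url, samples=SAMPLES):
    """Return round-trip times in milliseconds for repeated pixel requests."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(url, timeout=5).read()
        except OSError:
            continue  # skip failed/timed-out requests; track separately if needed
        times.append((time.perf_counter() - start) * 1000)
    return times


for name, url in PIXELS.items():
    ms = sample_latency(url)
    if ms:
        p95 = sorted(ms)[int(len(ms) * 0.95) - 1]  # rough 95th percentile
        print(f"{name}: median {statistics.median(ms):.0f} ms, "
              f"p95 {p95:.0f} ms over {len(ms)} successful requests")
```

Median and p95 together matter here: a pixel that is usually fast but occasionally hangs for seconds will still slow real client pages, and only the tail percentile reveals that.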
2) Pick a large brand with "deep" goals. You've now got your lineup of two or three DSPs to test, so what's the test? Ideally, it's for a single large brand, so results are directly comparable across DSPs. Importantly, the test should have measurable and "deep" goals, meaning goals that are closer to the end-client's bottom line. For DR advertisers, this means goals like CPA, or ideally ROI, as opposed to shallow goals like clicks, which are much easier to drive but don't necessarily impact the bottom line. Deep goals are very well suited to DSP bake-offs because they invoke the entire system: the breadth of supply and data as well as the algorithmic optimization capabilities needed to meet challenging CPA or ROI objectives. (The sketch below illustrates how the same campaign can rank differently on shallow versus deep goals.)

Brand advertisers can be the subject of bake-offs as well, but that scenario is less common due to the historical challenge of identifying deep and measurable brand goals. That is rapidly changing, however, and we anticipate seeing more brand-oriented bake-offs, where the goal is to drive measurable lift in awareness or interest (as measured, e.g., by in-banner surveys) among a desired target audience (as measured by first- or third-party data), at a desired reach and frequency level, and at the best CPM. A truly ideal test of broad DSP capabilities may allow several different objectives to be tested, for example across a range of campaigns with different goals (reach, engagement, ROI, etc.). It is important, however, that the principles laid out below are applied to each campaign being tested, to avoid ending up with a portfolio of tests that are all inconclusive.
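As a concrete illustration of shallow versus deep goals, here is a minimal Python sketch using entirely made-up campaign figures. It computes cost per click (shallow), cost per acquisition, and ROI (deep) for the same spend across two hypothetical DSPs.

```python
# Hypothetical results for the same campaign run through two DSPs.
# All figures are illustrative only.
results = {
    "dsp_a": {"spend": 50_000.0, "clicks": 40_000, "conversions": 500, "revenue": 90_000.0},
    "dsp_b": {"spend": 50_000.0, "clicks": 25_000, "conversions": 800, "revenue": 140_000.0},
}

for name, r in results.items():
    cpc = r["spend"] / r["clicks"]                   # shallow goal: cost per click
    cpa = r["spend"] / r["conversions"]              # deep goal: cost per acquisition
    roi = (r["revenue"] - r["spend"]) / r["spend"]   # deepest goal: return on investment
    print(f"{name}: CPC ${cpc:.2f}, CPA ${cpa:.2f}, ROI {roi:+.0%}")
```

In this made-up example, DSP A looks better on the shallow click metric (CPC of $1.25 vs. $2.00), while DSP B wins decisively on both deep goals (CPA of $62.50 vs. $100.00, ROI of +180% vs. +80%), which is exactly why deep goals belong at the center of the bake-off.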