

The Attention Checklist

January 24, 2023 — by MediaMath


Attention metrics have been generating buzz for a while now, but their evolution within the advertising industry is only just getting started. As attention metrics and measurement continue to mature, there are a few essential things you can do as a brand to keep up with changes and ensure your ads continue to grab people’s attention.

Learn more about attention 

As attention continues to evolve, we need to keep abreast of changes as they relate to the advertising industry. The Attention Council provides a valuable knowledge base of papers and webinars to help you learn more about attention as a concept in advertising. Read their paper, Linking Attention Metrics and Outcomes, and review the suggested next steps to pick dependent variables and select a methodology that’s best for your marketing efforts.

Plan for attention 

Capturing attention throughout the purchase cycle is paramount, but depending on your brand’s key business outcomes, you may implement specific techniques at different consumer touchpoints.

 

As you plan for attention, consider asking the questions below to help you identify where to focus on capturing attention:

  • How will attention be measured?
  • Is the metric a proxy attention metric or a true attention metric?
  • If working with a panel, have the participants opted in?
  • What attributes about the panel are shared?

As consumer attention is dynamic, there is no single approach to success. The team at Playground.xyz has an interactive tool to help you evaluate benchmarks in attention across a variety of facets: https://playlist.playground.xyz/benchmarks/

To learn more about attention metrics or to find out how MediaMath can help you implement them into your media buying strategy, download our attention metrics whitepaper or contact your MediaMath rep.


Machine Learning Without Tears, Part Two: Generalization

August 22, 2016 — by MediaMath


In the first post of our non-technical ML intro series we discussed some general characteristics of ML tasks. In this post we take a first baby step towards understanding how learning algorithms work. We’ll continue the dialog between an ML expert and an ML-curious person.

Ok I see that an ML program can improve its performance at some task after being trained on a sufficiently large amount of data, without explicit instructions given by a human. This sounds like magic! How does it work?

Let’s start with an extremely simple example. Say you’re running an ad campaign for a certain type of running shoe on the NYTimes web-site. Every time a user visits the web-site, an ad-serving opportunity arises, and given the features of the ad-opportunity (such as time, user-demographics, location, browser-type, etc) you want to be able to predict the chance of the user clicking on the ad.  You have access to training examples: the last 3 weeks of historical logs of features of ads served, and whether or not there was a click. Can you think of a way to write code to predict the click-rate using this training data?

Let me see, I would write a program that looks at the trailing 3 weeks of historical logs, and if N is the total of ad exposures, and k is the number of those that resulted in clicks, then for any ad-opportunity it would predict a click probability of k/N.

Great, and this would be an ML program! The program ingests historical data, and given any ad-serving opportunity, it outputs a click probability. If the historical (over the trailing 3-weeks) fraction of clicked ads changes over time, your program would change its prediction as well, so it’s adapting to changes in the data.
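This aggregate predictor can be sketched in a few lines of Python (the log format here is hypothetical, purely for illustration):

```python
# Aggregate click-rate "learner": predict the overall historical rate k/N.
# Hypothetical log format: each record is (features_dict, clicked_flag).

def train_aggregate_ctr(log):
    """Return the fraction of logged ad exposures that were clicked."""
    n = len(log)
    k = sum(1 for _, clicked in log if clicked)
    return k / n if n else 0.0

def predict(ctr, ad_opportunity):
    # The aggregate model ignores the features of the opportunity entirely.
    return ctr

log = [({"city": "SF"}, True), ({"city": "MSP"}, False), ({"city": "SF"}, False)]
ctr = train_aggregate_ctr(log)  # 1 click out of 3 exposures
```

Retraining on a fresh trailing window each day makes the prediction track changes in the overall click-rate, which is the (very limited) sense in which this program "adapts."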

Wow, that’s it? What’s all the fuss about Machine Learning then?

Well this would be a very rudimentary learning algorithm at best: it would be accurate in aggregate over the whole population of ad-exposures. What if you want to improve the accuracy of your predictions for individual ad-opportunities?

Why would I want to do that?

Well if your goal is to show ads that are likely to elicit clicks, and you want to figure out how much you want to pay for showing an ad, the most important thing to predict is the click probability (or CTR, the click-through-rate) for each specific ad opportunity: you’ll want to pay more for higher CTR opportunities, and less for lower CTR opps.

Say you’re running your ad campaign in two cities: San Francisco and Minneapolis, with an equal number of exposures in each city. Suppose you found that overall, 3% of your ads result in clicks, and this is what you predict as the click-probability for any ad opportunity. However, when you look more closely at the historical data, you realize that all ad-opportunities are not the same. You notice an interesting pattern: 5% of the ads shown to users in San Francisco are clicked, compared to only 1% of ads shown to users logging in from Minneapolis. Since there are an equal number of ads shown in the two cities, you’re observing an average click-rate of 3% overall, and …

Oh ok, I know how to fix my program! I will put in a simple rule: if the ad opportunity is from San Francisco, predict 5%, and if it’s from Minneapolis, predict 1%. Sorry to interrupt you, I got excited…

That’s ok… in fact you walked right into a trap I set up for you: you gave a perfect example of an ad-hoc static rule. You’re hard-coding an instruction in your program that leverages a specific pattern you found by manually slicing your data, so this would not be an ML program at all!

So… what’s so bad about such a program?

Several things: (a) this is just one pattern among many possible patterns that could exist in the data, and you just happened to find this one; (b) you discovered this pattern by manually slicing the data, which requires a lot of time, effort and cost; (c) the patterns can change over time, so a hard-coded rule may cease to be accurate at some point. On the other hand, a learning algorithm can find many relevant patterns, automatically, and can adapt over time.

I thought I understood how a learning algorithm works, now I’m back to square one!

You’re pretty close though. Instead of hard-coding a rule based on a specific pattern that you find manually, you write code to slice historical data by all features. Suppose there were just 2 features: city (the name of the city) and IsWeekend (1 if the opportunity is on a weekend, 0 otherwise). Do you see a way to improve your program so that it’s more general and avoids hard-coding a specific rule?

Yes! I can write code to go through all combinations of values of these features in the historical data, and build a lookup table showing for each (city, IsWeekend) pair, what the historical click-through-rate was. Then when the program encounters a new ad-opportunity, it will know which city it’s from, and whether or not it’s a weekend, and so it can lookup the corresponding historical rate in the table, and output that as its prediction.
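The lookup-table approach described here might look like this in Python (toy data and a made-up row format, for illustration only):

```python
from collections import defaultdict

# Hypothetical training rows: (city, is_weekend, clicked).
rows = [
    ("San Francisco", 0, 1), ("San Francisco", 0, 0),
    ("San Francisco", 1, 1), ("Minneapolis", 0, 0),
    ("Minneapolis", 1, 0), ("Minneapolis", 1, 1),
]

# Build a lookup table: (city, is_weekend) -> historical click-rate.
counts = defaultdict(lambda: [0, 0])  # key -> [clicks, exposures]
for city, is_weekend, clicked in rows:
    counts[(city, is_weekend)][0] += clicked
    counts[(city, is_weekend)][1] += 1

ctr_table = {key: clicks / n for key, (clicks, n) in counts.items()}

def predict(city, is_weekend, fallback=0.03):
    # Fall back to an overall average rate for unseen combinations.
    return ctr_table.get((city, is_weekend), fallback)
```

Note that even this sketch needs a fallback for feature-combinations never seen in training, a weakness the dialog turns to next.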

Great, yes you could do that, but there are a few problems with this solution. What if there were 30 different features? Even if each feature has only 2 possible values, that is already 2^30 possible combinations of values, or more than a billion (and of course, the number of possible values of many of the features, such as cities, web-sites, etc., could be a lot more than just two). It would be very time-consuming to group the historical data by these billions of combinations, our look-up table would be huge, and so it would be very slow to even make a prediction. The other problem is this: what happens when an ad opportunity arises from a new city that the campaign had no prior data for? Even if we set aside these two issues, your algorithm’s click-rate predictions would in fact most likely not be very accurate at all.

Why would it not work well?

Your algorithm has essentially memorized the click-rates for all possible feature-combinations in the training data, so it would perform excellently if its performance is evaluated on the training data: the predicted click-rates would exactly match the historical rates. But predicting on new ad opportunities is a different matter; since there are 30 features, each with a multitude of possible values, it is highly likely that these new opportunities will have feature-combinations that were never seen before.

A more subtle point is that even if a feature-combination has occurred before, simply predicting the historical click-rate for that combination might be completely wrong: for example suppose there were just 3 ad-opportunities in the training data which had this feature-combination: (Browser = “safari”, IsWeekend = 1, Gender = “Male”, Age = 32, City = “San Francisco”, ISP = “Verizon”), and the ad was not clicked in all 3 cases. Now if your algorithm encounters a new opportunity with this exact feature-combination, it would predict a 0% click-rate. This would be accurate with respect to the historical data your algorithm was trained on, but if we were to test it on a realistic distribution of ad opportunities, the prediction would almost certainly not be accurate.

What went wrong here? Suppose the true click-rate for ads with the above feature-combination is 1%, then in a historical sample where just 3 such ad-opportunities are seen, it’s statistically very likely that we would see no clicks.
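A quick calculation shows why: if the true click-rate is 1%, the chance of seeing zero clicks in 3 independent exposures is (1 - 0.01)^3, or about 97%.

```python
# Probability of observing zero clicks in n independent ad exposures
# when the true click-rate is p.

def prob_zero_clicks(p, n):
    return (1 - p) ** n

small_sample = prob_zero_clicks(0.01, 3)     # ~0.97: no clicks is the likely outcome
large_sample = prob_zero_clicks(0.01, 1000)  # ~0.00004: with more data, very unlikely
```

So with only 3 examples, a 0-click history tells us almost nothing about the true rate; with 1,000 examples it would be strong evidence.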

But what could the learning algorithm do to avoid this problem? Surely it cannot do any better given the data it has seen?

Actually it can. By examining the training data, it should be able to realize, for example, that the ISP and Browser features are not relevant to predicting clicks (for this specific campaign), and perhaps it finds that there are 1,000 training examples (i.e. ad-opportunity feature-combinations) that match the above example when ISP and Browser are ignored, and 12 of them had clicks, so it would predict a 1.2% click-rate.
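That pooling idea, restricting attention to a relevant subset of features, can be sketched as follows (the feature names mirror the example above; the data format and counts are hypothetical):

```python
# Predict by pooling training examples that match the new opportunity
# on a chosen subset of relevant features, ignoring the rest.

RELEVANT = ("IsWeekend", "Gender", "City")  # e.g. ISP and Browser dropped

def pooled_ctr(training, opportunity, relevant=RELEVANT):
    """Click-rate among training rows matching on the relevant features."""
    matches = [clicked for feats, clicked in training
               if all(feats[f] == opportunity[f] for f in relevant)]
    return sum(matches) / len(matches) if matches else None

# 1,000 rows that agree on the relevant features but differ on ISP;
# 12 of them were clicked.
training = (
    [({"IsWeekend": 1, "Gender": "Male", "City": "SF", "ISP": "Verizon"}, 1)] * 12
    + [({"IsWeekend": 1, "Gender": "Male", "City": "SF", "ISP": "Comcast"}, 0)] * 988
)
opp = {"IsWeekend": 1, "Gender": "Male", "City": "SF", "ISP": "AT&T"}
rate = pooled_ctr(training, opp)  # 12/1000 = 0.012
```

How to decide automatically which features are relevant is exactly the kind of question real learning algorithms address, which the next post takes up.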

So your algorithm, by memorizing the click-rates from the training data at a very low level of granularity, was “getting lost in the weeds” and was failing to generalize to new data. The ability to generalize is crucial to any useful ML algorithm, and indeed is a hallmark of intelligence, human or otherwise. For example think about how you learn to recognize cats: you don’t memorize how each cat looks and try to determine whether a new animal you encounter is a cat or not by matching it with your memory of a previously-seen cat. Instead, you learn the concept of a “cat”, and are able to generalize your ability to recognize cats beyond those that exactly match the ones you’ve seen.

In the next post we will delve into some ways to design true learning algorithms that generalize well.

Ok, looking forward to that. Today I learned that generalization is fundamental to machine-learning. And I will memorize that!


Machine Learning: A Guide for the Perplexed, Part One

July 21, 2016 — by MediaMath


With the increasingly vast volumes of data generated by enterprises, relying on static rule-based decision systems is no longer competitive; instead, there is an unprecedented opportunity to optimize decisions, and adapt to changing conditions, by leveraging patterns in real-time and historical data.

The very size of the data however makes it impossible for humans to find these patterns, and this has led to an explosion of industry interest in the field of Machine Learning, which is the science and practice of designing computer algorithms that, broadly speaking, find patterns in large volumes of data. ML is particularly important in digital marketing: understanding how to leverage vast amounts of data about digital audiences and the media they consume can be the difference between success and failure for the world’s largest brands. MediaMath’s vision is for every addressable interaction between a marketer and a consumer to be driven by ML optimization against all available, relevant data at that moment, to maximize long-term marketer business outcomes.

In this series of blog posts we will present a very basic, non-technical introduction to Machine Learning. In today’s post we start with a definition of ML in the form of a dialog between you and an ML expert. When we say “you”, we have in mind someone who is not an ML expert or practitioner, but someone who has heard about Machine Learning and is curious to know more.

Can we start at the beginning? What is Machine Learning?

Machine learning is the process by which a computer program improves its performance at a certain task with experience, without being given explicit instructions or rules on what to do.

I see, so you’re saying the program is “learning” to improve its performance.

Yes, and this is why ML is a branch of Artificial Intelligence, since learning is one of the fundamental aspects of intelligence.

When you say “with experience,” what do you mean?

As the program gains “practice” with the task, it gets better over time, much like how we humans learn to get better at tasks with experience. For example an ML program can learn to recognize pictures of cats when shown a sufficiently large number of examples of pictures of “cat” and “not cat”.  Or an autonomous driving system learns to navigate roads after being trained by a human on a variety of types of roads. Or a Real-Time-Bidding system can learn to predict users’ propensity to convert (i.e. make a purchase) when exposed to an ad, after observing a large number of historical examples of situations (i.e. combinations of user, contextual, geo, time, site attributes) where users converted or not.

You said  “without being given explicit instructions.” Can you expand on that a bit?

Yes, that is a very important distinction between an ML program and a program with human-coded rules. As you can see from the above examples, an ML system in general needs to respond to a huge variety of possible situations: e.g., respond “cat” when shown a picture of a cat, turn the steering wheel in the right direction in response to the visual input of the road, or compute a probability of conversion when given a combination of features of an ad impression. The sheer number of possible input pictures, road-conditions, or impression-features is enormous. If we did not have an ML algorithm for these tasks we would need to anticipate all possible inputs and program explicit rules that we hope will be appropriate responses to those inputs.

I still don’t understand why it’s hard to write explicit rules for these tasks. Humans are very good at recognizing cats, so why can’t humans write the rules to recognize a cat?

That’s a great question. It’s true that humans excel at learning certain tasks, for example recognizing cats, or recognizing handwriting, or driving a car. But here’s the paradoxical thing — while we are great at these tasks, the process by which we accomplish these tasks cannot be boiled down to a set of rules, even if we’re allowed to write a huge number of rules. So these are examples of tasks where explicit rules are impossible to write.

On the other hand, there are tasks at which humans are not very good: for example, trying to predict which types of users in what contexts will convert when exposed to ads. Marketing folks might have intuition about what conditions lead to more conversions, such as “users visiting my site on Sundays when it’s raining are 10% likely to buy my product”. The problem though is that these intuition-guided rules can be wrong, and incomplete (i.e. they do not cover all possible scenarios). The only way to come up with the right rules is to pore through millions of examples of users converting or not, and extract patterns from these, which is precisely what an ML system can do. Such pattern extraction is beyond the capabilities of humans, even though they are great at certain other types of pattern extraction (such as visual or auditory).

I see, so ML is useful in tasks where (a) a response is needed on a huge number of possible inputs, and (b) it’s impossible or impractical to hard-code rules that would perform reasonably well on most inputs. Are there examples where the number of possible inputs is huge, but it’s easy to write hard-coded rules?

Sure: I’ll give you a number, can you tell if it’s even or odd? Now you wouldn’t need an ML program for that!
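For instance, a one-line hard-coded rule handles every possible integer exactly, so there is nothing for a learning algorithm to improve on:

```python
# A task with infinitely many possible inputs but a trivial, exact rule:
# no training data or learning required.

def is_even(n):
    return n % 2 == 0
```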


In a future post we will discuss at a conceptual level how ML algorithms actually work.


Make This Your Best Back-to-School Season

July 20, 2016 — by MediaMath


The back-to-school season is one of the biggest retail events in the US—in fact, the $68 billion industry comes second only to the winter holidays in terms of spend. Back-to-school shoppers in 2015 planned to spend an average of $630, with most of the spend going toward apparel and electronics, according to data from the National Retail Federation’s annual back-to-school survey. To help marketers capitalize on this popular shopping period, MediaMath analyzed 110 previous back-to-school campaigns to see what trends and performance results stand out. Some highlights of our short guide include:

  • 30% of conversions happen in two weeks in August
  • Consumer goods and clothing and accessories make up 65% of all campaigns
  • Back-to-school and mom segments vastly outperform college segments

To download the full ebook, click here.


The Other Half of the Battle Against Fraud

June 11, 2015 — by Ari Buchalter


It’s been well-established that fraud, and in particular non-human traffic, is a problem in the digital advertising industry, but I’d like to spend a few moments exploring why it is such a problem. No, I’m not asking why there are unscrupulous people out there looking to hack the system to make a dishonest buck (that part I recognize from every other commercial endeavor ever undertaken). And no, I’m not asking about the industry norms and perverse incentives that can motivate publishers, intermediaries, and yes, even agencies and advertisers, to turn a blind eye to the problem. I’m asking why our marketing programs are so easily fooled by bots in the first place.

There’s no doubt the fraudsters are getting more sophisticated. While long-standing tactics like click fraud are still sadly alive and thriving, they have been joined by numerous other insidious new breeds of fraud. From visiting advertiser sites to attract retargeting dollars, to intentionally adhering to MRC-defined viewability criteria, the bots are getting better at blending in and looking like everyone else. No channel is unaffected and no publisher, no matter how premium or niche, is immune.

This state of affairs has led to a reactive mentality in our industry where the goal is to “avoid fraud,” which is completely rational and understandable. When you are under attack, you defend yourself. It’s why many major publishers, ad exchanges, and SSPs have implemented rigorous quality measures to filter fraud and other forms of undesirable traffic at the source. It’s why the leading DSPs have developed sophisticated algorithms to identify anomalous patterns at the user, site, IP address, and other levels, and quarantine fraud away from live buying environments before marketing budgets are exposed to it. And it’s why a wave of old and new verification & measurement vendors are offering an array of new fraud-related products. All with the determination to stay a step ahead of the increasing scale and growing variety of digital advertising fraud.

But is that it? As an industry, are we just to spiral forward in a never-ending arms race, trying to build new techniques to keep up with the ever-evolving new strains of fraud, playing a high-stakes game of whack-a-mole with hundreds of billions of dollars on the line?

Thankfully, the story doesn’t have to end there.

The reason it’s so easy for bots to mimic people is that the marketing definition of people is often so simplistic. Despite all the amazing advances in ad tech over the past decade, many digital campaigns are still just going after weakly-defined audiences characterized by generic demographic and/or broad-based behavioral targeting, overlaid with easily-mimicked behaviors like views and clicks. These approaches are, in effect, propagating the broadcast mentality of the old offline world, where targeting “18-49 year-old males who make over $100K/yr and are interested in electronics” might have been considered pretty decent (and pretty hard to fake, from an offline standpoint). But in the online world that’s about as easy to fake as the age on your dating profile.

Starting with a generic picture of a very broad audience is what I refer to as “guess-based marketing.” Those audiences, whether defined by characteristics like age, gender, and income (the estimation of which is often of dubious quality to begin with) or by simple behaviors like visiting sites or clicking on ads, are really just proxies to the advertiser’s desired business outcomes. The problem is those characteristics are easy for bots to fake and those behaviors are easy for bots to demonstrate, so a guess-based approach is playing right into the fraudsters’ sweet spot. If you’re doing that, you are broadcasting a signal that bots are tuned into, and you should work with a buy-side partner well-equipped to fight the fraud arms race with you, who combines proven proprietary pre-bid fraud detection and filtration with best-in-class third-party technologies.

But simply playing defense is not truly taking advantage of what programmatic is all about. The real power of programmatic is that it enables what I call “goal-based marketing.” Goal-based marketing is about applying the principles of marketing science across the entire funnel, with the realization that all marketers are performance marketers. What I mean by that is no matter whether you are a brand marketer, a direct-response marketer, a loyalty marketer, etc. there is some quantifiable business goal you are looking to drive (whether brand awareness, purchase intent, social engagement, customer loyalty, lifetime value, you name it), and against which you are judging success. And therein is the key to goal-based marketing: if it can be measured, it can be made better by math. Made better by exposing all available data – about audiences, about media, about creatives – to a smart system that can determine the optimal combination of those elements to drive your business goals at scale, automating the right decision at every consumer touchpoint, in real time.

If you are using programmatic technology to drive goal-based marketing, the fraud picture becomes very different. It shifts from a purely defensive and reactive mentality of “avoid fraud” to a proactive posture of “generate business outcomes.” The fact that bots are getting better at blending in and looking like everyone else is suddenly not their strength but rather their weakness, because your customers are not just like everyone else’s and the goals you are trying to drive are not the same as everyone else’s. Browsing and clicking are easy markers to fake, but the combined online and offline data you use to define truly actionable audiences, the category-, brand- and product-specific behaviors that become the triggers for your marketing actions, and the specific and measurable outcomes that matter to you as a marketer – these are things not known to the fraudsters and therefore much harder for bots to fake (not to mention economically infeasible, in the case of actual purchases). Moreover (and somewhat ironically), those true business outcomes are often more accurately and reliably measurable than the easily-spoofed guess-based audiences that were supposed to be the proxies for those outcomes in the first place.

A guess-based approach might simply be looking to buy those 18-49 year-old males who make over $100K/yr and are interested in electronics – an easy target for bots to fake (it’s also worth noting that even in a bot-free world, the accuracy of that kind of data is often extremely poor, based on coarse extrapolations from very limited data). By contrast, a goal-based approach might look to raise awareness for a particular brand by X%. Or do so specifically among consumers who have actually purchased a competitive brand, online or offline, in the past year. Or increase purchase intent among lapsed customers by X%. Or drive conversion of consumers who have expressed interest in a particular category or product, at an average $X cost per conversion. Or drive an overall return on ad spend of greater than X:1 from combined online & offline sales. Or convert X% of current customers to a loyalty program every month. And so on. Bots won’t easily show up in those audience definitions and won’t easily contribute to those outcomes. The avoidance of fraud simply becomes a natural consequence of goal-based marketing. And achieving your business goals at scale is what programmatic is all about.

Moreover, non-human traffic isn’t the only kind of fraud addressed by the use of goal-based marketing. Many of the various types of ad laundering and publisher misrepresentation tactics that can be perpetrated by malware or other forms of browser manipulation, even on the browsers of actual people, are also minimized. Common examples include “invisible ads” (either stacked atop each other or rendered as an invisible 1×1 pixel), or the impersonation of legitimate publishers via “URL masking”. But since ads never actually rendered to a user don’t drive true business outcomes, and impostor sites don’t actually drive business outcomes like the legitimate publishers they are spoofing, goal-based techniques naturally optimize away from such traffic and towards the quality environments that do generate those outcomes.

The evidence is in the data. Goal-based campaigns see no inhumanly high click-through rates, no droves of site visitation with little to no engagement, no lack of bona fide purchase events – all things commonly associated with fraudulent activity. Moreover, when fraudulent publishers are outed in the press, these campaigns see little to no delivery against such publishers. When conversion events have some post-conversion measure of quality, these campaigns strongly outperform their guess-based counterparts.

That’s not to say it’s an either/or proposition. The best results, by far, come when you combine goal-based marketing with powerful pre-bid anti-fraud technology. A guess-based approach invites an onslaught of fraud to begin with, relying solely on anti-fraud measures to take things from bad to good. By contrast, a goal-based approach aligned with your true business objectives intrinsically blunts the onslaught of fraud so you’re starting at good, targeting audiences bots can’t easily resemble and outcomes bots can’t easily reproduce. The overlay of industry-leading anti-fraud technology atop a goal-based approach then imposes additional filters to take good to great. We who build anti-fraud solutions believe the good guys will win the arms race – through technology, through the definition of standards & policies, through education, and through industry-wide data-sharing, transparency, and collaboration. In the meantime, fraud will continue to be a problem in the digital advertising industry, just a much smaller problem for those using programmatic technology built to drive goal-based marketing.


Move to the Head of the Class with OPEN Certification

January 23, 2014 — by MediaMath


By definition, the word confusion means “disorder, jumble, bewilderment, perplexity, lack of clarity, indistinctness, abashment”–a term that comes up frequently in the ad tech industry. There is an abundance of information, systems, platforms, and technologies that are bandied about as buzzwords, tech talk, and sales-speak. The result is misinformation, posturing, and fear among digital marketers.

There is no question in any digital executive’s mind that knowledge gaps and deficiencies result in setbacks and redundancy rather than best-of-breed thinking and the boundless innovation the world expects from our industry. The prospect of creating industry experts with a knack for innovation is not a simple proposition, but MediaMath’s new OPEN Buyer Certification program is designed for any grade of buyer and organization that aspires to a higher level of technological competency and wants to promote that competency to grow their business.

MediaMath has been a longstanding advocate of education within the market. In 2012, to remedy some of the head pounding and teeth grinding the average digital marketer experiences on any given day, MediaMath birthed the New Marketing Institute (NMI) with the intention of carrying over its core business tenets: driving marketer ROI, ensuring transparency, and creating automation across the board. The result was an educational certification program that cuts through the clutter and creates a new class of smarter, savvier marketers.

Building on the grassroots concept of educating entry-level buyers about technology and marketing decisions and how those collide with ROI, the next evolution was the OPEN program and portal, which educates and connects MediaMath’s partners and buyers to breed innovation.

OPEN Buyer Certification, a three-tiered program, was built to acknowledge the experts among us and give them the opportunity to showcase their capabilities and take advantage of lead referral opportunities from MediaMath. We’re creating a community of expert users of digital marketing software who can go out and represent their brands and be role models in the space.

There is no better way than to just jump in and start doing it.

OPEN aligns MediaMath with collaborative partnerships that strengthen its platform view of marketing initiatives and creates an entire channel of resellers for its technology. By disrupting the underlying framework of how marketing is thought about and how agencies function, we are changing and informing the ecosystem.

The Buyer Certification program is stacked in three levels: Silver, Gold, and Platinum, with each level evaluated on three distinct categories with qualifying criteria. The categories cover different marketplace components as expertise and skill-set progress. The first focuses on establishing a marketplace presence: positioning to lead the market, publishing case studies, keynote speaking, and cultivating thought leadership. The second focuses on platform usage: are buyers and organizations doing everything they are talking about in the marketplace? Are they embracing programmatic, data integration, and other key industry initiatives? The third is based on strategic initiatives: how buyers are innovating and bringing those strategies to market, and how they’re advancing the industry toward a programmatic and platform viewpoint.

Highly motivated and accomplished buyers and their organizations can power through the certification’s curriculum in a short period of time, or it can be parsed out over several weeks or months to accommodate different schedules.

To date, six companies have participated in the certification program, including the first, platinum-certified partner Epsilon, as well as Adroit Digital, Mediasmith, The Big Lens, 3Q Digital, and Huddled Masses. Beyond those, more than 35 companies are at various stages of the process.

Early adopters are already incorporating certification benchmarks into their 2014 strategic plans, powerful assurance that, a few years out, program participants will move the needle within the industry and rise above the competition.


This Season, Moms Will Be Shopping on Mobile – At Home and in the Aisle

November 21, 2013 — by MediaMath

1384977714.jpg

This year’s Deloitte holiday survey revealed that 68 percent of smartphone owners will use their devices for holiday shopping. Mark Lewis, online director of UK retailer John Lewis, recently stated, “We expect this Christmas to be a tipping point, where the majority of our online sales come from mobile devices.”

Adroit Digital delivers strong evidence in support of a mobile shopping season in “For Moms, It’s a Digital Holiday,” a report examining the online shopping habits of today’s moms. Moms, for those not in the know, hold the keys to holiday season success.  The 80 million moms raising kids today represent $2.4 trillion in buying power. And, they’re savvy, too: 90% of moms are online, versus 76% of women overall.

Now pair that data with the fact that Mom will be doing the lion’s share of the holiday shopping. According to the Adroit Digital report, 37% of respondents were fully responsible for household gift purchasing, and 34% were responsible for at least three-quarters of it.

It’s clear that retailers need to pay extra attention to the woman of the house this year. They should also pay special attention to her use of tablets and smartphones, because she will be using them quite a bit as she plans those holiday purchases. 21 percent of moms surveyed plan to do 50 percent or more of all their online holiday shopping this season from their mobile device versus their computer. Younger moms will do even more shopping from their mobile devices: nearly a third of 18-24 year olds (31 percent) and a quarter of 25-34 year olds will do at least half of their online shopping from a smartphone or tablet.

Those mobile devices will come into the store with Mom, too. 56 percent of women surveyed plan to use their phone or tablet to track down discounts or coupons, and 50 percent will be comparison-shopping. Of those in-store mobile users, half will be comparing prices to online retailers, and 42 percent will be comparing to prices in local stores. Note that the youngest moms surveyed, those 18-24, will spend 50 percent of their in-store time researching product information on their mobile devices.

Another important trend retailers should note: Half of the moms surveyed plan to shop on the mobile web versus shopping using a retailer’s app.  Not surprisingly, it’s the millennial moms who will be doing the app-based shopping.

So what are the takeaways for retailers here?

  • For starters, pay attention to your mobile shopping experience. Many of your most important and influential holiday shoppers will be engaging with it this season.
  • Be aware that there are three generations of women actively parenting children today: Millennials, Gen-Xers and Boomers. These three groups are leveraging online shopping in very different ways. Ensure you’ve built an appropriate experience for each segment. Note that millennials will be using mobile devices far more frequently than other moms, and they’ll be shopping via your app more frequently as well.
  • Be prepared for showrooming moms. Consider price-match offers, as many retailers have done, or special, mobile-only, in-store experiences and offers.

With Thanksgiving and Cyber Week just around the corner, are you ready for the mobile moms? What are you doing to prepare?

To download the full report, click here.


Applying Insights for Optimization

September 6, 2013 — by MediaMath

add_gamer_3-e1421772915570-960x485.jpg

As digital marketing advances and we have more and more information at our disposal, we have a greater opportunity to make marketing more effective. The information we receive at every data point is a chance to adjust levers to achieve ideal levels of relevance: right person, right message, right time, right device.

To get there, we have to not only collect that data, but also analyze it and actually act upon those “actionable insights.” Because data is derived from a variety of sources, it is important to understand it at both the user level—who your customer is and what their interests, needs, and dislikes are—and the action level—what sites they visit, what searches they conduct, how long they spend on which pages, and what they purchase. Combining these insights offers a more complete picture of your customer and enables you to take the appropriate actions to engage them.
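As a minimal sketch of this idea, the snippet below joins a hypothetical user-level table (interest segments) with hypothetical action-level events (page views) to profile engagement by interest. All names and numbers here are invented for illustration; this is not MediaMath’s data model or API.

```python
# Hypothetical user-level attributes: who the customer is.
users = {1: "country music", 2: "gaming", 3: "fitness"}

# Hypothetical action-level events: what the customer does.
actions = [
    {"user_id": 1, "page": "/skincare", "seconds": 45},
    {"user_id": 1, "page": "/checkout", "seconds": 120},
    {"user_id": 2, "page": "/skincare", "seconds": 30},
    {"user_id": 3, "page": "/blog", "seconds": 10},
]

# Combine the two levels: total engagement time per interest segment.
engagement_by_interest = {}
for event in actions:
    interest = users[event["user_id"]]
    engagement_by_interest[interest] = (
        engagement_by_interest.get(interest, 0) + event["seconds"]
    )
```

A profile like this is what surfaces a pattern such as the Proactiv example below: a segment that spends disproportionate time engaging reveals which creative themes are worth testing.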

For example, skincare brand Proactiv analyzed data across both personal and interest levels and found that many Proactiv users and prospects also were country music fans. This led the brand to begin featuring country music artists like Carrie Underwood in their ads, which led to better response rates.

In Cashing in on Customer Insights by Peppers & Rogers and IBM, IBM’s Deepak Advani notes, “Business leaders can leverage these insights to help them develop relevant offers and to design more customized channel experiences for high-value customers. For instance, understanding how most valuable or most growable customers use a company’s website and why they behave the way in which they do (e.g., pages visited, why they leave) during these interactions can help decision-makers to determine the types of functionality and capabilities that could further improve customer experiences. Such efforts can help companies to engage these customers more effectively and increase their loyalty and lifetime value.”

In other words, if you take the time to observe your customers and truly analyze their behaviors, they will tell you everything you need to know about how to engage them. Growth in sales from a new demographic means you should be looking at ways to further engage that audience. Unusually low response to your latest campaign by your target audience may mean you need to test new messaging. It’s a simple philosophy, but it does require the right tools and the right partners. With the right technology and good, clean data, marketers can create – and act upon – a virtual roadmap to their most effective marketing.

To learn more about the MediaMath solution, click here.


How to Move from Direct Response Analytics to Holistic Analytics

September 5, 2013 — by MediaMath

add_chi_6-960x640.jpg

In direct response, there are two metrics that ultimately matter: the click and the conversion. The click is a measure of consumer interest, and the conversion is usually equivalent to the sale.

As digital advertising matures, we know there are many points along the customer journey worthy of measurement. The click is not the only indicator of an interested consumer. Today, we can track and measure any number of consumer interactions: how long a consumer has watched a video, how frequently they interacted with an in-ad game, or an ad within a game app, or whether they loved a social ad so much they shared it with friends. We can also track business metrics that matter for brand marketers, such as brand lift, purchase intent and more.

The pressure is on to measure everything…but “everything” can mean all things to all people. According to a study by ClickZ and Effectyv, 82 percent of marketers are measuring multi-channel data, but they define “multi-channel” quite differently:

  • 37.5 percent defined “multi-channel” as web, social and mobile only.
  • 35.6 percent defined “multi-channel” as web, mobile, social, marketing spend, sales, back-office data, and off-line channels (store, TV, radio, print).

For those of us who have been in the digital space for years without looking up, determining the additional intelligence you need may be a challenge. Many aren’t even aware of all the options available or how to begin measuring them – much less measuring “everything.”

The first step is defining the KPIs that matter most to your business, whether that’s brand awareness, sales, or engaging and activating existing customers. Next, secure the technical platform that can help you measure, analyze and optimize for those goals.

While it’s important to have a dashboard that clearly lays out your analytics, it’s not enough to simply scan the data – although that’s what most marketing organizations do. A recent Teradata study revealed that while a majority currently collect a range of data types, including demographic (80%), customer service (72%) and customer satisfaction (62%), only 19% report using the data to drive marketing efforts.

Of course, the benefit of holistic analytics is having that 360-degree customer view so that you can act on it, adjusting content, delivery and frequency as needed for optimal results. Holistic analytics require a balanced blend of people, process and platform – and people come first in that sequence for a reason.  Analytics, after all, are a decisioning tool, and it’s up to the marketer to make those decisions as they relate to optimizing to KPIs.

And of course, “people” also refers to consumers, who always must come first. Holistic analytics are ultimately a tool to ensure they receive the most relevant, timely and helpful messages possible.

To learn more about the MediaMath solution, click here.


Measure the Right Metrics at the Right Times

September 3, 2013 — by MediaMath

1378217622.png

“Measure everything” seems to be the current mantra in data-driven marketing, but with an ocean of data flowing around us, the idea of measuring everything can be positively overwhelming. To gain a holistic view of your programs, you don’t have to actually measure everything; you just have to measure the right things in the right way.

For CMOs, figuring out a way to optimize all of their favorite metrics within a single platform is the key to success. As such, it is important to measure KPIs at many stages throughout the customer lifecycle, not simply at the last touch point. Examining campaigns throughout the funnel not only enables CMOs to evaluate multiple initiatives at once for a holistic view of customer engagement, but also to make the necessary adjustments to drive engagement and conversions. It also provides context for those metrics when it comes time to present to the board.

For instance, analyzing a consumer’s engagement with your website and determining possible reasons why they have not converted can help inform your efforts down the line to further engage them. And examining your digital efforts, like SEM, alongside online surveys can provide insight into the effectiveness of your branding and awareness initiatives. The point is, no consumer conversion happens in a vacuum. Building awareness and fostering customer relationships involves several different touch points, as does converting, and those touch points vary such that they cannot all be measured the same way.
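To make the funnel idea concrete, here is a minimal sketch of measuring stage-to-stage rates rather than judging a campaign only by its last touch point. The stage names and counts are made up for illustration:

```python
# Hypothetical counts at successive touch points in one campaign's funnel.
funnel = [
    ("impression", 100_000),
    ("click", 2_000),
    ("site_visit", 1_500),
    ("conversion", 150),
]

# Stage-to-stage rates show *where* consumers drop off,
# instead of collapsing everything into a single conversion number.
rates = {}
for (prev_stage, prev_n), (stage, n) in zip(funnel, funnel[1:]):
    rates[f"{prev_stage}->{stage}"] = n / prev_n
```

Seen this way, a weak click-to-visit rate points at the landing experience, while a weak visit-to-conversion rate points at the site or the offer – two very different adjustments.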

With respect to measurement, there are myriad ways to arrive at the agreed-upon KPIs. It requires solid, reliable data and an analytics platform that can plug into all your campaigns and account for all the data types (online, offline, mobile, etc.) needed to measure your goals.

Ultimately, it’s not about examining every single available data point, but rather finding a way (an approach and a platform) to measure the right metrics at the right times, and to coalesce those data points into a complete view of campaign performance.

To learn more about the MediaMath solution, click here.