

Mathlete Values: Win/Win Wins

October 11, 2016 — by MediaMath

Last month, MediaMath redefined the values that we hold ourselves to for our clients, partners, and employees. In this series, Mathletes reflect on each of the values that MediaMath has adopted.

Success in this world is not a zero-sum game. Finding ways to grow your business by offering advertisers ways to improve the return on their spend can translate to better value for publishers as well. In the most basic formulation, win/win wins.

We take the same approach to how we work with each other within our teams – it is very difficult to coerce anyone to do something they do not want to do. Rather than expending the extra effort to get someone to do something against their interest, find a way to make them benefit from your desired course of action. What works for business relationships works for interpersonal relationships, too. People are happy to help you win when it helps them win.


Monthly Roundup: Top 5 Most Popular Blog Posts for September

October 3, 2016 — by MediaMath


And then it was fall…another month (and season) over, another blog roundup due! Here’s what made the top five cut for September:

• #1 Does Header Bidding Benefit Buyers?

• #2 Employee Spotlight: From Law School to Leading a Global Operations Function

• #3 The Curator’s Guide to Dmexco 

• #4 Five Lessons Learned: A Look Back on Marketing Engineer Program’s Five Cohorts

• #5 Machine Learning Demystified, Part 3: Models


Mathlete Values: Innovate to Scale

September 27, 2016 — by MediaMath

Last month, MediaMath redefined the values that we hold ourselves to for our clients, partners, and employees. In this series, Mathletes reflect on each of the values that MediaMath has adopted.

How do Mathletes innovate to scale?

Innovating to scale requires thinking about how to make solutions work across geographies and anticipating problems that have not come up yet. It’s knowing what to focus on and how to empower other people to innovate in useful and productive ways on the edge. 


Five Lessons Learned: A Look Back on Marketing Engineer Program’s Five Cohorts

September 21, 2016 — by MediaMath


MediaMath’s Marketing Engineer Program (MEP) is about to graduate its fifth cohort on September 23rd. We have learned a lot about establishing and refining a training program over the past two years. When MEP launched in June 2014, it was a six-month rotational program. However, it has evolved into an intense 12-week curriculum-based training program, with just under a four percent acceptance rate. Now that it has grown into an international training program, it continues to produce future leaders within digital marketing.

In honor of graduating over 50 marketing engineers, we decided to reflect on the biggest lessons learned since its launch two years ago.

• Everyone wants to make an impact (that doesn’t change in a training program)

Trainees want to know that the work they’re doing is contributing to something, anything really. Every project and every assignment should have a purpose. Focus on real-world scenarios so that trainees can apply the content more quickly and understand how their decisions impact real people.

• Meet the learner where they are (because everyone learns differently)

This approach is one that the New Marketing Institute has always kept at the forefront, and with our Marketing Engineer Program it’s no different. We do this with a blended learning approach, creating different experiences for different learning styles. For instance, if the cohort is learning about uploading pixels, we’ll lead a training session, have them shadow a Subject Matter Expert (SME) and follow it up with a team project.

• Feedback is a gift (and a two-way street)

We start by teaching MEPs how to give and receive effective feedback. Starting with this foundation helps build a culture of constant feedback as well as a sense of empowerment. We continue to collect feedback from MEPs throughout the program, learning what works and what doesn’t, in an effort to make changes to future programs. Invest time into collecting honest, tangible feedback on a constant basis.

• A longer program isn’t necessarily better (focus on hitting the right objectives)

Over the past two years we’ve taken the program from six months to 12 weeks. A shorter program has its benefits – trainees enter the workforce faster and trainers spend fewer hours in the classroom. But it can also come with its challenges – increased participant workload and less time to meet desired learning objectives. By continuously reviewing and refining your learning objectives you can align projects, shadowing and trainings to the end goal and you can help trainees understand the content faster.

• Team building is imperative (establish the culture upfront)

Throughout the program we help MEPs identify how their strengths and weaknesses affect how they interact with each other and the world around them. By understanding themselves, they can become a more effective team member, and ultimately a more effective team. Techniques include trainings on MBTI, emotional intelligence, goal setting and effective feedback.

The process of launching a program hasn’t been simple, but it has been worth it. While the program has never looked the same, we have our participants to thank for making it stronger. We’ll be releasing more lessons learned in our millennial talent series coming out this fall. Stay tuned for more tips on recruiting, retaining and reviewing young talent.


Mathlete Values: Obsess About Outcomes

September 14, 2016 — by MediaMath

Last month, MediaMath redefined the values that we hold ourselves to for our clients, partners, and employees. In this series, Mathletes reflect on each of the values that MediaMath has adopted.

How do Mathletes obsess about outcomes?

Like most of our company values, these are both a promise to clients and an expectation for ourselves. TerminalOne optimizes advertising spend towards media that actually improves a business’ bottom line. In exactly the same way, we value actions that drive actual results for our business objectives and the business objectives of our clients. As employees and as partners, we obsess about outcomes. If it doesn’t affect the outcome, it doesn’t matter.


Monthly Roundup: Top 5 Most Popular Blog Posts for August

September 2, 2016 — by MediaMath


In the month of August, our top performing blogs covered everything from the 2016 Summer Olympics and our Marketing Champion campaign to addressing four fatal flaws of digital attribution.

• #1 Do You Have What it Takes to be a Marketing Champion?

• #2 Four Fatal Flaws of Digital Attribution and How to Address Them: Part I

• #3 Employee Spotlight: From Tech’s Infancy to Programmatic

• #4 5 Questions with Cardlytics

• #5 Four Fatal Flaws of Digital Attribution and How to Address Them: Part II 


Machine Learning Without Tears, Part Two: Generalization

August 22, 2016 — by MediaMath


In the first post of our non-technical ML intro series we discussed some general characteristics of ML tasks. In this post we take a first baby step towards understanding how learning algorithms work. We’ll continue the dialog between an ML expert and an ML-curious person.

Ok I see that an ML program can improve its performance at some task after being trained on a sufficiently large amount of data, without explicit instructions given by a human. This sounds like magic! How does it work?

Let’s start with an extremely simple example. Say you’re running an ad campaign for a certain type of running shoe on the NYTimes web-site. Every time a user visits the web-site, an ad-serving opportunity arises, and given the features of the ad-opportunity (such as time, user-demographics, location, browser-type, etc) you want to be able to predict the chance of the user clicking on the ad.  You have access to training examples: the last 3 weeks of historical logs of features of ads served, and whether or not there was a click. Can you think of a way to write code to predict the click-rate using this training data?

Let me see, I would write a program that looks at the trailing 3 weeks of historical logs, and if N is the total number of ad exposures, and k is the number of those that resulted in clicks, then for any ad-opportunity it would predict a click probability of k/N.

Great, and this would be an ML program! The program ingests historical data, and given any ad-serving opportunity, it outputs a click probability. If the historical (over the trailing 3-weeks) fraction of clicked ads changes over time, your program would change its prediction as well, so it’s adapting to changes in the data.
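The aggregate predictor in this exchange can be sketched in a few lines of Python. This is a toy illustration with a hypothetical log format (a list of `(features, clicked)` pairs), not MediaMath’s actual data model:

```python
# A minimal sketch of the aggregate click-rate predictor described above.
# Hypothetical log format: each record is a (features, clicked) pair,
# where clicked is 1 for a click and 0 otherwise.

def aggregate_ctr(logs):
    """Predict a single click probability k/N from historical logs."""
    n = len(logs)                            # N: total ad exposures
    k = sum(clicked for _, clicked in logs)  # k: exposures that got a click
    return k / n if n else 0.0

# Three weeks of toy logs: 3 clicks out of 100 exposures.
logs = [({}, 1)] * 3 + [({}, 0)] * 97
print(aggregate_ctr(logs))  # 0.03
```

Note that the program never uses the features at all, which is exactly why it can only be accurate in aggregate.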

Wow, that’s it? What’s all the fuss about Machine Learning then?

Well this would be a very rudimentary learning algorithm at best: it would be accurate in aggregate over the whole population of ad-exposures. What if you want to improve the accuracy of your predictions for individual ad-opportunities?

Why would I want to do that?

Well if your goal is to show ads that are likely to elicit clicks, and you want to figure out how much you want to pay for showing an ad, the most important thing to predict is the click probability (or CTR, the click-through-rate) for each specific ad opportunity: you’ll want to pay more for higher CTR opportunities, and less for lower CTR opps.

Say you’re running your ad campaign in two cities: San Francisco and Minneapolis, with an equal number of exposures in each city. Suppose you found that overall, 3% of your ads result in clicks, and this is what you predict as the click probability for any ad opportunity. However, when you look more closely at the historical data, you realize that all ad-opportunities are not the same: you notice an interesting pattern, i.e. 5% of the ads shown to users in San Francisco are clicked, compared to only 1% of ads shown to users logging in from Minneapolis. Since there are an equal number of ads shown in the two cities, you’re observing an average click-rate of 3% overall, and …

Oh ok, I know how to fix my program! I will put in a simple rule: if the ad opportunity is from San Francisco, predict 5%, and if it’s from Minneapolis, predict 1%. Sorry to interrupt you, I got excited…

That’s ok… in fact you walked right into a trap I set up for you: you gave a perfect example of an ad-hoc static rule: You’re hard-coding an instruction in your program that leverages a specific pattern you found by manually slicing your data, so this would not be an ML program at all!

So… what’s so bad about such a program?

Several things: (a) this is just one pattern among many possible patterns that could exist in the data, and you just happened to find this one; (b) you discovered this pattern by manually slicing the data, which requires a lot of time, effort and cost; (c) the patterns can change over time, so a hard-coded rule may cease to be accurate at some point. On the other hand, a learning algorithm can find many relevant patterns, automatically, and can adapt over time.

I thought I understood how a learning algorithm works, now I’m back to square one!

You’re pretty close though. Instead of hard-coding a rule based on a specific pattern that you find manually, you write code to slice historical data by all features. Suppose there were just 2 features: city (the name of the city) and IsWeekend (1 if the opportunity is on a weekend, 0 otherwise). Do you see a way to improve your program so that it’s more general and avoids hard-coding a specific rule?

Yes! I can write code to go through all combinations of values of these features in the historical data, and build a lookup table showing for each (city, IsWeekend) pair, what the historical click-through-rate was. Then when the program encounters a new ad-opportunity, it will know which city it’s from, and whether or not it’s a weekend, and so it can lookup the corresponding historical rate in the table, and output that as its prediction.
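The lookup-table learner just described could be sketched like this (again a hypothetical illustration in Python, with an assumed `(features, clicked)` log format):

```python
from collections import defaultdict

def build_lookup_table(logs):
    """Group historical (features, clicked) records by (city, IsWeekend)
    and store each combination's observed click-through rate."""
    counts = defaultdict(lambda: [0, 0])  # key -> [clicks, exposures]
    for features, clicked in logs:
        key = (features["city"], features["is_weekend"])
        counts[key][0] += clicked
        counts[key][1] += 1
    return {key: clicks / total for key, (clicks, total) in counts.items()}

def predict(table, features, default=0.0):
    """Look up the historical rate for a new opportunity's combination;
    fall back to a default for combinations never seen before."""
    return table.get((features["city"], features["is_weekend"]), default)

# Toy data: San Francisco weekend traffic with 1 click in 20 exposures.
logs = ([({"city": "San Francisco", "is_weekend": 1}, 1)]
        + [({"city": "San Francisco", "is_weekend": 1}, 0)] * 19)
table = build_lookup_table(logs)
print(predict(table, {"city": "San Francisco", "is_weekend": 1}))  # 0.05
```

The `default` fallback already hints at one of the weaknesses discussed next: the table has nothing sensible to say about a combination it has never seen.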

Great, yes you could do that, but there are a few problems with this solution. What if there were 30 different features? Even if each feature has only 2 possible values, that is already 2^30 possible combinations of values, or more than a billion (and of course, the number of possible values of many of the features, such as cities, web-sites, etc could be a lot more than just two). It would be very time-consuming to group the historical data by these billions of combinations, our look-up table would be huge, and so it would be very slow to even make a prediction. The other problem is this: what happens when an ad opportunity arises from a new city that the campaign had no prior data for? Even if we set aside these two issues, your algorithm’s click-rate predictions would in fact most likely not be very accurate at all.

Why would it not work well?

Your algorithm has essentially memorized the click-rates for all possible feature-combinations in the training data, so it would perform excellently if its performance is evaluated on the training data: the predicted click-rates would exactly match the historical rates. But predicting on new ad opportunities is a different matter; since there are 30 features, each with a multitude of possible values, it is highly likely that these new opportunities will have feature-combinations that were never seen before.

A more subtle point is that even if a feature-combination has occurred before, simply predicting the historical click-rate for that combination might be completely wrong: for example, suppose there were just 3 ad-opportunities in the training data which had this feature-combination: (Browser = “safari”, IsWeekend = 1, Gender = “Male”, Age = 32, City = “San Francisco”, ISP = “Verizon”), and the ad was not clicked in all 3 cases. Now if your algorithm encounters a new opportunity with this exact feature-combination, it would predict a 0% click-rate. This would be accurate with respect to the historical data your algorithm was trained on, but if we were to test it on a realistic distribution of ad opportunities, the prediction would almost certainly not be accurate.

What went wrong here? Suppose the true click-rate for ads with the above feature-combination is 1%, then in a historical sample where just 3 such ad-opportunities are seen, it’s statistically very likely that we would see no clicks.

But what could the learning algorithm do to avoid this problem? Surely it cannot do any better given the data it has seen?

Actually it can. By examining the training data, it should be able to realize, for example, that the ISP and Browser features are not relevant to predicting clicks (for this specific campaign), and perhaps it finds that there are 1,000 training examples (i.e. ad-opportunity feature-combinations) that match the above example when ISP and Browser are ignored, and 12 of them had clicks, so it would predict a 1.2% click-rate.
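The idea of pooling coarser matches can be sketched like this, using smaller toy numbers than the 1,000-example scenario above (the feature names and data are illustrative, not from any real campaign):

```python
def ctr_ignoring(logs, example, relevant):
    """Pool every historical record that matches `example` on the
    relevant features only, and return the pooled click-rate."""
    matches = [clicked for features, clicked in logs
               if all(features[f] == example[f] for f in relevant)]
    return sum(matches) / len(matches) if matches else None

example = {"city": "San Francisco", "isp": "Verizon", "browser": "safari"}

# Toy data: 3 exact matches with no clicks, plus 97 records that differ
# on ISP and Browser, one of which was clicked.
logs = ([({"city": "San Francisco", "isp": "Verizon", "browser": "safari"}, 0)] * 3
        + [({"city": "San Francisco", "isp": "AT&T", "browser": "chrome"}, 1)]
        + [({"city": "San Francisco", "isp": "AT&T", "browser": "chrome"}, 0)] * 96)

# Memorizing the exact combination predicts a 0% click-rate...
print(ctr_ignoring(logs, example, ["city", "isp", "browser"]))  # 0.0
# ...while ignoring the irrelevant features recovers a 1% estimate.
print(ctr_ignoring(logs, example, ["city"]))  # 0.01
```

Of course, a real learning algorithm must also discover *which* features are irrelevant from the data itself; this sketch only shows why ignoring them helps.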

So your algorithm, by memorizing the click-rates from the training data at a very low level of granularity, was “getting lost in the weeds” and was failing to generalize to new data. The ability to generalize is crucial to any useful ML algorithm, and indeed is a hallmark of intelligence, human or otherwise. For example think about how you learn to recognize cats: you don’t memorize how each cat looks and try to determine whether a new animal you encounter is a cat or not by matching it with your memory of a previously-seen cat. Instead, you learn the concept of a “cat”, and are able to generalize your ability to recognize cats beyond those that exactly match the ones you’ve seen.

In the next post we will delve into some ways to design true learning algorithms that generalize well.

Ok, looking forward to that. Today I learned that generalization is fundamental to machine-learning. And I will memorize that!


Training Goes Global with Consistency and Relevance

August 15, 2016 — by MediaMath


Training is critical to the success of any organization: improved skills and knowledge at all levels increase competency and productivity. Effective training builds skills and subject-matter knowledge, and delivers relevant information to the right audiences.

New Marketing Institute (NMI) has committed to training our industry to increase the growth and preparedness of individuals and organizations as a whole. Our solution for setting learners up for success is our train-the-trainer approach to programming, in which we expand the talent pipeline beyond our own enterprise’s growth by training others in ways that enable them to become trainers. Our ambition is to scale our training and certification programs and facilitate the development of professionals in all facets of digital marketing. With this model in mind, we are expanding our operations on a global scale.

In an article published in April 2016 in TD Magazine, we outlined our solutions and best practices for addressing globalization in training and talent development. We’ve outlined some of its key points here.

Challenges of Global Learning

Globalization and transformative scale bring with them some snags:

  • Localization of content
  • Localization of language and translation
  • Reaching learners at multiple knowledge and skill levels
  • Learners have multiple learning styles

Our Solutions

Adopt a blended learning approach. Treat blended learning like the fully integrated program it is rather than a solution that happens to have a little bit of one thing or another. NMI’s blended learning approaches include facilitator-led live sessions, e-learning, self-directed videos, games, and workbooks; for customized training we offer Q&A sessions and more.

Documentation. A robust offering of documentation is essential to ensuring successful global expansion of programs. This documentation should support all of the functions with a specific audience in mind, while taking into account the characteristics of processes and documentation in different locations.

Trusted advisers. We recommend cultivating local subject matter experts through a train-the-trainer program that trains and identifies local partners and trusted advisers. You can’t do it alone—and you shouldn’t either.

A robust set of resources. Adult learners must be involved in the planning and evaluation of their own instruction and training. Providing relevant and timely materials to accompany training enables learners to be in control. Additionally, updating workbooks and class resources on a regular basis will keep material fresh and training programs current.

Best practices

NMI’s mantra is to “meet the learner where they are.” That mantra speaks to all of these challenges: it reminds us that all learners bring different backgrounds, expertise, experiences, and preferences, and it encourages planners and facilitators to think about all of those levels when implementing programs. Just as MediaMath promises advertisers “outcomes, transparency, and control” in their digital marketing efforts, NMI subscribes to that same promise when designing instructional programs.

Outcomes. We design with outcomes in mind, setting measurable goals and planning instruction that will meet those objectives. Keeping outcomes in mind will drive the direction of the program as well as drive the way a program is evaluated. In our opinion, measurable, data-driven outcomes are essential in scaling success globally.

Transparency. Learners should take an active part in expected outcomes and evaluation. Without transparency into results and evaluation, it is hard for professionals to improve and optimize in their work. Professional development and growth of individuals happens by receiving feedback at multiple points in learning and through subsequent self-reflection.

Control. We invite learners to take the reins of their own growth, development, and learning. Learning programs are designed with the resources, structure, and flexibility to allow learners to be in control of their progress. This control puts learners in the driver’s seat and empowers them to see the value of their development on their own.

For more information about NMI and its offerings, visit our homepage and download our Engagement Packet.


Four Forces That Affect Consumer Adoption and How to Leverage Them in Programmatic Advertising

August 11, 2016 — by MediaMath


This article by Parker Noren, Director, Programmatic Strategy & Optimization, MediaMath, originally appeared in MarTech Advisor. Noren suggests marketers can increase customer adoption through intelligent audience targeting and the right advertising strategy. Read an excerpt below: 

Attracting net new consumers generally produces greater brand growth than expansion among current customers. Yet, relatively high acquisition costs often result in prospecting being deprioritized in programmatic campaigns. We’ll review how you can improve prospecting efficiency by aligning your execution with how consumers adopt brands.

In many ways, programmatic is still in its infancy despite its genesis being more than a decade old. Consider a Forrester report from last year in which only 23 percent of respondents said they understand programmatic buying and actually use it to execute campaigns. As the industry rapidly pursues new technology and ways of connecting with consumers, it remains inwardly focused. This is fair in part—the way we engage with consumers through digital is unique from mass broadcast channels. However, do not forget that marketing and consumer theory still applies. We can improve the success of our programmatic campaigns by applying a deeper understanding of how consumers adopt brands.

Market research and consulting firms have coined and validated many frameworks for explaining why consumers will adopt a product or service. The Progress Making Forces—developed by Bob Moesta of The Re-Wired Group and core to Jobs-To-Be-Done Theory—provides the most clarity for understanding why consumers will adopt a product or service and is applicable regardless of your brand’s industry or stage in lifecycle. It places the motivations, frustrations and anxieties of the consumer at the center.

Four forces determine whether a consumer will change her current behavior and adopt something new. Two motivate her to switch to something new, while the other two keep her set in current behaviors (see Figure 1). The Push of the Situation is the result of dissatisfaction with how current solutions fulfill a need, and the Magnetism of the Solution is how well your product or service resolves this circumstance of struggle. Even with strong motivating forces, a consumer may still not adopt a new solution. It’s easy for the consumer to fall back on the known solution due to habit, and there is risk and anxiety related to trying something new—no matter how compelling the new solution. These are the two negative forces that discourage adoption.

Figure 1: The Progress Making Forces

Marketers can improve their adoption rates by understanding their prospective customer’s four forces. Programmatic campaigns—in combination with smart tactics on-site—can be directly informed by this understanding, and ultimately drive greater incrementality as a result.

Read the rest of the article at MarTech Advisor.


Programmatic Education – For Today and into the Future

July 19, 2016 — by MediaMath


With programmatic mechanisms now rapidly expanding beyond display and video inventory into native, digital audio, digital out-of-home, and even linear TV inventory, the need to stay abreast of these new tool sets is greater than ever. As with any evolution of technology, there are opportunities, challenges, and limitations. And there appears to be a wide disparity between those who “get it” and those who don’t. The volume of news and jargon around “programmatic” concepts is so vast it can be overwhelming to some while appearing overstated to others.

IAB is working to develop comprehensive, broadly accessible programmatic training to educate the marketplace around programmatic processes, tools, and strategic capabilities. We sat down with Ben Dick, Director of Industry Initiatives, who manages the IAB’s programmatic and attribution initiatives, to discuss what he and the Programmatic Council are focusing on.

  • What is the current state of the programmatic marketplace?

Marketplace activity is as strong as it’s ever been. According to a recent eMarketer study, 2015 was the first year that discretionary investment in programmatically monetized inventory surpassed reserve buying across a few key areas, including display, video, native, social, and even “sponsorship” activity. And it’s expected to keep growing, plateauing at around 70% over the next few years. While there is most definitely still a place for reserve buying on media plans, execution and optimization across automated platforms is very much the “new normal” for digital strategists.

  • Who needs to be trained and educated around programmatic and what does that mean for employers?

Because of the central role that automated tools are playing across buying and selling, everyone in the strategic development process (clients, research teams, media strategists, analytics teams) as well as participants in the broader digital supply chain (operations, product, sales) need to be, at bare minimum, conversant with the tools and capabilities that programmatic affords.

And it’s not just the hands-on practitioners. Support staff are affected as well. For example, programmatic concepts are bleeding into how finance teams manage reconciliation and billing. These folks need to understand the new pricing, cost models, and platforms of record for this media delivery. HR teams need to recruit talent in a completely different way and identify skill sets from more technical backgrounds. The C-suite needs to acknowledge the operational changes programmatic processes necessitate as well as the investments needed to secure top talent.

As recently as a year ago, when I was working at a trading desk and continuously onboarding new traders, there was very little comprehensive third-party training that managers could turn to. And because traders need to be conversant with a broader array of concepts than “traditional digital” planners, i.e. cross-functional across media strategy, analytics, and ad operations, the learning curve is steeper and the time requirements exponentially greater. Often new traders need to wait four to six months before they can touch a bidding UI, whereas assistant media planners can start in a few weeks. While training is often rewarding work, it creates a burden for the more senior managers.

  • What types of programs has IAB created to meet the industry’s need for Programmatic education and professional development?

Right now, IAB offers a full-day, in-person Advanced Programmatic class that has been touring the country. It’s a very popular program created for all players in the marketplace. The interactive course covers all steps of the media process, including stages of selling and buying, SSPs, DSPs, “Programmatic Guaranteed”, the tech stack, and much more.

In August, the class will be hosted at MediaMath’s New Marketing Institute in New York City, with additional stops in Chicago this July and in Dallas and Atlanta in the fall.

Additionally, the IAB Programmatic Council and IAB Learning & Development team have been working diligently to develop a deeper interactive, online curriculum around specific buy- and sell-side concepts to help expand the accessibility of the program.

  • What do buyers and sellers need to know about programmatic tools and capabilities to be successful in 2016?

If I had to highlight one thing, it’s that successful programmatic strategists think of their jobs more as the application of data and software to “guide a conversation” with consumers, rather than using it to find and target them at the right moment with the right creative.

The word “conversation” implies that there’s a feedback loop or exchange between advertiser and consumer, which is often forgotten. Programmatic tools allow for always-on, real-time consumer addressability across devices. This data travels both ways, and allows consumers to provide an intimate portrait of their preferences relative to your product. This enables advertisers to constantly evolve their approach to ensure their messaging is as relevant as possible while telling a compelling story. This often translates into cost efficiencies and improved campaign ROAS (return on advertising spend).

Learn more about the IAB Advanced Programmatic Course