

Trust and Transparency at the Forefront of Advertising

October 3, 2022


by Justin Adler-Swanberg, Director, Marketplace Quality 

 

MediaMath has consistently been at the forefront of efforts to support trust and transparency in the digital advertising ecosystem.  

Early in 2020, at the start of the COVID-19 pandemic, and again later that spring with the response to the death of George Floyd and the Black Lives Matter movement, it became clear that taking a position on preventing misinformation and disinformation, while supporting quality news content and avoiding over-blocking it, was an important way MediaMath could help its clients and provide a social benefit. Through our Purpose Driven Advertising initiative, we focused on giving clients tools both to prevent the spread of misinformation and disinformation and to engage with trusted news sources. Read more in the MediaMath blog post on Purpose Driven Advertising. The importance of this position became even clearer with the events of January 6. Read more in the MediaMath blog post "Stop Financing Misinformation and Disinformation Fueling Chaos."

Combatting the Rise of Misinformation and Disinformation

2022 has seen greater public awareness of and engagement with issues of trust and transparency, particularly in relation to misinformation and disinformation. At the RSA Conference in San Francisco earlier this summer, an event more typically focused on cybersecurity, disinformation and "information disorder" were the topic of the final keynote speech, a sign of how broadly they have come to be identified as a threat vector.

Why has it become important? 

There is significant social harm attached to both misinformation and disinformation. That harm directly affects the end user or consumer in the ad ecosystem, as well as society at large. Companies positioned to facilitate or prevent their spread will find it harder to remain neutral bystanders as activists, reporters, politicians, and the general public increasingly scrutinize the role that advertisers and platforms play in disseminating harmful misinformation and disinformation. This spread represents a Brand Safety risk to advertisers, who might be associated with misinformation and disinformation content through its monetization, and it also represents a Brand Safety and business risk to any company viewed as participating in or facilitating that spread, including providers of advertising technology infrastructure who may be more accustomed to a lower public profile.

What can advertisers do? 

Misinformation and disinformation present a challenge because there is no universal methodology for assessing them and no central list of sources and bad actors. Vigilance and a proactive approach are therefore required. It is incumbent on each party to take a clear position against misinformation and disinformation, to craft policies that define that position and the actions to be taken to prevent monetization and propagation, and to use and make available prevention resources and tools. These tools typically fall within the category of Brand Safety protections, often provided by contextual data providers.

In 2020 we learned the value and importance of new machine-learning approaches to contextual targeting: they move beyond broad-brush methods, such as older keyword blocking, toward more targeted, customized solutions that prevent specifically undesirable content while permitting important and valuable news content, which in turn helps prevent misinformation and disinformation. In parallel, we saw the rise of Brand Suitability, a conceptual expansion beyond Brand Safety promoted by the 4As and GARM (the Global Alliance for Responsible Media) through the GARM Brand Safety Floor and Brand Suitability Framework. GARM recently announced the expansion of this framework to encompass protections against misinformation.

MediaMath Takes a Stand and Provides Solutions 

In late 2020, we began our ongoing partnership with the Global Disinformation Index (GDI), an organization that confronts the lack of universal disinformation standards with an approach that identifies what it calls "adversarial narratives." The benefit of this approach is that it seeks a method of analysis that avoids bias based on factors such as political viewpoint. MediaMath incorporates sites GDI identifies as high risk into our Universal Block List, which provides protection across our network to all clients.
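To make the mechanics concrete, below is a minimal sketch of how a pre-bid block-list check of this kind typically works; the list contents, function names, and domains are illustrative assumptions only, not MediaMath's actual implementation.

```python
# Minimal illustrative sketch of a pre-bid block-list check.
# UNIVERSAL_BLOCK_LIST and is_blocked are hypothetical names; the domains are fake.
from urllib.parse import urlparse

# Domains flagged as high risk (e.g., sourced from a disinformation risk feed)
UNIVERSAL_BLOCK_LIST = {
    "example-disinfo-site.com",
    "another-flagged-domain.net",
}

def is_blocked(page_url: str) -> bool:
    """Return True if the bid opportunity's page falls on a blocked domain."""
    domain = urlparse(page_url).netloc.lower().removeprefix("www.")
    # Block exact matches and any subdomain of a listed domain.
    return any(domain == d or domain.endswith("." + d) for d in UNIVERSAL_BLOCK_LIST)

# Example: skip bidding on a flagged page before any other evaluation runs.
if is_blocked("https://www.example-disinfo-site.com/article/123"):
    print("Do not bid: domain is on the block list")
```

The key design point this sketch illustrates is that the check happens network-wide and before any campaign-level targeting, so every client inherits the protection by default.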

The value of this partnership was shown earlier this year, when Russia invaded Ukraine in February. Immediately, the ongoing issue of state-sponsored disinformation came to the fore, and many information sources came under more serious scrutiny. It had long been known that certain sources were outlets for state-sponsored disinformation, but suddenly it became critical for advertisers to identify and remove such outlets from their media mix. Fortunately, through MediaMath's partnership with GDI, we had long since included these known state-sponsored bad actors in our Universal Block List, so when the issue gained widespread attention, MediaMath already had robust protection in place covering all our clients.

Additionally, MediaMath provides protections against misinformation and disinformation while also fostering quality news by working with NewsGuard via data segments provided by contextual partners such as Peer39 and Comscore. The benefit of NewsGuard's approach is that it, too, seeks an objective way to combat the spread of misinformation and disinformation, applying journalistic standards in the evaluation of news sources.

Ongoing protection is also available on the MediaMath Platform via Brand Safety partners such as Oracle (which has also partnered with the Global Disinformation Index), DoubleVerify, Integral Ad Science, and Semasio. A number of our contextual providers offer segments that leverage GARM's Brand Safety Floor and Brand Suitability Framework, and these are available for all of our clients to select. Read more about Brand Safety and Brand Suitability on the MediaMath Platform.

Europe Enhances Protections Against Disinformation 

Starting in late 2021 and continuing through 2022, MediaMath, at the invitation of the IAB EU and the European Commission (EC), participated with numerous other stakeholders from across the digital ecosystem, including well-known online platforms, in working groups chaired by the EC and overseen by a neutral "honest broker." The groups focused on crafting a revision to the 2018 Code of Practice on Disinformation, a voluntary self-regulatory framework. The purpose was to produce a broader document suited to the current times and the diversity of players that make up the global digital environment, with the goal of enhancing policies, tools, transparency, and other efforts to demonetize and further prevent the spread of harmful misinformation and disinformation. MediaMath focused its efforts on the sections covering scrutiny of ad placements and political advertising, two areas that particularly pertain to online advertising and the services MediaMath offers. The outcome was a strengthened 2022 Code of Practice on Disinformation, to which MediaMath was proud to be a signatory as a reflection of our strong commitment. More information about the Code of Practice can be found here.

As described in the strengthened Code of Practice on Disinformation, the European Democracy Action Plan (EDAP) defines misinformation as "false or misleading content shared without harmful intent though the effects can be still harmful, e.g. when people share false information with friends and family in good faith." Disinformation is defined as "false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm." Both are problematic and require prevention efforts, and misinformation may originate from disinformation.

Following the rollout of the new Code of Practice, the IAB EU recently held its event "The Great Debate: Trust and Transparency in Digital Advertising," in which MediaMath participated on the Disinformation panel. A recording can be viewed here.

MediaMath is also working with the IAB EU and other stakeholders to contribute to a Guide to Disinformation, which is currently being finalized for distribution in the near future. 

Enhancing and Supporting Transparency in Political Advertising 

As previously mentioned, political advertising is one of the areas in which MediaMath contributed to the Code of Practice, consistent with MediaMath's pre-existing commitment to transparency in the political advertising space. To give the public the information it needs to make informed decisions, it is important to maintain baseline transparency standards for political advertising. To this end, as part of MediaMath's ongoing partnership with the Digital Advertising Alliance (DAA) and the Digital Advertising Alliance of Canada (DAAC) and our adherence to these organizations' principles, MediaMath supports and requires the use of the purple Political Ads icon for political advertising in the US and Canada. The icon gives consumers an easy way to link to background information on the source of a political ad. Clients can opt in to the Political Ads icon within the MediaMath platform with minimal effort; for more information, please contact your MediaMath representative. The DAA recently published an informative blog post describing its efforts.

MediaMath’s close partnership with the DAA/DAAC on political advertising is reflected in a recent webinar the DAA put together with Venable LLP, MediaMath, and Campaigns & Elections magazine, which can be viewed here.

Read about the DAA Self-Regulatory Principles in relation to political advertising and the DAAC’s companion Political Ads Principles & Guidelines.  

Future Developments 

By the beginning of 2024, a new level of transparency will be expected for all advertising in the European Union under the EU’s new Digital Services Act. Discussions are ongoing with the IAB EU and various stakeholders, including MediaMath, about the precise form and mechanism of compliance that will be needed; however, the required disclosures will include baseline transparency about the source of the advertisement (on whose behalf the ad is displayed) and information about how the recipient of the ad was determined. As with our other transparency efforts, MediaMath is working to stay abreast of these new developments.
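Purely as an illustration of the kinds of disclosures described above, the sketch below models a simple per-impression transparency record; the field names and structure are hypothetical assumptions, not a defined DSA schema or a MediaMath feature.

```python
# Illustrative sketch only: a minimal, hypothetical record for the ad-transparency
# disclosures described above. Field names are assumptions, not a defined schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AdTransparencyDisclosure:
    is_advertisement: bool    # flag the content as an ad
    advertiser: str           # on whose behalf the ad is displayed
    payer: str                # who paid for the ad, if different
    targeting_summary: str    # main parameters used to determine the recipient

disclosure = AdTransparencyDisclosure(
    is_advertisement=True,
    advertiser="Example Brand Ltd.",
    payer="Example Brand Ltd.",
    targeting_summary="Contextual: automotive content; geography: Germany",
)

# Serialize so the record could travel alongside the ad markup or a user-facing notice.
print(json.dumps(asdict(disclosure), indent=2))
```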

Trust and Transparency  

As you can see from all of the above, trust and transparency remain key elements of MediaMath’s offerings to clients and our positioning in the larger ad ecosystem. We are proud of our commitments to the fight against misinformation and disinformation, the support of quality news, and the enhancement of transparency in advertising. We are uniquely positioned not only to provide the best protections and support for our clients, but also to contribute to the larger social good that these initiatives advance. We will remain your trusted partner as these topics continue to evolve.