With digital nonpremium video taking billions of dollars that the television networks feel would have been better spent with them, the networks have taken real action to give the industry an Impression Quality metric that could counterbalance CPM during media selection. Their hypothesis, supported by most of the evidence, is that television impression quality is on average higher than that of the same ads running in digital videos made by nonprofessionals, especially where such ads are typically served in a "skip in X seconds" countdown mode.
As presaged by the Advertising Research Foundation in its creation of the ARF Model, there are a number of different levels at which impression quality can be measured, including viewability, sophisticated fraud detection, viewthrough, attention, ad recall, ad liking, ad brand recognition, clickthrough, search for brand, website visit, persuasion, sales, and more. Now that there is clear demand for impression quality, purveyors of all of these metrics are beating their respective drums. Even within a single metric, attention for example, there are as many ways of measuring it as there are suppliers. Nevertheless, the television networks and many others are finally investing time and money in this age-old need for a way of valuing impressions in one media type versus another.
Since there is a very strong interaction effect between a specific ad and a specific context (one that can by itself cause swings of 35% in sales lift), to do this for the maximum benefit to the individual advertiser, ads and media contexts need to be measured together, under natural conditions, whichever quality metrics are collected. In the next few years we shall probably see the beginnings of these more perfect systems, but in the meantime, in order to make media advertising investments with the highest chance of success, the industry will probably continue to test contexts and ads separately and then combine the results. This is the way it's being done right now. Ads are pretested against attention and emotion plus verbal response measures, and some ads are given more media dollars while others are run very little if at all, a process I've always thought of as Creative Weeding. At TRA, pretesting against sales effects showed that Creative Weeding increased ROI by 52%, more than targeting Heavy Swing Purchasers, which averaged a +38% ROI lift.
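To make the weeding step concrete, here is a small sketch in Python. The ads, scores, cutoff, and proportional allocation rule are all hypothetical illustrations, not TRA's actual procedure.

```python
# Hypothetical illustration of Creative Weeding: shift a fixed media
# budget toward the ads with the strongest pretest scores. Ads, scores,
# and the cutoff are invented; TRA's actual method differed in detail.

pretest_scores = {"ad_A": 0.82, "ad_B": 0.64, "ad_C": 0.31, "ad_D": 0.18}
total_budget = 1_000_000  # dollars
cutoff = 0.40             # ads scoring below this are "weeded" out

survivors = {ad: s for ad, s in pretest_scores.items() if s >= cutoff}
weight_sum = sum(survivors.values())

# Surviving ads receive budget in proportion to their pretest scores;
# weeded ads run little or not at all.
allocation = {ad: total_budget * s / weight_sum for ad, s in survivors.items()}

for ad, dollars in sorted(allocation.items()):
    print(f"{ad}: ${dollars:,.0f}")
```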
Today there is far more use of attention and emotion in pretesting, and less dependence on the questionnaire measures, which are usually still collected as well.
In the current 8-level ARF Model, Attention is the 4th level, so at perhaps a decade per level we can imagine finally arriving at the perfect ROI optimization system around the year 2060.
Why not do it all at once, in this decade?
This was the idea behind my summation comment at the ARF AxS conference Attention Panel, recommending that we stop focusing specifically on attention and instead test "cocktails" of multiple metrics across the whole ARF Model, optimizing the weights on the metrics to calibrate to ROI. In other words, give each metric the weight that will yield the best prediction of sales lift from the whole cocktail, each metric earning its weight through multiple regression analysis.
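To make the arithmetic concrete, here is a minimal sketch of that calibration, assuming a historical table in which each row is a campaign flight with its metric scores and a measured sales lift. The metric names, the data, and the use of ordinary least squares are my illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of calibrating a metric "cocktail" to sales lift with
# multiple regression. Metric names and data are invented; any standard
# regression package would serve equally well.

import numpy as np

# Each row is one campaign flight; columns are normalized quality
# metrics (viewability, attention, ad recall, brand recognition).
X = np.array([
    [0.90, 0.70, 0.50, 0.60],
    [0.60, 0.40, 0.30, 0.50],
    [0.80, 0.80, 0.60, 0.70],
    [0.50, 0.30, 0.20, 0.40],
    [0.70, 0.60, 0.50, 0.60],
    [0.40, 0.50, 0.40, 0.30],
    [0.85, 0.75, 0.55, 0.65],
])
y = np.array([4.2, 1.8, 4.9, 1.1, 3.3, 2.0, 4.5])  # measured sales lift (%)

# Ordinary least squares with an intercept: each metric "earns" its
# weight from its contribution to predicting sales lift.
X1 = np.column_stack([np.ones(len(X)), X])
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, w in zip(["intercept", "viewability", "attention",
                    "recall", "brand_recog"], weights):
    print(f"{name}: {w:+.2f}")

# The whole cocktail's prediction for a new flight:
new_flight = np.array([1.0, 0.75, 0.65, 0.55, 0.60])  # leading 1.0 = intercept
print(f"predicted sales lift: {new_flight @ weights:.2f}%")
```

In practice the dataset would span many brands and flights; the point is simply that the regression, not any one vendor, assigns each ingredient its earned weight.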
This idea seemed to carry the day at the AxS conference, and people have been writing and talking to me about the cocktail idea daily ever since.
Carrying out the project of analyzing the various metrics against sales effects is not as daunting as it sounds. A number of companies are continually updating their estimates of the quality of the main media context choices, and many brands have pretest scores on many of their ads. RMT is the one company providing an estimate of the quality of the interaction between specific ads and specific contexts, so we see it as a necessary ingredient in any cocktail, bringing in the interaction effect between a specific ad and a specific program, website, or app. The straight creative scores, where available, can represent the ad itself, and the average context scores can represent those contexts.
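In cocktail terms, that gives three ingredient types per placement: the ad's own score, the context's average score, and the measured ad-by-context interaction score. A tiny sketch, with hypothetical names and values:

```python
# Sketch of one cocktail row per (ad, context) placement. Where a
# measured ad-by-context interaction score (RMT-style) is unavailable,
# a crude multiplicative proxy is sometimes substituted; the measured
# score is preferred, since fit is not simply ad quality times context
# quality. All names and values here are hypothetical.

def cocktail_row(ad_score, context_score, interaction_score=None):
    # Fall back to a product proxy only if no measured interaction exists.
    if interaction_score is None:
        interaction_score = ad_score * context_score
    return [ad_score, context_score, interaction_score]

print(cocktail_row(0.72, 0.65, 0.81))  # measured interaction available
print(cocktail_row(0.72, 0.65))        # proxy only: 0.468
```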
With sales as the criterion measure, the multiple regression is powered up. The first round of analytics will tend to show that some ingredients are more predictive of sales than others. The test rig can then be adjusted to drop the metrics that make no difference to sales results, and new metrics can be tested to see whether they improve the predictive power of the current cocktail. In time, the efficiency of the system can itself be optimized, arriving at the fewest and least costly measurements that still do the greatest good in terms of maximizing sales.
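One way to picture that adjustment of the test rig, assuming familiar open-source tooling (scikit-learn) and invented data: compare cross-validated predictive power with and without each candidate metric, and drop any metric whose absence costs nothing.

```python
# Sketch of pruning the cocktail: drop any metric whose removal does not
# hurt cross-validated prediction of sales lift. Metric names, simulated
# data, and the 0.01 tolerance are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
metrics = {
    "attention":    rng.uniform(0, 1, n),
    "viewability":  rng.uniform(0, 1, n),
    "noise_metric": rng.uniform(0, 1, n),  # carries no sales signal
}
# Simulated sales lift driven by attention and viewability only.
y = 3.0 * metrics["attention"] + 1.0 * metrics["viewability"] \
    + rng.normal(0, 0.3, n)

def cv_r2(cols):
    X = np.column_stack([metrics[c] for c in cols])
    return cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()

full = cv_r2(list(metrics))
for name in metrics:
    reduced = cv_r2([c for c in metrics if c != name])
    verdict = "drop" if reduced >= full - 0.01 else "keep"
    print(f"{name}: full R2={full:.3f}, without={reduced:.3f} -> {verdict}")
```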
The media impact score produced by the cocktail will be Predicted ROI. As more and more brands begin to use the method, machine learning can adapt the weights on the ingredient metrics to be more predictive for specific verticals, and someday customize them down to the brand level.
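A simple way to picture that adaptation, assuming a pooled dataset tagged by vertical: start from shared weights, then refit within each vertical once it has enough campaigns. The grouping, column meanings, and 30-campaign threshold below are illustrative; a production system might instead use hierarchical models or gradient boosting.

```python
# Sketch of adapting cocktail weights by vertical: fit shared weights on
# all data, then refit per vertical where enough campaigns exist. The
# verticals, data, and min_rows threshold are assumptions.

import numpy as np

def fit_weights(X, y):
    # OLS with intercept, as in the earlier cocktail sketch.
    X1 = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def vertical_weights(X, y, verticals, min_rows=30):
    shared = fit_weights(X, y)
    by_vertical = {}
    for v in set(verticals):
        mask = np.array([g == v for g in verticals])
        # Small verticals fall back to the shared weights.
        by_vertical[v] = (fit_weights(X[mask], y[mask])
                          if mask.sum() >= min_rows else shared)
    return shared, by_vertical

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (120, 3))            # three cocktail metrics
verticals = ["cpg"] * 80 + ["auto"] * 40   # invented vertical tags
y = X @ np.array([2.0, 1.0, 0.5]) + rng.normal(0, 0.2, 120)

shared, per_vertical = vertical_weights(X, y, verticals)
print("shared:", np.round(shared, 2))
print("auto:  ", np.round(per_vertical["auto"], 2))
```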
Again, this is not a heavy lift compared with, for example, the establishment of a new 5,000-home calibration panel for the big data analytics suppliers to use, said to take five years at a cost of $90 million. The first cocktail analysis project might cost five figures for a DTC brand, or for any brand that has its own sales data. A brand or its agency might already subscribe to some of the quality metrics, and brands usually already have methods in place for measuring their own sales, even if through third-party data. Networks are willing to do value-add deals to help such research along, since they expect to be vindicated in terms of quality leadership.
Let the cocktail party begin!