ARF thought leader Horst Stipp wrote me recently about Creative, Targeting, and Context, three of the most powerful levers the buy side can pull to make advertising work harder per dollar invested. Horst has done some of the most compelling work in recent years on bringing context effects to industry attention. Now he's musing about how to put that work into proper perspective alongside the other top levers available to marketers, and he and I love evolving our ideas together. Today's column is, in a sense, my answer to Horst's most recent email to me.
I put this slide together a few years ago to try to answer the same question. What are these levers each worth, relative to one another?
Across all the campaigns (around 80 brands, several hundred campaigns) aided by TRA on my watch, the average improvement in incremental sales ROI was +28%. This was caused by downshifting on demos and upshifting on reaching purchasers. In many of those cases the purchasers we were trying to reach were Heavy Swing Purchasers (HSPs) -- people who buy the brand only occasionally -- which Joel Rubinson has given the much better name "Movable Middle."
The context effects shown on this slide are based on two studies of the increase in advertising effectiveness when an ad is placed in an environment whose psychological characteristics align with the ad's, as measured by the RMT DriverTag system. The +36% figure is the average of ROI effects across studies done by Nielsen NCS and others. In this case we also have branding effects, which tend to be higher, based on a study done for one of the world's largest advertisers by 605.
Interestingly, the context effects show a higher impact than the targeting effects. But this does not tell the whole story, as I'll explain after we finish with this slide.
The one creative effect shown on the slide is what I call "creative weeding": yanking specific creative executions as soon as ROI data show they are not producing the desired sales effects. How soon that can happen depends on the purchase cycle of the category involved; on average, the yanking takes place after about three months. This weeding effect reflects removal of the bottom ad performers, as opposed to the creation of terrific new ads. But notice that it has more positive impact than either targeting or context. How much greater would the impact be if we were talking about the impact of a terrific new ad?! It could be infinitely higher, if the ads being used were all duds. Logically, better new creative has to be the most important lever to pull. Too bad one never knows whether an ad is better until it's too late.
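The weeding rule described above can be sketched in a few lines of code. This is a toy illustration under my own assumptions (hypothetical data, an ROI cutoff of 1.0, a twelve-week observation window), not TRA's actual method:

```python
# Toy sketch of "creative weeding": pull executions whose measured
# incremental-sales ROI falls below a cutoff, but only once the
# purchase-cycle observation window (~3 months here) has elapsed.
# All names, fields, and thresholds are illustrative assumptions.

def weed_creatives(executions, roi_cutoff=1.0, min_weeks_on_air=12):
    """Split executions into (keep, weed) lists.

    executions: list of dicts with 'name', 'weeks_on_air', 'sales_roi'.
    """
    keep, weed = [], []
    for ad in executions:
        if ad["weeks_on_air"] >= min_weeks_on_air and ad["sales_roi"] < roi_cutoff:
            weed.append(ad)   # enough data and underperforming: pull it
        else:
            keep.append(ad)   # performing well, or too early to judge
    return keep, weed

campaign = [
    {"name": "Spot A", "weeks_on_air": 14, "sales_roi": 2.1},
    {"name": "Spot B", "weeks_on_air": 14, "sales_roi": 0.4},
    {"name": "Spot C", "weeks_on_air": 6,  "sales_roi": 0.7},  # too new to weed
]
keep, weed = weed_creatives(campaign)
print([ad["name"] for ad in weed])  # ['Spot B']
```

Note that Spot C underperforms so far but survives: its purchase-cycle window hasn't closed, which is exactly why weeding takes months.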
Why is this the case? Why doesn't pretesting tell us before the first media dime is spent? The main reason is that penny-wise, pound-foolish thinking means pretesting is not always used nowadays. This is an easily fixed problem. Pretesting can be very quick, and its typical cost range of $35K-$65K is small next to what will be spent if a dud ad airs and runs for three months before it is weeded -- typically a waste of at least $3,000,000 in media investment.
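The arithmetic behind that comparison is worth making explicit. Using only the figures above:

```python
# Back-of-envelope comparison from the figures above: pretesting cost
# versus the media wasted if a dud ad runs a full three months before
# being weeded. The $3M waste figure is the column's own estimate.
pretest_cost_low, pretest_cost_high = 35_000, 65_000
wasted_media = 3_000_000  # ~3 months of spend behind a dud ad

# Even at the high end, pretesting costs a small fraction of the downside.
print(round(wasted_media / pretest_cost_high))  # ~46x the pretest cost
print(round(wasted_media / pretest_cost_low))   # ~86x
```

In other words, the potential waste outweighs the cost of pretesting by roughly 46-to-1 even at the top of the pretesting price range.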
But the other reason is that pretesting is not a perfect predictor of ROI. Done by one of the best pretesting companies, it is nearly perfect at predicting persuasion effects, but not at predicting immediate sales effects. This is because persuasion is the key metric in branding, the turning point in the funnel journey, and all branding effects need time to simmer. In the work published by the ARF and done by TRA and Mars, ads that had all been pretested by the best pre-testers showed the following average results: a third had no sales effect at all in the first months of the campaign and were weeded; a third had a decent sales effect; and a third had an excellent sales effect.
Conclusion: Creative is what deserves the most attention. Research that helps the creative process deserves much more attention than it gets.
Let's look at the final lever on the slide above. Fast reach -- let's say reach per average day -- has more sales effect than slow reach, and far more sales effect than low reach for obvious reasons. But why should the speed of the reach make a difference? It's because of that phenomenon Erwin Ephron told us about: recency. The faster the reach, the more recency, no matter when the prospect goes shopping. This is demonstrated in an ARF study I did with Dave Poltrack (CBS) and Leslie Wood (Nielsen NCS):
Look at how the short green lines of the fast-reach (high-rated) schedule hover above the short red lines of the slow-reach (low-rated) schedule. This is the effect of recency.
Importantly, fast reach can be achieved not only by using high-rated programming but also by using a fast-reach optimizer, such as Simulmedia's VAMOS system or our own OptiBrain system.
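What a fast-reach optimizer does, at its core, is front-load unique reach into the earliest days of a flight. The sketch below is a generic greedy illustration of that idea, with made-up audience data; it is not how VAMOS or OptiBrain actually work:

```python
# Generic greedy reach optimizer (a toy, not VAMOS or OptiBrain):
# at each step, buy the spot that adds the most not-yet-reached
# audience members, front-loading unique reach into the flight.

def greedy_fast_reach(spot_audiences, n_spots):
    """spot_audiences: dict mapping spot name -> set of audience IDs.
    Returns (picks, total unique reach) for the first n_spots buys."""
    reached, picks = set(), []
    pool = dict(spot_audiences)
    for _ in range(n_spots):
        # Pick the spot contributing the most IDs not already reached.
        best = max(pool, key=lambda s: len(pool[s] - reached))
        picks.append(best)
        reached |= pool.pop(best)
    return picks, len(reached)

spots = {
    "Prime drama":  {1, 2, 3, 4, 5},
    "Late news":    {4, 5, 6},
    "Morning show": {6, 7, 8},
}
picks, reach = greedy_fast_reach(spots, 2)
print(picks, reach)  # ['Prime drama', 'Morning show'] 8
```

Notice the greedy pass skips "Late news" even though it is the second-largest spot, because most of its audience was already reached -- duplication is the enemy of fast reach.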
Now let's get back to the pin we put in it above and compare targeting and context effects for size. There are three different classes of targeting:
- Targeting by buying whole programs or websites (content targeting)
- Targeting based on deterministic 1:1 purchase/intent/motivation data in an addressable medium (ID targeting)
- Lookalike targeting ("LAL")
The average +28% ROI lift shown above is for content targeting. When the movable middle is targeted deterministically based on purchase history in an addressable medium, the results can be far higher, as we can see in the following slide:
Another case where deterministic addressable targeting was used, this time targeting IDs of people who (based on their online content consumption) had the same motivations as the ads used, also showed a quintupling effect: