CIMM CEO and Managing Director Jane Clarke, Sequent Partners' Jim Spaeth and Alice Sylvester, and Janus President Howard Shimmel have conducted a much-needed and powerful study of nine companies using set-top box and/or Smart TV data in ROI attribution.
These thoughts about the study reflect my own thinking and are not ideas brought out in the CIMM paper. The CIMM paper is factual and impartial.
Stepping back for a moment, this CIMM report is one of a series. The overall series intends to study the different ways that suppliers approach attribution, in terms of data used, how data are transformed, attribution equations and other algorithms.
This particular report focuses on the data used, specifically the ad occurrence data and the audience exposure data.
The ad occurrence data turn out to have a substantial impact on the final results. Ad occurrence data have typically been around 95% accurate at least since modern automated monitoring came into existence in the 1980s. Nielsen, with its Monitor-Plus and AdViews technology, and Kantar, with its own technology, have drawn little criticism over the years. It was surprising, then, to see the four current sources of monitoring data getting such different results from one another. Eye opening.
Network traffic logs and Mediaocean as-run data were used as the benchmarks. The industry knows that even those sources contain errors. So when Sequent and Janus dissect the spots seen by one service but not another, it's unnerving. If, in the 21st Century, we cannot even depend on this old stalwart, then what can we depend upon?
Perhaps the most troubling thing is the lack of time synchronization. The analyst team had to use a plus-or-minus-five-minute rule to get the maximum number of matches between ad occurrence measurement systems.
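To make concrete what that kind of fuzzy matching entails, here is a minimal sketch. Nothing below comes from the CIMM report; the field names and matching criteria are my own assumptions. It simply pairs spots from two occurrence logs when network and creative agree and the air times fall within a five-minute window of each other.

```python
from datetime import datetime, timedelta

TOLERANCE = timedelta(minutes=5)

def match_occurrences(source_a, source_b, tolerance=TOLERANCE):
    """Pair spots from two monitoring logs when network and creative agree
    and the logged air times differ by no more than the tolerance."""
    matches, unmatched_b = [], list(source_b)
    for spot_a in source_a:
        for spot_b in unmatched_b:
            if (spot_a["network"] == spot_b["network"]
                    and spot_a["creative"] == spot_b["creative"]
                    and abs(spot_a["air_time"] - spot_b["air_time"]) <= tolerance):
                matches.append((spot_a, spot_b))
                unmatched_b.remove(spot_b)
                break
    return matches

# Hypothetical records: the same spot, logged with four minutes of clock skew.
log_a = [{"network": "ESPN", "creative": "SPOT123",
          "air_time": datetime(2021, 3, 1, 20, 14)}]
log_b = [{"network": "ESPN", "creative": "SPOT123",
          "air_time": datetime(2021, 3, 1, 20, 18)}]
print(len(match_occurrences(log_a, log_b)))  # 1 match within the window
```

If every spot carried an Ad-ID, the join could be an exact match on the identifier rather than a tolerance on the clock, which is exactly the point of the next paragraph.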
To make this problem go away we need to finally pull the trigger on 100% Ad-ID invisible/inaudible tags in all the ads that get trafficked. Otherwise ROI measurement and/or modeling will be built upon sand.
Howard Shimmel writes about the draft of this article: "The section on Ad-ID is really important. For the brand campaign we ran, the exposure providers found spots that were not captured in Mediaocean. Could be that media companies are running spots they're not able to reconcile with Mediaocean, and not getting paid for. Even though it was only 1% of spots, 1% of $50 billion in national TV advertising is $500 million worth of spots potentially not being paid to media companies. Having Ad-ID could remove any potential errors, also allow financial reconciliation faster."
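For scale, the arithmetic in that quote, using only the figures Shimmel cites, works out as follows:

```python
national_tv_spend = 50_000_000_000  # $50 billion in national TV advertising (from the quote)
unmatched_share = 0.01              # 1% of spots not captured in Mediaocean (from the quote)
print(f"${national_tv_spend * unmatched_share:,.0f}")  # -> $500,000,000
```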
Thank you CIMM, Sequent and Janus for that wakeup call.
The team also studied the audience measurement data across the nine companies. Nielsen was used as the benchmark. This was done while holding the occurrence data constant, so that it is a clean measure of the audience measurement itself. The same thing was done the other way around, holding the audience data constant while studying the occurrence data. The team will now look at the combined data, together with the processing and algorithms, in terms of impact on the final numbers clients use in decision making.
The report makes the point that the general outlines of what each company does fall into three buckets – set-top box data alone, Smart TV data alone, and the combination – yet even within the same bucket there is great variation in the audience data reported. It's the methodology, the report concludes: granular differences in the way things are tabulated cause very different results.
Although the report bends over backwards to neutralize any competitive use of its findings, the one elephant in the room is that reported audiences come in about 20 percent lower than Nielsen when Smart TV data are used alone, without set-top box data.
A number of well-known suppliers are using Smart TV data on its own, and there are rumors that some of them are now talking about adding set-top box data.
It's easy to understand why Smart TV data would come in lower, in a way that could not be adjusted for accurately by algorithms. Remember, Nielsen has always had a rule about covering the TV sets in a given home that account for at least 95% of the viewing minutes. You can only license Smart TV data from the manufacturers who sell it, and typically an attribution company in the study licenses Smart TV data from only a single manufacturer. That means it is typically covering one room. The average Smart TV home has 1.1 Smart TVs. The average U.S. home has more than 2.6 TVs.
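A back-of-the-envelope calculation, using only the figures above and the admittedly crude assumption that viewing minutes are spread evenly across a home's sets, shows how far a single-set footprint falls short of a 95%-of-viewing-minutes standard:

```python
# Illustrative only: the set counts come from the paragraph above; the
# even-viewing assumption is mine, not the CIMM report's.
smart_tvs_per_home = 1.1   # average Smart TVs per Smart TV home
tvs_per_home = 2.6         # average TVs per U.S. home

share_of_sets = smart_tvs_per_home / tvs_per_home
print(f"Share of a home's sets covered: {share_of_sets:.0%}")  # ~42%
# Even if viewing skews heavily toward the main set, coverage stays well
# below the 95%-of-viewing-minutes rule attributed to Nielsen above.
```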
We expect to see a movement toward combining Smart TV data with a strong base of set-top box data, as pioneered by 605 and VideoAmp.
The two outcomes presaged here may not happen overnight, but they might have taken ten times longer if this study had not been done. Hurrah to all concerned.
The opinions expressed here are the author's views and do not necessarily represent the views of MediaVillage.com/MyersBizNet.