The Advertising Research Foundation’s annual flagship conference, AudiencexScience 2023, was special to all attendees as the first major in-person (and online) ARF conference since the onset of COVID. It drew over 300 people from all over the world, eager to reconnect in person with their colleagues; there was much hugging and kissing. There was also an array of papers exceptional even by ARF standards. Here are the moments that stand out most for me, and I’m sure readers who attended will wonder why I left out other noteworthy happenings.
These fusion methods have now become essential parts of cross-platform measurement, enshrined in the World Federation of Advertisers/ANA Northstar standards, and they are the means by which Nielsen accurately projects its people-meter data onto big data. Nielsen can combine big data from computers, mobile devices, smart TVs and other TVs because of its panel and its well-validated, mature fusion methodologies.
In the breakout session, Pete transparently described an attempt to adapt these methods to project validly beyond the footprint of a big data source, which showed that in that use case the loss of validity could not be reduced to acceptable levels. Springing back from that disappointment, Pete then showed the first results of a different method of calibrating big data to the panel, one that yields the stability of big data without sacrificing validity.
An ROAS optimizer showed that the highest increases in ROAS, in the 20% to 40% range, would be achieved by shifting some digital ad spend back into TV and premium digital video. Broadcast Prime Entertainment was spotlighted as deserving the largest increases, coming in at an optimal 32% of CPG TV dollars. Decades ago the ARF IRI Adworks study put that parameter at 38%, so the many new media choices since then have only slightly reduced the optimal CPG allocation to Broadcast Prime.
The study showed that Feed has the most positive click behavior and Stream has the highest viewability and attention, but that on almost every business outcome measure the three broad environment types are nearly equal in value. This suggests that each environment creates value in its own unique way, with some relying more on attention than others.
At the conclusion of this session I made the point that, although perhaps not as scientific as lab measures, eye tracking is robust in the wild, where lab measures break down amid all the noise. I also predicted that as an industry we will converge on a “cocktail” of metrics – eye tracking, facial emotion, skin conductance response, heartbeat, alignment of metadata between ad and context – to measure “impression quality,” a term I recommended in place of “attention.” Duane, Elise and everyone else on the panel agreed, ending the session on a note of consensus.
Alas, I’ve run out of space to cover the fascinating and valuable sessions involving Robert L. Santos, Director, U.S. Census Bureau; Colleen Fahey Rush, Executive Vice President & CRO, Paramount; Andrea Zapata, Executive Vice President, Head of Ad Sales Research, Measurement and Insights, Warner Bros. Discovery; Brian Wieser, Principal, Madison & Wall; Harvey Goldhersz, Executive Vice President, Product, Circana (formerly IRI and The NPD Group); and many others. I’ll have to close this gap with individual interviews in upcoming columns.
Pedro Almeida, CEO, MindProber
Mike Follett, CEO, Lumen Research
Marc Guldimann, Founder & CEO, Adelaide
Bill Harvey, Chairman, RMT
Elise Temple, Ph.D., VP, Neuroscience & Client Service, Nielsen IQ
Duane Varan, Ph.D., CEO, MediaScience
Johanna Welch, Global Mars Horizon Comms Lab Senior Manager
Posted at MediaVillage through the Thought Leadership self-publishing platform.
The opinions expressed here are the author's views and do not necessarily represent the views of MediaVillage.com/MyersBizNet.