Digital ratings – why are we bothering?
Recent months have seen Mumbrella report on a number of questions relating to online traffic statistics provided by Nielsen, which the Interactive Advertising Bureau has selected as its preferred currency. In this guest post, the IAB’s director of research Gai Le Roy responds.
There is no getting away from the fact that ratings data, in any media channel, is by its nature always going to be controversial. It deals with a highly competitive media environment and provides the data on which agencies and advertisers base commercial decisions. It won’t come as any surprise to most of you that in the digital space this controversy is often amplified.
To a large extent you can put this down to the fragmentation of the Australian digital market – the Nielsen ratings data reports on 2,342 digital publishers and more than 5,500 different entities (parents, brands & channels). And with the number of channels, commercial sites and apps competing for consumers in both the media and transaction space increasing rapidly, this fragmentation is only going to continue.
Collectively we need to get used to the fact that ratings data discussions aren’t going to get any easier any time soon, and that digital measurement is never going to be as straightforward as measurement of ‘traditional’ media channels, which are dominated by oligopolies and duopolies.
Hi Gai
How do you account for metrics that have been faked?
*DO* you account for metrics that have been faked?
As stated previously, the most naive script kiddie running a script over a Tor browser could generate thousands of fake page views per hour.
Should online metrics be trusted at all?
How can you accurately report when you can’t measure people using a current Mac computer?
Good question, Observer. I’ll have a crack at it.
It’s easy to fake page impressions (PIs) – agreed (gees, even I can do it!). Think of the parallel of ‘wisdom of crowds’: when you line sites up against each other it becomes obvious who is gaming the numbers … who has a PI count that is out of whack. So one of the ‘tricks’ is to look at multiple sites in parallel – which is one of the core reasons behind using a panel as part of the AMS (audience measurement system).
A panel can’t reflect all segments of the market (businesses, government, defence, education, public places etc.). But what it can do is show up sites that simply don’t match the typical usage patterns seen across the myriad sites visited by a panel of thousands of people.
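To make that ‘wisdom of crowds’ point concrete, here’s a rough Python sketch of the kind of cross-site comparison I mean – every site name, figure and threshold below is invented for illustration, not real Nielsen data or methodology:

```python
from statistics import median

# site -> (server-reported PIs, panel-projected PIs); all figures invented
sites = {
    "news-site-a": (9_800_000, 9_100_000),
    "sport-site-b": (6_200_000, 5_900_000),
    "classifieds-c": (4_100_000, 3_800_000),
    "suspect-site-d": (7_500_000, 1_200_000),  # server count out of whack
}

# ratio of what each site's servers claim to what the panel actually saw
ratios = {name: server / panel for name, (server, panel) in sites.items()}
typical = median(ratios.values())

for name, ratio in ratios.items():
    # a site sitting far above the market-wide norm is a candidate for gaming
    if ratio > 2.5 * typical:
        print(f"{name}: {ratio:.1f}x server/panel vs typical {typical:.2f}x - investigate")
```

Lined up on their own, suspect-site-d’s numbers look plausible; lined up against the market, the gap is obvious.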
Also be aware that any analysis of server-side logs to identify fake PIs (such as auto-refresh) is fraught and bound to fail, as it can be so easily side-stepped with a few tweaks of the code. So how do we detect auto-refresh (AR)? Again we rely on the panel – if we see a page being served with no mouse click or user request, we’ve identified a source of inflated PIs. Of course for some sites – sport, ASX, news etc. – refreshing the page is in the best interests of the user, so a “one size fits all” rule doesn’t work. But at least with the panel we can calculate an AR rate and ‘discount’ the PI traffic.
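A minimal sketch of how that discount might be applied once the panel has given you a rate – the 18% AR rate and the 10m PI count are purely illustrative:

```python
def discounted_pis(reported_pis: int, ar_rate: float) -> int:
    """Strip the auto-refreshed share out of a reported PI count."""
    return round(reported_pis * (1 - ar_rate))

# a site reporting 10m PIs where the panel observed an 18% auto-refresh rate
print(discounted_pis(10_000_000, 0.18))  # -> 8200000
```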
But that is all about traffic. What the buyer (client) wants is audience. Media agencies buy traffic to deliver audience, so both are important, but the end-game is audience. Again, the panel comes into play.
So, in essence, the panel is used to (i) quasi-verify the traffic, (ii) convert that traffic to people, and (iii) report those people demographically.
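A toy sketch of steps (ii) and (iii), with an invented panel size, universe estimate and demographic split (step (i) is the cross-site check sketched earlier):

```python
panel_size = 5_000        # panellists (invented)
universe = 18_000_000     # online population the panel projects to (invented)

# (ii) convert traffic to people: panellists reached -> projected audience
panellists_reached = 150
audience = panellists_reached / panel_size * universe
print(f"projected audience: {audience:,.0f}")  # -> 540,000 people

# (iii) report those people demographically, projected the same way
demo_counts = {"18-24": 30, "25-39": 60, "40-54": 40, "55+": 20}
for demo, n in demo_counts.items():
    print(f"  {demo}: {n / panel_size * universe:,.0f}")
```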
I stress that the panel produces ESTIMATES rather than precise measurements – because the things we can measure precisely can be easily rorted and don’t meet the end-game need. Where the panel really struggles is with sites that have low traffic (while the number may look high to a publisher, it is comparatively low). If the panel has a +/- 3% precision level (albeit one of the more meaningless statistics oft quoted) and a site has a reach of 3%, then anywhere between 0% and 6% is statistically valid. We can increase the precision – quadrupling the panel size doubles the precision, i.e. halves the margin of error … but who can afford that?
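For the curious, the square-root arithmetic behind ‘quadrupling the size doubles the precision’ looks like this – the panel sizes are invented, and I’m assuming the standard 95% margin-of-error formula for a proportion rather than any particular vendor’s methodology:

```python
from math import sqrt

def moe(p: float, n: int) -> float:
    """Approximate 95% margin of error for reach proportion p on a panel of n."""
    return 1.96 * sqrt(p * (1 - p) / n)

p = 0.03  # a site with 3% reach
for n in (1_000, 4_000, 16_000):
    print(f"n={n:>6}: {p:.0%} reach, +/- {moe(p, n):.1%}")
# each quadrupling of n halves the +/-: ~1.1% -> ~0.5% -> ~0.3%
```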
Just my rambling thoughts.
Mike, Macs are metered. A tad short of the proportion they should be … but they are metered.
Nice one, John!