The results are in for those ‘controversial’, ‘bold’ radio programming gambles
The last radio ratings survey of the year is done, and the results are in. So were those gambles worth it, asks Zoe Samios?
This time last year, Southern Cross Austereo’s chief creative officer, Guy Dobson, told me he wasn’t fussed about the results of the final survey of 2017.
“It’s kind of an irrelevant landscape today, because they’ll be fresh and new next year. May the best team win,” he said.
He had a point.
Wow, the main (first) graphic of the man with the quiff shows someone with an “extremely healthy” ego. The pose screams of self-satisfaction and pomposity. Is it a stock photo or is there someone really like that in real life? Inquiring minds would like to know 😉
And these numbers prove yet again that most movement in radio ratings is noise, especially when plotted on a graph with wildly exaggerated Y-axes.
There are notable exceptions, of course, such as the Alan Jones effect and (surprisingly, to me) the Kyle and Jackie O effect, but given the still-outdated survey methodology, analyses of share movements are far less useful than content directors claim. Guy Dobson’s comments are closest to the mark.
As a mathematician as well as a long-time radio broadcaster, I am stumped as to how any tick in a diary, or click of an online box based on long-term recall, could possibly be an independent event. Inevitably some diaries end up being filled out that way – it’s just human nature to put off until the last minute something that seems so trivial – and the problem is compounded when there are multiple diaries in the same home.
If these data points are not independent, or even if just a small percentage are questionable, the margin of error is so large that it is pointless drawing any survey-to-survey inference without at least an order of magnitude more data points.
Any science or maths professional will tell you that quantitative statistical results should never be released without error bars. The assumptions are so wild, and such a horrifying proportion of the audience doesn’t even know which station it is listening to, that in my humble opinion these six-weekly ritual microanalyses are next to useless: mostly a function of luck, and of how much promotional noise stations make in the few hours before the survey diaries are collected.
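To give a rough sense of the scale of that uncertainty, here is a minimal back-of-envelope sketch in Python. The diary count, design effect and share figure are illustrative assumptions, not the survey company’s published parameters; the point is simply that the 95% margin of error on a single share estimate can easily exceed the fraction-of-a-point movements the surveys are reported on.

```python
# Back-of-envelope sketch (not the actual ratings methodology): how far apart two
# survey-to-survey share figures need to be before the gap stands out from
# sampling noise. Sample size, share and design effect are illustrative guesses.
import math

def share_margin_of_error(share_pct, n, design_effect=1.0, z=1.96):
    """95% margin of error (in share points) for an estimated audience share."""
    p = share_pct / 100.0
    se = math.sqrt(p * (1 - p) / n)                   # standard error assuming independent entries
    return z * se * math.sqrt(design_effect) * 100    # inflate for clustered / non-independent diaries

n_diaries = 2500   # assumed usable diaries in a survey
share = 10.0       # a station sitting on a 10% share

moe_iid = share_margin_of_error(share, n_diaries)                         # every entry independent
moe_clustered = share_margin_of_error(share, n_diaries, design_effect=2.0)  # entries correlated within homes

print(f"±{moe_iid:.1f} share points if every diary entry were independent")
print(f"±{moe_clustered:.1f} share points with a design effect of 2")
# A half-point survey-to-survey move sits comfortably inside either band.
```

Under these assumptions the margin of error comes out at roughly ±1.2 share points, widening to about ±1.7 once the diaries are treated as clustered rather than independent, which is why the commenter’s point about error bars matters for the usual fraction-of-a-point headlines.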