Looking through it, there are some curious results from the UK participants: fewer people appear to be engaging in certain social activities online. There are a couple of possible reasons why these scores have appeared.
Before I look into these, I want to stress that I am not trashing their research. UM have been very clever in setting up this tracker. On the one hand, they publish topline figures that are widely sought after and thus generate excellent PR. And on the other, they keep the details and breakdowns for their own internal use, which gives them a competitive advantage. Doubly beneficial.
The challenges of tracking are wide-reaching and not particular to this study. So, while I use the UM tracker as a case study, I hope my points are construed as general and not specific to their methodology.
1. Non-constant audience – UM have concentrated their study on active internet users (fair enough – it doesn’t make sense to track non-use). But whereas demographics are largely consistent over time, the internet isn’t yet fully matured and so this audience will change. As such, the universe for Wave 3 of the tracker was 17.8m UK 16-54s who use the internet every day or every other day. By Wave 4 the universe had expanded to 19m. Late-comers aren’t going to be as interested in social media, or the internet as a whole, and so they will be less active.
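To make this dilution effect concrete, here is a minimal Python sketch. The universe sizes are the figures quoted above, but the activity rates for existing users and late-comers are invented purely for illustration:

```python
# Illustrative dilution arithmetic: if late-comers are less active, an
# expanding universe drags the average activity rate down even when no
# existing user changes their behaviour.
# NOTE: the two activity rates below are hypothetical, not from the tracker.

wave3_universe = 17.8e6   # UK 16-54s online daily/every other day, Wave 3
wave4_universe = 19.0e6   # the same universe by Wave 4
newcomers = wave4_universe - wave3_universe   # ~1.2m late adopters

existing_rate = 0.86      # assumed activity rate among the original audience
newcomer_rate = 0.30      # assumed (lower) rate among late-comers

blended_rate = (wave3_universe * existing_rate
                + newcomers * newcomer_rate) / wave4_universe
print(f"Blended Wave 4 rate: {blended_rate:.1%}")  # below 86%, by composition alone
```

The blended rate falls below the original 86% purely through composition, without any individual becoming less active.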
2. Absolutes or percentages – if the universe is expanding, a percentage drop may still be an absolute increase, or at least a far smaller fall. For instance, video viewers dropped from 86% in Wave 3 to 79% in Wave 4 – this is cited as a surprising change. But factoring in the audience size and looking at absolute figures, the number of people participating only fell from roughly 15.3m to 15.0m – a fall of around 2% that is within the realms of sampling error.
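The same arithmetic takes only a few lines of Python; the universe sizes and percentages are the figures quoted above, and the conclusion follows directly from them:

```python
# Converting the tracker's percentages to absolute audience counts shows
# why a seven-point percentage drop can mask a near-flat absolute figure.

wave3_viewers = 17.8e6 * 0.86   # ~15.3m video viewers in Wave 3
wave4_viewers = 19.0e6 * 0.79   # ~15.0m in Wave 4

relative_fall = (wave3_viewers - wave4_viewers) / wave3_viewers
print(f"Wave 3: {wave3_viewers/1e6:.1f}m, Wave 4: {wave4_viewers/1e6:.1f}m")
print(f"Relative fall: {relative_fall:.1%}")   # around 2%, not 7 points
```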
3. Dips and seasonal effects – the UM tracker takes annual dips rather than tracking continuously. Our behaviour is highly seasonal – we consume less of some things over summer as we go on holiday, and more of some things in January as we enjoy the novelty of our Christmas presents. The four UM waves to date have been in September, June, “completed in March”, and “between November and March”. This will have an effect.
4. Changing the survey options – as much as it pains me to say, respondents don’t fully and honestly answer surveys. They get bored. The more things we seek to track, the less time they will spend thinking about and considering each option (though in total the time will be greater). If we give a respondent four options, they may answer three. If we give them 16 options, they would tick 12 if they answered in the same proportion, or fewer if they got bored. Answers therefore become more spread out, and the percentages for some options may fall. As UM track more types of behaviour, they may be dissuading some respondents from answering completely.
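A toy sketch of this spreading effect – the answering proportion and the “boredom ceiling” on total ticks are invented numbers, used only to show the mechanism:

```python
# If respondents tick a fixed proportion of options, the expected tick rate
# per option stays constant as the list grows. If boredom instead caps the
# total number of ticks, each option's individual percentage falls.

def ticks_per_option(num_options, proportion=0.75, max_ticks=None):
    """Expected ticks per option for one respondent (illustrative model)."""
    ticks = num_options * proportion
    if max_ticks is not None:
        ticks = min(ticks, max_ticks)   # boredom: a ceiling on total answers
    return ticks / num_options

print(ticks_per_option(4))                 # 0.75 -- ticks 3 of 4 options
print(ticks_per_option(16))                # 0.75 -- same proportion, 12 of 16
print(ticks_per_option(16, max_ticks=8))   # 0.5  -- bored: per-option rate falls
```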
5. Context – research studies don’t operate in a vacuum – the external and interconnected environment needs to be factored in to place the research in context. For instance, perhaps Christmas 2007 saw more sales of laptops with webcams than Christmas 2008. Therefore, in 2007 you have more people experimenting with uploading videos. As this isn’t particularly sticky behaviour, fewer sales the following year could explain the dwindling numbers (this is abstract speculation – sales of webcams may well have risen in 2008).
6. Anomalies – we survey samples, not censuses. Despite quotas and stratified sampling, there will always be some quirks. There is always the danger of reading too much into one data point when it should be the general trends that are considered. So, we should wait to see what Wave 5 shows before drawing any conclusions.
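As a rough guide to what “sampling error” means here, a back-of-envelope 95% margin of error for a tracked proportion, assuming simple random sampling and a hypothetical sample size (UM’s actual sample size and design effect aren’t given in the post):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p measured on a sample of n,
    under simple random sampling (a simplifying assumption)."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                      # hypothetical per-wave sample size
moe = margin_of_error(0.79, n)
print(f"79% +/- {moe:.1%}")   # roughly +/- 2.5 points at n = 1,000
```

On a sample of this size, wave-on-wave swings of a couple of percentage points are indistinguishable from noise, which is why single data points deserve caution.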
One of the projects I’m currently working on is setting up a tracker. As the above six points indicate, it is a tricky endeavour. Universal McCann have set up a great resource (I used the data several times while at ITV) and I hope to replicate their success in my work. There are plenty of challenges to meet before this happens though.