This is the personal blog of Simon Kendrick and covers my interests in media, technology and popular culture. All opinions expressed are my own and may not be representative of past or present employers.
Trust me, I’m a researcher

Everyone knows the aphorism “lies, damned lies and statistics”. We continue to use and rely on statistics, but do we – and, more importantly, does our audience – trust them?

The government isn’t so sure, and has some stats that suggest that trust is lacking. Leaving aside the contradiction of trusting statistics that indicate a lack of trust in statistics…
– In 2005 the ONS reported that only 17% of people believe that official figures are produced without political interference, and only 14% say the government uses official figures honestly
– A Eurobarometer report put the UK last of all the 27 EU member states in terms of public trust in official statistics

This is why The Statistics Commission (obviously shortened to Statscom) is being replaced by the UK Statistics Authority (headed up by no less a person than the President of my former college, Sir Michael Scholar). The two main reasons for this change, which does amount to more than a rebranding, are:
– There was no limit to the time the government could sit on, and thus spin, figures
– There were no powers to constrain the spin put on figures, or to compel the government to report objectively or accurately

It is interesting that the government is worried that a lack of trust is symptomatic of overt spin, for this is a given in the type of research I work in. Paraphrasing an old joke, with 4 researchers you can get 9 opinions. As evidence, note the preening and posturing that accompanies each ratings report – whether it is RAJAR, NRS, ABC or another measurement tool. Each party is able to find a positive message from the results. It is no surprise when competitors offer completely contradictory accounts of the same figures.

Research is regularly commissioned with a hypothesis that the client wishes to prove, and results can be fashioned to suit that hypothesis. Numbers will always be open to interpretation, and it is difficult to get a truly objective opinion. Unfortunately, exaggerating data to suit an agenda is not uncommon. Contentious research will provoke arguments from both sides; more exaggeration gives more scope for sniping, and reduces faith in the findings.

The Market Research Society can control the quality of the research process by bestowing and removing accreditation, but it has no powers of compulsion over how research is reported. And nor should it. An independent commission to verify the reporting of research would be unworkable – for reasons of cost, time, scale and client confidentiality, among many others.

However, more effective self-regulation can build trust in data reports. Data will always be open to interpretation, but integrity is vital. What good is spinning a positive message if the truth behind it is distorted? Ultimately, inaccurate recommendations will only come back to hurt the client.

This brings me on to some of the recommendations laid out in what is presumably the final report by Statscom – Official Statistics: Values and Trust – recommendations that all would be wise to take on board.

1. It would do more for trust if there were greater public engagement by the professionals, regardless of the political ripples that might then create

Absolutely. Opening research to debate can evolve the conversation. Issues can be dissected and probed further. Not everyone will agree but identifying issues of agreement and issues of contention can help clarify the message.

2. Better explanation of the messages contained in official statistics is likely to be one of the most potent ways to ensure they are better used – and thus deliver greater value. Better explanation means widespread and routine dissemination of statistical commentary written in a way that is understandable to a broad readership, with key messages highlighted and the limitations of the statistics considered in the context of their likely use.

Research designs aren’t infallible. Acknowledging the limitations of the design, and the level of statistical confidence in the figures, demonstrates openness. The messages conveyed may then be caveated, but there is less room for suspicion.
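The level of statistical confidence mentioned above can be made concrete with the standard margin-of-error calculation for a survey percentage. A minimal sketch follows; the sample size of 1,000 and the assumption of a simple random sample are illustrative, not figures taken from the ONS or Eurobarometer studies:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p from a simple
    random sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: 17% of 1,000 respondents agree with a statement.
moe = margin_of_error(0.17, 1000)
print(f"17% +/- {moe * 100:.1f} percentage points")
# prints: 17% +/- 2.3 percentage points
```

Reporting the figure as “17%, plus or minus 2.3 points” is precisely the kind of caveat that costs little but signals openness.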

3. Users also need to be confident that the statistical products have not been amended (or concealed or delayed) so as to suit a particular policy or argument. These two components – quality and probity – are central to the concept of being trustworthy.

MRS accreditation should ensure quality within the research process. Without an ombudsman, probity is difficult to demonstrate, but openness should help build trust.

4. The key to quality in the statistical output is that the producer must advise the potential user on the merits of the estimates and the potential pitfalls of relying on them.

Definitely. Understandably, research agencies don’t want to damage client relations by picking holes in a hypothesis. Indeed, in a competitive pitch, the drawbacks of a research methodology may be glossed over. But when research is used to influence business decisions, the client needs to be fully informed of the margin of error in the findings.

There will always be agendas, and there will always be individual interpretations of research. By opening up both the methodologies and the recommendations to debate, we can build on the conversation – and that, in turn, can build trust.