Treating respondents as commodities

Treating respondents as commodities – don’t do it, kids.

Yet it happens, particularly with online surveys.

I recently had a sales call with a provider who said that their panel was no better or worse than any competitor; they sought to differentiate themselves via client management and survey aesthetics.

This experience is backed up by a pithy comment from Tom H.C. Anderson in his LinkedIn group (NGMR – I won’t link to it, since it is only visible to members) who said “There is really only one panel that is used by everybody. Counting panelists is like counting fish in the sea and or clouds in the sky. One day they’re being used by company X, the next by company Z & Y”

Online panels aim to be as representative as possible; thus there is little difference in their make-up and so companies compete on other grounds. Primarily, this seems to be price. This means providers are continually trying to squeeze more out of their respondents for less.

This has contributed to the commoditisation of sample (it is by no means the only reason – it is perhaps an inevitability given the need to maintain respondent anonymity and confidentiality) and the research process. The research experience is at best variable (at worst, terrible) for respondents.

Surveys are now analogous to Farmville – drones click on different parts of the screen to complete monotonous tasks for a tiny reward.

This has to change. Perhaps it will – two recent articles on Research Live have broached the topic.

As a user of online panels, I know I am part of the problem. But it is the panel providers’ responsibility to protect their users. This would require coordinated action across the industry. Given that market research is regulated, this shouldn’t be an issue.

And the providers could start by treating their panel members as humans, and not commodities. Notwithstanding the inefficiencies of asking questions rather than capturing data (I’ve written about this previously in “If data is the new oil, we need a bigger drill”), some simple user experience testing could provide opportunities for easy, impactful changes.

For instance, why do surveys always need to ask demographic information? I’ve been stonewalled on this by several different companies, who say that it is “standard” (which sounds like commodity-speak) or that they need to ensure information is up-to-date. It is conceivable that a panel member may have changed their gender in the interim between surveys, but I wouldn’t expect their ethnicity or date of birth to change. Cutting out extraneous questions can easily reduce survey length and the burden on respondents.

This is a discussion the industry needs to have, and one I’m happy to be a part of.

sk

NB: I’m not concerned about whether they are called respondents or participants. Actions are more important than semantics.

Image credit: http://www.flickr.com/photos/baconandeggs/1490449135/


Links – 11th August 2008

Yes, still tardy

Blog-related links

Rejuvenating dead brands (NY Times) – I found the bit about repeated fake-ad exposure leading to higher false-memory rates fascinating yet unsurprising (from a research perspective)

Excellent analysis on the faults with Microsoft’s Vista campaign (Wilshipley) – anyone who paints current/potential customers as stupid is asking for trouble

Jeremiah Owyang on the many challenges of social media – a well-thought-out and well-reasoned analysis

A reason that large businesses falter is that they fail to create a sense of urgency (Michael Hyatt)

Wired looks at Gemini Division and video distribution online

“Shockvertising” for series 2 of Dexter (Guardian) – what is ironic is that the advertising for both series has sought to subvert expectation, yet the actual plot of the show is signposted so obviously

Interesting debate on the future of online panels (MRSpace forum topic) – my predictions are on the post. Incidentally, those interested in research should have a look around the entire network

How to be a self-funded anthropologist (Cultureby) – interesting reading, and not just for those interested in being the next Gladwell

Series of papers from the MCPS-PRS Alliance (the body that identifies and distributes royalty payments for music in the UK) – my attention was piqued by the recent look at In Rainbows

Using Happiness as your business model (Slideshare presentation)

When logos look alike (Logo Design Love)

Random links

Extinction timeline 1950-2050 – awesome graphic

Quiz testing your knowledge on the hundred most common English words – I scored a rather poor 38

Mindmeister is a fantastic tool for creating and sharing mindmaps online

And Evernote looks like a rather splendid tool as well – bookmark and annotate pretty much anything

Lifehacker editors’ favourite software and hardware

Profile on Sheldon Adelson – possibly the richest person I had never heard of; odd considering his high profile in both Israeli politics and gambling (New Yorker)

Great British gameshows (Guardian)

Rather sniffy look at how hipsters represent the end of Western civilisation (Adbusters)

Jump the shark has passed. Nuke the Fridge is the correct terminology now, thanks to Indy 4 (NY Times)

Profile on Rush Limbaugh (NY Times)

The police were less than friendly to peaceful Climate Camp demonstrators (Comment is Free)

Retro posters (Bob Staake) – I love the sort of aesthetic displayed here

The Nazi-descended Jews living in Israel (Guardian)

Recommended reading would be:

Blog-related: Excellent analysis on the faults with Microsoft’s Vista campaign, Jeremiah Owyang on the many challenges of social media and Interesting debate on the future of online panels

Random: Extinction timeline 1950-2050, Mindmeister, Evernote and Profile on Sheldon Adelson

sk

How representative are online surveys?

To answer the above question in three words: I don’t know.

Generally, I have been sceptical about the relative veracity of online surveys. Working for a media owner, I am conscious of a general concern that moving surveys online may reduce the measured strength of TV and increase that of the Internet. But I am being won over.

After all, no methodology is perfect. In fact, it could be argued that all are inadequate. Even if one were able to take a census of the entire population (even the UK census only has a 94% response rate), how accurately are people able to express their unconscious thoughts, desires and opinions?

Which is why I was pleasantly surprised to read that YouGov correctly predicted the results of the London Mayoral election. Accurate polling always requires a bit of luck. When I worked at a research agency, the weighting factors for the forthcoming election were changed at the last moment, and fortunately improved the prediction. But even allowing for such fluctuations, it represents a significant victory for the online method.
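For readers unfamiliar with what “weighting factors” do, the basic idea is post-stratification: respondents in under-represented demographic cells are up-weighted until the sample composition matches the population. A minimal sketch follows – all the proportions and support figures are invented for illustration, not taken from any real poll:

```python
# Post-stratification weighting, the kind of adjustment polling agencies
# tweak before an election. All figures below are invented.

# Population (target) shares and achieved sample shares per age band
target = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample = {"18-34": 0.20, "35-54": 0.40, "55+": 0.40}

# Weight for each cell: how much to up- or down-weight its respondents
weights = {band: target[band] / sample[band] for band in target}

# Hypothetical share intending to vote for candidate A, by age band
support = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}

# Unweighted estimate reflects the skewed sample composition...
unweighted = sum(sample[b] * support[b] for b in sample)
# ...while the weighted estimate re-balances towards the population
weighted = sum(sample[b] * weights[b] * support[b] for b in sample)

print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

Because younger respondents (who here favour candidate A) are under-sampled, the unweighted estimate comes out lower than the weighted one – which is exactly why a last-minute change to the weights can move a published prediction.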

There are far better resources than this blog debating the relative merits and drawbacks of research methodologies, but my softening of opinion has come about for two main reasons, to which I have recently given more thought:

Societal changes are making the traditional methodologies less accurate over time: The rise of the one-person household makes it more difficult for face-to-face interviewers to catch people at home at a time when they are willing to participate. Telephone research is becoming less representative thanks to the rise of mobile phones at the expense of landlines, and the popularity of the TPS (Telephone Preference Service). Even if mobile numbers are included in the sample, people are far less willing to participate, since mobile phones are more personal and the call is therefore more intrusive. And while the TPS doesn’t cover market research, some companies voluntarily clean their sample of TPS numbers, since the public perception is that research is no different from telemarketing. As online penetration increases, by contrast, one would expect the representativeness of online surveys to follow suit.

Online research is more conducive to considered opinion: Online surveys produce more honest responses thanks to the anonymity provided. Without an interviewer waiting for an answer, the respondent can also give a more considered answer (if they so desire). Combined, these will produce more accurate data.

Of course, these points aren’t uniformly positive. Even as Internet access increases, the proportion of people actively on a research panel will remain quite small. Gritz (2004) achieved an 8.4% sign-up rate for an online survey, and I wouldn’t be surprised if this figure were lower if the experiment were repeated now. And online surveys may allow for more considered responses, but without an interviewer probing, the answers may be ambiguous and thus meaningless. But, for me at least, the benefits far outweigh the drawbacks.

I do still have one major concern with online surveys. Without any proof, I have the perception that the attitudinal differences between those who take part in online surveys and those who don’t are greater than the equivalent differences between responders and non-responders in other methodologies. Those who join online panels are self-selecting, and will tend to spend more time online than the average person.

Sticking with YouGov, their Brand Index (which, in general, I like) ranked Google and Amazon as the top two brands in 2007. Would they still come out on top in an offline survey, factoring in the third of the population that doesn’t use the Internet, and those that spend more time with traditional media? I’m not so sure.

But for me at least, I have far fewer reservations about moving research online than I had a year ago.

I’d be interested in hearing other people’s thoughts on the benefits and drawbacks of the shift. Am I late to the party in accepting online, or do others still hold reservations?

sk