Ten things I learned from the New MR Virtual Festival

My previous post included all of the notes I took while listening into the New MR Virtual Festival. This post contains the key things I took away from the day, and have subsequently been mulling over in the 2 months (!) since.

New MR Virtual Festival header

NB: All slides shown below are taken entirely without permission. If a creator objects to their use, please contact me and I’ll remove them.

1. The boundaries between participation and observation can (and, in some circumstances, should) be blurred

Although Ray Poynter touched on the range of methods by which research can overtly or covertly be conducted online, Netnography (cf. Robert Kozinets) is – to me – one of the more interesting. To be effective, it requires the researcher to both participate in and observe the environment of interest.

Erica Ruyle argues that observation (or lurking) is fine in the initial stage, since the norms and cultures need to be understood and respected. But active participation is vital in order to get that “insider” knowledge and to be able to read between the lines of the interactions.

This is a difficult proposition to promote commercially as a) the time investment (and thus cost) required will be huge and b) the researcher will need to have a personal as well as professional interest in the topic in order to both be accepted by the community and accept the community. For instance, how many researchers would turn their nose up at being asked to take part in World of Warcraft for 6 months?

Nevertheless, in the right circumstances it could prove to be a fascinating and rewarding exercise.

2. Convenience samples can still be worthwhile

It is true that all non-census research incorporates a convenience sample to some extent. But some methods rely on convenience more heavily (and are thus less representative) than others.

Annelies Verhaeghe highlighted some of the issues to be aware of when conducting social media research – particularly that we should resign ourselves to not always knowing who we are speaking to or monitoring.

Furthermore, something I had not considered but which makes sense: even though companies trumpet the volume of data they scrape and collect, only a sub-sample of it will be analysed, due to the diminishing returns of going deeper into a very large data set.
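
As a rough illustration of that sub-sampling step, here is a minimal sketch (the function and variable names are my own invention, not anything Annelies described) of drawing a fixed random sample from a large set of scraped posts:

```python
import random

def sample_posts(posts, n=1000, seed=42):
    """Draw a fixed-size random sub-sample from a large list of scraped posts.

    Reading and coding a few thousand posts often captures most of the themes
    in a corpus of millions, at a fraction of the analysis effort.
    """
    random.seed(seed)  # fixed seed so the sample can be reproduced later
    if len(posts) <= n:
        return list(posts)
    return random.sample(posts, n)

# Hypothetical usage: 'scraped_posts' stands in for the full set collected by
# a monitoring tool; only the sample would be read by a human analyst.
scraped_posts = [f"post {i}" for i in range(100_000)]
analysis_set = sample_posts(scraped_posts)
print(len(analysis_set))  # 1000
```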

If we’re able to augment social media research with other techniques or data sources – Annie Pettit mentioned some of the benefits of combining social media research with surveys – then it can be a very valuable and insightful method of getting real-world information on past actions and thoughts.

3. Respondents shouldn’t necessarily be viewed equally

Both Rijn Vogelaar and Mark Earls talked about premises realised more thoroughly in their books – The SuperPromoter and Herd respectively.

Segmenting the audience isn’t a new phenomenon – we often restrict our universes to who we are interested in – but within these universes perhaps we should pay more attention to some individuals than others, particularly given the complex social interactions that cause ideas and opinions to spread. I’m not clever enough to incorporate full network theories into any of my research – in the manner of Duncan Watts, for instance – but perhaps there is an argument for applying simple weights on some projects, to account for some opinions being more important than others. Or perhaps it is too contentious to implement without proper academic grounding and proof.
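
To illustrate the kind of simple weighting I have in mind – the weights and numbers below are entirely hypothetical, not something either speaker proposed – each respondent’s answer could be multiplied by an “influence” weight before the results are aggregated:

```python
def weighted_agreement(responses):
    """Weighted proportion agreeing, where each respondent carries a weight.

    'responses' is a list of (agrees, weight) pairs; a weight of 2.0 simply
    means that person counts for twice as much as a baseline respondent.
    """
    total_weight = sum(weight for _, weight in responses)
    agree_weight = sum(weight for agrees, weight in responses if agrees)
    return agree_weight / total_weight

# Hypothetical example: two highly connected respondents (weight 2.0) disagree,
# three ordinary respondents (weight 1.0) agree.
responses = [(False, 2.0), (False, 2.0), (True, 1.0), (True, 1.0), (True, 1.0)]
print(f"Unweighted: {3/5:.0%}, weighted: {weighted_agreement(responses):.0%}")
# Unweighted: 60%, weighted: 43%
```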

4. More of an effort needs to be made to meet respondents on their terms

Betty Adamou joined the likes of Stowe Boyd in saying that email is on the decline among younger people. This is problematic for online surveys, given that online panels are predominantly administered via email. Given the trends, perhaps we should look to Facebook, Twitter, instant messenger and the like for the initial recruitment of these audiences, and then let them dictate how we contact them to alert them to new surveys. I’m not sure whether a note on a Facebook group could be as effective as an email, but it is certainly worth exploring.

5. Survey structures can do more to take advantage of the online environment

Our media and communications channels have fragmented but the content providers retain a centralised hub of controlling activity. Why can’t the research industry do this? John Dick talked through Civic Science’s modular research methodology, whereby questions are asked in chunks of two or three at a time, but combined at the back-end to build up extensive profiles of respondents.

This approach makes intuitive sense. In face-to-face research, the main cost was in finding people to speak to. Thus, once they were located, it was efficient to collect as much information as possible. The web is abundant with people, but they are time-poor. The cost isn’t in finding them; it is in keeping them. People could easily answer three questions a day if there was the possibility of a prize draw. They would be less willing to spend 30 minutes going through laborious and repetitive questions.
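
As a very rough sketch of how that back-end stitching might work – the question chunks and respondent IDs below are hypothetical, and this is certainly not Civic Science’s actual system – each small chunk of answers is simply merged into a cumulative profile keyed on a respondent ID:

```python
from collections import defaultdict

# Cumulative profiles: respondent_id -> {question: answer}
profiles = defaultdict(dict)

def record_chunk(respondent_id, answers):
    """Merge a small chunk of answers (two or three questions) into the profile."""
    profiles[respondent_id].update(answers)

# Hypothetical chunks answered on different days by the same respondent
record_chunk("r_001", {"age_band": "25-34", "region": "North West"})
record_chunk("r_001", {"main_news_source": "online", "owns_smartphone": True})
record_chunk("r_001", {"cinema_visits_per_month": 2})

print(profiles["r_001"])
# Over weeks of two-or-three-question chunks, the profile becomes as rich as a
# single long survey, without ever asking anyone for 30 minutes in one sitting.
```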

There are clearly downsides to this method and plenty of issues to overcome regarding data quality assurance, but the notion of Facebook users answering a couple of questions a day sounds like a feasible way to collect information from people who might be unwilling to sign up to an online survey.

6. Surveys don’t have to be boring or static…

Another aspect of the online world that should be explored further is the level of interactivity. Many online surveys are straight ports of face-to-face surveys – a shame when there is so much more that a web survey is – in theory – capable of.

Jon Puleston of GMI highlighted several of their experiments in this area. Interestingly, although interactive surveys take longer, respondents are more engaged, enjoy them more and give “better” answers. I particularly like the idea of timing respondents to give as many answers as possible within a given timeframe. This appeals to people’s competitive nature, and means they’d spend far longer on it than they normally would.
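
As a toy illustration of that timed mechanic – entirely my own sketch, not GMI’s implementation – a console version might keep accepting answers until a time budget runs out:

```python
import time

def timed_listing(prompt, seconds=60):
    """Collect as many free-text answers as possible within a time budget.

    Note: input() blocks, so the clock is only checked between answers – good
    enough for a rough prototype, not for a production web survey.
    """
    print(f"{prompt} (you have {seconds} seconds; enter a blank line to stop)")
    answers = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        answer = input("> ").strip()
        if not answer:
            break
        answers.append(answer)
    return answers

if __name__ == "__main__":
    brands = timed_listing("Name as many shaving brands as you can")
    print(f"You listed {len(brands)} brands.")
```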

Jon Puleston of GMI at New MR Virtual Festival

The King of Shaves case study was very interesting. Rather than a one-minute introduction to a ten-minute survey, this example reversed the proportions: people were given a detailed briefing on the role of a copywriter, and asked to come up with a creative slogan. In subsequent testing, seven “user-generated” ideas scored better than the advertising agency’s.

7. But we should be aware of the implications of survey design on data capture…

Jon’s examples showed how framing questions can improve data collection. But Bernie Malinoff warned us that even minor superficial changes to a survey can have a big impact on how people answer questions. For instance, the placement of the marker on a slider scale can heavily impact the distribution of answers.

Bernie Malinoff at the New MR Festival

Bernie also had some important lessons in survey usability – ranging from the wordiness of the questions (something I’ve been guilty of in the past) to the placement of error messages and how they can influence subsequent responses.

Surprisingly, his research found that survey enjoyment was comparable among people who did traditional “radio button” style surveys versus richer experiences, and that people were less willing to take part in future after having completed a flash-based survey.

It acts as a sobering counter-point to Jon’s presentation, but I inferred some caveats to this research (or perhaps I am only seeing what I want to see). I suspect some of the resistance to flash might be down to the “newness” of the survey design rather than a genuine preference for radio-button style surveys. Similarly, design iterations aren’t neutral – I wouldn’t mind different results so long as I felt they were “better” (and any methods to discourage survey cheaters are welcome). Nevertheless, it is an important reminder that a better-designed survey is only an improvement if it makes the survey more usable and easier to understand, and I completely agree with the final point that the industry should reinforce best practices for interface design.

8. …And whether they are suitable for the audience

Tom Ewing’s talk on gaming and research covered many interesting points, but the one that stuck with me is that it isn’t for everyone. As he points out, FourSquare holds little appeal to him (unless he wanted to be Mayor of his child’s nursery). Similarly, while the number of gamers is rising, not everyone plays, and so we cannot assume that introducing interactive, exploratory or personalised experiences will automatically make respondents enjoy our surveys more.

Particularly since games design is pretty hard – Angry Birds and Farmville may look simple, but I wouldn’t expect any research agency to devise and incorporate something as addictive into their research methodologies. The latter in particular seems purely to encourage the completion of repetitive, monotonous tasks – not something that would benefit the quality of research outputs.

9. There is plenty of scope to improve beyond the debrief

John Clay talked about ways in which researchers can improve the way that debriefs are communicated. This is an area that many (too many) researchers need to improve upon, but an even more pressing area of improvement is what occurs after the debrief.  Spencer Murrell’s presentation on insight translation covered this.

Spencer Murrell at New MR Virtual Festival

Summary slides and executive summaries are important in debriefing research, but it is important to go beyond the report/presentation into a framework that can be referenced in future. Whether it is a model that can be stuck on a wall, or a cheat sheet that can be put on a post-it note, there are many creative ways in which the core findings of a project can be transformed into an ongoing reminder. Evidently, this could easily descend into a gimmicky farce, but it is important to remember that the debrief isn’t the end of the project. In many ways, it is only the end of the beginning. The next phase – actually using that information to improve the organisation – is the most important. Any ways in which researchers can add value to this stage can only improve their standing with their clients.

10. Online conferences can work

On the whole, I think the event can be viewed as a huge success. For an affordable fee ($50), I listened to many intelligent speakers on a variety of topics, as shown by both this post and the previous post.

There was also plenty of excellent discussion around the talks and the content on Twitter, using the #NewMR hashtag. I’m usually reluctant to tweet during events, but given the lack of face-to-face contact and the fact I was facing my computer at the time, Twitter worked excellently as a forum to amplify the usefulness of the content presented.

An idea is one thing, but executing it is something very different. Aside from the odd technical hitch (inevitable given the volume of speakers from across the globe), the day ran impeccably. So Ray Poynter and his board deserve huge congratulations for not only the concept, but also the organisation and output of the event. I would wholeheartedly recommend people with an interest in research investigate the New MR site and list of upcoming events.

sk


New MR Virtual Festival notes

I’m breaking my longest-to-date blogging absence (work-life parity should soon be restored) with two versions of the same post. This is the first.

They are related to the New MR global online conference that ran on 9th December 2010, featuring speakers and moderators across Europe, the Americas and Asia-Pacific. The event was created and organised by Ray Poynter – a long-standing, committed and energetic member of the international market research community – and his management board. In addition to the core event, various “fringe” events also took place. More info on them can be found at the 2010 Festival pages on the site.

This is the larger of the two posts, where I’ve reformatted all of the notes I made on the day and supplemented them with some additional thoughts (I’ve not yet caught up on the presentations I missed for reasons such as being asleep/exhausted, while some presentations weren’t as relevant to me and so I skipped them). So while this isn’t exhaustive, there will still be plenty of words to keep you occupied for a short while.

I’ll follow it up with a shorter post outlining my key takeaways from the day, and my overall thoughts on the event.

As this is the single longest post on this blog (circa 5,000 words), I’m taking the rare step of putting the bulk of it behind a cut (the size also means it is not properly proof-read). Click through to continue (unless you are reading via RSS).

The gamification of surveys

How can gaming principles be used in research? This is a fascinating area that I know Tom Ewing has been spending some time thinking about.

I haven’t, but a combination of some frustrations on a project and reading this excellent presentation, entitled “Pawned. Gamification and its discontents”, got me thinking specifically about how gaming principles could contribute to data quality in online (or mobile) surveys.

The presentation is embedded below.

The problem

There are varying motivations for respondents to answer surveys, but a common one is economic. The more surveys completed, the more points accrued and money earned.

In its basic sense, this itself is a game. But like a factory production line team paid per item, it promotes speed over quality.

As such, survey data can be poorly considered, with minimal effort going into open-ended questions (deliberative questions become pointless), and the risk of respondents “straight-lining” or, more subtly, randomly selecting answer boxes without reading the questions.

The solution

Some of these issues can be spotted during post-survey quality checks, but I believe simple gaming principles could be used (or at least piloted) to disincentivise people from completing surveys poorly.

Essentially, it involves giving someone a score based on their survey responses. A scoring system would evidently require its measures and weights to be tweaked over time, but it could consist of metrics such as the following (a rough sketch follows the list):

  • Time taken to complete the survey (against what time it “should” take)
  • Time taken on a page before an answer is selected
  • Consistency in time taken to answer similar forms of questions
  • Length of response in open-ended answers
  • Variation in response (or absence of straight lines)
  • Absence of contradictions (a couple of factual questions can be repeated)
  • Correct answers to “logic” questions
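
To make this concrete, here is a minimal sketch of how such a score might be computed for one completed survey, together with the broad letter grading I mention under the challenges below. All the metric names, thresholds and weights are invented for illustration; a real system would calibrate them against known good and bad completions.

```python
def quality_score(record, expected_minutes=10):
    """Rough per-respondent quality score (0-100) for one completed survey.

    'record' is a hypothetical dict describing the completion – the metrics
    and weights are illustrative, not a tested scoring model.
    """
    score = 100

    # 1. Speeding: penalise completing in well under the expected time
    if record["minutes_taken"] < 0.5 * expected_minutes:
        score -= 30

    # 2. Straight-lining: penalise identical answers across a rating grid
    grid = record["grid_answers"]
    if len(grid) > 3 and len(set(grid)) == 1:
        score -= 25

    # 3. Open-ends: penalise empty or one-word answers
    words = sum(len(answer.split()) for answer in record["open_ends"])
    if words < 5:
        score -= 20

    # 4. Trap / logic questions: penalise each one answered incorrectly
    score -= 15 * record["failed_trap_questions"]

    return max(score, 0)

def grade(score):
    """Map the score onto a broad A-F band rather than a falsely precise number."""
    for cutoff, letter in [(90, "A"), (75, "B"), (60, "C"), (45, "D"), (30, "E")]:
        if score >= cutoff:
            return letter
    return "F"

# Hypothetical respondent: rushed, straight-lined a grid, but passed the traps
record = {"minutes_taken": 4, "grid_answers": [3, 3, 3, 3, 3],
          "open_ends": ["good", "cheap"], "failed_trap_questions": 0}
s = quality_score(record)
print(s, grade(s))  # 25 F
```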

A score can be collected and shared with the respondent at the end of the survey. Over time, this could seek to influence the quality of response via

  • Achievement – aiming to improve a quality score over time
  • Social effects – where panels have public profiles, average and cumulative quality scores can be publicly displayed
  • Economic – bonus panel points/incentives can be received for achievements (such as a high survey quality score, or an accumulation of a certain number of points)

The challenges

For this to work successfully, several challenges would need to be overcome

  • Gaming the system – there will always be cheats, and cheats can evolve. Keeping the scoring system opaque would mitigate this to an extent. But even with some people cheating the system, I contend the effects would be smaller with these gaming principles than without
  • Shifting focus – a danger is that respondents spend more time trying to give a “quality” answer than giving an “honest” answer. Sometimes, people don’t have very much to say on a subject, or consistently rate a series of attributes in the same manner
  • Alienating respondents – would some people be disinclined to participate in surveys due to not understanding the mechanics or feeling unfairly punished or lectured on how best to answer a survey? Possibly, but while panels should strive to represent all types of people, quality is more important than quantity
  • Arbitrariness – a scoring system can only infer quality; it cannot actually get into the minds of respondents’ motivations. A person could slowly and deliberately go through a survey while watching TV and not reading the questions. As the total score can never be precise, a broad scoring system (such as A-F grading) should be used rather than something like an IQ score.
  • Maintaining interest – this type of game doesn’t motivate people to continually improve. The conceit could quickly tire for respondents. However, the “aim of the game” is to maintain a minimum standard. If applied correctly, this could become the default behaviour for respondents with the gaming incentives seen as a standard reward, particularly on panels without public profiles.

Would it work? I can’t say with any certainty, but I’d like to see it attempted.

sk
