My previous post included all of the notes I took while listening in to the New MR Virtual Festival. This post contains the key things I took away from the day, and have subsequently been mulling over in the 2 months (!) since.
NB: All slides shown below are taken entirely without permission. If a creator objects to their use, please contact me and I’ll remove them.
1. The boundaries between participation and observation can (and, in some circumstances, should) be blurred
Although Ray Poynter touched on the range of methods by which research can overtly or covertly be conducted online, Netnography (cf. Robert Kozinets) is – to me – one of the more interesting. To be effective, it requires the researcher to both participate in and observe the environment of interest.
Erica Ruyle argues that observation (or lurking) is fine in the initial stage, since the norms and cultures need to be understood and respected. But active participation is vital in order to get that “insider” knowledge and to be able to read between the lines of the interactions.
This is a difficult proposition to promote commercially as a) the time investment (and thus cost) required will be huge and b) the researcher will need a personal as well as professional interest in the topic in order both to be accepted by the community and to accept the community. For instance, how many researchers would turn their noses up at being asked to take part in World of Warcraft for 6 months?
Nevertheless, in the right circumstances it could prove to be a fascinating and rewarding exercise.
2. Convenience samples can still be worthwhile
It is true that all non-census research relies on a convenience sample to some extent. But some methods lean on convenience more heavily (and are thus less representative) than others.
Annelies Verhaeghe highlighted some of the issues to be aware of when conducting social media research – particularly that we should resign ourselves to not always knowing who we are speaking to or monitoring.
Furthermore, something I had not considered but which makes sense: even though companies trumpet the volume of data they scrape and collect, only a sub-sample of it will be analysed, due to the diminishing returns of going deeper into a very large data set.
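In practice, that sub-sampling can be as simple as drawing a random slice of the corpus for manual coding. A minimal sketch (the data and sample size here are invented for illustration, not taken from any of the talks):

```python
import random

def subsample(posts, k, seed=0):
    """Draw a reproducible random sub-sample of k posts for closer analysis.

    Coding a random slice rather than the full corpus trades a little
    precision for a large saving in analyst time.
    """
    rng = random.Random(seed)  # fixed seed so the sample can be re-drawn
    return rng.sample(posts, min(k, len(posts)))

corpus = [f"post-{i}" for i in range(100_000)]  # stand-in for scraped data
sample = subsample(corpus, 500)
```

Fixing the seed matters commercially: a client can ask to see exactly which posts were analysed, and the same slice can be reproduced later.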
If we’re able to augment social media research with other techniques or data sources – Annie Pettit mentioned some of the benefits of combining social media research with surveys – then it can be a very valuable and insightful method of getting real-world information on past actions and thoughts.
3. Respondents shouldn’t necessarily be viewed equally
Segmenting the audience isn’t a new phenomenon – we often restrict our universes to who we are interested in – but within these universes perhaps we should pay more attention to some individuals than others, particularly given the complex social interactions that cause ideas and opinions to spread. I’m not clever enough to be able to incorporate full network theories into any of my research – in the manner of Duncan Watts, for instance – but perhaps there is an argument for applying simple weights to some projects, to account for some opinions being more important than others. Or perhaps it is too contentious to implement without proper academic grounding and proof.
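To make the “simple weights” idea concrete, here is a toy sketch of how an influence weight would shift a headline figure. The respondents and weights are entirely made up; in a real project the weight would need a defensible basis (network reach, say), which is exactly the academic-grounding problem above:

```python
def weighted_share(responses, weights):
    """Proportion agreeing, with each respondent's answer scaled by an
    influence weight (e.g. a crude proxy for network reach)."""
    total = sum(weights)
    agree = sum(w for r, w in zip(responses, weights) if r)
    return agree / total

# Three respondents agree, two disagree; the dissenters carry more influence.
answers = [True, True, True, False, False]
influence = [1.0, 1.0, 1.0, 3.0, 3.0]

unweighted = sum(answers) / len(answers)       # 0.6
weighted = weighted_share(answers, influence)  # 3/9, i.e. one third
```

The headline flips from majority agreement to minority agreement purely on the weighting scheme, which illustrates why this is contentious without proper validation.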
4. More of an effort needs to be made to meet respondents on their terms
Betty Adamou joined the likes of Stowe Boyd in saying that email is on the decline among younger people. This is problematic for online surveys, given that online panels are predominantly administered via email. Given the trends, perhaps we should be looking to Facebook, Twitter, instant messengers etc. for the initial recruitment of these audiences, and then allow them to dictate how we can contact them to alert them to new surveys. I’m not sure whether a note on a Facebook group could be as effective as an email, but it is certainly worth exploring.
5. Survey structures can do more to take advantage of the online environment
Our media and communications channels have fragmented but the content providers retain a centralised hub of controlling activity. Why can’t the research industry do this? John Dick talked through Civic Science’s modular research methodology, whereby questions are asked in chunks of two or three at a time, but combined at the back-end to build up extensive profiles of respondents.
This approach makes intuitive sense. In face to face research, the main cost was in finding people to speak to. Thus, once they were located, it was efficient to collect as much information as possible. The web is abundant with people, who are time-poor. The cost isn’t in finding them, it is keeping them. People could easily answer three questions a day if there was the possibility of a prize draw. They would be less willing to spend 30 minutes going through laborious and repetitive questions.
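As I understood it, the modular approach comes down to stitching small answer sets together on the back-end, keyed on the respondent. A toy sketch with invented respondent IDs and field names (not CivicScience’s actual implementation):

```python
from collections import defaultdict

def merge_chunks(chunks):
    """Combine small question chunks into cumulative respondent profiles.

    chunks: iterable of (respondent_id, {question: answer}) pairs collected
    on different days. Later answers to a repeated question overwrite
    earlier ones, so the profile reflects the most recent response.
    """
    profiles = defaultdict(dict)
    for respondent_id, answers in chunks:
        profiles[respondent_id].update(answers)
    return dict(profiles)

day1 = ("r42", {"age_band": "25-34", "region": "North"})
day2 = ("r42", {"owns_car": "yes"})
profiles = merge_chunks([day1, day2])
# profiles["r42"] now holds all three answers from two short sessions
```

The data-quality issues mentioned below (consistency across sessions, respondent verification) all live in this merge step in one form or another.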
There are clearly downsides to this method and plenty of issues to overcome regarding data quality assurance, but the notion of Facebook users answering a couple of questions a day sounds like a feasible way to collect information from people who might be unwilling to sign up to an online survey.
6. Surveys don’t have to be boring or static…
Another aspect of the online world that should be explored further is the level of interactivity. Many online surveys are straight ports of face to face surveys – a shame when there are so many more things that a web survey can – in theory – be capable of.
Jon Puleston of GMI highlighted several of their experiments in this area. Interestingly, although interactive surveys take longer, respondents are more engaged, enjoy them more and give “better” answers. I particularly like the idea of timing respondents to give as many answers as possible within a given timeframe. This appeals to people’s competitive nature, and means they’d spend far longer on it than they normally would.
The King of Shaves case study was very interesting. Rather than a 1 minute introduction for a 10 minute survey, this example reversed the process. People were given a detailed briefing on the role of a copywriter, and asked to come up with a creative slogan. In subsequent testing, seven “user-generated” ideas scored better than the advertising agency’s.
7. But we should be aware of the implications of survey design on data capture…
Jon’s examples showed how framing questions can improve data collection. But Bernie Malinoff warned us that even minor superficial changes to a survey can have a big impact on how people answer questions. For instance, the placement of the marker on a slider scale can heavily impact the distribution of answers.
Bernie also had some important lessons in survey usability – ranging from the wordiness of the questions (something I’ve been guilty of in the past) to the placement of error messages and how they can influence subsequent responses.
Surprisingly, his research found that survey enjoyment was comparable among people who did traditional “radio button” style surveys versus richer experiences, and that people were less willing to take part in future after having completed a Flash-based survey.
It acts as a sobering counter-point to Jon’s presentation, but I inferred some caveats to this research (or perhaps I am only seeing what I want to see). I suspect some of the resistance to Flash might be down to the “newness” of the survey design rather than a genuine preference for radio-button style surveys. Similarly, design iterations aren’t neutral – I wouldn’t mind different results so long as I felt they were “better” (and any methods to discourage survey cheaters are welcome). Nevertheless, it is an important reminder that a better designed survey is only an improvement if it makes the survey more usable and easier to understand, and I completely agree with the final point that the industry should reinforce best practices for interface design.
8. …And whether they are suitable for the audience
Tom Ewing’s talk on gaming and research covered many interesting points, but the one that stuck with me is that it isn’t for everyone. As he points out, FourSquare holds little appeal to him (unless he wanted to be Mayor of his child’s nursery). Similarly, while the number of gamers is rising, it isn’t everyone and so we cannot assume that introducing interactive, exploratory or personalised experiences will automatically make respondents enjoy our surveys more.
Particularly since games design is pretty hard – Angry Birds and Farmville may look simple, but I wouldn’t expect any research agency to devise and incorporate something as addictive into their research methodologies. The latter in particular seems to purely encourage the completion of repetitive, monotonous tasks – not something that would benefit the quality of research outputs.
9. There is plenty of scope to improve beyond the debrief
John Clay talked about ways in which researchers can improve the way that debriefs are communicated. This is an area that many (too many) researchers need to improve upon, but an even more pressing area of improvement is what occurs after the debrief. Spencer Murrell’s presentation on insight translation covered this.
Summary slides and executive summaries are important in debriefing research, but it is important to go beyond the report/presentation into a framework that can be referenced in future. Whether it is a model that can be stuck on a wall, or a cheat sheet that can be put on a post-it note, there are many creative ways in which the core findings of a project can be transformed into an ongoing reminder. Evidently, this could easily descend into a gimmicky farce, but it is important to remember that the debrief isn’t the end of the project. In many ways, it is only the end of the beginning. The next phase – actually using that information to improve the organisation – is the most important. Any ways in which researchers can add value to this stage can only improve their standing with their clients.
10. Online conferences can work
On the whole, I think the event can be viewed as a huge success. For an affordable fee ($50), I listened to many intelligent speakers on a variety of topics, as shown by both this post and the previous post.
There was also plenty of excellent discussion around the talks and the content on Twitter, using the #NewMR hashtag. I’m usually reluctant to tweet during events, but given the lack of face-to-face contact and the fact I was facing my computer at the time, Twitter worked excellently as a forum to amplify the usefulness of the content presented.
An idea is one thing, but executing it is something very different. Aside from the odd technical hitch (inevitable given the volume of speakers from across the globe), the day ran impeccably. So Ray Poynter and his board deserve huge congratulations for not only the concept, but also the organisation and output of the event. I would wholeheartedly recommend people with an interest in research investigate the New MR site and list of upcoming events.