How can gaming principles be used in research? This is a fascinating area that I know Tom Ewing has been spending some time thinking about.
I haven’t, but a combination of frustrations on a project and reading this excellent presentation, entitled “Pawned. Gamification and its discontents”, got me thinking specifically about how gaming principles could contribute to data quality in online (or mobile) surveys.
The presentation is embedded below.
The problem
There are varying motivations for respondents to answer surveys, but a common one is economic. The more surveys completed, the more points accrued and money earned.
In a basic sense, this is itself a game. But, like a factory production line paid per item, it rewards speed over quality.
As such, survey data can be poorly considered: minimal effort goes into open-ended questions (rendering deliberative questions pointless), and there is the threat of respondents “straight-lining” or, more subtly, selecting answer boxes at random without reading the questions.
The solution
Some of these issues can be spotted during post-survey quality checks, but I believe simple gaming principles could be used (or at least piloted) to discourage people from completing surveys poorly.
Essentially, it involves giving someone a score based on their survey responses. The measures and weights of any scoring system would evidently need tweaking over time, but it could draw on metrics such as the following (a rough sketch of how they might combine into a single score appears after the list):
- Time taken to complete the survey (against what time it “should” take)
- Time taken on a page before an answer is selected
- Consistency in time taken to answer similar forms of questions
- Length of response in open-ended answers
- Variation in response (or absence of straight lines)
- Absence of contradictions (a couple of factual questions can be repeated)
- Correct answers to “logic” questions
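To make that concrete, here is a minimal sketch of one way the metrics above might combine into a single 0–1 score. Every name, weight, and target value in it is an illustrative assumption, not a tested scoring model:

```python
EXPECTED_DURATION_SECS = 480  # hypothetical "should take" time for the survey

def quality_score(response):
    """Return a 0-1 quality estimate from per-respondent survey metrics.

    `response` is assumed to be a dict of pre-computed metrics; all keys,
    weights, and caps below are placeholders.
    """
    checks = {
        # Penalise finishing far faster than the expected duration.
        "pace": min(response["duration_secs"] / EXPECTED_DURATION_SECS, 1.0),
        # Reward longer open-ended answers, capped at 50 words on average.
        "open_ends": min(response["mean_open_end_words"] / 50, 1.0),
        # Reward variation across grid answers (0.0 = pure straight-lining).
        "variation": response["grid_variation_ratio"],
        # Share of repeated factual questions answered consistently.
        "consistency": response["repeat_question_match_rate"],
        # Share of "logic" questions answered correctly.
        "logic": response["logic_question_pass_rate"],
    }
    weights = {"pace": 0.2, "open_ends": 0.2, "variation": 0.2,
               "consistency": 0.2, "logic": 0.2}
    return sum(weights[k] * checks[k] for k in checks)
```

The equal weights are a starting point only; in practice they would be tuned against post-survey quality checks.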
A score can be calculated and shared with the respondent at the end of the survey. Over time, this could influence the quality of responses via:
- Achievement – aiming to improve a quality score over time
- Social effects – where panels have public profiles, average and cumulative quality scores can be publicly displayed
- Economic – bonus panel points/incentives can be received for achievements, such as a high survey quality score or an accumulation of a certain number of points (one possible rule set is sketched after this list)
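For the economic lever, one hypothetical rule set might look like the following; the thresholds and point values are placeholders, not recommendations:

```python
def bonus_points(latest_score, score_history):
    """Award bonus panel points for quality achievements (illustrative)."""
    bonus = 0
    if latest_score >= 0.85:  # a strong single-survey quality score
        bonus += 50
    if len(score_history) >= 20 and \
            sum(score_history) / len(score_history) >= 0.75:
        bonus += 200          # sustained quality across many surveys
    return bonus
```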
The challenges
For this to work successfully, several challenges would need to be overcome:
- Gaming the system – there will always be cheats, and cheats evolve. Keeping the scoring system opaque would mitigate this to an extent. But even with some people cheating the system, I contend the effects would be smaller with these gaming principles in place than without them
- Shifting focus – a danger is that respondents spend more time trying to give a “quality” answer than an “honest” one. Sometimes people genuinely don’t have much to say on a subject, or legitimately rate a series of attributes in the same manner
- Alienating respondents – would some people be disinclined to participate in surveys due to not understanding the mechanics or feeling unfairly punished or lectured on how best to answer a survey? Possibly, but while panels should strive to represent all types of people, quality is more important than quantity
- Arbitrariness – a scoring system can only infer quality; it cannot actually get inside respondents’ minds to know their motivations. A person could move slowly and deliberately through a survey while watching TV and not reading the questions. As the total score can never be precise, a broad scoring system (such as A-F grading, sketched after this list) should be used rather than something like an IQ score.
- Maintaining interest – this type of game doesn’t motivate people to continually improve, and respondents could quickly tire of the conceit. However, the “aim of the game” is to maintain a minimum standard. If applied correctly, this could become the default behaviour for respondents, with the gaming incentives seen as a standard reward, particularly on panels without public profiles.
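On the arbitrariness point, the coarse banding might look something like this; the band edges are assumptions that would need calibrating against real response data:

```python
def letter_grade(score):
    """Map a 0-1 quality score to a deliberately coarse A-F grade,
    so feedback never implies more precision than the metrics support."""
    bands = [(0.9, "A"), (0.75, "B"), (0.6, "C"), (0.45, "D"), (0.3, "E")]
    for threshold, grade in bands:
        if score >= threshold:
            return grade
    return "F"
```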
Would it work? I can’t say with any certainty, but I’d like to see it attempted.
I’m not sure I can think of a game where the mechanics of the scoring system are not transparent to the player (though, granted, I’ve spent about a minute thinking, so there may be some examples). In order to motivate a quality response, people need to be aware of how quality is being scored, don’t they? The other concern is one you’ve raised yourself – fundamentally, the best-quality response is the most honest response, and that’s not something we can build a scoring system around.
Also, a point well made in the Deterding deck is that a scoring system alone does not a game make.
Finally, I would just note that most of this post is about adding a scoring mechanism to engineer better-quality responses from existing surveys in a fairly direct way – isn’t the potential of gamification more about transforming the fundamentals of survey design to make surveys more engaging for respondents, with better-quality data as a pleasing side effect?
Thanks AJ, and some fair points.
You’re right, the broad mechanics do have to be explained. But I don’t think the explicit measures/weights/algorithms need to be – people just need to know they have to give full, considered responses.
I can’t see how surveys can ever properly embrace gaming mechanics until respondents have much greater control over which surveys they complete and how.
There are of course many ways to improve response quality – better visuals, better content, better non-monetary incentives. I see this “direct” way as being one method that could “nudge” people towards giving better responses. It may work, it may not. But I’d be interested in seeing what impact it had.
A very interesting article, Simon – thanks for this. It is the very subject I’ve been working on for over two years.
Social media has opened up a heap of opportunities not only to solidify a scoring system, but also to increase engagement and thus response and data quality.
Thus I agree with researchgeek and yourself that only the broad strokes need be displayed to respondents. There are many examples of such set-ups in the social media world, touching on influence in particular… and they’re only getting started. Embedding oneself in an already familiar environment goes a long way in addressing the “dress up” issues you rightly raise. The key to keeping people engaged then lies in relevant incentives.