How can gaming principles be used in research? This is a fascinating area that I know Tom Ewing has been spending some time thinking about.
I haven’t, but a combination of some frustrations on a project and reading this excellent presentation, entitled “Pawned. Gamification and its discontents”, got me thinking specifically about how gaming principles could contribute to data quality in online (or mobile) surveys.
The presentation is embedded below.
The problem
There are varying motivations for respondents to answer surveys, but a common one is economic. The more surveys completed, the more points accrued and money earned.
In its basic sense, this itself is a game. But like a factory production line team paid per item, it promotes speed over quality.
As such, survey data can be poorly considered: minimal effort goes into open-ended questions (rendering deliberative questions pointless), and there is the threat of respondents “straight-lining” or, more subtly, selecting answer boxes at random without reading the questions.
The solution
Some of these issues can be spotted during post-survey quality checks, but I believe simple gaming principles could be used (or at least piloted) to discourage people from completing surveys poorly.
Essentially, it involves giving each respondent a score based on their survey responses. The measures and weights in any scoring system would evidently need tweaking over time, but it could consist of metrics such as the following (a rough sketch of how these might combine into a score follows the list)
- Time taken to complete the survey (against what time it “should” take)
- Time taken on a page before an answer is selected
- Consistency in time taken to answer similar forms of questions
- Length of response in open-ended answers
- Variation in response (or absence of straight lines)
- Absence of contradictions (a couple of factual questions can be repeated)
- Correct answers to “logic” questions
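To make this concrete, here is a minimal sketch in Python of how such a score could be assembled. Every field name, weight and threshold below is an illustrative assumption rather than a tested model.

```python
# Illustrative sketch only: field names, weights and thresholds are assumptions,
# not a tested scoring model.

EXPECTED_MINUTES = 8          # hypothetical "should take" duration for the survey
WEIGHTS = {
    "timing": 0.3,            # time taken against the expected duration
    "open_ends": 0.2,         # effort put into open-ended answers
    "variation": 0.2,         # absence of straight-lining across grid questions
    "consistency": 0.2,       # no contradictions on repeated factual questions
    "logic": 0.1,             # correct answers to "logic" (trap) questions
}

def clamp(value, low=0.0, high=1.0):
    return max(low, min(high, value))

def quality_score(response):
    """Return a 0-100 quality score for one completed survey.

    `response` is a dict of raw signals captured during the survey;
    every key used here is a hypothetical field name.
    """
    components = {
        "timing": clamp(response["minutes_taken"] / EXPECTED_MINUTES),
        "open_ends": clamp(response["avg_open_end_words"] / 15),  # ~15 words earns full marks
        "variation": clamp(response["distinct_grid_answers"] / max(1, response["grid_questions"])),
        "consistency": 1.0 if response["contradictions"] == 0 else 0.0,
        "logic": response["logic_correct"] / max(1, response["logic_total"]),
    }
    return round(100 * sum(WEIGHTS[k] * v for k, v in components.items()))
```

In practice, the weights would be tuned against known good and bad responses, and metrics such as per-page timing could be added as further components.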
A score can be calculated and shared with the respondent at the end of the survey. Over time, this could seek to influence response quality via
- Achievement – aiming to improve a quality score over time
- Social effects – where panels have public profiles, average and cumulative quality scores can be publicly displayed
- Economic – bonus panel points/incentives can be awarded for achievements, such as a high survey quality score or an accumulation of a certain number of points (sketched below)
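As an illustration of the economic lever, here is a minimal sketch of how bonus panel points might be awarded; the thresholds and point values are entirely hypothetical.

```python
# Hypothetical bonus rules: thresholds and point values are illustrative only.

def bonus_points(latest_score, recent_scores):
    """Award bonus panel points for quality achievements.

    `latest_score` is the 0-100 quality score from the survey just completed;
    `recent_scores` lists the respondent's previous quality scores, oldest first.
    """
    bonus = 0
    if latest_score >= 80:                                   # high single-survey quality
        bonus += 50
    if recent_scores and latest_score > max(recent_scores):  # personal best (achievement)
        bonus += 20
    last_ten = recent_scores[-9:] + [latest_score]
    if len(last_ten) == 10 and min(last_ten) >= 60:          # ten decent surveys in a row
        bonus += 100
    return bonus
```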
The challenges
For this to work successfully, several challenges would need to be overcome
- Gaming the system – there will always be cheats, and cheats evolve. Keeping the scoring system opaque would mitigate this to an extent, and even with some people cheating the system, I contend the damage to data quality would be smaller with these gaming principles in place than without them
- Shifting focus – a danger is that respondents spend more time trying to give a “quality” answer than giving an “honest” answer. Sometimes, people don’t have very much to say on a subject, or consistently rate a series of attributes in the same manner
- Alienating respondents – would some people be disinclined to participate in surveys due to not understanding the mechanics or feeling unfairly punished or lectured on how best to answer a survey? Possibly, but while panels should strive to represent all types of people, quality is more important than quantity
- Arbitrariness – a scoring system can only infer quality; it cannot actually get inside respondents’ heads to know their motivations. A person could slowly and deliberately go through a survey while watching TV and not reading the questions. As the total score can never be precise, a broad scoring system (such as A-F grading, sketched after this list) should be used rather than something resembling an IQ score.
- Maintaining interest – this type of game doesn’t motivate people to continually improve, and the conceit could quickly wear thin for respondents. However, the “aim of the game” is to maintain a minimum standard. If applied correctly, this could become the default behaviour for respondents, with the gaming incentives seen as a standard reward, particularly on panels without public profiles.
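On the arbitrariness point, the broad banding could look something like the sketch below; the grade boundaries are, of course, an assumption.

```python
# Hypothetical grade boundaries: broad bands rather than a falsely precise score.

GRADE_BANDS = [(90, "A"), (75, "B"), (60, "C"), (45, "D"), (30, "E")]

def grade(quality_score):
    """Convert a 0-100 quality score into a coarse A-F grade."""
    for threshold, letter in GRADE_BANDS:
        if quality_score >= threshold:
            return letter
    return "F"
```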
Would it work? I can’t say with any certainty, but I’d like to see it attempted.