Scalability is something I’ve been thinking about recently. What works at one level may not work at another.
Foursquare is a prominent example of this. As a social service, it should benefit from network effects: the more people participating, the more information flows between interconnected nodes of people and places.
Yet the major draw of Foursquare – the gaming aspects around badges and mayorship – is not designed to scale.
There can only be one mayor of a locale. Six months ago, I was Mayor of two Cineworlds, the O2 Academy in Brixton, the Liberty building and The White Horse, among others. But as more people use the service, becoming a Mayor of somewhere becomes more difficult. Rather than swarm at a single venue, people are creating sub-venues (“the table in the corner”) or irrelevant landmarks (“the lamppost off Acacia Avenue”) in order to become Mayor.
To become a Mayor of somewhere, you need a regular routine. Therefore, the place you work is the place you are most likely to be Mayor of. Of course, if more than one Foursquare user works at the same location, Mayorship will simply switch hands depending on holidays.
If staff are the most likely to be Mayors, then there is little point offering incentives to Mayors. Why give a free mocha to the Mayor of the Starbucks at Parsons Green when the Mayor is the duty manager who opens up six mornings a week?
The badges, too, are fading into insignificance. Sponsored and location- or chain-specific badges mean there is no objective challenge in collecting them; rewards are based on good fortune or a willingness to participate in marketing activity.
The gaming mechanics need to be fundamentally altered if Foursquare is going to remain relevant (unless of course it contracts back to a size where the system works).
Collecting additional information on visits (for instance, to distinguish staff from patrons) would be too cumbersome. The most obvious way to change the system is to overhaul the Mayoral structure: it is not enough to have a single Mayor at a location. Instead, additional tiers of visitor – such as councillors and citizens – should be created, with different incentives and rewards. This instantly becomes more scalable.
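The tiered idea amounts to a simple rule: rank a venue's visitors by check-in frequency and award titles by position, so that the number of rewarded positions can grow with the user base. A minimal sketch of what that might look like – the titles, thresholds and function name here are all hypothetical, not Foursquare's actual logic:

```python
# Hypothetical sketch of a tiered "civic status" system for one venue.
# Titles and the number of Councillor seats are invented for illustration.

def assign_titles(checkins):
    """Map {user: check-in count} to {user: civic title}.

    The single most frequent visitor is Mayor; the next three are
    Councillors; everyone else with a check-in is a Citizen.
    """
    ranked = sorted(checkins.items(), key=lambda kv: kv[1], reverse=True)
    titles = {}
    for position, (user, count) in enumerate(ranked):
        if position == 0:
            titles[user] = "Mayor"
        elif position < 4:  # e.g. three Councillor seats
            titles[user] = "Councillor"
        else:
            titles[user] = "Citizen"
    return titles

venue = {"alice": 42, "bob": 17, "carol": 9, "dave": 8, "erin": 2}
print(assign_titles(venue))
```

Unlike a winner-takes-all Mayorship, a scheme like this still gives the duty manager the top spot, but leaves meaningful statuses for everyone else to compete over.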
Scaling games is incredibly problematic. For instance, I’ve played Home Run Battle 3D quite a lot on my phone, yet am not even in the top 1,000 scores. My obscurity means the leaderboard holds no incentive for me to improve.
The joy of games – or at least one of them – is the competitive edge they bring to social activity. The guys behind Cadbury's Pocketgame recognise this. Rather than rank performances, they focus on the intrinsic joy of competing. The overall aggregation of Spots vs Stripes offers an additional edge to bragging rights, but is largely arbitrary. The campaign is about playing. I like it.
Moving away from games, the biggest issue, for me at least, in scalability is time. Time is finite, and in a world of ever-increasing choice the need for editors and aggregators is ever greater. The Paradox of Choice is very real, and I’m increasingly reliant on “shortcut” services to find the right balance between effectiveness and efficiency – whether it is Twitter lists enabling me to go straight to the contacts I am closest to, or Expedia allowing me to book flights, transport and hotels at the same time.
In the research world, scalability is the reason why qualitative research currently accounts for only around 10-15% of billings. It is inherently more time- and labour-intensive: the more people spoken to, the longer the interviewing, transcription, editing and analysis take. Quantitative work – particularly online – is scalable. Analysis times may vary, but setting up a survey for a sample of 100 or 10,000 is essentially the same.
Could online qualitative research scale – whether "netnography", communities or traditional methods transplanted online? Possibly. Siamack Salari has shown with his ethnography mobile application that it is possible to outsource tasks and functions to individual respondents. But the dynamics of a 30-person research community are very different to those of a 3,000-person community. Even with automatic coding, transcription and sentiment analysis (if such a thing can even be accurate), the culture of a large community is such that norms and behaviours get moulded by the "power users". This may be more natural in some respects, but might make fulfilling the objectives of the research harder. And for this reason, I'm sceptical as to the scalability of online qualitative research.
Image credit: http://www.flickr.com/photos/monana7/3622111882/
I think there are two ways of looking at online "qual". One is an online focus group, which is harder to track – everyone "talking" at once – but everything is down in the logs. So it is very similar to the offline equivalent, with different strengths and weaknesses. Similarly, you can set up something like an online forum, working in a similar way but without the time-sensitive nature of a chatroom-based online focus group.
The other way of looking at it is the conversation that is already out there: "buzz monitoring". This introduces a whole different set of problems – filtering out irrelevance, identifying relevant content, and putting it all into context is a massive challenge.
But my view is that once you get over the fact that everything is reported in the same manner as “quant” research (big numbers and percentages), what you’re left with is exactly what I think scalable, online qualitative research should look like.
On another note, I read somewhere (AdAge, I think) about an issue with Starbucks and Foursquare, where someone couldn’t get their “reward” redeemed because none of the staff had the faintest idea what Foursquare was, or why someone should get a free coffee because of it. So while it might be an easy thing to implement for a single independent shop, there are other challenges for a huge global network.
Cheers – you’re right to pull me up for conflating several techniques within “online qual”.
Buzz monitoring is scalable, though there is the separate issue of confidence and accuracy in the reporting (manual coding certainly isn't as scalable).
Specifically created research communities may be less intensive than their offline counterparts, but I do see the amount of effort required to moderate and manage them being proportionate to the number of participants.