• About the blog

    This is the personal blog of Simon Kendrick and covers my interests in media, technology and popular culture. All opinions expressed are my own and may not be representative of past or present employers.

Scaling games

Scalability is something I’ve been thinking about recently. What works at one level may not work at another.

Foursquare is a prominent example of this. As a social service, it should benefit from network effects: the more people participate, the more information flows between the interconnected nodes of people and places.

Yet the major draw of Foursquare – the gaming aspects around badges and mayorship – is not designed to scale.

There can only be one mayor of a locale. Six months ago, I was Mayor of two Cineworlds, the O2 Academy in Brixton, the Liberty building and The White Horse, among others. But as more people use the service, becoming a Mayor of somewhere becomes more difficult. Rather than swarm at a single venue, people are creating sub-venues (“the table in the corner”) or irrelevant landmarks (“the lamppost off Acacia Avenue”) in order to become Mayor.

To become a Mayor of somewhere, you need a regular routine. Therefore, the place you work is the place you are most likely to be Mayor of. Of course, if more than one Foursquare user works at the same location then Mayorship will switch depending on holidays.

If staff are the most likely to be Mayors, then there is little point offering incentives to Mayors. Why give a free mocha to the Mayor of the Starbucks at Parsons Green when the Mayor is the duty manager who opens up six mornings a week?

Furthermore, the badges are fading into insignificance. Sponsored and location- or chain-specific badges mean there is no objective challenge in collecting them; rewards are based on good fortune or a willingness to participate in marketing activity.

The gaming mechanics need to be fundamentally altered if Foursquare is going to remain relevant (unless of course it contracts back to a size where the system works).

Collecting additional information on visits (for instance, to distinguish staff from patrons) would be too cumbersome. The most obvious way to change the system is to overhaul the Mayoral model. It is not enough to have a single Mayor at a location. Instead, additional levels of visitor – such as councillors and citizens – should be created, with different incentives and rewards. This instantly becomes more scalable.
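To make that concrete, here is a minimal sketch in Python of how such a tiered system might work – the tier names, thresholds and check-in counts are entirely hypothetical, not Foursquare's actual mechanics:

```python
from collections import Counter

# Hypothetical tiers: labels and thresholds are illustrative only,
# not Foursquare's actual mechanics.
TIERS = [
    (30, "Mayor"),       # the most dedicated visitors
    (10, "Councillor"),  # regular visitors
    (1, "Citizen"),      # anyone who has checked in at least once
]

def assign_tiers(checkins):
    """Map each user to a tier based on their check-in count at one venue.

    checkins: an iterable of user ids, one entry per check-in.
    Returns a dict of {user_id: tier_name}.
    """
    counts = Counter(checkins)
    tiers = {}
    for user, n in counts.items():
        for threshold, title in TIERS:  # TIERS is ordered highest first
            if n >= threshold:
                tiers[user] = title
                break
    return tiers

# Example: staff check in daily, patrons far less often, yet everyone
# who participates earns a status that can carry its own reward.
log = ["duty_manager"] * 40 + ["regular_patron"] * 12 + ["one_off_visitor"]
print(assign_tiers(log))
# {'duty_manager': 'Mayor', 'regular_patron': 'Councillor', 'one_off_visitor': 'Citizen'}
```

Because status here is threshold-based rather than a single winner-takes-all title, adding more users creates more councillors and citizens instead of devaluing everyone below the one Mayor.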

Scaling games is incredibly problematic. For instance, I’ve played Home Run Battle 3D quite a lot on my phone, yet am not even in the top 1,000 scores. My obscurity means the leaderboard holds no incentive for me to improve.

The joy of games (or at least one of their joys) is the competitive edge they bring to social activity. The guys behind Cadbury’s Pocketgame recognise this. Rather than rank performances, they instead focus on the intrinsic joy of competing. The overall aggregation of spots vs stripes offers an additional edge to bragging rights, but is largely arbitrary. The campaign is about playing. I like it.

Moving away from games, the biggest issue, for me at least, in scalability is time. Time is finite, and in a world of ever-increasing choice the need for editors and aggregators is ever greater. The Paradox of Choice is very real, and I’m increasingly reliant on “shortcut” services to find the right balance between effectiveness and efficiency – whether it is Twitter lists enabling me to go straight to the contacts I am closest to, or Expedia allowing me to book flights, transport and hotels at the same time.

In the research world, scalability is the reason why qualitative research currently accounts for only around 10-15% of billings. It is inherently more time- and labour-intensive: the more people spoken to, the longer the interviewing, transcription, editing and analysis take. Quantitative work – particularly online – is scalable. Analysis times may vary, but the set-up of a survey for a sample of 100 or 10,000 is essentially the same.
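As a rough illustration of that fixed-versus-marginal-cost argument, here is a toy model in Python – the hours are invented purely to show the shape of the curves, not real billing figures:

```python
# Toy effort model: the numbers are invented to illustrate the scaling
# argument, not to reflect real research budgets.
QUANT_SETUP_HOURS = 20             # script and launch the survey once
QUANT_HOURS_PER_RESPONDENT = 0.01  # collection and tabulation largely automated
QUAL_HOURS_PER_RESPONDENT = 3      # interviewing, transcribing, coding

def effort_hours(n, setup, per_respondent):
    """Total effort for a study with n respondents."""
    return setup + n * per_respondent

for n in (100, 1_000, 10_000):
    quant = effort_hours(n, QUANT_SETUP_HOURS, QUANT_HOURS_PER_RESPONDENT)
    qual = effort_hours(n, 0, QUAL_HOURS_PER_RESPONDENT)
    print(f"n={n:>6}: quant ~{quant:6.0f}h, qual ~{qual:6.0f}h")

# n=   100: quant ~    21h, qual ~   300h
# n=  1000: quant ~    30h, qual ~  3000h
# n= 10000: quant ~   120h, qual ~ 30000h
```

The quantitative cost is dominated by a one-off set-up, so it barely moves with sample size; the qualitative cost grows roughly in proportion to the number of people spoken to.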

Could online qualitative research scale – whether “netnography”, communities or traditional methods transplanted online? Possibly. Siamack Salari has shown with his mobile ethnography application that it is possible to outsource tasks and functions to individual respondents. But the dynamics of a 30-person research community are very different to those of a 3,000-person community. Even with automatic coding, transcription and sentiment analysis (if such a thing can even be made accurate), the culture of a large community is such that norms and behaviours get moulded by the “power users”. This may be more natural in some respects, but it might make fulfilling the objectives of the research harder. And for this reason, I’m sceptical about the scalability of online qualitative research.


Image credit: http://www.flickr.com/photos/monana7/3622111882/


Recommended Reading – 25th July 2010

Below is the second and final group of links from the past month that I recommend you click on:



21st century market research

I’ve just finished reading Communispace’s latest position paper “You are now leaving your comfort zone: 21st century market research” (link points to their blog post, which in turn links to the pdf). It is unquestionably one of the best research papers I have read in quite some time.

It has to be taken with the caveat that the paper is promoting their position as providers of large-scale, continuous research communities and that the recommendations are focused around the relative strengths of this methodology. Nevertheless, I found myself in agreement with the majority of the points made.

  • Actionable: I loved the quote that it is “more important for research to be actionable than irrefutable”. It is to an extent a straw man argument, since 20th century “gold standard” techniques are still rife with bias, but I am in total support of “good enough” research. Trading efficiency for supposed accuracy has diminishing returns, and in our complex, multi-dimensional environments no research can be truly predictive or offer completely accurate validation. Shifting the emphasis of debate from data quality to data application is crucial, in my opinion
  • Professional respondents: “Professional” respondents are inevitable in research, and I like the notion of accepting this and including them as “actors”. I was not aware of the ARF’s research showing that professional respondents actually give better quality results, but presume this applies where professionals don’t lie about themselves in order to pass the recruitment screener, i.e. they are “acting a role”. It is a good observation that, over time, it becomes harder to fake and so responses become more authentic and trustworthy
  • Openness: Transparency and self-disclosure are important measures in reframing respondents as participants. We should be moving away from treating the people we research as emotionless lab-rats. Instead, there should be a two-way dialogue. Obscure projective techniques may indeed relax people into opening up, but I believe the researcher revealing elements about themselves facilitates a better environment for open discussion. Similarly, why hide the research sponsor and leave the person second guessing (unless of course it is highly sensitive NPD)
  • Exploratory research: I also agree that the strength of research lies earlier in the process. Validating hypotheses may be important in offering reassurance, risk assessment or measurements of success, but there is a massive opportunity in terms of idea generation and creative development. I don’t really like the term co-creation but there is opportunity for collaboration which creatives and strategists should view as an opportunity to better relate to their target audiences, and not a threat (since ideas ultimately need their expertise to be worked up into viable and coherent campaigns or executions)

Inevitably, there were also a couple of points I didn’t agree with:

  • Real-time: Real-time interaction and feedback are fantastic in some areas – customer service and closing a sale, for instance. Research is not one of these areas. Interpreting research needs consideration and contextual understanding; real-time can make us too trigger-happy
  • Natural: As long as research uses recruitment techniques (nearly always necessary in order to speak to the right people, and the right balance of people), it will never be truly natural. “Naturalistic” maybe, but not natural

But on the whole, it is a great read and I would recommend you all take a look at it.

A final thing that struck me about the paper was the use of a couple of quotes from industry leaders. When I read presentations, reports or papers from marketers or strategists, they are often illustrated with quotations from peers or thought leaders in the space. The research industry doesn’t really have that. The “researchsphere” doesn’t have the same vibrancy as the “plannersphere” and so the trade bodies and the trade press need to play a much more prominent role in providing platforms for client-side industry leaders to speak from. Thus far, they do not seem to be doing so. All talks and papers I see seem to be project- or sales-based; there is very little commentary on the evolution or application of research from their perspective.


Image credit: http://www.flickr.com/photos/expressmonorail/3046970004/