Power to the People – new data and the challenges

Universal McCann have just released Wave 4 of their “Power to the People” social media tracker. The public report can be downloaded here or viewed below (RSS readers may need to click through).

Looking through it, there are some curious results from the UK participants: fewer people report engaging in certain social activities online. There are several possible reasons why these scores have appeared.

Before I look into these, I want to stress that I am not trashing their research. UM have been very clever in setting up this tracker. On the one hand, they publish topline figures that are widely sought after and thus generate excellent PR. And on the other, they keep the details and breakdowns for their own internal use, which gives them a competitive advantage. Doubly beneficial.

The challenges of tracking are wide-reaching and not particular to this study. So, while I use the UM tracker as a case study, I hope my points are construed as general and not specific to their methodology.

1. Non-constant audience – UM have concentrated their study on active internet users (and fair enough – it doesn’t make sense to track non-use). But whereas demographics are largely consistent over time, the internet isn’t yet fully matured and so this audience will change. As such, the universe for wave 3 of the tracker was 17.8m UK 16-54s who use the internet every day or every other day. During Wave 4 the universe had expanded to 19m. Late-comers aren’t going to be as interested in social media or the internet as a whole, and so they will be less active.

2. Absolutes or percentages – if the universe is expanding, a percentage drop may still be an absolute increase. For instance, video viewers dropped from 86% in Wave 3 to 79% in Wave 4 – this is cited as a surprising change. But factoring in the audience size and looking at absolute figures – the number of people participating only fell from 15.2m to 15m – a 1% difference that is within the realms of sampling error.
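The arithmetic behind this point is simple enough to sketch (a Python illustration using the rounded figures quoted above, so the exact millions may differ slightly from the report's own numbers):

```python
# Universe sizes (millions) and video-viewer shares as quoted from the tracker.
wave3_universe, wave3_share = 17.8, 0.86  # Wave 3: 86% of 17.8m
wave4_universe, wave4_share = 19.0, 0.79  # Wave 4: 79% of 19m

wave3_viewers = wave3_universe * wave3_share  # roughly 15.3m
wave4_viewers = wave4_universe * wave4_share  # roughly 15.0m

# A seven-point drop in share, but a far smaller absolute change.
pct_point_drop = (wave3_share - wave4_share) * 100
abs_change = wave4_viewers - wave3_viewers
print(f"{pct_point_drop:.0f}pt drop in share, {abs_change:+.1f}m change in viewers")
```

The share falls by seven percentage points, yet the absolute audience barely moves – which is exactly why topline percentages on an expanding universe can mislead.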

3. Dips and seasonal effects – the UM tracker dips into the data annually, rather than tracking continuously. Our behaviour is highly seasonal – we consume less of some things over summer as we go on holiday, and more of other things in January as we enjoy the novelty of our Christmas presents. The four UM waves to date have been in September, June, “completed in March”, and “between November and March”. This will have an effect.

4. Changing the survey options – as much as it pains me to say, respondents don’t fully and honestly answer surveys. They get bored. The more things we seek to track, the less time they will spend considering each option (even though the total time spent will be greater). If we give a respondent four options, they may answer three. Give them 16 options and they might answer 12 if they responded in the same proportion – or fewer if they got bored. Answers therefore become more spread out, and the percentages for some options may fall. As UM track more types of behaviour, they may be dissuading some respondents from answering completely.

5. Context – research studies don’t operate in a vacuum – the external, interconnected environment needs to be factored in to place the research in context. For instance, perhaps Christmas 2007 saw more sales of laptops with webcams than Christmas 2008. In 2007, then, more people were experimenting with uploading videos. As this isn’t particularly sticky behaviour, fewer sales the following year could explain the dwindling numbers (this is abstract speculation – sales of webcams may well have risen in 2008).

6. Anomalies – we survey in samples, and not censuses. Despite quotas and stratified sampling, there will always be some quirks. There is always the danger of reading too much into one data point, when it should be the general trends that are considered. So, we should wait to see what Wave 5 shows before coming up with any conclusions.

One of the projects I’m currently working on is setting up a tracker. As the above six points indicate, it is a tricky endeavour. Universal McCann have set up a great resource (I used the data several times while at ITV) and I hope to replicate their success in my work. There are plenty of challenges to meet before this happens though.

sk


Mark Earls – From “me” to “we”

Thanks to Mat, who kindly donated his ticket, I was able to go and see Mark Earls give a seminar entitled From “me” to “we” at the Royal Society.

Rather shamefully, I am yet to read Herd – the book (and associated research) on which the talk was based. This is despite regularly reading the Herd blog and even having a copy in the Essential library. As I said, shameful.

Despite this, I think I was the target audience. Aside from a Q&A only notable for a lady’s rather aggressive accusation that Mark was ignoring “the female perspective”, the session offered a fairly gentle précis of the book’s central theory – which, had I read it, I would of course have been familiar with.

The talk

A tenet of the book is that we’re bad at changing other people’s behaviour. To highlight this, Mark recalled a few statistics from his research:

  • Only 10% of new products survive longer than 12 months
  • Only 30% of change management programmes begin to achieve their aims
  • Mergers & Acquisitions lessen shareholder value two thirds of the time
  • No government initiative has created demonstrable and sustainable change

This is particularly worrying because behavioural change comes before attitude change – our thinking comes after the fact. We (post)rationalise rather than act rationally.

Therefore, in order to change attitudes, we need to change behaviour. And to be able to do this, we need to understand who we are. Only then can we create solutions that work.

The Herd thesis draws on Asian cultural traditions that view humans as naturally social. We are fundamentally social with only a bit of independence, not vice versa.

Although it doesn’t sound particularly controversial, this thinking does run contrary to some well established tenets of both marketing and social theory.

According to Mark, thinking is much less important in human life than it seems. He likens our relationship with thinking to a cat’s with water – we can do it if we have to, but we don’t particularly like it.

This is because it is easier to follow than to think. We know our judgement is fallible and so we outsource the decision by following the crowd. But while this may work in some situations – many illustrated by James Surowiecki – it is also arguably a contributing factor to the financial crisis, as financial institutions copied one another without comprehending the implications.

We therefore need to design our theories and tools to accommodate this social behaviour. It is much more rewarding to understand how social norms are created and perpetuated than it is to work on the assumption of cogito ergo sum.

Some initial thoughts

While brief, the talk certainly conveyed the need for me to read the book fully. Perhaps then some of my questions regarding the theory will be answered.

In particular, I’m interested in knowing where movements originate and whether this herd behaviour can be predicted.

For all the sheep, there must be a shepherd somewhere. Are these shepherds always designated as such – the almost mythical influentials – or do we alternate between thinking and following?

Rarely are our choices as clear cut as choosing whether to join the corner of the party where people are talking rather than the one where people are sitting in silence. Instead we have multiple choices and herds – how do we choose?

Is it a level of proximity? In the Battle of Britpop, Northerners sided with Oasis and Southerners with Blur. But I’m from the Midlands – so was my choice one of the rare occurrences of rational choice (which would make a rather unconvincing deus ex machina), or was it purely random?

If random, then the work of Duncan Watts becomes pertinent. His modelling has suggested that in situations where groups vote up and down their favourite songs, there is no objective winner. Different simulations create different patterns. Purely random.

This creates difficulties for researchers, as we like our statistical certainty. We like to have a set answer that we can explain post hoc given the evidence. Duncan Watts’ research would suggest that research tools that build in mass opinion – such as crowdsourced tagging or wikis – are effectively meaningless. Rather than ultimately converging on a “correct” answer, they simply reflect the random order of participation and interaction.
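Watts’s point about different simulations producing different winners can be sketched with a toy cumulative-advantage model (my own illustration, not his actual code): each simulated listener votes for a song with probability proportional to its current vote count, and different random seeds crown different chart-toppers.

```python
import random

def simulate_chart(n_songs=8, n_votes=2000, seed=0):
    """Toy rich-get-richer voting: each listener picks a song with
    probability proportional to its current vote count (every song
    starts on 1 so it has a chance of being picked). Returns the
    index of the winning song."""
    rng = random.Random(seed)
    votes = [1] * n_songs
    for _ in range(n_votes):
        pick = rng.choices(range(n_songs), weights=votes)[0]
        votes[pick] += 1
    return max(range(n_songs), key=votes.__getitem__)

# Identical songs, different random histories, different winners.
winners = {simulate_chart(seed=s) for s in range(20)}
print(f"Distinct winners across 20 runs: {len(winners)}")
```

Because the songs are identical by construction, any concentration of votes on one winner is pure path dependence – which is precisely the problem for post-hoc explanation.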

Can mass behaviour be effectively incorporated into a research programme? I’ll report back with some thoughts once I’ve read the book.

sk

We’re bad at changing other people’s behaviour

Only 10% of new products survive longer than 12 months

30% of change management programmes begin to achieve their aims

Mergers & Acquisitions lessen shareholder value two thirds of the time (PwC)

No government initiative has created demonstrable and sustainable change

Behavioural change comes before attitude change – thinking comes after the fact

In order to change attitudes, change behaviour

We need to understand who we are so we can create solutions

More rationalising than rational

Cognitive outsourcing – memory is a distributed function so only remember slivers

We are fundamentally social with a bit of independence, not vice versa

Asian culture is inherently social

Gandhi said that humans are a necessarily interconnected species

Thinking is much less important in human life than it seems

“lazy mind hypothesis”

We can think independently, we just don’t like it – like a cat to water

Behave according to other people’s actions e.g. go to busy shops

We know our own judgement is fallible so “I’ll have what she’s having” – wisdom of crowds or financial crisis

Leads to social norms

Need to design our theories and tools to accommodate social behaviour

Genesis random – Duncan Watts

Is it proximity that leads us to follow a herd, or an example of us rationally weighing up the pros and cons

Herds originate from somewhere – must be a leader. Are these leaders the same in each situation, or are we all capable of being shepherds

Research application – crowdsource answers. But random – no statistical certainty as only one situation

Wikis to collate group opinion?


How can research inspire?

The question in the title is predicated on the assumption that research can inspire. While the haters may disagree, I truly believe it can.

Understanding the different ways in which it can do so is trickier.

In a slight contradiction to my previous post on “insight”, I’m using the term “research” in its most catch-all form. Rather than restricting the thinking to groups or surveys, I’m thinking about all disciplines and all methodologies. Research, data and insight.

In order for research to inspire, the recipient needs to be open to being inspired. Some form of creative process is necessary to make that new connection or leap.

In thinking about how research can inspire, I’ve come up with three initial ways. It is by no means a typology, and the examples aren’t even mutually exclusive, but it seems like a good starting point for organising my thoughts.

Structure:

The way in which research issues are approached and the problems framed. Examples include:

  • Methodology: The methodology itself could suggest new and previously alien ways to approach an issue. This post from Tom Ewing highlights some innovations in how research is carried out, but there are numerous examples of fresh approaches – from fMRI scanning to crowdsourcing.
  • Influences: Research is often (correctly) portrayed as insular, but there are notable exceptions – Tom Ewing himself being one of them. He is able to take his knowledge and skills from music criticism and community building and apply them to research problems. Admittedly, this example isn’t research-specific, but it can nevertheless inspire others to bring in people with different perspectives.
  • Backwards approach: I mean this in a good way – research briefs are often issued to answer specific questions. To discover the most relevant way to get this information, researchers need to start with the answer and work backwards to figure out both the question and the way in which it is asked.

Results

While a lot of research may be characterised otherwise, results themselves can inspire:

  • Exploratory research: By its very nature, it is designed to uncover new observations or – deep breath – insights.
  • Fresh perspectives: Seeking to understand different audiences can lead to fresh outlooks as we look at the same issue through someone else’s eyes. While the Morgan Stanley note from their 15-year-old intern was undoubtedly overplayed, I did like the notion that teenagers stay away from Twitter because it is full of old people trying to be young (for what it’s worth, I view Twitter as being far closer to LinkedIn than Facebook – it is about useful connections rather than genuine relationships).
  • Holistic understanding: On a larger scale, ethnographers like Jan Chipchase offer us fascinating observations on areas we would never have previously considered.
  • Prototyping: I’ve written about IDEO before, and I love how they physically build things in order to better understand the problems.
  • Desk research: Somewhat tenuous, but even sitting at your desk reading, and being inspired by, different blogs or sites can be considered a form of research – whether one is explicitly looking for specific information or not.

Implementation and Impact

Moving on from the results themselves, how research is used – and the effects it has – may also inspire:

  • Workshops: Debating how research can be used can lead to further thoughts on idea implementation.
  • Social effects of making data public: From last.fm to Nike+, making personal data available both encourages further participation and causes people to adjust their natural behaviour.
  • Rewards and recognition: Similarly, in communities there have been noticeable effects on user behaviour and community culture when elements such as post counts or social connections have been introduced.
  • Analytics: Avinash Kaushik is a Google Analytics evangelist who is full of great examples of how understanding site data has improved business performance.

This question was recently posed to me by a colleague working on an assignment. The assignment is ongoing so any further thoughts, ideas or examples on how research methods, results or implementation can inspire would be massively appreciated.

And perhaps this attempt at crowdsourcing opinion will inspire others to a solution for the issues they are facing…

sk

Image credit: http://www.flickr.com/photos/stephenpoff/


The nebulous concept of an insight

(Note: Apologies in advance if I offend past, present or future clients and colleagues with the following opinion)

Inspired by Neil and John railing against the word “consumer”, I must profess my annoyance with how “insight” is bandied around. I’m struggling to think of a word more overused and misused (the word “specialist” with respect to social media is the only thing that springs to mind).

SIDENOTE: Personally, I don’t mind “consumer” and think it is a better word to use than “people”. “People” may imply some level of individuality or humanity, but it is broad and without context – at least “consumer” implies an action.

Anyway, insight…

According to this, an insight is

the ability to gain a relatively rapid, clear and deep understanding of the real, often hidden and usually complex nature of a situation, problem, etc

Notice the words “hidden” and “complex”.

Insights aren’t part of a production line. It is rare that someone can go away and just come up with a new insight, or meet a request for some particular insights, or deliver an insightful piece of work with a snap of the fingers.

It takes time. It is labour intensive. It isn’t a commodity. It is both inspiration and perspiration.

Insights are rare. Compelling and fresh insights are even rarer.

The best, if overused, example that comes to mind is “Dirt is Good”. Genius.

So when people have “insight” in their job title, or work for the “insight” department, I have to suppress a groan.

Before I joined Essential, I was a Commercial Research & Insight Consultant at ITV. I always explained to people that my job comprised three distinct elements:

Data: Reporting on numbers and explaining situations. When I dealt with data, I was an analyst.

Research: Overseeing the process of finding out something new (at a fairly basic level). People may disagree, but I see research as a process. When I dealt with research, I was an executive.

Insight: Connecting the dots between different data points or research projects to (attempt to) comprehend the deep nature of a business issue.

But I never knew what to call myself when trying to deal with insights. So when I went around to different companies delivering my report on online video, I used words like “recommendations”, “conclusions” and “ideas” and relied on my job title of “consultant”.

What could I have been? Insights aren’t analysed, and they aren’t executed or managed either. Could I have been an Insight connector? Insight developer? Explorer? Gardener?

Insights are the most infrequent part of my job in the market research industry and the most misunderstood. They are also the most challenging and thus the most rewarding.

So when someone asks me for some insights into an area, they are perfectly entitled to. But they need to be sure that this is what they really want, as it takes a lot of time, a lot of patience and there is no guarantee that the end product is something that fits in snugly with any objectives or strategies.

sk

EDIT: As Will succinctly points out, there is a big difference between an insight and an observation. Kudos to the “creative”.

Image credit: http://www.flickr.com/photos/cayusa/