This is the personal blog of Simon Kendrick and covers my interests in media, technology and popular culture. All opinions expressed are my own and may not be representative of past or present employers.

Data should be used as evidence and not illustration

I read the Guardian article on journalists’ struggles with “data literacy” with interest. The piece concentrates on inaccurate reporting through a lack of understanding of numbers, and of the context around them. “Honest mistakes”, of a sort.

Taken more cynically, it is an example of a fallacy that I see regularly in many different disciplines (I’m loath to call it a trend as, for all I know, this could be a long-standing problem) – fitting data around a pre-constructed narrative, rather than deducing the main story from the available information.

This is dangerous. It reduces data to be nothing more than anecdotal support for our subjective viewpoints. While Steve Jobs may have had a skill for telling people what they really wanted, he is an exception rather than the rule. We as human beings are flawed, biased and incapable of objectivity.

Given the complexity of our surroundings, we will (probably) never fully understand how everything fits together – this article from Jonah Lehrer on the problems with the reductionist scientific method is fascinating. However, many of us can certainly act with more critical acumen than we currently do.

This is as incumbent on the audience as it is on the communicator – as MG Siegler recently wrote in relation to his field of technology journalism, “most of what is written… is bullshit”, and readers should utilise more caution when taking news as given.

Whether it is due to time pressures, lack of skills, laziness, pressure to deliver a specific outcome or otherwise, we need to avoid this trap and – to the best of our abilities – let our conclusions or recommendations emerge from the available data, rather than simply use it to illustrate our subjective biases.

While I am a (now no more than an occasional) blogger, I am not a journalist and so I’ll limit my potential criticisms of that field. However, I am a researcher who has at various points worked closely with many other disciplines (some data-orientated, some editorial, some creative), and I see this fundamental problem recurring in a variety of contexts.

When collating evidence, the best means to ensure its veracity is to collect it yourself – in my situation, that would be to conduct primary research and to meet the various quality standards that would ensure a reliable methodology, and coherent conclusions.

Primary research isn’t realistic in many cases, due to limited levels of time, money and skills. As such, we rely on collating existing data sources. This interpretation of secondary research is where I believe the problem of illustration above evidence is most likely to occur.

There are two stages that can help overcome this – critical evaluation of sources, and counterfactual hypotheses.

To critically evaluate data sources, I’ve created a CRAP sheet mnemonic that can help filter the unusable data from the trustworthy:

  • Communication – does the interpretation support the actual data upon scrutiny? For instance, people have been quick to cite Pinterest’s UK skew to male users as a real difference in culture between the UK and US, rather than entertain the notion that UK use is still constrained to the early adopting tech community, whereas US use is – marginally – more mature and has diffused outwards
  • Recency – when was the data created (and not when was it communicated)? For instance, I’d try to avoid quoting 2010 research into iPads since tablets are a nascent and fast-moving industry. Data into underlying human motivations is likely to have a longer shelf-life. This is why, despite the accolades and endorsements, I’m loath to cite this online word of mouth article, because it is from 2004 – before both Twitter and Facebook
  • Audience – who is the data among? Would data among US C-suite executives be analogous to UK business owners? Also, some companies specialising in PR research have been notoriously bad at claiming a representative adult audience, when in reality they are usually a self-selecting sub-sample
  • Provenance – where did the data originally come from? In the same way as students are discouraged from citing Wikipedia, we should go to the original source of the data to discover where the data came from, and for what purpose. For instance, data from a lobby group re-affirming their position is unlikely to be the most reliable. It also helps us escape from the echo chamber, where myth can quickly become fact.
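Treated mechanically, the four CRAP criteria amount to a series of pass/fail filters. As a purely illustrative sketch (the field names, dates and example sources below are my own invention, not part of the mnemonic):

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """A hypothetical record of a secondary data source under evaluation."""
    name: str
    interpretation_matches_data: bool  # Communication
    year_collected: int                # Recency
    audience_matches_target: bool      # Audience
    original_source_known: bool        # Provenance

def passes_crap_check(source: DataSource, oldest_acceptable_year: int) -> bool:
    """Return True only if the source clears all four CRAP criteria."""
    return (
        source.interpretation_matches_data
        and source.year_collected >= oldest_acceptable_year
        and source.audience_matches_target
        and source.original_source_known
    )

sources = [
    DataSource("fast-moving tablet stat", True, 2010, True, True),
    DataSource("lobby group survey", True, 2012, True, False),
    DataSource("recent primary study", True, 2012, True, True),
]
usable = [s.name for s in sources if passes_crap_check(s, oldest_acceptable_year=2011)]
print(usable)  # → ['recent primary study']
```

In practice the judgements are rarely this binary, but writing them down forces each criterion to be answered explicitly rather than waved through.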

Counterfactual hypotheses are the equivalent of control experiments – could arguments or conclusions still be true with the absence of key variables? We should look for conflicting conclusions within our evidence, to see if they can be justified with the same level of certainty. This method is fairly limited – since we are ultimately constrained by our own viewpoints. Nevertheless, it offers at least some challenge to our pre-existing notions of what is and what isn’t correct.
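One mechanical cousin of the counterfactual hypothesis is a permutation test: remove the key variable (here, group membership) and see how often the apparent conclusion could have arisen anyway. A minimal sketch, with made-up numbers:

```python
import random

random.seed(42)

# Hypothetical data: an apparent difference between two groups.
group_a = [5.1, 5.4, 4.9, 5.6, 5.2]
group_b = [4.8, 5.0, 4.7, 5.1, 4.9]
observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Counterfactual: strip out the key variable (group membership) by
# shuffling labels, and count how often a difference at least this
# large appears among the shuffled versions.
pooled = group_a + group_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.2f}")
print(f"share of shuffles at least as extreme: {extreme / trials:.3f}")
```

If the shuffled data reproduces the “finding” routinely, the key variable was never doing the work we credited it with.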

Data literacy is an important skill to have – not least because, as Neil Perkin has previously written, it is only the first step on the DIKW hierarchy towards wisdom. While Sturgeon’s Law might apply to existing data, we need to be more robust in our methods, and critical in our judgements. (I appreciate the irony of citing an anecdotal phenomenon)

It is a planner trope that presentations should contain selective quotes to inspire or frame an argument, and I’ve written in the past about how easily these can contradict one another. A framing device is one thing; a tenet of an argument is another. As such, it is imperative that we use data as evidence and not as illustration.

sk

Image credit: http://www.flickr.com/photos/etringita/854298772/


Workshops as inception

yo dawg, i heard you like dreams

I spent the first two days of this week on a course. The course was run in the style of a workshop – no lectures, no learning materials, no rigid structure. Just discussions and exercises that ebbed and flowed as questions arose.

This is quite liberating, particularly for a person such as myself who is primarily a quantitative researcher. But it also makes it quite difficult to evaluate how useful the workshop was.

On the one hand, I enjoyed it and I remember thinking at the time how some things could be useful.

But I didn’t take many notes and thinking back, I can’t spontaneously recall a lot of the things we covered.

But that’s because I’m sitting at my desk writing in my blog. I’m not in a situation that requires me to utilise the skills or techniques we discussed.

Perhaps it is wishful thinking, but if I were in a situation that required me to act in a way that was discussed, I’m pretty confident I would act in a manner approximating the things we discussed.

What we discussed is lodged somewhere in my subconscious. The workshop moderators planted ideas in my head regarding how to act in certain situations. I may not be able to recall them now, but in future I may well act on the advice when the right context arises.

This information influences my intuitive behaviour. As it occurs on a deeper level, it makes it hard to evaluate. So how can I?

Perhaps I can’t. Though proponents of advertising research would claim to be able to.

I remain quite sceptical regarding advertising research – pre-testing more so than evaluation. It is not enough to test whether an idea is “taken”, since one may not know it is “taken” until the right circumstances or situation or position on the purchase journey/funnel/prism/metaphor of choice is reached.

People far brighter than me have given pre-testing a great deal more thought than I have, so I will leave the subject at that.

It also makes a sort of logical sense to leave thoughts on a blog post about the gestation of ideas half-formed.

Going back to my workshop, if I were asked to assess whether my attendance had been a valuable experience – not just in the things I’ve gained but balanced against the time spent away from work (which also paid for the course), I’m not sure I could give an accurate answer.

Is the power of positive thinking enough? Is the hope that germs of ideas have been planted in my subconscious enough? Time may tell, but I as a subjective viewer probably won’t be able to see it.

sk

NB: You might need to click on the image to read the text in the first panel. Which may only make sense to viewers of Inception, Pimp My Ride and Know Your Meme.

The selective truth

There are two sides to every coin, but nuance is difficult to convey in a headline or summary. A clear and decisive statement is far likelier to catch the eye. It is important to question the motives of both the source of information and the reporting when making a decision as to the veracity.

I’ve noted this during my experiment to alternate my news sources. Similarly, I’ve tracked the early responses to a recent project I’ve worked on with interest.

SIDENOTE: The project is Brandheld – an extended study into consumer perceptions of the mobile internet, and both their current and intended behaviour. The press release is here and a topline slide deck will be released shortly. If you want more information about the report, contact me at [firstname]@essentialresearch.co.uk [/sales pitch]

The press release for the project can basically be split into two sections. The first section is a reality check, noting that adoption of the technology is perhaps lower than those in the London-centric media sector might think. The second section is a call to arms, suggesting a pathway to make the mobile internet seem more relevant to the mainstream.

SIDENOTE: The comments on The Register article nicely illustrate the reason for our first section. Most comments seem to fall into the “I do this, therefore everyone else must be doing it as well” category.

Several of the outlets picking up the story (to date) are only reporting or emphasising one of these sections. The reality check grabs the attention, and the call to arms supports the relevant sectors.

There’s nothing wrong with this – reporting a single side makes it easier for readers to digest, while many of us have an agenda we seek to push and any supporting evidence we can get is gratefully received and promoted.

This is fine for external communications and reporting. But for internal knowledge, it can be dangerous to be reliant on one side of the story.

The best clients I have worked with are those that recognise that while research may be commissioned in the hope of proving something, it is necessary to start with the unbiased and unvarnished truth, even if that might be difficult to hear. Even if only half the findings are externally reported, the other half should still be included in internal briefings.

This requires a strength of conviction if there is pressure coming down the chain of command for a particular result but there is clearly a need to avoid self-delusion. If the results are “bad”, it should be made clear why. If the desired outcome is achieved, it is unlikely that there won’t be a single caveat. And these caveats are important to understand when designing or promoting a strategy.

A similar principle is required when collating secondary research. Even if the findings are sourced or quoted as evidence in external communications, it is important to understand the biases or reliability of the data for your own internal knowledge. Recognising the nuances or limitations of something can only assist your efforts to improve it.

News articles remain a fantastic way to distribute information, and are often the first place that research or data is discovered. Nevertheless, it is vital to go back to the original source if you plan to do something with the findings. That way, an informed decision can be made about the accuracy or reliability of the information (for what it’s worth, Brandheld is an independent study conducted with no prior agenda aside from us thinking the mobile internet would be an interesting area to research). Even if this doesn’t affect the way the information is collated, it is still an important facet to consider.

sk

Image credit: http://www.flickr.com/photos/colin-c/200867665/


Increasing visibility

John recently wrote an interesting post about (good) planners being invisible.

It is a similar story for researchers. After all, aren’t planners glorified researchers? (Well, to some extent, it depends on the type of research but, generally, no.)

John suspects this inherent invisibility, coupled with a desire for recognition, is the motivation behind the many blogs and conferences. It does seem to be a particularly vibrant environment, and from it I’m even able to know the picture I’ve chosen for this post is doubly relevant.

Sadly, this is where the similarities with research end. There are notable exceptions (and I REALLY need to update my blogroll to reflect this), but vibrancy is not a word I would associate with the researchersphere, if such a thing existed. Which it doesn’t.

So why are so few researchers blogging, and even fewer researchers engaging in stimulating discussions? And why is it that research conferences are almost without fail dull and repetitive?

I suspect it may be due to the following reasons:

  • Both planning and research are a combination of ideas and execution. In planning, the former tends to be the most important but in research it is usually the latter. Ideas are harder to replicate (and get away with) than processes, so planners are more willing to share, while researchers are more protective
  • Planning will at worst cover a campaign, and at best the entire product/service direction. Research tends to be project based. It has a definite start, middle and end. There is little chance for serendipity or reaction, and less opportunity to note and act upon interesting opportunities
  • The fruits of a planner’s labour are visible for all to see. Most research is initially designed for an internal audience, who then cherrypick the story they want to tell for an external audience. This inherent, proprietary, knowledge gets locked up and never seen nor spoken of
  • There are far fewer planners than researchers (I assume, I actually have no idea on numbers), and it is a harder profession to get into. Therefore average ability and motivation are higher, fostering a vibrant environment

There are probably many more reasons, but those are just from the top of my head.

Can this be changed? In the widest research sense, probably not. But there are pockets of innovation, some truly excellent researchers and massive differences in the nature and scope of project work. So there is some hope.

On January 1st, I said I wanted to read fewer things, but better. I ended up switching to a more time-consuming job, so just ended up reading less. This blog also became noticeably quieter since I switched jobs, and my link updates stopped.

This coming year, I want to move more from passive to active. There may not be a researchersphere, but I want to do my part in fostering thought and debate among my readers (thank you for persevering with me) and those I read.

Jeremiah Owyang says he likes to pay himself first – he does that through his blog concentrating his thought processes and the recognition he receives for it. I’m not very good at getting up before 8am (or noon on weekends), so I’m going to try to end the week by paying myself.

That will involve more time spent not only reading but also thinking, writing and talking about things. Some things directly related to research (though these thoughts may go on the Essential blog, which currently features our 2009 Christmas awards), and other things related to media, technology and marketing. And I’m also going to try to resuscitate a truncated link update.

I wish you all a prosperous 2010

sk

Image credit: http://www.flickr.com/photos/chaoticgood01/3786273684/

Mark Earls – From “me” to “we”

Thanks to Mat kindly donating his ticket, I was able to go and see Mark Earls give a seminar entitled From “me” to “we” at the Royal Society.

herd by mark earlsRather shamefully, I am still yet to read Herd – the book (and associated research) on which the talk was based. This is despite regularly reading the Herd blog and even having a copy in the Essential library. As I said, shameful.

Despite this, I think I was the target audience. Along with a Q&A only notable for the rather aggressive questioning from a lady accusing Mark of ignoring “the female perspective”, the session offered a fairly gentle precis of the book’s central theory which, if I had read it, I would of course have been familiar with.

The talk

A tenet of the book is that we’re bad at changing other people’s behaviour. To highlight this, Mark recalled a few statistics from his research:

  • Only 10% of new products survive longer than 12 months
  • Only 30% of change management programmes begin to achieve their aims
  • Mergers & Acquisitions lessen shareholder value two thirds of the time
  • No government initiative has created demonstrable and sustainable change

This is particularly worrying because behavioural change comes before attitude change – our thinking comes after the fact. We (post)rationalise rather than act rationally.

Therefore, in order to change attitudes, we need to change behaviour. And to be able to do this, we need to understand who we are. Only then can we can create solutions that work.

The Herd thesis draws upon the Asian culture of believing that humans are naturally social. We are fundamentally social with only a bit of independence, not vice versa.

Although it doesn’t sound particularly controversial, this thinking does run contrary to some well established tenets of both marketing and social theory.

According to Mark, thinking is much less important in human life than it seems. He likens us and thinking to a cat in water – we can do it if we have to, but we don’t particularly like it.

This is because it is easier to follow than think. We know our judgement is fallible and so we outsource the decision by following the crowd. But while this may work in some situations – many illustrated by James Surowieki – it is also arguably a contributing factor to the financial crisis, as financial institutions copied one another without comprehending the implications.

We therefore need to design our theories and tools to accommodate this social behaviour. It is much more rewarding to understand how social norms are created and perpetuated than it is to work on the assumption of cogito ergo sum.

Some initial thoughts

While brief, the talk certainly conveyed the need for me to read the book fully. Perhaps then some of my questions regarding the theory will be answered.

In particular, I’m interested in knowing where movements originate and whether this herd behaviour can be predicted.

For all the sheep, there must be a shepherd somewhere. Are these shepherds always designated as such – the almost mythical influentials – or do we alternate between thinking and following?

Rarely are our choices as clear cut as choosing whether to join the corner of the party where people are talking rather than the one where people are sitting in silence. Instead we have multiple choices and herds – how do we choose?

Is it a level of proximity? In the Battle of Britpop, Northerners sided with Oasis and Southerners with Blur. However, I’m from the Midlands, so was my choice one of the rare occurrences of rational choice (which would make a rather unconvincing deus ex machina) or is it purely random?

If random, then the work of Duncan Watts becomes pertinent. His modelling has suggested that in situations where groups vote up and down their favourite songs, there is no objective winner. Different simulations create different patterns. Purely random.

This creates difficulties for researchers as we like our statistical certainty. We like to have a set answer that we can post-hoc explain given the evidence. Duncan Watts’ research would suggest that research tools that build in mass opinion – such as crowdsourced tagging or wikis – are effectively meaningless. Rather than ultimately converge towards a “correct” answer, they simply reflect the random order of participation and interaction.
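Watts’ Music Lab finding can be caricatured in a few lines of simulation (a toy cumulative-advantage model of my own devising, not his actual methodology): every song is objectively identical, yet because each new listener favours already-popular songs, different random runs crown different winners.

```python
import random

def simulate_market(n_songs: int = 8, n_listeners: int = 2000, seed: int = 0) -> int:
    """Toy cumulative-advantage market: each listener picks a song with
    probability proportional to its current download count, so early
    randomness compounds. Returns the index of the winning song."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # identical songs, identical starting appeal
    for _ in range(n_listeners):
        total = sum(downloads)
        pick = rng.random() * total
        cumulative = 0.0
        for i, d in enumerate(downloads):
            cumulative += d
            if pick < cumulative:
                downloads[i] += 1
                break
    return downloads.index(max(downloads))

# Re-running the same market with different random seeds typically crowns
# different winners, despite every song being objectively identical.
winners = [simulate_market(seed=s) for s in range(6)]
print(winners)
```

There is no “correct” chart here to recover – only the frozen accident of who got copied first.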

Can mass behaviour be effectively incorporated into a research programme? I’ll report back with some thoughts once I’ve read the book

sk

We’re bad at changing other people’s behaviour

Only 10% of new products survive longer than 12 months

30% of change management programmes begin to achieve their aims

Mergers & Acquisitions lessen shareholder value two thirds of the time (PwC)

No government initiative has created demonstrable and sustainable change

Behavioural change comes before attitude change – thinking comes after the fact

In order to change attitudes, change behaviour

We need to understand who we are so we can create solutions

More rationalising than rational

Cognitive outsourcing – memory is a distributed function so only remember slivers

We are fundamentally social with a bit of independence, not vice versa

Asian culture is inherently social

Gandhi said that humans are a necessarily interconnected species

Thinking is much less important in human life than it seems

“lazy mind hypothesis”

We can think independently, we just don’t like it – like a cat to water

Behave according to other people’s actions e.g. go to busy shops

We know our own judgement is fallible so “I’ll have what she’s having” – wisdom of crowds or financial crisis

Leads to social norms

Need to design our theories and tools to accommodate social behaviour

Genesis random – Duncan Watts

Is it proximity that leads us to follow a herd, or an example of rationally weighing up the pros and cons?

Herds originate from somewhere – must be a leader. Are these leaders the same in each situation, or are we all capable of being shepherds

Research application – crowdsource answers. But random – no statistical certainty as only one situation

Wikis to collate group opinion?


How can research inspire?

The question in the title is predicated on the assumption that research can inspire. While the haters may disagree, I truly believe it can.

Understanding the different ways in which it can do so is trickier.

In a slight contradiction to my previous post on “insight”, I’m using the term “research” in its most catch-all form. Rather than restricting the thinking to groups or surveys, I’m thinking about all disciplines and all methodologies. Research, data and insight.

In order for research to inspire, the recipient needs to be able to be inspired. Some form of creative process in order to make that new connection or leap is necessary.

In thinking about how research can inspire, I’ve come up with three initial ways. It is by no means a typology and the examples aren’t even mutually exclusive, but it seems like a good starting point for organising my thoughts.

Structure:

The way in which research issues are approached and the problems framed. Examples include:

  • Methodology: The methodology itself could suggest new and previously alien ways to approach an issue. This post from Tom Ewing highlights some innovations in how research is carried out, but there are numerous examples of fresh approaches – from fMRI scanning to crowdsourcing.
  • Influences: Research is often (correctly) portrayed as insular but there are notable exceptions – Tom Ewing himself being one of them. He is able to take his knowledge and skills from music criticism and community building and apply them to research problems. Admittedly, this example isn’t research-specific but it nevertheless can inspire others to bring in people with different perspectives
  • Backwards approach: I mean this in a good way – research briefs are often issued to answer specific questions. To discover the most relevant way to get this information, researchers need to start with the answer and work backwards to figure out both the question and the way in which it is asked

Results

While a lot of research may be characterised otherwise, results themselves can inspire:

  • Exploratory research: By its very nature is designed to uncover new observations or – deep breath – insights
  • Fresh perspectives: Seeking to understand different audiences can lead to fresh outlooks as we look at the same issue through someone else’s eyes. While the Morgan Stanley note from their 15 year old intern was undoubtedly overplayed, I did like the notion that teenagers stay away from Twitter because it is full of old people trying to be young (for what it’s worth, I view Twitter as being far closer to LinkedIn than Facebook – it is useful connections rather than genuine relationships)
  • Holistic understanding: On a larger scale, ethnographers like Jan Chipchase offer us fascinating observations into areas we would never have even previously considered
  • Prototyping: I’ve written about IDEO before, and I love how they actually physically build things in order to better understand the problems
  • Desk research: Somewhat tenuous, but even sitting at your desk and reading, and being inspired, by different blogs or sites can be considered a form of research – whether one is explicitly looking for specific information or not

Implementation and Impact

Moving on from the results themselves, how research is used or the effects it has may also inspire

  • Workshops: Debating how research can be used can lead to further thoughts on idea implementation
  • Social effects of making data public: From last.fm to Nike+ making personal data available both encourages further participation and causes people to adjust their natural behaviour
  • Rewards and recognition: Similarly, in communities there have been noticeable effects on user behaviour and community culture when elements such as post counts or social connections have been introduced
  • Analytics: Avinash Kaushik is a Google Analytics evangelist who is full of great examples of how understanding site data has improved business performance

This question was recently posed to me by a colleague working on an assignment. The assignment is ongoing so any further thoughts, ideas or examples on how research methods, results or implementation can inspire would be massively appreciated.

And perhaps this attempt at crowdsourcing opinion will inspire others to a solution for the issues they are facing…

sk

Image credit: http://www.flickr.com/photos/stephenpoff/


The nebulous concept of an insight

(Note: Apologies in advance if I offend past, present or future clients and colleagues with the following opinion)

Inspired by Neil and John railing against the word “consumer”, I must profess my annoyance with how “insight” is bandied around. I’m struggling to think of a word more overused and misused (the word “specialist” with respect to social media is the only thing that springs to mind).

SIDENOTE: Personally, I don’t mind consumer and think it is a better word to use than people. People may imply some level of individuality or humanity, but it is broad and without context – at least consumer implies an action.

Anyway, insight…

According to this, an insight is

the ability to gain a relatively rapid, clear and deep understanding of the real, often hidden and usually complex nature of a situation, problem, etc

Notice the words “hidden” and “complex”.

Insights aren’t part of a production line. It is rare that someone can go away and just come up with a new insight, or meet a request for some particular insights, or deliver an insightful piece of work with a snap of the fingers.

It takes time. It is labour intensive. It isn’t a commodity. It is both inspiration and perspiration.

Insights are rare. Compelling and fresh insights are even rarer.

The best, if overused, example that comes to mind is “Dirt is Good“. Genius.

So when people have “insight” in their job title, or work for the “insight” department, I have to suppress a groan.

Before I joined Essential, I was a Commercial Research & Insight Consultant at ITV. I always explained to people that my job comprised three distinct elements:

Data: Reporting on numbers and explaining situations. When I dealt with data, I was an analyst

Research: Overseeing the process of finding out something new (at a fairly basic level). People may disagree, but I see research as a process. When I dealt with research, I was an executive

Insight: Connecting the dots between different data points or research projects to (attempt to) comprehend the deep nature of a business issue.

But I never knew what to call myself when trying to deal with insights. So when I went around to different companies delivering my report on online video, I used words like “recommendations”, “conclusions” and “ideas” and relied on my job title of “consultant”.

What could I have been? Insights aren’t analysed, and they aren’t executed or managed. Could I have been an Insight connector? Insight developer? Explorer? Gardener?

Insights are the most infrequent part of my job in the market research industry and the most misunderstood. They are also the most challenging and thus the most rewarding.

So when someone asks me for some insights into an area, they are perfectly entitled to. But they need to be sure that this is what they really want, as it takes a lot of time, a lot of patience and there is no guarantee that the end product is something that fits in snugly with any objectives or strategies.

sk

EDIT: As Will succinctly points out, there is a big difference between an insight and an observation. Kudos to the “creative”.

Image credit: http://www.flickr.com/photos/cayusa/

Perspective bias and the anchoring effect

Anchoring is a cognitive trait that causes us to rely too heavily on certain pieces of information when making a decision, such as an up-until-then trusted brand name selling us a lemon.

Perspective bias is a form of subjectivity or self-selection where we are unable to divorce our own prejudices and experiences from a decision.

Both exist. Both are prevalent. And both cause problems.

When you are a researcher, you need to ensure all information is communicated clearly. This could mean rewording technical jargon, removing colloquialisms or introducing cultural as well as literal translation for foreign-language work. For instance, if you want to know about the video on demand market and the effects of Hulu among US residents, then you shouldn’t use the phrase video on demand. That’s pay per view. Hulu is online video.

When you are a design engineer, you need to realise that someone’s opinion of your new product is going to be rooted in what they already know. While this new flat-screen TV may be twice the size of my old CRT, it takes a bit longer to start. This new laptop may have high-speed wi-fi and bluetooth, but the keys are a bit harder to type on. This car has great handling, but where is the cup holder?

When you are a metropolitan advertising buyer looking after a mass market brand, you need to consider that while you may hate that prime time “drama” on ITV1, it appears that 7m of your potential customers don’t.

When you are a social media expert/rockstar/heavyweight champion of the world (delete as appropriate), you may think that your actions cause ruptures into the fabric of society. But do they? Motrin don’t think so.

When you pontificate that a brand is dying, have you taken a health check out of your immediate eyeline?

Incidentally, I like that tech companies are based in a valley – it acts as a nice metaphor for the echo chamber and short-sightedness of so many of the “end is nigh” kool-aid drinkers that seem to have a voice disproportionately larger than the size of their other senses.

Anyway, I think that is enough snark for one post. The point I want to make is that we should do our best to identify a frame of reference – it could be a good thing in the case of designers trying to improve their product or a bad thing when a researcher is trying to design a survey for a country that they have never visited, but it should be sought.

Some in advertising may disagree as it promotes the rational over the emotional – it suggests we methodically compare products rather than be captured by a glass and a half full of joy. My subjective opinion is that emotional advertising works only when we are overfamiliar with a product. I know what a chocolate bar is, and I know what Dairy Milk tastes like and the ad does a good job at reminding me of these facts.

But when it is a new product, that emotion isn’t enough. The ad wouldn’t have had the same impact if it were advertising an everlasting gobstopper. I need to know the functional benefits – why should I change my behaviour? What do I get out of it? The reason is the key.

Of course, the best campaigns can combine both the functional and the emotional. “1,000 songs in your pocket” tells me why an iPod is an improvement on a walkman in a memorable soundbite.

To use an old cliche, we need to walk a mile in other people’s shoes. Look through someone else’s eyes. To take a recent example, a few of my colleagues held a session where they showed people who had never before used a computer how they worked. Can you conceive of that? I can’t. These people had never picked up a mouse. Seeing how they interacted with it, and how they overcame the initial trepidation to complete a few simple tasks, would have been a fascinating reminder of how what we take for granted is completely alien to another group of people.

Ultimately, it is the little things that matter. Just because we think something is fine doesn’t make it fine. Second, third and fourth opinions should be canvassed. Different perspectives sought. New angles explored.

We shouldn’t be complacent.

sk

Image credit: http://www.flickr.com/photos/ranopamas/


Facebook Polls could be pretty useful

At the recent World Economic Forum, Facebook Global Markets Director Randi Zuckerberg demonstrated Facebook polls. This, accompanied by an interview in the Telegraph, has sent the blogosphere aflutter in two separate directions.

In one corner are those excited by the prospect of 120,000 responses in 20 minutes (as a question on Barack Obama’s stimulus plan received). In the other are those concerned with online privacy and civil liberties (Given the tone of this Comment is Free article on Phorm, it is surely a matter of time before the Guardian whips up a fresh batch of hysteria on the matter. And this comes from a Guardian reader).

I’m in the former category. In limited situations, it has the potential to be a valuable tool.

This is something of an about-turn since my recent rant against online polls. However, within a limited sphere of usage I can see the value. If someone wants to quickly know “what”, then this seems like a valid option. If they want to know “why”, then they should look elsewhere.

What are Facebook polls?

If my understanding is correct, a Facebook Poll is a question that appears within one’s newsfeed – similar to a sponsored ad. The user can then answer or ignore the question.

Questions are targeted to a specific audience, and basic quotas on responses can be set. Facebook has since denied that it will use personal data for Facebook Polls, but I would think that this is just semantics. Behavioural data and interests may not be used, but the poll would be pretty useless if basic age, gender and location information wasn’t monitored.
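To make the targeting-plus-quotas idea concrete, here is a minimal, purely hypothetical sketch of how serving a poll against basic demographic quotas might work. None of this reflects Facebook’s actual implementation or API; the class names, fields and matching rules are all illustrative assumptions.

```python
# Hypothetical sketch of quota-based poll targeting.
# All names (Respondent, Quota, should_show_poll) are invented for
# illustration and do not correspond to any real Facebook API.

from dataclasses import dataclass


@dataclass
class Respondent:
    age: int
    gender: str
    location: str


@dataclass
class Quota:
    min_age: int
    max_age: int
    gender: str          # "any" matches everyone
    location: str        # "any" matches everyone
    target: int          # responses wanted
    collected: int = 0   # responses gathered so far

    def matches(self, r: Respondent) -> bool:
        # Basic age/gender/location screening, per the targeting described above
        return (self.min_age <= r.age <= self.max_age
                and self.gender in ("any", r.gender)
                and self.location in ("any", r.location))

    def should_show_poll(self, r: Respondent) -> bool:
        # Only serve the question while the quota remains unfilled
        return self.collected < self.target and self.matches(r)


quota = Quota(min_age=18, max_age=34, gender="any", location="UK", target=500)
print(quota.should_show_poll(Respondent(age=25, gender="f", location="UK")))  # True
```

The point of the sketch is simply that even if behavioural and interest data are walled off, age, gender and location must be consulted somewhere for targeting and quotas to function at all.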

Despite this recent chatter, it should be pointed out that Facebook Polls aren’t new. Ray Poynter ran an experiment during the London Mayoral elections, and it was only last month that they were put on hiatus. This merely marks a repackaging of an existing product.

Another necessary correction is the conflation of Facebook Polls and Engagement Ads. Engagement Ads are a separate product – namely advertising widgets that can be shared and commented upon. Jeremiah Owyang has a summary here.

What are the benefits of Facebook Polls?

The nature of Facebook offers several benefits to a polling tool:

  • A captive audience regularly refreshing their news feed provides a fast response
  • Real-time response mechanism allows immediate analysis
  • Relative unobtrusiveness and simplicity could give a decent response rate
  • Scale of either large response, or decent response among a targeted niche (if permitted)
  • Traction among the public allows for decent tracking over time

What are the uncertainties surrounding Facebook Polls?

A rudimentary polling tool is bound to be limited. Areas that need to be explored include:

  • How limited will the infrastructure be? Not only in types of question, but even character limits (Shouldn’t be a problem for Twitter users)
  • How attentive are the users? They are going to be multi-tasking and processing a great deal of information – how much thought will they put into an answer? (But in some cases, gut reaction is desired)
  • How representative will respondents be? Not only in terms of non-Facebook population, but also with the heavy user and participation bias within Facebook (Though present research panels are hardly a beacon of quality in this respect)
  • Will the demographic info used for analysis be accurate? People are projecting a public persona and so may be tempted to lie on some matters. However, I cannot see age, gender or location being any more inaccurate than on present survey panels
  • What sort of buy-in would it receive from the industry? There is little point using Facebook Polls if no-one trusts or uses them

What is the future of Facebook Polls?

How could Facebook Polls expand? Matt Rhodes is sceptical that this will evolve into a research community. I’m less so.

Poll users can be directed to a group or page, where questions surrounding a certain topic can be explored in more depth and with more nuance. They may not evolve into a full “community”, but if visitors can be persuaded to return then they will have some value. This evolution would also support a recommendation or sharing service whereby respondents can be recruited by friends – engagement that is part of the Facebook experience (though it would have to be handled better than the various Ninjas vs Zombies widgets).

Furthermore, if these people were to opt into sharing some of their personal information then a very rich understanding of behaviour and opinion could emerge. People are on Facebook to talk and procrastinate – we should try and utilise this state of mind.

Meanwhile, looking further ahead, Read Write Web ponders the introduction of a “sentiment engine”, whereby prevailing moods and attitudes can be judged through a contextual understanding of status updates.

This would be great if a robust analytic tool could be developed, but in the meantime it would be overwhelmed by bored people proclaiming their hunger / tiredness / hangover.
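To show why the mundane updates are such a problem, here is a deliberately crude sketch of the keyword-counting approach a first-pass “sentiment engine” might start from. The word lists and scoring are my own illustrative assumptions, not anything Read Write Web or Facebook proposed.

```python
# A naive keyword-scoring sentiment sketch (illustrative only).
# Real sentiment analysis needs far more than word counting, which is
# exactly why mundane status updates would swamp a crude engine.

POSITIVE = {"love", "great", "happy", "excited"}
NEGATIVE = {"hate", "tired", "hungover", "bored", "hungry"}


def sentiment(status: str) -> int:
    # +1 per positive keyword, -1 per negative keyword
    words = status.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


updates = [
    "so tired and hungry",         # mundane gripe: scores -2
    "love the new stimulus plan",  # an actual opinion: scores +1
]
scores = [sentiment(u) for u in updates]
```

Notice that the mundane gripe registers more strongly than the genuine opinion; without contextual understanding, the prevailing “mood” measured would mostly be hunger, tiredness and hangovers.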

Though unproven, I can see the potential in Facebook (re)launching this product, and it will be interesting to see how it develops.

sk

Image credit: http://www.flickr.com/photos/stephenpoff/

When did we start trusting strangers?

Following on from their (very useful) Social Media tracker, Universal McCann have released some follow up research entitled When did we start trusting strangers?

(RSS Readers – you may have to click through to see the slideshare presentation)

It explores the influence that we wield online, and how consumer generated content – whether blogs, reviews or comments – affect our purchase behaviour.

It is well worth checking out, and I completely agree with their conclusions.

Everyone matters and brands have to embrace these new forms of communication to reach out and interact (openly and transparently) with their current and potential customers.

The word conversation is horrendously overused but there is a huge amount of chatter out there. It is far better to be a part of it than it is to look in from the outside or – worse – ignore it.

sk