Learning from Steve Jobs

Steve Jobs' fashion choices over the years

Understandably, technology news over the past week has been dominated by Steve Jobs’ resignation as Chief Executive of Apple. While he will stay on as Chairman, Tim Cook – former Chief Operating Officer – will take the helm.

There have been many wonderful pieces on Jobs (though some do read like obituaries) – these from Josh Bernoff and John Gruber being but two – which cover many angles – whether personal, professional, industry or other. I’m neither placed nor qualified to add anything new but I have enjoyed synthesising the various perspectives. Yet invariably, the person saying it the best was Jobs himself:

  • He knew what he wanted – “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking” (Stanford commencement speech)
  • He felt he knew better than anyone else – “The only problem with Microsoft is they just have no taste. They have absolutely no taste. And I don’t mean that in a small way, I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their products.” (Triumph of the Nerds)
  • He, along with empowered colleagues, relentlessly pursued this – “You have to trust in something — your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.” (Stanford commencement speech)
  • He was a perfectionist – “When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.” (Playboy)

NB: The quotes above were taken from this Wall Street Journal article.

In Gruber’s words “Jobs’s greatest creation isn’t any Apple product. It is Apple itself.”

In 14 years he took Apple from near-bankruptcy to – briefly – the biggest company in the world by market capitalisation. He has been enormously successful. And while he is possibly unique – his methods run counter to textbook advice on how to run an organisation – a lot can be learned from him.

The thing I have taken most from this is Jobs’ uncompromising nature. If people weren’t on board with him, then to hell with them. This of course led to his dismissal from Apple in 1985. And his dogged focus on his preferences has informed his fashion choices over the years, as the above picture illustrates.

It might seem strange for a market researcher to take this away, particularly since research is stereotyped as decision-making by committee – something which Jobs despised:

  • “We think the Mac will sell zillions, but we didn’t build the Mac for anybody else. We built it for ourselves. We were the group of people who were going to judge whether it was great or not. We weren’t going to go out and do market research. We just wanted to build the best thing we could build.” (Playboy)
  • “For something this complicated, it’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” (BusinessWeek)

Unfortunately, this stereotype is often true, and I have been guilty of perpetuating it on occasion.

One example was when trying to get a project up and running (on a far smaller scale than rescuing Apple, admittedly). With a lot of stakeholders, I tried to include as many of their wishes and requests as possible. The end result was bloated, incoherent, unfocused and past deadline. It wasn’t one of my finer moments.

Rather than bolt everything on, I should have appraised all the input and only included that which remained pertinent to the core objective. I lost authorship of the project, and it suffered.

While there will be counter-arguments, many public failures do seem to be the result of committee-made decisions. Two bloated, incoherent examples that immediately spring to mind are Microsoft Office 2003 and the Nokia N96. Conversely, there are many examples of visionary micro-managing leaders who have driven a company to success – Walt Disney, Ralph Lauren and Ron Dennis to name but three.

I am a researcher rather than a consultant, and so don’t intend to fully adopt this approach. However, it appears that there is a greater chance of success when primary research or stakeholder input informs, rather than dictates, the final decision.

Steve Jobs knew this. His flagship products weren’t revolutionary (IBM, Microsoft, Nokia and the like were the primary innovators). But his genius was in refining a variety of inputs and stimuli, and moulding them into an expertly designed final product.

And that is something to aspire to.

sk

Observation and participation

One of the (many) criticisms of market research is that it is based on artificial, post-rationalised claimed responses. This line of thinking contends that there have been plenty of studies showing us to be unreliable witnesses to our own thoughts and actions – therefore surveys, focus groups and the like can’t be trusted.

Obviously, the reality is not so black and white. There are some things I can recall perfectly well – places I’ve shopped in the past week, why I like to blog etc. My answers would be truthful, though with the latter example the analysis might not take my literal answer but instead interpret it into a broader motivation.

Nevertheless, what I know I know is only one part of the Johari Window (which was channelled by Donald Rumsfeld for his known knowns speech) – the arena quadrant. For the other three quadrants, these methods are insufficient.

Fortunately, there is more to research than surveys and focus groups.

To slightly paraphrase the hidden quadrant, this would involve a methodology that provides us with previously unknown information. This can be achieved through participation. IDEO are big proponents of this – I particularly like the example Paul Bennett gives of improving the hospital waiting experience. The best way for them to discover the patient experience was to become the patient and spend a day strapped to a gurney. The view from the gurney is boring ceiling after boring ceiling, so IDEO used this space to provide both information and soothing designs.

The blind spot quadrant is where we battle the unreliable witness through observation. This could either be straightforward observation or a mixture of observation and participation such as ethnography (remember: ethnography is not an in-home interview). Siamack Salari of Everyday Lives gives the fantastic example of a project he did for a tea company. This tea company had invested a great deal of money in research to understand the different perspectives people had on the perfect cup of tea. For the colour, they had even developed a colour palette outlining varieties. In closed research, people would pick their perfect colour. Yet, when observed, the colour of tea would never match. This is because people don’t concentrate on making the perfect cup of tea – the colour depends on the amount of milk they have left in the fridge and whatever else is capturing their attention (such as the toaster). Valuable information, though as Siamack noted in a training session I attended, it was an expensive way of finding out that the answer you want doesn’t exist.

Thus, two simple examples to show the role of observation and participation in improving our understanding of things. As for the unknown quadrant…

sk

Image credit: http://www.flickr.com/photos/colorblindpicaso/3932608472

Legacy effects

Earlier this week Seth Godin blogged about legacy issues. He stated that “The faster your industry moves, the more likely others are willing to live without the legacy stuff and create a solution that’s going to eclipse what you’ve got, legacies and all.”

That might be true, but legacy effects are just as prevalent on the consumer side as the production side, and they should be recognised and incorporated as far as possible.

For instance, early digital cameras didn’t contain a shutter sound. After all, they didn’t need one – the noise was merely a byproduct of the analogue mechanism. Nevertheless, early users felt a disconnect – the noise had let them know when their photo had been taken. Hence digital cameras now all have the option for a shutter sound to be incorporated.

Legacy effects are also present in our naming conventions – records, films and so on. I suspect this may also soon apply to the device we carry around in our pockets and handbags.

Our contracts and pay as you go credits are currently with phone companies, and so the “mobile phone” name still makes sense, even when on smartphones the phone is “just another app” (and not a regularly used one at that). But with Google looking at unlocked handsets, and the introduction of cashless payments through NFC, the business models may soon be changing. I suspect that if Visa starts selling devices that allow you to make payments as well as contact people, they will initially call it a “mobile phone” rather than a “mobile wallet”.

Behaviours are also subject to legacy effects – our habitual purchases that we continue to make without consideration. Some companies (like AOL) benefit from it, while others can suffer. For instance, I have only recently purchased a Spotify subscription and am considering a LoveFilm trial. From a purely economic standpoint I should have done this a long time ago, but I’ve been wedded to the idea of needing to own something tangible. Digital distribution means this isn’t necessarily the best option anymore (I type this as I look at shelves full of DVDs that I will need to transport when moving flat).

Consumers on the business-to-business side aren’t immune from this either – witness the continued reliance on focus groups or a thirty-second spot. These are undoubtedly still effective in the right circumstances, but some budget holders can be extremely reluctant to leave traditional, tried and trusted methods even when faced with reliable evidence that an alternative could prove more effective.

So while some companies can benefit from removing their legacy attributes early, doing so too early may be counterproductive. The comfort of sticking with what one knows can be very powerful, no matter how irrational it can seem.

sk

Ten things I learned from the New MR Virtual Festival

My previous post included all of the notes I took while listening into the New MR Virtual Festival. This post contains the key things I took away from the day, and have subsequently been mulling over in the 2 months (!) since.

New MR Virtual Festival header

NB: All slides shown below are taken entirely without permission. If a creator objects to their use, please contact me and I’ll remove them.

1. The boundaries between participation and observation can (and, in some circumstances, should) be blurred

Although Ray Poynter touched on the range of methods in which research can overtly or covertly be conducted online, Netnography (cf. Robert Kozinets) is – to me – one of the more interesting. To be effective, it needs the researcher to both participate in and observe the environment of interest.

Erica Ruyle argues that observation (or lurking) is fine in the initial stage, since the norms and cultures need to be understood and respected. But active participation is vital in order to get that “insider” knowledge and to be able to read between the lines of the interactions.

This is a difficult proposition to promote commercially as a) the time investment (and thus cost) required will be huge and b) the researcher will need to have a personal as well as professional interest in the topic in order to both be accepted by the community and accept the community. For instance, how many researchers would turn their nose up at being asked to take part in World of Warcraft for 6 months?

Nevertheless, in the right circumstances it could prove to be a fascinating and rewarding exercise.

2. Convenience samples can still be worthwhile

It is true that all non-census research incorporates a convenience sample to some extent. But some methods rely more heavily on convenience (and are thus less representative) than others.

Annelies Verhaeghe highlighted some of the issues to be aware of when conducting social media research – particularly that we should resign ourselves to not always knowing who we are speaking to or monitoring.

Furthermore, something I had not considered, but which makes sense, is that even though companies trumpet the volume of data they scrape and collect, only a sub-sample of that will be analysed due to the diminishing returns of going deeper into a very large data set.

If we’re able to augment social media research with other techniques or data sources – Annie Pettit mentioned some of the benefits of combining social media research with surveys – then it can be a very valuable and insightful method of getting real-world information on past actions and thoughts.

3. Respondents shouldn’t necessarily be viewed equally

Both Rijn Vogelaar and Mark Earls talked about premises realised more thoroughly in their books – The SuperPromoter and Herd respectively.

Segmenting the audience isn’t a new phenomenon – we often restrict our universes to who we are interested in – but within these universes perhaps we should pay more attention to some individuals more than others – particularly given the complex social interactions that cause ideas and opinions to spread. I’m not clever enough to be able to incorporate full network theories into any of my research – in the manner of Duncan Watts, for instance – but perhaps there is an argument for applying simple weights to some projects, to account for some opinions becoming more important than others. Or perhaps it is too contentious to implement without proper academic grounding and proof.
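
To make the idea of “simple weights” concrete, here is a minimal sketch of how it might look. The segment labels and weight values are invented for illustration – they aren’t drawn from either book, and a real scheme would need the academic grounding mentioned above.

```python
# Hypothetical illustration: letting some respondents' opinions count
# for more than others when aggregating a rating question.
respondents = [
    {"rating": 8, "segment": "superpromoter"},
    {"rating": 5, "segment": "general"},
    {"rating": 6, "segment": "general"},
    {"rating": 9, "segment": "superpromoter"},
]

# Illustrative weights only - not a validated scheme.
weights = {"superpromoter": 2.0, "general": 1.0}

weighted_mean = (
    sum(r["rating"] * weights[r["segment"]] for r in respondents)
    / sum(weights[r["segment"]] for r in respondents)
)
print(weighted_mean)  # 7.5, versus an unweighted mean of 7.0
```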

4. More of an effort needs to be made to meet respondents on their terms

Betty Adamou joined the likes of Stowe Boyd in saying that email is on the decline among younger people. This is problematic for online surveys, given that online panels are predominantly administered via email. Given the trends, perhaps we should be looking to Facebook, Twitter, instant messenger etc for both initial recruitment of these audiences and then allow them to dictate how we can contact them to alert them with new surveys. I’m not sure whether a note on a Facebook group could be as effective as an email, but it is certainly worth exploring.

5. Survey structures can do more to take advantage of the online environment

Our media and communications channels have fragmented, but content providers retain a centralised hub from which activity is controlled. Why can’t the research industry do the same? John Dick talked through Civic Science’s modular research methodology, whereby questions are asked in chunks of two or three at a time, but combined at the back-end to build up extensive profiles of respondents.

This approach makes intuitive sense. In face to face research, the main cost was in finding people to speak to. Thus, once they were located, it was efficient to collect as much information as possible. The web is abundant with people, who are time-poor. The cost isn’t in finding them, it is keeping them. People could easily answer three questions a day if there was the possibility of a prize draw. They would be less willing to spend 30 minutes going through laborious and repetitive questions.

There are clearly downsides to this method and plenty of issues to overcome regarding data quality assurances, but the notion of Facebook users answering a couple of questions a day sounds like a feasible way to collect information among people who might be unwilling to sign up to an online survey panel.
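
As an illustration of the modular idea, here is a minimal sketch of how two-or-three-question chunks might be stitched together at the back-end. The storage and field names are my own assumptions, not Civic Science’s actual system.

```python
from collections import defaultdict

# One cumulative profile per respondent, built up from many tiny surveys.
profiles = defaultdict(dict)

def record_chunk(respondent_id, answers):
    """Merge a chunk of two or three answers into the running profile."""
    profiles[respondent_id].update(answers)

# Day one: two questions. A later day: three more. Same respondent.
record_chunk("r42", {"age_band": "25-34", "region": "London"})
record_chunk("r42", {"owns_smartphone": True, "daily_tv_hours": 2})

print(profiles["r42"])  # an extensive profile accrues, a few questions at a time
```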

6. Surveys don’t have to be boring or static…

Another aspect of the online world that should be explored further is the level of interactivity. Many online surveys are straight ports of face to face surveys – a shame when there are so many more things that a web survey can – in theory – be capable of.

Jon Puleston of GMI highlighted several of their experiments in this area. Interestingly, although interactive surveys take longer, respondents are more engaged, enjoy them more and give “better” answers. I particularly like the idea of timing respondents to give as many answers as possible within a given timeframe. This appeals to people’s competitive nature, and means they’d spend far longer on it than they normally would.

Jon Puleston of GMI at New MR Virtual Festival

The King of Shaves case study was very interesting. Rather than a 1 minute introduction for a 10 minute survey, this example reversed the process. People were given a detailed briefing on the role of a copywriter, and asked to come up with a creative slogan. In subsequent testing, seven “user-generated” ideas scored better than the advertising agency’s.

7. But we should be aware of the implications of survey design on data capture…

Jon’s examples showed how framing questions can improve data collection. But Bernie Malinoff warned us that even minor superficial changes to a survey can have a big impact on how people answer questions. For instance, the placement of the marker on a slider scale can heavily impact the distribution of answers.

Bernie Malinoff at the New MR Festival

Bernie also had some important lessons in survey usability – ranging from the wordiness of the questions (something I’ve been guilty of in the past) to the placement of error messages and how they can influence subsequent responses.

Surprisingly, his research found that survey enjoyment was comparable among people who did traditional “radio button” style surveys versus richer experiences, and that people were less willing to take part in future after having completed a flash-based survey.

It acts as a sobering counter-point to Jon’s presentation, but I inferred some caveats to this research (or perhaps I am only seeing what I want to see). I suspect some of the resistance to flash might be down to the “newness” of the survey design rather than a genuine preference for radio-button style surveys. Similarly, design iterations aren’t neutral – I wouldn’t mind different results so long as I felt they were “better” (and any methods to discourage survey cheaters are welcome). Nevertheless, it is an important reminder that a better designed survey is only an improvement if it makes the survey more usable and easier to understand, and I completely agree with the final point that the industry should reinforce best practices for interface design.

8. …And whether they are suitable for the audience

Tom Ewing’s talk on gaming and research covered many interesting points, but the one that stuck with me is that it isn’t for everyone. As he points out, FourSquare holds little appeal to him (unless he wanted to be Mayor of his child’s nursery). Similarly, while the number of gamers is rising, it isn’t everyone and so we cannot assume that introducing interactive, exploratory or personalised experiences will automatically make respondents enjoy our surveys more.

Particularly since games design is pretty hard – Angry Birds and Farmville may look simple, but I wouldn’t expect any research agency to devise and incorporate something as addictive into their research methodologies. The latter in particular seems to purely encourage the completion of repetitive, monotonous tasks – not something that would benefit the quality of research outputs.

9. There is plenty of scope to improve beyond the debrief

John Clay talked about ways in which researchers can improve the way that debriefs are communicated. This is an area that many (too many) researchers need to improve upon, but an even more pressing area of improvement is what occurs after the debrief. Spencer Murrell’s presentation on insight translation covered this.

Spencer Murrell at New MR Virtual Festival

Summary slides and executive summaries are important in debriefing research, but it is important to go beyond the report/presentation into a framework that can be referenced in future. Whether it is a model that can be stuck on a wall, or a cheat sheet that can be put on a post-it note, there are many creative ways in which the core findings of a project can be transformed into an ongoing reminder. Evidently, this could easily descend into a gimmicky farce, but it is important to remember that the debrief isn’t the end of the project. In many ways, it is only the end of the beginning. The next phase – actually using that information to improve the organisation – is the most important. Any ways in which researchers can add value to this stage can only improve their standing with their clients.

10. Online conferences can work

On the whole, I think the event can be viewed as a huge success. For an affordable fee ($50), I listened to many intelligent speakers on a variety of topics, as shown by both this post and the previous post.

There was also plenty of excellent discussion around the talks and the content on Twitter, using the #NewMR hashtag. I’m usually reluctant to tweet during events, but given the lack of face-to-face contact and the fact I was facing my computer at the time, Twitter worked excellently as a forum to amplify the usefulness of the content presented.

An idea is one thing, but executing it is something very different. Aside from the odd technical hitch (inevitable given the volume of speakers from across the globe), the day ran impeccably. So Ray Poynter and his board deserve huge congratulations for not only the concept, but also the organisation and output of the event. I would wholeheartedly recommend people with an interest in research investigate the New MR site and list of upcoming events.

sk

The battle of big versus small

EDIT: An updated version of this article can be found here

Précis

Particularly in research, but also in other marketing disciplines, big agencies and small agencies will compete for tenders against one another. Normally, one agency is successful. I find this strange as the benefits of several agencies specialising appear, to me at least, to be greater than that of consolidation.

Introduction

Having worked at both a large research agency (GfK NOP, though it was NOP World when I joined) and a small agency (Essential Research, where I’m currently employed), I read with interest the “Is Bigger Better?” article in October’s Research magazine (EDIT: “Is Bigger Better?” is now available online).

The article took the form of a debate, with Paul Edwards (Chairman of TNS-RI) representing “big” and Jan Shury (Joint Managing Director of IFF Research) representing “small”. Their main arguments are summarised below.

The initial debate

Why bigger is better – Paul Edwards:

  • Doing everything in-house, worldwide, affords a consistent standard
  • There are big, validated products that are economical to use
  • There is a wealth of diverse talent for more bespoke requirements
  • The size of the company means clients are more likely to find a suitable contact with the right frame of mind
  • There is a wider range of people with different training, sector experience and tenure
  • The resources are available to be proactive in thought leadership, conference attendance and so on
  • Investment in IT can be made to fuse different techniques or data sets together
  • It is safer than a small agency – efficient, economical, fast, financially secure and properly audited
  • “For me, it is about playing the odds”

Why smaller is better – Jan Shury:

  • Thinking small makes for a more bespoke and friendly experience
  • Who runs the business owns the business – there is no plc board to report to
  • There is one building with one culture
  • People live the brand by getting more involved and having an entrepreneurial spirit
  • Management are involved and can apply their knowledge of running a business
  • There is high visibility and high reward
  • There is no “One Size Fitz Hall” career progression
  • There is no separate sales team – the people pitching are the team that will work on the project
  • They are more adaptable to client needs
  • “The client views us as the brains of the operation, and the large research companies as the data factories”

My criticisms of the initial debate

Why bigger isn’t always better

  • I question the existence of a consistent standard. There may be consistent processes, but a team and an output is only as good as the weakest link. The more people involved, the weaker the link
  • The need for many staff (and high staff turnover in general) means that recruitment isn’t as careful or deliberate as a smaller agency
  • Do projects really get assigned based on personality? Surely workloads and specialisms are the more pragmatic considerations
  • Furthermore, the high staff turnover means relationships are lost and working cultures are rarely maintained
  • Big companies do offer great training programmes, but there is rarely the opportunity to apply learnings. Graduates get trained up and leave. When I left GfK after 3 years, only 5 of my graduate intake of 20 people remained. Furthermore, we were all promoted at similar rates for “equality”
  • Big companies may have more resource to sink into being pro-active, but small companies are also able to do this, if marketing is viewed as an investment. I’ll be speaking at a conference in a few weeks, for instance.
  • Big companies may be fast in turning things around, but how agile are they when it comes to experimentation?
  • Large agencies have departments to meet all research needs, but are they masters of these trades, or merely jacks of all?
  • “Playing the odds” makes it sound like a science, when I think the interpretation and implementation of research is very much an art

Why smaller isn’t always better

  • Small agencies need to pay the bills and so their high morals may be compromised in order to keep the business afloat
  • Small agencies have very particular cultures and personalities – it can take buyers a long time to find the company with the right fit
  • Personality and quality of work flow from the owner – being able to run a company isn’t analogous to being a good researcher
  • There is less inherent experience in specialised requirements, so there can be elements of experimentation and failure on projects
  • Workloads for small agencies vary to a greater degree as there is less ability to spread jobs around – this creates uneven working hours for staff and can mean slightly more variable quality for clients
  • Staff do take on more responsibility but can also burn out, or seek other employment opportunities with more forgiving schedules (particularly when children enter the picture)
  • Credit can often be centred on the owner, who is the company figurehead. I note that there is a small note at the bottom of Jan’s article saying that Mark Speed also contributed (though admittedly they are both joint MDs)

Does it have to be a zero sum game?

It had been something I’d already been considering, but Jan’s quote above is very telling: “The client views us as the brains of the operation, and the large research companies as the data factories”.

This quote isn’t necessarily asserting small agencies are better than big agencies; just that they are different. So why cannot both be employed for different aspects of a research project? It is no different to a brand employing both a media agency and a creative agency, or a sommelier not serving bread at the dinner table.

My solution

The set-up

As outlined above, there are advantages and drawbacks to working with both big and small agencies. So, why not try to leverage the benefits of both while minimising the drawbacks?

  • Big agencies have scale, security and have the resource to standardise processes. They should focus upon data collection, management and administration, perhaps extending into training of systems or processes. Reliability is prioritised.
  • Smaller agencies have more rigorous recruitment criteria, generally employing more driven people who get involved in a broad range of tasks. They can use their experiences and immersion to hypothesise on research findings and assist in implementation. Creativity is prioritised.

A big agency is thus employed for data management. A small agency for consultancy.

Evidently, employing two very different agencies to work alongside each other on a project (whether ad-hoc or continuous) is problematic.

The challenges

  • It has the potential to be an uneven relationship. The small agency is the driver of the project, which effectively makes the larger agency the car. The driver steers and provides direction and control; the car uses its horsepower to get the job done
  • The extra investment in finding agencies that meet client requirements and also work well with each other requires a more long-term strategic approach to research (not dissimilar to advertising accounts). In media at least, research budgets tend to be set on an annual basis as a response to strategic objectives (which can differ vastly, year on year)
  • The client may need to get more involved in mediating between the two agencies, ensuring a fair division of labour on a project that continues to focus on the end objectives
  • Communication becomes more difficult as it becomes more open. A small agency may have previously sub-contracted to a large agency and hidden all the processes (which, let’s face it, clients don’t always care about). Now they are given equal exposure
  • The goal of most small agencies is to grow (they don’t seem to believe in Seth Godin’s Dip) – at what point do they become too large to continue to offer the benefits of being a smaller agency?
  • The model is a bit unkind to larger agencies – it is possible to have a boutique presence within a larger infrastructure. I know QMedia have been pretty successful at this in the past
  • The goal is to leverage the benefits of both types of agency, but it is also possible to get the worst of both worlds – inflexible data collection poorly implemented

The way to get it to work

  • The big agency would probably be recruited first – there is a smaller number to shortlist, and it is important to get a partner that can be trusted across the gamut of potential methodologies required
  • The small agency is harder to recruit due to the many available, and the importance of finding the people with the right skills and culture. Extreme care should be taken in selection
  • Before finalising partnerships, chemistry meetings should take place and “rules of engagement” clearly established
  • The client needs to be very clear in their priorities in project management (quality, time, cost or scope), the logistics of the relationships and the anticipated communication lines. Will it be a three-way relationship? Will the client act as a central point of contact between both agencies, or will the smaller agency sit between the client and the larger agency?
  • Piloting. There will inevitably be teething issues. The first project commissioned using this method shouldn’t be of critical importance with strict time-sensitive outputs

Conclusions

Could this system work? I think it could for companies that operate long-term strategic research programmes that require both consistency of practice and nuanced interpretation. While there are many challenges in getting competitors working together (though, arguably, they are no longer competitors if the industry fragments into consultancy and data management), I see it as potentially being more beneficial to employing a single agency that can effectively perform some, but not all, of the requirements.

sk

Image credit: http://www.flickr.com/photos/hand-nor-glove/2240709916

The gamification of surveys

How can gaming principles be used in research? This is a fascinating area that I know Tom Ewing has been spending some time thinking about.

I haven’t, but a combination of some frustrations on a project and reading this excellent presentation, entitled “Pawned. Gamification and its discontents”, got me thinking specifically about how gaming principles could contribute to data quality in online (or mobile) surveys.

The presentation is embedded below.

The problem

There are varying motivations for respondents to answer surveys, but a common one is economic. The more surveys completed, the more points accrued and money earned.

In its basic sense, this itself is a game. But like a factory production line team paid per item, it promotes speed over quality.

As such, survey data can be poorly considered: minimal effort goes into open-ended questions (deliberative questions are pointless), and respondents may “straight-line” or, more subtly, randomly select answer boxes without reading the questions.

The solution

Some of these issues can be spotted during post-survey quality checks, but I believe simple gaming principles could be used (or at least piloted) to disincentivise people from completing surveys poorly.

Essentially, it involves giving someone a score based on their survey responses. A scoring system will evidently require tweaking to measures and weights over time, but it could consist of such metrics as the following (a rough sketch of one possible implementation follows the lists below)

  • Time taken to complete the survey (against what time it “should” take)
  • Time taken on a page before an answer is selected
  • Consistency in time taken to answer similar forms of questions
  • Length of response in open-ended answers
  • Variation in response (or absence of straight lines)
  • Absence of contradictions (a couple of factual questions can be repeated)
  • Correct answers to “logic” questions

A score can be collected and shared with the respondent at the end of the survey. Over time, this could seek to influence the quality of response via

  • Achievement – aiming to improve a quality score over time
  • Social effects – where panels have public profiles, average and cumulative quality scores can be publicly displayed
  • Economic – bonus panel points/incentives can be received for achievements (such as a high survey quality score, or an accumulation of a certain number of points)
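
As a very rough sketch of how the metrics and incentives above might combine, the code below scores a completed survey and reports a broad band. Every field name, penalty and cut-off is an illustrative assumption rather than a validated scheme.

```python
def quality_score(response):
    """Score a completed survey from 0-100 using simple behavioural signals.

    All penalties and thresholds are guesses for illustration; as noted
    above, a real system would need its measures and weights tweaked
    against panel data over time.
    """
    score = 100
    if response["duration_secs"] < 0.5 * response["expected_secs"]:
        score -= 30  # suspiciously fast completion
    if response["grid_answer_variance"] == 0:
        score -= 25  # straight-lining across grid questions
    if response["avg_open_end_chars"] < 10:
        score -= 20  # empty or throwaway open-ended answers
    if not response["repeat_questions_consistent"]:
        score -= 15  # contradicted a repeated factual question
    if not response["logic_check_passed"]:
        score -= 10  # failed an explicit logic question
    return max(score, 0)

def grade(score):
    """Report a broad A-F band rather than a falsely precise number."""
    for cutoff, band in [(90, "A"), (75, "B"), (60, "C"), (45, "D"), (30, "E")]:
        if score >= cutoff:
            return band
    return "F"

speeder = {"duration_secs": 110, "expected_secs": 600, "grid_answer_variance": 0,
           "avg_open_end_chars": 4, "repeat_questions_consistent": True,
           "logic_check_passed": True}
print(grade(quality_score(speeder)))  # "F" - scored 100 - 30 - 25 - 20 = 25
```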

The challenges

For this to work successfully, several challenges would need to be overcome

  • Gaming the system – there will always be cheats, and cheats can evolve. Keeping the scoring system opaque would mitigate this to an extent. But even with some people cheating the system, I contend the effects would be smaller with these gaming principles than without
  • Shifting focus – a danger is that respondents spend more time trying to give a “quality” answer than giving an “honest” answer. Sometimes, people don’t have very much to say on a subject, or consistently rate a series of attributes in the same manner
  • Alienating respondents – would some people be disinclined to participate in surveys due to not understanding the mechanics or feeling unfairly punished or lectured on how best to answer a survey? Possibly, but while panels should strive to represent all types of people, quality is more important than quantity
  • Arbitrariness – a scoring system can only infer quality; it cannot actually get into the minds of respondents’ motivations. A person could slowly and deliberately go through a survey while watching TV and not reading the questions. As the total score can never be precise, a broad scoring system (such as A-F grading) should be used rather than something like an IQ score.
  • Maintaining interest – this type of game doesn’t motivate people to continually improve. The conceit could quickly tire for respondents. However, the “aim of the game” is to maintain a minimum standard. If applied correctly, this could become the default behaviour for respondents with the gaming incentives seen as a standard reward, particularly on panels without public profiles.

Would it work? I can’t say with any certainty, but I’d like to see it attempted.

sk

Avoiding insights

I really don’t like using the word “insight”.

As I wrote here, the word is hideously overused. Rather than being reserved for hidden or complex knowledge, it is used to describe any observation, analysis or piece of intelligence.

And so I’ve avoided using it as much as possible. In an earlier tweet, I referred to the Mobile Insights Conference that I’ve booked to attend as the MRS Mobile thing. And I even apologised for my colleague (well, technically, employer) littering our Brandheld mobile internet presentation with the word.

But this is irrational. I shouldn’t avoid it, if it is the correct word to use. After all, replacing it with words like understanding, knowledge or evidence might be correct in some instances, but not all.

Does it really matter? After all, isn’t a word just a word? As someone once said, “What’s in a name? That which we call a rose by any other name would smell as sweet”.

But he’s talking complete rubbish. Because words do matter. They cloud our perceptions. It is why brands, and brand names, are so important. And why blind taste tests give different results to those that are open.

In fact, this emotional bond we have with words has undoubtedly contributed to my disdain. And this should stop. So I vow to start reusing the word insight, when it is appropriate.

But when is it appropriate? I’ve already said that an insight is hidden and complex, but then so is Thomas Pynchon and he is not an insight.

In the book Creating Market Insight by Drs Brian Smith and Paul Raspin, an insight is described as a form of knowledge. Knowledge itself is distinct from information and data

  • Data is something that has no meaning
  • Information is data with meaning and description, and gives data its context
  • Knowledge is organised and structured, and draws upon multiple pieces of information

In some respects it is similar to the DIKW model that Neil Perkin recently talked about, with insight replacing wisdom.

However, in this model – which was created in reference to marketing strategy – an insight is a form of knowledge that conforms to the VRIO framework.

  • Valuable – it informs or enables actions that are valued. It is in relation to change rather than maintenance
  • Rare – it is not shared, or cannot be used, by competitors
  • Inimitable – it cannot be copied profitably within one planning cycle
  • Organisationally aligned – it can be acted upon within a reasonable amount of change

This form of knowledge operates across three dimensions. It can be

  • Narrow or broad
  • Continuous or discontinuous
  • Transient or lasting

How often do these factors apply to supposed insights? Are these amazing discoveries really rare and inimitable, and can they really create value with minimal need for change? Perhaps, but often not.

And Insight departments are either amazingly talented at uncovering these unique pieces of wisdom, or they are overselling their function somewhat.

When I’m analysing a piece of privately commissioned work, a finding could be considered rare and possibly inimitable (though it could be easily discovered independently, since we don’t use black box “magic formula” methodologies). But while it is hopefully interesting, it won’t always be valuable and actionable.

But if it is, I shall call it an insight.

sk

Image credit: http://www.flickr.com/photos/sea-turtle/2556613938/

My name is my name

Marlo Stanfield from The Wire

So says Marlo Stanfield. And he has a point.

Reputation means a lot. But reputation is about perception, and there are multiple perspectives in which it can be viewed.

Broadly, reputation can be thought of in four inter-related spheres

  • Yourself – your personal brand
  • Your organisation (this itself can have several facets, if your organisation is part of a larger conglomerate or affiliation)
  • Your industry
  • The wider public

Marlo is concerned with his personal reputation among people in the industry – “the game”. He isn’t so worried about the other facets.

With the prominence of polling in the upcoming general election, the research industry is contemplating its reputation among the wider public.

I don’t think it really matters.

This election is more partisan and contentious than any I recall (most likely driven by the likelihood of change, rising prominence of online media giving a voice to more people, and the novelty of the leadership debates). Pot-shots, such as those against YouGov, are inevitable. This article from Research Live shows how YouGov aren’t doing themselves any favours in their need for speed (and this is leaving aside their associations with The Sun/Murdoch/Conservative Party).

I don’t think it matters because the research industry is rarely public facing – the only publicity it really receives is through political polls and PR research.

I’ve written about the problems with PR research in the past, but there is evidently a market for it and so the method prospers. It might damage the reputation of the industry to the wider public but outside of recruitment (of staff and respondents/participants) it isn’t really relevant.

As Marlo noted, it is industry reputation – for yourself and your organisation – that really matters.

It is similar to the advertising industry. Successful companies have a lot of brand equity through the quality and associations of their work – Wieden & Kennedy and Nike, Fallon and Cadbury, HHCL and Tango, and Crispin Porter & Bogusky and Burger King, to give but four examples.

But what proportion of the general public has heard of these companies, let alone recognises and appreciates their work? Not many. Is it a damning indictment of the strength of the marketing industry that it fails in promoting the most basic thing – itself? Not really. Companies attract talent and business through their successes and image – public perception doesn’t factor.

Ray Poynter is rightly concerned with the ethics of market research but for me, the importance of this is in maintaining business links. There is no adequate means of policing the research industry – anyone can knock on a door and say they are doing a survey – so it is not a battle worth fighting.

Companies stand and fall by the quality of their work – or at least the perception of it within the industry. Sub-standard work that is openly criticised will only harm long-term prosperity.

Self-regulation and recognition, whether through a recognised body like the Market Research Society, or at a more ad hoc level, can achieve this through highlighting good and bad practice. The research industry needs to be more vocal in showcasing good work, and castigating poor work.

This in turn will filter to the individual level, where the talented and ambitious will compete to work for the top companies. That in turn strengthens the work, and thus the industry. It could even permeate to the public.

There is no quick fix to improve the standing of an industry, and in some cases it isn’t necessarily desirable. Rather than look to the big picture, we should focus on the more immediate challenges.

If we all concentrate on undertaking the best possible work, then a strong reputation – for ourselves, our organisation and our industry – will follow.

sk

NB: The clip of the scene with the quote is below (it is from Series 5, so beware of potential spoilers)

Reblog this post [with Zemanta]

Criteria for agency selection

According to my CIM coursebook, the following criteria should be used to shortlist agencies

  1. Area of expertise
  2. Quality of existing clients
  3. Reputation of principals and experience of staff
  4. Agency fees and methods of charging/payment
  5. In-house resources
  6. Geographical cover

While the selection of the agency should be based upon

  1. Credentials – track record and feedback
  2. Creative techniques – evidence of creativity and innovation
  3. Staff – number, tenure and experience
  4. The agency – resources, objectives, service level agreements
  5. Specialism – focus
  6. Price – clear and reasonable structure
  7. Legal – methods to ensure compliance with regulations
  8. Pitch – whether it met the requirements of the brief

The list isn’t fully comprehensive, but it acts as a reasonable guide – assuming you want to ignore slightly shadier aspects like favouritism.

It can act as a useful checklist when pitching for new business. Of course, this only considers absolute performance/measures. When in a competitive pitch, the relative strengths become most important as pitching agencies are traded off against one another.

With open pitch processes, comparative advantage can be identified and relative strengths can be focused upon. With closed/opaque bids, this isn’t possible. So an agency will need to estimate where its relative and absolute strengths lie.

A good agency will have the relevant market intelligence to make a decent stab at this. A bad agency won’t. (Though of course industry fragmentation and lack of market definition make the potential competition so broad that it may not be possible/efficient to undertake)

Incidentally, the book also lists five key roles for an account planner (derived from Yeshin).

  1. Defining the task and bringing together the key information
  2. Preparing the creative brief
  3. Creative development, including being the “custodian” of brand values
  4. Presenting to the clients to convey concepts and defend the rationale
  5. Tracking performance

The focus of this can be adjusted so that it is also applicable to researchers – on the assumption that planners/researchers play a central role throughout the project or campaign. Some people might disagree about that.

sk

Image credit: http://www.flickr.com/photos/mistersnappy/2282846520/

The mobile phone is the drill to extract the data

Last week I wrote a blog post entitled “If data is the new oil, we need a bigger drill“, where I complained that we weren’t making enough use of the potential data available to us.

That post was in relation to online research. But on reflection, the opportunity is far greater elsewhere.

On the mobile phone.

Introduction

And, as far as I am aware, it is an area even more underexploited than online data capture. Aside from the odd application (such as Everyday Lives – which looked very similar to Evernote last time I looked), mobile survey panels (such as One Point) or academic experiment (Contextphone in Helsinki), I’m not aware of any innovations in mobile.

Which is a shame, since it is arguably the most powerful media platform for data capture. The Wikipedia page on the 7th mass media lists the eight unique benefits mobile has. Of most relevance is that the mobile is

  • Always on
  • Always (well, usually) carried on the person
  • Available at the point of creative inspiration
  • Highly personal, and personalised

The unique benefits of mobile make it an ideal instrument for both active and passive data capture – for explicit answers and for implicit inferences.

Forms of data capture

I’ve drawn an arrow diagram below of the five primary means by which a mobile can capture information. It is very much an early draft, so feedback or criticism is very much appreciated.


Ways in which data and information can be extracted from the mobile phone

Background capture

As mobile technology advances, devices incorporate more features that produce information on the location of the phone, and thus the user. These include:

  • Time – the time itself, and the time it takes to do something via a clock and stopwatch
  • Date – via a calendar
  • Space – via GPS
  • Proximity – to people, objects or events via GPS, bluetooth or RFID chips
  • Movement – in three dimensions via an accelerometer, or inferred through GPS and clock
  • Environmental factors – through thermometers, altitude readers and so forth

In addition to location, the following can also be determined through past or current behaviour:

  • Spend – via the in-built payment mechanism
  • Social graph – via the address book
  • History – via cookies or memory
  • Broad character traits – by how the phone has been customised or used

In future, these will be augmented with innovations such as voice and face recognition (a Google Goggles type of service).

Either on their own or in combination, these features facilitate some extremely powerful data capture. They effectively allow us to understand the “where” and “when”, and potentially the “with”.
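
As a sketch of what a single passively captured record might look like if those signals were combined – the field names are my own, and what is actually collectable depends on the APIs each operating system exposes:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ContextSnapshot:
    """One background capture: the 'where', 'when' and potentially 'with'."""
    timestamp: datetime                      # time and date, via clock and calendar
    lat: float                               # space, via GPS
    lon: float
    speed_ms: float                          # movement, via accelerometer or GPS deltas
    nearby_devices: List[str] = field(default_factory=list)  # proximity, via Bluetooth/RFID
    temperature_c: Optional[float] = None    # environmental factors, where sensors exist

snapshot = ContextSnapshot(datetime(2011, 9, 5, 8, 5), 51.5074, -0.1278, 1.4,
                           nearby_devices=["partner_handset"], temperature_c=12.0)
```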

But it is only the first level of information capture.

Activity Capture / Activity Follow-up / Prompted Activity

I’ve grouped these three stages together, as they are essentially variations on a theme.

The mobile phone has a large number of features and services that can be used for data capture. These include

  • Voice calls
  • Text messaging
  • Voice recorder
  • Note taker
  • Calendar
  • Bluetooth
  • Games
  • Camera/scanner
  • Video camera/editor
  • Music player
  • Web browser
  • Email/social network use
  • Application downloads/use
  • Shopping/purchasing

The most passive form of data capture is in recording the functions that a person uses their phone for. Forms of analysis this facilitates include

  • Combining activities with the data dimensions outlined in the first section for understanding of individual uses
  • Utilising path analysis across all feature uses to understand how the phone is used as a single device, rather than as a collection of services
  • Converting phone calls to text, and then using sentiment analysis to infer meaning across all forms of communication.

These aspects augment the “where”, “when” and “with” with the “what” and “how” – at least in terms of mobile phone behaviour.

A slightly more active form of data capture would move closer to capturing the “why”.

For instance, a push notification could be triggered when a certain activity is undertaken. This could request a simple answer to a question.

So if I were to use my camera to take a picture, the following would be known

  • Where it was taken
  • When it was taken
  • What it was taken with
  • How it was taken (landscape or portrait, flash or natural, first attempt or fifth)
  • Potentially who/what it was taken of
  • Potentially who the person was with at the time

But it wouldn’t be known why the photo was taken, or whether the person was happy with the photo taken. A simple question or two would solve that.
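
A minimal sketch of that trigger-and-ask pattern – the hook and the delivery function below are hypothetical stand-ins, not any real platform’s API:

```python
def send_push_question(user_id, question, options):
    """Stand-in for a real push-notification service."""
    print(f"push to {user_id}: {question} {options}")

def on_photo_taken(event):
    """Hypothetical hook that fires when a panellist takes a photo.

    The background capture already answers where/when/how; one or two
    prompted questions close the 'why' gap.
    """
    send_push_question(
        event["user_id"],
        "Why did you take that photo?",
        ["To remember the moment", "To share it", "For work", "Other"],
    )

on_photo_taken({"user_id": "r42"})
```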

An even more active version of data capture would be to explicitly ask the person to use their phone for a particular purpose. For instance, they could be asked to use the camera to scan each item they buy on the high street, or to use the voice recorder or note taker each time they spot a certain advertising campaign. These methods are used by a couple of organisations – MESH springs to mind – but have little noticeable traction to date.

This manual mechanism may eventually be superseded, as technology allows us to automate more of the data capture. Its only real relevance would be in forcing someone to participate in a behaviour where they naturally wouldn’t.

Direct questioning

As should be obvious, the more explicit forms of data capture are those that are most prevalent – primarily because their implementation is independent of technological advancement. For instance, we’ve always been able to interview people over the phone. As technology improves, the interfaces underpinning this method will also improve – we will move from SMS surveys to Java to HTML to HTML5 or native applications, with touch screen drag-and-drop functionality.

Benefits

As I mentioned in my previous post, we aren’t close to reaching the level of data capture that is possible. We need to augment explicit questioning with the context that can be inferred from the situational data collected. The mobile phone, moving across space and time and with its unique benefits, offers even more scope for collecting meaningful data.

Potential uses for the data capture include

  • Calculating sleep quality/efficiency (an iPhone app already does this, to a degree)
  • Monitoring movement, speed and proximity of people across an environment could be used for town planning
  • Alternatively, it could be used to plot the efficiency of layouts in supermarkets. If the phone could calculate eye line (it would probably need to be attached to a necklace), it could even inform how the shelves are stacked
  • Providing an understanding of people’s lives – when using and not using their phones.
  • Exploring how things spread across mobile phones. For instance, one person could undertake a type of behaviour, come into contact with someone else, and then the second person undertakes the behaviour. Network effects could be used to identify the mythical influencers
  • Tracking spend can be used for financial management
  • A networked calendar/diary could become predictive e.g. rescheduling a meeting to take place 15 minutes later due to traffic
  • Tracking movement can improve the measurement of exposure to outdoor advertising
  • Sound recognition could be used for radio or TV exposure, and improve out-of-home consumption measurements
  • Inefficiencies of usage could be explored e.g. the time it takes to connect a phone call can be compared across devices and networks

Practical obstacles

Evidently, the previous section was quite speculative and fantastical, but I hope it underlines the potential. Nevertheless, several obstacles need to be overcome before this point is reached

  • What is the best way to collect such information? Within the operating system? The network/SIM? Via the web or an application? The O/S with control of the API would appear to be best placed, but do they have the inclination?
  • Although phones are always on and regularly used, they are also regularly upgraded. Information collected would need to be portable for long-term tracking
  • Similarly, a phone is more susceptible to breakage, theft or loss
  • Background data capture would be a tremendous drain on the battery
  • Effective data capture would require an entire network of people using it – this is highly unlikely, not least because there will always be a significant proportion of people for whom a mobile phone will just be a device to make and receive emergency calls
  • More behaviour will be transferred to the mobile, but it will only ever capture a small proportion of our lives
  • Coverage and connectivity isn’t good enough (in the UK) for full capture – unless information can be stored natively before it is uploaded to a central server
  • Massive issues of data protection and privacy. Some people (such as Nicholas Felton) would enjoy tracking their movements, but I suspect – outside of paid-for testing – few would appreciate it. Particularly since the mobile is the most personal of devices. Imagine if large corporations were able to track the movements and social graph of their employees through mobile phone usage?

Conclusion

This post seems to have been sidetracked into future gazing, but my underlying point remains. The technology is available for us to capture far more information – and thus understanding – than we currently do. Organisations should look to harness and utilise this data, to provide contextual meaning to what people are doing.

Thoughts on how we could do this – or on how people are already doing this – would be much appreciated.

sk

Image credits: http://www.flickr.com/photos/kioan/3011984637/ and http://www.flickr.com/photos/_parrish_/2575256484/