  • About the blog

    This is the personal blog of Simon Kendrick and covers my interests in media, technology and popular culture. All opinions expressed are my own and may not be representative of past or present employers.

Data should be used as evidence and not illustration

I read the Guardian article on journalists’ struggles with “data literacy” with interest. The piece concentrates on inaccurate reporting through a lack of understanding of numbers, and the context around them. “Honest mistakes”, of a sort.

Taken more cynically, it is an example of a fallacy that I see regularly in many different disciplines (I’m loath to call it a trend as, for all I know, this could be a long-standing problem) – fitting data around a pre-constructed narrative, rather than deducing the main story from the available information.

This is dangerous. It reduces data to be nothing more than anecdotal support for our subjective viewpoints. While Steve Jobs may have had a skill for telling people what they really wanted, he is an exception rather than the rule. We as human beings are flawed, biased and incapable of objectivity.

Given the complexity of our surroundings, we will (probably) never fully understand how everything fits together – this article from Jonah Lehrer on the problems with the reductionist scientific method is fascinating. However, many of us can certainly act with more critical acumen than we currently do.

This is as incumbent on the audience as it is on the communicator – as MG Siegler recently wrote in relation to his field of technology journalism, “most of what is written… is bullshit”, and readers should exercise more caution when taking news as given.

Whether it is due to time pressures, lack of skills, laziness, pressure to deliver a specific outcome or otherwise, we need to avoid this trap and – to the best of our abilities – let our conclusions or recommendations emerge from the available data, rather than simply use it to illustrate our subjective biases.

While I am a (now no more than occasional) blogger, I am not a journalist and so I’ll limit my potential criticisms of that field. However, I am a researcher who has at various points worked closely with many other disciplines (some data-orientated, some editorial, some creative), and I see this fundamental problem recurring in a variety of contexts.

When collating evidence, the best means to ensure its veracity is to collect it yourself – in my situation, that would be to conduct primary research and to meet the various quality standards that would ensure a reliable methodology and coherent conclusions.

Primary research isn’t realistic in many cases, due to limited levels of time, money and skills. As such, we rely on collating existing data sources. This interpretation of secondary research is where I believe the problem of illustration above evidence is most likely to occur.

There are two stages that can help overcome this – critical evaluation of sources, and counterfactual hypotheses.

To critically evaluate data sources, I’ve created a CRAP sheet mnemonic that can help filter the unusable data from the trustworthy:

  • Communication – does the interpretation support the actual data upon scrutiny? For instance, people have been quick to cite Pinterest’s UK skew to male users as a real difference in culture between the UK and US, rather than entertain the notion that UK use is still constrained to the early adopting tech community, whereas US use is – marginally – more mature and has diffused outwards
  • Recency – when was the data created (and not when was it communicated)? For instance, I’d try to avoid quoting 2010 research into iPads since tablets are a nascent and fast-moving industry. Data on underlying human motivations is likely to have a longer shelf-life. This is why, despite the accolades and endorsements, I’m loath to cite this online word of mouth article: it is from 2004 – before both Twitter and Facebook
  • Audience – who is the data among? Would data among US C-suite executives be analogous to UK business owners? Also, some companies specialising in PR research have been notoriously bad at claiming a representative adult audience, when in reality they are usually a self-selecting sub-sample
  • Provenance – where did the data originally come from? In the same way as students are discouraged from citing Wikipedia, we should go to the original source of the data to discover where the data came from, and for what purpose. For instance, data from a lobby group re-affirming their position is unlikely to be the most reliable. It also helps us escape from the echo chamber, where myth can quickly become fact.
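
The mnemonic can be read as a simple pass/fail screen on each source. As a toy illustration (the field names, the two-year recency cut-off and the structure are my own, not part of the mnemonic), a secondary source could be checked like this:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical representation of a secondary data source; the field names
# are illustrative only, mapping one field to each CRAP criterion.
@dataclass
class DataSource:
    claim: str
    supported_by_data: bool   # Communication: does the data back the claim?
    fieldwork_date: date      # Recency: when was the data created?
    sample: str               # Audience: who is the data among?
    original_publisher: str   # Provenance: where did it first appear?
    self_interested: bool     # e.g. a lobby group re-affirming its position

def crap_check(source: DataSource, max_age_years: int = 2) -> list[str]:
    """Return the CRAP criteria the source fails; an empty list means usable."""
    failures = []
    if not source.supported_by_data:
        failures.append("Communication")
    age_years = (date.today() - source.fieldwork_date).days / 365
    if age_years > max_age_years:
        failures.append("Recency")
    if not source.sample:
        failures.append("Audience")
    if source.self_interested:
        failures.append("Provenance")
    return failures
```

A 2004 study published by an interested party would fail on Recency and Provenance, however well its interpretation matched the data.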

Counterfactual hypotheses are the equivalent of control experiments – could arguments or conclusions still be true with the absence of key variables? We should look for conflicting conclusions within our evidence, to see if they can be justified with the same level of certainty.  This method is fairly limited – since we are ultimately constrained by our own viewpoints. Nevertheless, it offers at least some challenge to our pre-existing notions of what is and what isn’t correct.

Data literacy is an important skill to have – not least because, as Neil Perkin has previously written about, it is only the first step on the DIKW hierarchy towards wisdom. While Sturgeon’s Law might apply to existing data, we need to be more robust in our methods, and critical in our judgements.  (I appreciate the irony of citing an anecdotal phenomenon)

It is a planner trope that presentations should contain selective quotes to inspire or frame an argument, and I’ve written in the past about how easily these can contradict one another. A framing device is one thing; a tenet of an argument is another. As such, it is imperative that we use data as evidence and not as illustration.

sk

Image credit: http://www.flickr.com/photos/etringita/854298772/

What do we mean by engagement?

Engagement is one of those nebulous buzzwords that often get thrown into business or strategy conversations because it sounds like something that should be sought after.  To be encouraged, measured and reported on. Yet it needs to be defined before any of these can occur. And few of the many articles on engagement actually do so.

When Anne Mollen spoke at the MRG Conference last month, she outlined three schools of thought on engagement:

  • The behaviourist school that views “Engagement” as the outcome of a complex algorithm of behavioural footprints
  • The experiential school that views engagement as something that happens in the mind of the consumer
  • The hybrid pragmatist school that asserts that consumer engagement is a psychological state, consistent with certain behaviours, and dependent on environmental context.

Most mentions of engagement I have seen tend to be in relation to behaviour, principally because this is the easiest to measure. Whether the model posited by Forrester, Eric Peterson or one of the myriad social media engagement models, these tend to involve metrics such as frequency (e.g. visits per day), depth (e.g. time spent or number of pages) and actions (e.g. clicks).

The Advertising Research Foundation belongs to the experiential school. They define engagement as “turning on a prospect to a brand idea enhanced by the surrounding context”, which is fairly meaningless. The AOP’s engagement study also falls into this camp, using surveys to understand the key emotions underpinning their perception of engagement.

Given that we are now in an era of abundance thinking rather than scarcity thinking – an era of greater customer choice and greater prominence for word of mouth – the idea of creating “engaged” customers/users as brand advocates is more widespread.

But before a programme of engagement can be integrated within a company, several big questions need to be answered:

  • Why is engagement important? How does it link to the overall business objectives?
  • What should engagement seek to achieve?
  • What do we mean by engagement (actions? Emotions?)? What don’t we mean by engagement (Satisfaction? Advocacy?)?
  • Are we thinking about tactical engagement (engagement per interaction) or strategic engagement (overall engagement)?
  • Are we more interested in engagement with content, channels, platforms, individual brands or the overall masterbrand?
  • How can engagement be measured and reported upon? Is our conception of engagement resulting from what is possible to be measured, or is it based on what is most important to us?
If these questions can be answered, then the organisation in question is already pretty advanced. However, there are many more questions that then need to be considered, such as:
  • Does engagement have degrees, or is it binary engaged/not engaged?
  • Can engagement be negative as well as positive?
  • Is engagement averaged, or is the audience segmented?
  • Is our definition of engagement unique to our organisation, or can it be benchmarked against competitors?
  • Is engagement a single metric or a collection? If a collection, are they combined and weighted into a single score?
  • Does engagement mean the same thing across different screens, platforms, audiences, products?
  • How does engagement vary by need state (e.g. browsing vs habit)?
  • Should different types of customer/user be conceived differently?
  • Can the engagement metrics be gamed? How can this be avoided?
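
To make the single-score question concrete: a weighted composite is one common answer, and it bakes in exactly the judgement calls listed above. A minimal sketch, in which the component metrics and their weights are entirely hypothetical:

```python
# Hypothetical component metrics and weights – choosing and weighting them
# is precisely the kind of decision the questions above are probing.
WEIGHTS = {"frequency": 0.40, "depth": 0.35, "actions": 0.25}

def engagement_score(metrics: dict[str, float]) -> float:
    """Combine normalised (0-1) component metrics into one weighted score."""
    for name in WEIGHTS:
        if not 0.0 <= metrics[name] <= 1.0:
            raise ValueError(f"{name} must be normalised to the 0-1 range")
    return sum(weight * metrics[name] for name, weight in WEIGHTS.items())

score = engagement_score({"frequency": 0.5, "depth": 0.8, "actions": 0.2})
```

Note that the score is only as meaningful as the weights, which is why the “can it be gamed?” and “can it be benchmarked?” questions matter.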

These 15 or so questions only reference part of the challenge of measuring engagement, and don’t even touch upon how it can be built into strategies. It is a very complex area, and as yet I’m not aware of anyone that has a definitive answer.

sk

Image credit: http://www.flickr.com/photos/thecaucas/2232897539/

Notes from MRG Conference 2011

A couple of weeks ago I took part in a short session at the 2011 Media Research Group Conference, which took place in London. I took some notes during the day (mainly with the earlier speakers). They are below and in chronological order, though firstly a quick exec summary:

The four papers I enjoyed most were

Synthesising these talks, my key take-aways were:

  • Run lots of prototypes and versions
  • Ask audiences what they think, rather than just infer from behaviour
  • Set up the tests in such a way as to drive people towards the behaviours/answers you desire
  • Be aware of contextual reasons that might provide counter-intuitive answers

And now for the detail…

 

Tim Harford – Problem Solving In a Complex World

Tim Harford, author of books such as The Undercover Economist, initially walked through examples of problem solving, such as:

  • Archie Cochrane – a Prisoner of War who conducted experiments to find out what was making people ill in the camp
  • Thomas Thwaites – a student who took 9 months and spent over £1,000 trying to make a toaster from scratch and, even when cheating, largely failed
  • César Hidalgo – who has mapped 5,000 product categories. But Wal-Mart has 100,000 types of product in a store, and in New York there are probably 10bn

His point was around the God Complex – the conviction that no matter how complex something is or how little data is available, you know the answer. It is dangerous and yet you see it everywhere.

We need to step away from the god complex as we can’t solve things in one step. Instead, we gradually learn over time through trial and error.

For instance, Unilever wanted to create a new nozzle for their detergent production. They hired a mathematician, who failed to sufficiently improve it. Instead, they created ten random computer-generated models and picked the best. They then created ten variations of this. They repeated this process twenty times. Ultimately the nozzle was much improved, although they don’t know why.
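
The process Tim described is essentially an evolutionary search: generate variants, keep the best, repeat. A toy numerical analogue of that loop, where the “design” is just a list of numbers standing in for nozzle parameters and the objective is entirely made up:

```python
import random

def mutate(design: list[float], scale: float = 0.1) -> list[float]:
    """Produce a random variation of a design (a stand-in for a nozzle shape)."""
    return [x + random.gauss(0, scale) for x in design]

def evolve(fitness, design: list[float], variants: int = 10,
           generations: int = 20) -> list[float]:
    """The Unilever-style loop: make ten variants, keep the best, repeat."""
    for _ in range(generations):
        candidates = [design] + [mutate(design) for _ in range(variants)]
        design = max(candidates, key=fitness)
    return design

def fitness(design: list[float]) -> float:
    """Toy objective: closeness to a hidden optimum at all-ones."""
    return -sum((x - 1.0) ** 2 for x in design)

best = evolve(fitness, [0.0] * 5)
```

As with the nozzle, the loop improves the design without anyone understanding *why* the winning shape works – the fitness test does all the explaining.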

Business successes are random processes – there is no silver bullet for the perfect CEO or strategy. However, instilling a start-up culture allows experimentation to see what is best. Google has a target failure rate of 80%, but this failure has to be quick, rather than being too big to fail. In order to do this, we have to overcome loss aversion.

In the BBC documentary about Fermat’s last theorem, Goro Shimura said in reference to his colleague Yutaka Taniyama:

Taniyama was not a very careful person as a mathematician. He made a lot of mistakes, but he made mistakes in a good direction so eventually he got the right answers. I tried to imitate him but I found out that it is very difficult to make good mistakes.

Tim fielded a couple of questions relating to popular business books

  • Tom Peters’ In Search of Excellence profiled many companies to see what made them successful, but three years after the book was published around one third of them were in trouble (e.g. Wang, Atari). Were they actually excellent, or is excellence fleeting?
  • James Surowiecki’s Wisdom of Crowds is often misunderstood, as he himself said that it only works in specific situations – when expert judgement is no help and where the crowd can be polled independently (Duncan Watts has shown how randomness becomes important when things are dependent)

Claire McAlpine – Mediacom – How are you integrating behavioural economic thinking into your work?

Inspired by thinkers such as Steven Johnson (Where Good Ideas Come From) and Chip & Dan Heath in addition to Thaler & Sunstein etc.

Hunches are where we collide ideas – these could be our ideas over time, or our ideas with other people’s. For instance, the Gutenberg printing press was inspired by the wine press.

We need to overcome cognitive biases (such as picking the second cheapest wine on the list) and recognise things such as information deficit and availability bias. We are more Homer Simpson than Spock – we are not rational agents. We may have good intentions but these can quickly be forgotten if we are in a “hot state”.

There are three stages to integrating behavioural economics

  • Identifying the behavioural context
  • Identifying the behavioural journey
  • Identifying choice context and ultimately creating choice architecture

Claire gave the example of Special Constable recruitment. By identifying two choice contexts – career and inspiration – Mediacom were able to frame their media strategy (both in terms of creative and placement) for two separate audiences.

By understanding how behaviours differ, we can seek out how to encourage the desirable ones to be replicated. The ultimate goal is to be able to switch the default behaviour, which we often resort to as a mental shortcut.

 

Mark Barber (RAB) and Jamie Allsopp (Sparkler) – Media & the Mood of the Nation

Mark and Jamie went through the findings of this research, which covered 3,500 smartphone survey responses from 1,000 people, qualitative depth interviews and diaries, and EEG brain scan experiments.

The research came about from the general move in advertising from systematic (logical) to heuristic (emotional) processing, and observations that advertising works better in mood-enhancing environments.

The findings were framed using James Russell’s Circumplex Model of Affect, which places results on two -5 to +5 scales of arousal (energy) and valence (happiness).
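
A minimal sketch of how a response might be placed on those two axes. The quadrant labels here are my own shorthand for readability, not Russell’s terminology:

```python
def circumplex_quadrant(arousal: float, valence: float) -> str:
    """Place a response on the two -5 to +5 axes of the Circumplex Model.

    The returned labels are illustrative shorthand, not Russell's terms.
    """
    if not (-5 <= arousal <= 5 and -5 <= valence <= 5):
        raise ValueError("both scores must sit on the -5 to +5 scales")
    energy = "high-energy" if arousal >= 0 else "low-energy"
    mood = "positive" if valence >= 0 else "negative"
    return f"{energy} / {mood}"
```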

Radio was compared to both TV and online. While all displayed rises in happiness and energy, radio showed the highest average increases in total and across the most dayparts. While this may be caused by other activities people are doing while they listen to the radio, it nevertheless means that people are in a more receptive frame of mind when it comes to processing advertising messages.

 

Becky McQuade (Sky) and Anne Mollen (Cranfield School of Management) – Online Engagement: We might be getting there

Anne said that there are two schools of thought with engagement

  • It is bankrupt as it is not a metric since it is too abstract and not credible (unlike retention and acquisition)
  • It is viable (she is in this camp)

The academic studies in this area have been focused on perceived interactivity and telepresence (her paper is here), but it hasn’t as yet properly been joined up to commercial requirements.

Her definition of engagement is “cognitive and affective commitment to an active relationship” and requires

  • Utility/relevance
  • Pleasure/enjoyment
  • Dynamic and sustained cognitive process

Using Survey Interactive, they ran an online pop-up survey with 60 engagement statements (reduced from an original list of 150) on 12 point scales across 14 Sky websites (and on a NetMums panel), resulting in over 12,000 responses. This found four drivers of engagement. From largest to smallest, these are:

  • Cognitive processing e.g. enjoyment
  • Temporal needs e.g. hedonic and utilitarian value (what we need and want)
  • Self-congruence (identity with the brand)
  • Social identity (context, environment, peer to peer communication)

Conversely, engagement isn’t

  • A measure of human behaviour – there was low correlation between engagement and time spent, frequency and recency
  • Behavioural footprints (actions such as subscriptions or likes) – there was only a small positive correlation among a subset of those engaged
  • Activism (such as loyalty) – engagement is context dependent and not a behavioural type

The study was specific to advertising, and found those engaged had higher ad recall, improved core message delivery, more favourable opinions towards the brand and a higher likelihood to purchase (but not higher purchase intent).

Becky and Anne closed by saying for engagement to be viable it has to have a close relation to ROI and KPIs. Their NetMums study showed engagement has an impact on trust, satisfaction, loyalty and ad responsiveness and has a high positive correlation with the Net Promoter Score.

Anne isn’t linked exclusively to Sky and will talk to others on a confidential basis around her engagement scale, but given academic competition to publish there is only a limited amount she can say publicly.

 

Stuart McDonald (News International) and Euan Mackay (Kantar Media) – Show Me the Money: Proving the value of tablets

Given that the results of the research are being used to inform News International’s commercial strategy, they didn’t really go into how value was proved. The research was conducted among News International’s subscriber base, and tested interactive advertising on a beta app (The Times app doesn’t yet have advertising) against a premium engagement index, comprising perceptions of an ad being:

  • Memorable
  • Relevant
  • Engaging
  • Trustworthy
  • Premium

 

Richard Curling (Google) – YouTube Skippable Pre-Rolls: Measuring the power of choice

Given “Hurry sickness” – the malaise where people feel short of time so perform tasks faster and get flustered by any delays – we’re increasingly looking for shortcuts.

YouTube “true view” means that users get to choose their adverts – if they don’t like an advert, they can skip it. Advertisers only pay for adverts that are viewed all the way through. Google interpret a high view rate as a high quality score, and this will factor in alongside price when bidding in an auction for advertising space. Thus, high quality ads are rewarded (though arguably very low quality advertising can benefit from a lot of free, interrupted views).
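
Richard didn’t share the exact formula, but the textbook sketch of a quality-weighted auction multiplies bid by quality – here taking completed-view rate as the quality signal. The figures and the simple multiplication below are illustrative only:

```python
def ad_rank(bid: float, completed_views: int, served_views: int) -> float:
    """Score an ad as bid x view-through rate (the 'quality' signal)."""
    view_rate = completed_views / served_views if served_views else 0.0
    return bid * view_rate

# Illustrative figures only: a high bid with a poor view rate can lose to a
# cheaper ad that people choose to watch all the way through.
ads = [
    {"name": "A", "bid": 2.00, "completed": 100, "served": 1000},
    {"name": "B", "bid": 0.50, "completed": 600, "served": 1000},
]
winner = max(ads, key=lambda ad: ad_rank(ad["bid"], ad["completed"], ad["served"]))
```

In this toy example the cheaper, better-liked ad B wins the slot, which is the reward mechanism for quality that Richard described.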

Using Ipsos MediaCT, Google tested the effectiveness of these ads using biometrics (heart rate, respiratory rate, skin conductance, motion – via Innerscope), depth interviews and eye-tracking. These found that both skipped and “true view” ads scored higher on their engagement metrics, though the true view ads scored highest. However, this wasn’t as clear-cut as you might expect – people opting in might have higher expectations and so could be harder to please. Conversely, the engagement of people forced to watch an ad might pick up towards the end as they get ready for their content to start.

Richard’s recommendations for advertisers were to

  • Entertain the user, since you are the content
  • Be clear, and support user choice
  • Embrace “natural” targeting

 

Afternoon sessions

I was paying less attention to these, since I was mentally rehearsing my speech.

  • Ross Williams and Becky from Ipsos MediaCT presented their “Big Brother Research – Who’s Watching Who?”, which combined social media monitoring of Big Brother properties with Facebook polls. While Big Brother wasn’t as big as other properties, it had an 80-20 proportion of comments to likes on Facebook (indicating an engaged audience), while alternative programmes had the opposite ratio
  • Steve Cox of JC Decaux presented “Airport Live” – following a small number of passengers at both their departure and arrival airports to see what they were noticing
  • Matthew Dodds of Nielsen and Nick Metcalfe of the Telegraph presented “Telegraph Print + Net Online Multiplier study” which took 5 groups of people (Telegraph print readers, Telegraph online readers, readers of both, non-print readers with matched demographics, non online readers with matched demographics) from UKOM to test uplift in advertising measures
  • The Good The Bad & The Ugly of Media Research was hosted by Max Willey and featured myself, Dave Brennan, David Fletcher, John Fryer, Stef Hrycyszyn and Loraine Cordery talking about whatever we wanted to for three minutes. David Fletcher won the prize, for his tale of why people think they want online dashboards but don’t

 

Industry Updates

  • BARB is looking into a non-linear database that would report on archive programmes on demand, and catch-up from longer than seven days after transmission. They will also evaluate, and possibly publish topline results of, the TV+online data
  • POSTAR – now have tube and bus data, and are looking at GPS devices to see how people move around. This is being validated and they hope to get it into a reporting system soon
  • NRS – concentrating on fusion with UKOM data, but hope to get more granular data and move online in future
  • RAJAR – moving the diary online, and continuing to explore the viability of passive meters
  • IPA – bedding down touchpoints. Touchpoints 3 included word of mouth, mobile internet, social media, gaming and on-demand. Touchpoints 4 will bring in tablets and apps, and change from a device-first structure to a content-first structure. It now has 60 subscribers (including each of the top 20 agencies) and has launched in the US. They are also piloting an app to go alongside the diaries
  • UKOM – the past year has been about stabilisation after some data issues. The contract is currently out to tender and whoever is successful (they would take over in January 2013) would look to measure all devices and locations (i.e. beyond home/work fixed internet to include mobile and video)

sk

My MRG Conference 2011 speech

At the MRG Conference 2011 (pdf link to programme here) I was given a three-minute slot to talk about anything I wanted under the banner “Six industry speakers share the good, the bad and the ugly from our industry”.

This is (roughly) what I said:

Good afternoon everyone. As Research Manager for Mobile, Social and Syndication at the BBC I’m understandably enthusiastic about these areas. So today I’m going to take the first area I mentioned – mobile – and explain how its characteristics make it appropriate as a research platform.

The first is universality – mobile has more coverage than any other research method. A big claim maybe, but Ofcom stats say that

  • 77% of households have PC-based internet
  • 85% of adults have a landline
  • 91% of adults have a mobile, and this rises to 98% among 16-54 year olds

More than 91% of the UK might have a home and so could be reached door to door, but realistically, once you factor in accessibility and interviewer safety, mobile will have the largest potential audience for research – though the key word there is potential; there is still the small hurdle of getting the audience’s contact details.
The second characteristic I want to mention relates to proximity. More than any other platform, mobile is our go-to device. It is nearly always turned on, it is nearly always on our person, and thus, when we have some free time or are bored, it is the first thing we turn to – in fact I can see a few phones in the audience now. This captive audience on mobile has massive potential for research purposes, though we need to ensure what we ask them to do is both interesting and relevant. Easier said than done, perhaps.

But, this is predicated on the notion that we need our respondent to interact. We can do many great things on mobile – video diaries, photos, status updates etc and in real-time. But one of the real strengths of mobile is its latency. Why ask people what media they are consuming when mobile sensors can match sound to TV and radio; record web browsing, use GPS to plot outdoor reach and time spent; and soon use near field communication to record sales of newspapers and magazines. Admittedly, not all phones can do this just yet, and privacy is obviously an issue, but again, there is big potential.

The young will drive this, for mobile is a youthful medium – 16-24s say they would miss mobile the most if they had to give up media. These behaviours might not be mainstream yet, but a dozen years ago owning a mobile wasn’t mainstream, and look where we are now. But there is also a second aspect to this point around youth, and that is that the medium is nascent. We’re still learning all the time – no one can say they’ve cracked mobile in terms of capturing and utilising. This is a huge opportunity for research agencies both big and small to move into.

This is an opportunity because it doesn’t yet exist. There is plenty of innovation at the edges, but the market isn’t yet mature. So while I’ve identified several benefits to mobile research, they come with caveats and are more theoretical than practical. So as much as I want to say mobile is good, I can’t really. I’ve talked about the universality, the go-to nature, the latency and the youthfulness. That’s U.G.L.Y and it ain’t got no alibi, it’s ugly.

sk

People like people

Senior business folk like numbers. Facts and statistics to base decisions on and to evaluate performance. It’s both rational and sensible.

But occasionally, it is beneficial not to be rational or sensible. As the Apple “Think Different” campaign so memorably reminded us.

Organisations should have plenty of talented members capable of coming up with creative and innovative strategies to immediate and potential business concerns.

But when you want the opposite to rational or sensible, the best thing might be to consult the public. Whether consumers, users, viewers, prospects, advocates, rejecters, indifferents, promoters, lovers, haters or otherwise, each person will have a unique take on a situation.

Each person has their own behaviours, needs, habits, lifestyle, attitudes, hopes, fears and opinions which can relate directly or indirectly to an organisation, market or industry.

And every so often it is beneficial for senior business folk to hear these. To be reminded, inspired, provoked, amused, horrified, informed, affirmed or corrected.

What they hear will either be

  • Something they already knew, and should respond to
  • Something they already knew, but shouldn’t respond to
  • Something they didn’t know, and should respond to
  • Something they didn’t know, but shouldn’t respond to

All are valuable. Whether delivered through ethnographic videos, photo logs, social media listening, user-generated content competitions or through other means, each new piece of stimulus helps evolve the thinking of those making the key decisions.

Facts and numbers are powerful. But people are also powerful. Even hearing the same opinion heard many times before but by a different voice in an unusual situation creates new context and new meaning.

Therefore, we should strive to complement our rational decision-making with the creative expression that comes from voices that may not be found in the board room.

sk

NB: Inspiration for the post’s title is from the Riz MC song of the same name (who, to my knowledge, is the first and thus far only one of my university peers to achieve public success – measured by having a Wikipedia page). The lyrics have nothing to do with the content above, but the title led me to start thinking in this direction.

 

Learning from Steve Jobs

Steve Jobs' fashion choices over the years

Understandably, technology news over the past week has been dominated by Steve Jobs’ resignation as Chief Executive from Apple. While he will stay on as Chairman, Tim Cook – former Chief Operating Officer – will take the helm.

There have been many wonderful pieces on Jobs (though some do read like obituaries) – these from Josh Bernoff and John Gruber being but two – which cover many angles – whether personal, professional, industry or other. I’m neither placed nor qualified to add anything new but I have enjoyed synthesising the various perspectives. Yet invariably, the person saying it the best was Jobs himself:

  • He knew what he wanted – “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking” (Stanford commencement speech)
  • He felt he knew better than anyone else – “The only problem with Microsoft is they just have no taste. They have absolutely no taste. And I don’t mean that in a small way, I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their products.” (Triumph of the Nerds)
  • He, along with empowered colleagues, relentlessly pursued this – “You have to trust in something — your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.” (Stanford commencement speech)
  • He was a perfectionist – “When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.” (Playboy)

NB: The quotes above were taken from this Wall Street Journal article.

In Gruber’s words: “Jobs’s greatest creation isn’t any Apple product. It is Apple itself.”

In 14 years he took Apple from near-bankruptcy to – briefly – the biggest company in the world by market capitalisation. He has been enormously successful. And while possibly unique – his methods run counter to textbook advice on how to run an organisation – a lot can be learned from him.

The thing I have taken most from this is Jobs’ uncompromising nature. If people weren’t on board with him, then to hell with them. This of course led to his dismissal from Apple in 1985. And his dogged focus on his preferences has informed his fashion choices over the years, as the above picture illustrates.

It might seem strange for a market researcher to take this away, particularly since research is stereotyped as decision-making by committee – something which Jobs despised:

  • “We think the Mac will sell zillions, but we didn’t build the Mac for anybody else. We built it for ourselves. We were the group of people who were going to judge whether it was great or not. We weren’t going to go out and do market research. We just wanted to build the best thing we could build.” (Playboy)
  • “For something this complicated, it’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” (BusinessWeek)

Unfortunately, this stereotype is often true, and I have been guilty of perpetuating it on occasion.

One example was when trying to get a project up and running (on a far smaller scale than rescuing Apple, admittedly). With a lot of stakeholders, I tried to include as many of their wishes and requests as possible. The end result was bloated, incoherent, unfocused and over-deadline. It wasn’t one of my finer moments.

Rather than bolt everything on, I should have appraised all the input and only included that which remained pertinent to the core objective. I lost authorship of the project, and it suffered.

While there will be counter-arguments, many public failures do seem to be the result of committee-made decisions. Two bloated, incoherent examples that immediately spring to mind are Microsoft Office 2003 and the Nokia N96. Conversely, there are many examples of visionary micro-managing leaders that have driven a company to success – Walt Disney, Ralph Lauren and Ron Dennis to name but three.

I am a researcher rather than a consultant, and so don’t intend to fully adopt this approach. However, it appears that there is a greater chance of success when primary research or stakeholder input informs, rather than dictates, the final decision.

Steve Jobs knew this. His flagship products weren’t revolutionary (IBM, Microsoft, Nokia and the like were the primary innovators). But his genius was in refining a variety of inputs and stimuli, and moulding them into an expertly designed final product.

And that is something to aspire to.

sk

Overhauling the agency pricing model

Agencies are potentially losing out on beneficial and worthwhile commissions due to a fundamentally flawed approach to pricing their work.

(Note: My experience with pricing is almost exclusively tied to research agencies but I think this is broadly applicable to all industries).

Projects are commissioned when there is agreement between what an agency is willing to offer, and what a client is willing to pay.

My issue is that both of these components are based on cost.

Instead, they should be based on value.

£1 price tag

The agency side

The current model

Looking at the agency side first, it is clear that the focus upon cost makes the process far more transactional than it should be.

Using a dodgy equation (channelling John V. Willshire, who does this sort of thing far better):

P = d + αi + βt + p, where P ≤ B

In English: Price = direct costs + a proportion of indirect costs/overheads + an estimate of the time spent + profit, where price is less than or equal to the client budget.

(The alpha sign has arbitrarily been assigned to mean a proportion, and beta an estimate.)

d + αi + βt can be simplified to C for costs. Thus:

P = C + p, where P ≤ B

Explaining the equation (this can be skipped if you trust me)

Of course, this is an oversimplification (though if agencies don’t use timesheets then the equation will lose the time segment and become even simpler) but it does explain the majority of the considerations.

Competitor pricing will be a factor. Market rates are to an extent set by those that have offered the service – an agency will seek to match, undercut or add a premium to this depending on its relative positioning. This is reflected in the equation through time (premium agencies will generally spend longer on the delivery) and in desired profit.

An agency’s price will miraculously match the stated client budget (or, in some instances, come in £500 under – which I don’t understand, since a) I thought psychological pricing had been phased out, and b) that spare £500 is not going to be able to cover any contingencies, expenses or VAT that aren’t included in the cost).

However, there are (at least) two things that aren’t yet factored in:

  • Opportunity cost – the cost in terms of alternatives foregone. This isn’t included since the only time you can really be sure that new requests for proposals appear is at the end of the financial year. Otherwise – for ad-hoc project work at least – there is no way to accurately predict the flow of work.
  • Competitive bidding – where profit is multiplied by the expected success rate to give expected profit. While guesses can be informed by previous success rates, I don’t rate this approach as a) closed bidding processes mean competitor bidding strategies are unknown, and b) perceived favourites are just that – perceptions (for instance, an incumbent may be secretly detested)

So what does this mean?

Ultimately, an agency will only submit a proposal if they think the profit they will make is worthwhile. The above equation can be reframed to reflect this:

p = P – C, where P ≤ B

Or profit is price minus cost.
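As a rough sketch, the cost-plus logic above might look like the following in Python (the function name and every figure are my own invention, purely for illustration):

```python
# Cost-plus pricing: P = C + p, where C = d + alpha*i + beta*t,
# and the bid is only viable if P <= B (the client budget).
def cost_plus_price(direct, indirect, alpha, hours, hourly_rate, profit):
    costs = direct + alpha * indirect + hours * hourly_rate  # C = d + alpha*i + beta*t
    return costs + profit                                    # P = C + p

budget = 20000
price = cost_plus_price(direct=8000, indirect=6000, alpha=0.5,
                        hours=100, hourly_rate=50, profit=4000)
viable = price <= budget  # the proposal only goes in if this holds
```

Here the costs sum to 16,000, so the price lands exactly on the 20,000 budget – the "miraculous match" described above.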

And this is where my main problem is with agency pricing. Profit is expressed purely financially.

Undoubtedly, finance is crucial. An agency requires cashflow to operate; it cannot survive solely on kudos. But it shouldn’t be the sole consideration.

What I think should be included

Value should be added to the equation.

An agency should think not only about the financial margin, but about the business margin.

In addition to revenue, an agency can receive:

  • Knowledge – will the project increase knowledge of markets, industries, processes or methodologies that can be applied to other projects in future? This can be used to improve the relevance of business proposals, or be incorporated into frameworks of implementation
  • Skills – is the process repeatable, which can create future efficiencies? Does the project offer opportunities for junior staff to train on the job? If so, savings in training and innovation can be made
  • Reputation – will the results of the project be shared publicly, in testimonials, trade press, on the conference circuit or otherwise? If the agency is fully credited, there is PR value in terms of profile and attracting new business
  • Follow-up sales – will the project lead to additional work, either repeating the process for another aspect of the business or in up-selling follow-on work? Again, this can save on business development and can offer some future financial assurances (which will influence the amount of money borrowed and subsequent interest paid)
  • Social good – perhaps not as relevant for those in commercial sectors, but will the project create real and tangible benefits for a community – referencing Michael Porter’s concept of shared value

Thus, project gains are far more than financial. These intangible benefits should be applied as a discount to financial profit.

Dodgy algebra (this can be skipped unless you want to pick holes in my logic)

The net gain from a project would be:

N = p + β(k+s+r+f+g)

The net gains from a project are profit plus estimated gains in knowledge, skills, reputation, follow-up sales and social good (note that these factors can be negative or zero as well as positive). These can be simplified as intangibles:

N = p + I

These intangibles offer alternatives to financial profit. Increasing the amount that can be gained effectively increases the budget:

P = C + p, where P ≤ B + I

Assuming that an agency won’t offer psychological pricing, we can assume that P = B. Pricing up to the effective budget ceiling makes the equation:

B + I = C + p

Substituting budget back in for price, and rearranging gives:

P = C + p – I

However, this assumes that the entire surplus is passed on to the client. Obviously, this shouldn’t be the case – but equally, the agency shouldn’t keep all of the surplus. Instead, I propose that a proportion of the benefit is passed on to the client via a discount (in order to make the agency more competitive and improve its chances of success).

Value is therefore a function of profit and discounted intangible gain:

V = fn(p – γI), where γ (gamma) is the discounted proportion
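Continuing the earlier sketch, the value-adjusted price passes a proportion γ of the intangible gain back to the client as a discount (again, every figure is invented):

```python
# Value-adjusted pricing: V = p - gamma * I, so P = C + V = C + p - gamma * I.
def value_adjusted_price(costs, profit, intangibles, gamma):
    value = profit - gamma * intangibles  # V = p - gamma*I
    return costs + value                  # P = C + V

cost_plus = 16000 + 4000  # the plain P = C + p bid
discounted = value_adjusted_price(costs=16000, profit=4000,
                                  intangibles=3000, gamma=0.5)
# The intangible benefit lets the agency undercut its own cost-plus bid.
saving = cost_plus - discounted
```

With half of a 3,000 intangible gain passed on, the bid drops from 20,000 to 18,500 while the agency still banks the remaining non-financial benefit.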

What this means – the conclusions bit

All of this long-winded (and probably incorrect) algebra effectively changes the equation

P = C + p

to

P = C + V

Financial profit is substituted for value.

I believe that the price an agency charges should reflect its costs and the overall value received from the project – both tangible revenue and intangible benefits. Some of these benefits should be passed on to the client in the form of a price reduction, in order to make the bid more competitive and improve the chances of success.

This also works in the converse. If there is a project that an agency isn’t enthusiastic about – it might be laborious or for an undesirable client – then the intangibles are negative and so profit needs to increase in order to make the project worth undertaking (in a purely financial equation, this means costs will need to fall within a fixed price/budget).

I should also make it explicit that I am not advocating a purely price-driven approach to bidding. Other factors – communicable skills and expertise, vision and so forth – are still vital. The reality is that markets are highly competitive, and price (or more accurately, the volume of work that can be delivered within a fixed budget) will be a large factor on scorecards used to rate bids.

The client side

This section doesn’t require algebra (fortunately).

My main issue with client budgeting is that it only concentrates on purchasing outputs. While these are tangible, these outputs (at least in research) are a means to an end. A client may want eight groups and transcripts, or a survey and a set of data tables, but the client doesn’t want these for the sake of it. They are purchased to provide evidence to validate or iterate a business process.

Therefore, I believe the client budget should be split into two.

  • The project budget – the amount that a client is willing to pay for the tangibles – the process required to complete the delivery of the project. These outputs are outcome-independent.
  • The implementation budget – which is outcome-dependent. The complexity or implications of a project are often unknown until completion. A project could close immediately, or it could impact critical business decisions in nuanced ways. If the latter, additional resource should be assigned to ensure the business can best face any challenges identified.

The majority of costs are incurred in the project, but the real value to the client comes in the implementation. This needs to be properly reflected; it currently isn’t.

Effectively, I propose a client should commission an “agency” to manage the project and a “consultancy” to manage the implementation. These could be the same organisation, or they could be different ones.

Wrapping up

There are undoubtedly things I have overlooked, and I’m pretty sure my algebra is faulty.

However, I believe my underlying hypothesis is valid. The current agency pricing model is flawed and needs overhauling because:

  • Agencies ignore non-financial benefits
  • Clients ignore implementation requirements

Both of these are easily correctable, and these corrections can only improve the process.

sk

Image credit: http://www.flickr.com/photos/chrisinplymouth/3222190781

Observation and participation

One of the (many) criticisms of market research is that it is based on artificial, post-rationalised claimed responses. This line of thinking contends that there have been plenty of studies showing us to be unreliable witnesses to our own thoughts and actions – therefore surveys, focus groups and the like can’t be trusted.

Obviously, the reality is not so black and white. There are some things I can recall perfectly well – places I’ve shopped in the past week, why I like to blog, etc. My answers would be truthful, though with the latter example the analysis might not take my literal answer but instead interpret it into a broader motivation.

Nevertheless, what I know I know is only one part of the Johari Window (which was channelled by Donald Rumsfeld for his known knowns speech) – the arena quadrant. For the other three quadrants, these methods are insufficient.

Fortunately, there is more to research than surveys and focus groups.

To slightly paraphrase the hidden quadrant, this would involve a methodology that provides us with previously unknown information. This can be achieved through participation. IDEO are big proponents of this – I particularly like the example Paul Bennett gives of improving the hospital waiting experience. The best way for them to discover the patient experience was to become the patient and spend a day strapped to a gurney. The view from the gurney is boring ceiling after boring ceiling, so IDEO used this space to provide both information and soothing designs.

The blind spot quadrant is where we battle the unreliable witness through observation. This could either be straightforward observation or a mixture of observation and participation, such as ethnography (remember: ethnography is not an in-home interview). Siamack Salari of Everyday Lives gives the fantastic example of a project he did for a tea company. This tea company had invested a great deal of money in research to understand the different perspectives people had on the perfect cup of tea. For the colour, they had even developed a colour palette outlining varieties. In closed research, people would pick their perfect colour. Yet, when observed, the colour of tea would never match. This is because people don’t concentrate on making the perfect cup of tea – the colour depends on the amount of milk they have left in the fridge and whatever else is capturing their attention (such as the toaster). Valuable information, though as Siamack noted in a training session I attended, it was an expensive way of finding out that the answer you want doesn’t exist.

Thus, two simple examples to show the role of observation and participation in improving our understanding of things. As for the unknown window…

sk

Image credit: http://www.flickr.com/photos/colorblindpicaso/3932608472

Mediatel Media Playground 2011

My previous blog post covered my notes on Broadcast in a Multi-Platform World, which I felt was the best session of the day. Below are my notes from the other three sessions (I didn’t take any notes during the bonus Olympics session).

The data debate

Chaired by Torin Douglas, Media Correspondent for the BBC

Speakers:
Andrew Bradford, VP, Client Consulting, Media at Nielsen
Sam Mikkelsen, Business Development Manager at Adalyser

Panellists:
David Brennan, Research & Strategy Director at Thinkbox
Kurt Edwards, Digital Commercial Director at Future
Nick Suckley, Managing Director at Agenda21
Bjarne Thelin, Chief Executive at BARB

Some of the issues touched upon in this debate were interesting, but I felt they were dealt with too superficially (though as a researcher, I guess it is inevitable that I’d say that).

David Brennan thinks we need to take more control over data and how we apply it. There is a dumb acceptance that anything created by a machine must be true, and we’ve lost the ability to interrogate the data.

Nick Suckley thinks the main issue is the huge productivity problem with manual manipulation of data from different sources (Google has been joined by Facebook, Twitter and the mobile platforms), but that this also represents a huge opportunity. He thinks the fight is not about who owns the data, but who puts it together.

Torin Douglas posited whether our history of currencies meant that we weren’t so concerned with data accuracy, since everyone had access to the same information. Bjarne Thelin unsurprisingly disagreed with this, pointing out the large investment in BARB shows the need for a credible source.

David Brennan said his 3 Es of data are exposure (buying), engagement (planning) and effectiveness (accountability)

Nick Suckley thinks people would be willing to give up information for clear benefits but most don’t realise what already is being collected on them

Kurt Edwards thinks social media is a game-changer from a planning point of view as it sends the power back to the client. There is real-time visibility, but the challenge is to not react to a few negative comments

David Brennan concurred and worried about the possibility of social media data conclusions not being supported by other channels. You need to go out of your way to augment social media data with other sources to get the fuller picture

Bjarne Thelin gave the example of the BBC’s +7 viewing figures to show that not all companies are focusing purely on real-time. He also underlined the fact that inputs determine outputs, and so you need to know what goes in.

David Brennan concluded by saying that in the old days you knew what you were getting. Now it is overblown, with journalists confused as to what is newsworthy or significant.

Social media and gaming

Chaired by Andrew Walmsley, ex i-Level

Speakers:
Adele Gritten, Head of Media Consulting at YouGov
Mark Lenel, Director and senior analyst at Gamesvison

Panellists:
Henry Arkell, Business Development Manager at Techlightenment
Pilar Barrio, Head of Social at MPG
Toby Beresford, Chair, DMA Social Media Council at DMA
Sam Stokes, Social Media Director at Punktilio

The two speakers gave a lot of statistics on gaming and social gaming, whereas the panel focused upon social media. This was a shame, as the panel could have used more variety. All panel members were extolling the benefits of social media, and so there was little to no debate.

There was discussion about the difficulty in determining the value of a fan, the privacy implications, Facebook’s domination across the web and the different ways in which social media can assist an organisation in marketing and other business functions.

Mobile advertising

Chaired by Simon Andrews, Founder of addictive!

Speaker:
Ross Williams, Associate Director at Ipsos MediaCT

Panellists:
Gary Cole, Commercial Director at O2
Tamsin Hussey, Group Account Director at Joule
Shaun Jordan, Sales Director at Blyk
Will King, Head of Product Development at Unanimis
Will Smyth, Head of Digital at OMD

Ross Williams gave an interesting case study on Ipsos’ mobi app, which tracked viewer opinion during the Oscars.

Simon Andrews’ approach to chairing the debate was in marked contrast to the previous sessions. He was less a bystander and more a provocateur – he clearly stated his opinions and asked the panel to follow up. He was less tolerant of bland sales-speak than the previous chairs, but was also more partial in approaching the panel, with the majority of panel time filled by Simon speaking to Will Smyth.

Will King thinks m-commerce will boost mobile like e-commerce did with digital. Near field communication will move mobile into the real world.

Gary Cole pointed out that mobile advertising is only a quarter of a percent of ad spend, but that clients should think less about display advertising and less of mobile as a distinct channel. Instead, mobile can amplify other platforms in a variety of ways.

Tamsin Hussey said that as there isn’t much money in mobile, there is no finance to develop a system for measuring clicks and effectiveness of all channels. Currently, it has to be done manually.

Will Smyth said the app store is the first meaningful internet experience on the mobile. The mobile is still young and there is a fundamental lack of expertise at the middle management level across the industry. Social is currently getting all the attention (“Chairman’s wife syndrome”) but mobile has plenty to offer.

sk

Dynamic Knowledge Creation Model

The Dynamic Knowledge Creation Model was created by Nancy Dixon, building on the work of Ikujiro Nonaka. It refers explicitly to how organisations deal with knowledge, though other academics have noted its relevance in other fields.

Nonaka posited that there are four processes of knowledge creation that link across tacit and explicit knowledge. These are illustrated below.

SECI model (image linked from here).

This shows that the four processes are:

  • Tacit to tacit knowledge – acquired through conversation and socialisation. It may not be the primary subject of the conversation, but new data points can be joined up in new ways to create additional meaning
  • Tacit to explicit knowledge – this can be again acquired through conversation or another form of communication, but in this instance the transference is intentional
  • Explicit to explicit knowledge – where multiple data sources are combined in intended ways, to create additional understanding that can be greater than the sum of their parts
  • Explicit to tacit knowledge – where individuals take things they have learnt and apply them to their thinking and actions

In Rachel Bodle’s article, she combines this with Dixon’s thinking to come up with the composite diagram below.

A Model of Dynamic Knowledge Creation

The diagram shows that there are four types of knowledge assets within an organisation (or individual):

  • Routine knowledge (explicit to tacit) – learning by doing
  • Experiential knowledge (tacit to tacit) – judgement of individuals
  • Conceptual knowledge (tacit to explicit) – frameworks and models to utilise
  • Systemic knowledge (explicit to explicit) – editing and synthesising multiple sources
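The four conversions above can be captured as a trivial lookup (a sketch of my own; the labels simply follow the bullets):

```python
# Dixon/Nonaka knowledge assets, keyed by (source, target) knowledge type.
KNOWLEDGE_ASSETS = {
    ("explicit", "tacit"): "routine",       # learning by doing
    ("tacit", "tacit"): "experiential",     # judgement of individuals
    ("tacit", "explicit"): "conceptual",    # frameworks and models to utilise
    ("explicit", "explicit"): "systemic",   # synthesising multiple sources
}

def asset_type(source, target):
    """Return the knowledge-asset label for a conversion, e.g. tacit -> explicit."""
    return KNOWLEDGE_ASSETS[(source, target)]
```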

Market research agencies traditionally reside in the conceptual sphere – they take the tacit knowledge from stakeholders and the target audience and convert it into meaningful, actionable recommendations and frameworks. The best agencies will frame their solution in such a way that makes it transferable beyond the confines of the specific brief.

However, there are also opportunities for agencies to assist organisations in the other areas:
  • Routine knowledge – research may not necessarily help people or departments do their jobs better. But in certain circumstances, research tools extend into these areas. Workshop debriefs can walk through the practical implications of implementing the findings, ideally in a real situation. An example of this would be in processing and responding to consumer feedback.
  • Experiential knowledge – debriefs shouldn’t be reserved for the immediate stakeholder. By inviting everyone within an organisation, those inquisitive minds with a gap in their schedule can listen to the findings. There may not be any obvious, explicit benefit but the opportunity for serendipity arises
  • Systemic knowledge is traditionally the preserve of the client but, with resources increasingly stretched, some are looking to outsource this. Good research agencies should already be doing this – surveys and focus groups don’t reside within a black box. Secondary data collection and bricolage solutions using cost-effective online tools (the precise ones depend on the nature of the brief) should be pre-requisites in complementing the core research offering

I’ve only recently become aware of these models, but I’ve already found them extremely useful in reframing the nature of my projects. Organisations thrive on knowledge. It can only be a good thing if I can identify additional means of helping them harness and apply it.

sk