Data should be used as evidence and not illustration

I read the Guardian article on journalist’s struggles with “data literacy” with interest. The piece concentrates on inaccurate reporting through a lack of understanding of numbers, and the context around them. “Honest mistakes”, of a sort.

Taken more cynically, it is an example of a fallacy that I see regularly in many different disciplines (I’m loath to call it a trend as, for all I know, this could be a long-standing problem) – fitting data around a pre-constructed narrative, rather than deducing the main story from the available information.

This is dangerous. It reduces data to be nothing more than anecdotal support for our subjective viewpoints. While Steve Jobs may have had a skill for telling people what they really wanted, he is an exception rather than the rule. We as human beings are flawed, biased and incapable of objectivity.

Given the complexity of our surroundings, we will (probably) never fully understand how everything fits together – this article from Jonah Lehrer on the problems with the reductionist scientific method is fascinating. However, many of us can certainly act with more critical acumen than we currently do.

This is as incumbent on the audience as it is the communicator – as MG Siegler recently wrote in relation to his field of technology journalism, “most of what is written… is bullshit”, and readers should utilise more caution when taking news as given.

Whether it is due to time pressures, lack of skills, laziness, pressure to deliver a specific outcome or otherwise, we need to avoid this trap and – to the best of our abilities – let our conclusions or recommendations emerge from the available data, rather than simply use it to illustrate our subjective biases.

While I am a (now no more than an occasional) blogger, I am not a journalist and so I’ll limit my potential criticisms of that field. However, I am a researcher who has at various points worked closely with many other disciplines (some data-orientated, some editorial, some creative), and I see this fundamental problem recurring in a variety of contexts.

When collating evidence, the best means to ensure its veracity is to collect it yourself – in my situation, that would be to conduct primary research and to meet the various quality standards that would ensure a reliable methodology and coherent conclusions.

Primary research isn’t realistic in many cases, due to limited levels of time, money and skills. As such, we rely on collating existing data sources. This interpretation of secondary research is where I believe the problem of illustration over evidence is most likely to occur.

There are two stages that can help overcome this – critical evaluation of sources, and counterfactual hypotheses.

To critically evaluate data sources, I’ve created a CRAP sheet mnemonic that can help filter the unusable data from the trustworthy:

  • Communication – does the interpretation support the actual data upon scrutiny? For instance, people have been quick to cite Pinterest’s UK skew to male users as a real difference in culture between the UK and US, rather than entertain the notion that UK use is still constrained to the early adopting tech community, whereas US use is – marginally – more mature and has diffused outwards
  • Recency – when was the data created (and not when was it communicated)? For instance, I’d try to avoid quoting 2010 research into iPads since tablets are a nascent and fast-moving industry. Data into underlying human motivations is likely to have a longer shelf-life. This is why, despite the accolades and endorsements, I’m loath to cite this online word of mouth article: it is from 2004 – before both Twitter and Facebook
  • Audience – who is the data among? Would data among US C-suite executives be analogous to UK business owners? Also, some companies specialising in PR research have been notoriously bad at claiming a representative adult audience, when in reality they are usually a self-selecting sub-sample
  • Provenance – where did the data originally come from? In the same way as students are discouraged from citing Wikipedia, we should go to the original source of the data to discover where the data came from, and for what purpose. For instance, data from a lobby group re-affirming their position is unlikely to be the most reliable. It also helps us escape from the echo chamber, where myth can quickly become fact.
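
The CRAP criteria above lend themselves to a simple checklist. As a toy sketch – the `DataSource` fields, the two-year recency threshold and the helper names are all my own illustrative assumptions, not part of the mnemonic:

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    claim_matches_data: bool     # Communication: does the interpretation survive scrutiny?
    year_collected: int          # Recency: when the data was created, not communicated
    audience_matches: bool       # Audience: is the sample analogous to the one we care about?
    original_source_known: bool  # Provenance: can the data be traced to its origin and purpose?

def crap_check(src, current_year=2012, max_age_years=2):
    """Return the list of CRAP criteria a source fails (empty = trustworthy on these checks)."""
    failures = []
    if not src.claim_matches_data:
        failures.append("Communication")
    if current_year - src.year_collected > max_age_years:
        failures.append("Recency")
    if not src.audience_matches:
        failures.append("Audience")
    if not src.original_source_known:
        failures.append("Provenance")
    return failures

suspect = DataSource(claim_matches_data=False, year_collected=2004,
                     audience_matches=True, original_source_known=False)
issues = crap_check(suspect)  # fails Communication, Recency and Provenance
```

The point of the sketch is only that the filter is mechanical once the judgements have been made – the hard part remains answering the four questions honestly.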

Counterfactual hypotheses are the equivalent of control experiments – could arguments or conclusions still be true with the absence of key variables? We should look for conflicting conclusions within our evidence, to see if they can be justified with the same level of certainty. This method is fairly limited – since we are ultimately constrained by our own viewpoints. Nevertheless, it offers at least some challenge to our pre-existing notions of what is and what isn’t correct.
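
One concrete way to run a counterfactual check of this kind is a permutation test: shuffle the key variable and see how often the conclusion survives in the shuffled worlds. A minimal sketch, with hypothetical group/score data – the function names and the A-versus-B framing are my own, not a method from the post:

```python
import random

def mean_diff(data):
    # the "conclusion" under test: how much group A outscores group B on average
    a = [x["score"] for x in data if x["group"] == "A"]
    b = [x["score"] for x in data if x["group"] == "B"]
    return sum(a) / len(a) - sum(b) / len(b)

def counterfactual_share(data, trials=2000, seed=0):
    """Share of label-shuffled worlds where the A-B gap is at least as large
    as observed. A high share means the conclusion holds even without the
    key variable - i.e. the variable never really explained it."""
    rng = random.Random(seed)
    observed = mean_diff(data)
    labels = [x["group"] for x in data]
    hits = 0
    for _ in range(trials):
        rng.shuffle(labels)
        reshuffled = [{"group": g, "score": x["score"]} for g, x in zip(labels, data)]
        if mean_diff(reshuffled) >= observed:
            hits += 1
    return hits / trials

# toy example: scores strongly tied to group membership
data = [{"group": "A", "score": s} for s in (9, 10, 11, 10)] + \
       [{"group": "B", "score": s} for s in (2, 1, 3, 2)]
share = counterfactual_share(data)  # low share: the gap genuinely needs the variable
```

As the paragraph above notes, this only challenges the variables we think to shuffle – it cannot escape our own framing of the problem.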

Data literacy is an important skill to have – not least because, as Neil Perkin has previously written about, it is only the first step on the DIKW hierarchy towards wisdom. While Sturgeon’s Law might apply to existing data, we need to be more robust in our methods, and critical in our judgements. (I appreciate the irony of citing an anecdotal phenomenon)

It is a planner trope that presentations should contain selective quotes to inspire or frame an argument, and I’ve written in the past about how easily these can contradict one another. A framing device is one thing; a tenet of an argument is another. As such, it is imperative that we use data as evidence and not as illustration.

sk

Image credit: http://www.flickr.com/photos/etringita/854298772/

What do we mean by engagement?

Engagement is one of those nebulous buzzwords that often get thrown into business or strategy conversations because it sounds like something that should be sought after. To be encouraged, measured and reported on. Yet it needs to be defined before any of these can occur. And few of the many articles on engagement actually do so.

When Anne Mollen spoke at the MRG Conference last month, she outlined three schools of thought on engagement:

  • The behaviourist school that views “Engagement” as the outcome of a complex algorithm of behavioural footprints
  • The experiential school that views engagement as something that happens in the mind of the consumer
  • The hybrid pragmatist school that asserts that consumer engagement is a psychological state, consistent with certain behaviours, and dependent on environmental context.

Most mentions of engagement I have seen tend to be in relation to behaviour, principally because this is the easiest to measure. Whether in the models posited by Forrester, Eric Peterson or one of the myriad social media engagement models, these tend to involve metrics such as frequency (e.g. visits per day), depth (e.g. time spent or number of pages) and actions (e.g. clicks).
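
Those behavioural metrics are straightforward to compute from a session log. A minimal sketch, with invented field names and made-up example data:

```python
from datetime import date

# hypothetical per-session visit log (field names are invented for illustration)
visits = [
    {"day": date(2011, 11, 1), "pages": 5, "seconds": 300, "clicks": 2},
    {"day": date(2011, 11, 1), "pages": 2, "seconds": 90,  "clicks": 0},
    {"day": date(2011, 11, 2), "pages": 8, "seconds": 600, "clicks": 5},
]

def behavioural_metrics(visits):
    """Frequency (visits per active day), depth (pages / time per visit)
    and actions (total clicks), as in the behaviourist models above."""
    days = {v["day"] for v in visits}
    n = len(visits)
    return {
        "frequency": n / len(days),
        "depth_pages": sum(v["pages"] for v in visits) / n,
        "depth_seconds": sum(v["seconds"] for v in visits) / n,
        "actions": sum(v["clicks"] for v in visits),
    }

metrics = behavioural_metrics(visits)  # e.g. frequency = 3 visits / 2 days = 1.5
```

The ease of this computation is precisely the behaviourist school’s appeal – and, as the rest of the post argues, its limitation.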

The Advertising Research Foundation belongs to the experiential school. They define engagement as “turning on a prospect to a brand idea enhanced by the surrounding context”, which is fairly meaningless. The AOP’s engagement study also falls into this camp, using surveys to understand the key emotions underpinning their perception of engagement.

Given that we are now in an era of abundance thinking rather than scarcity thinking – one of greater customer choice and greater prominence for word of mouth – the idea of creating “engaged” customers/users as brand advocates is more widespread.

But before a programme of engagement can be integrated within a company, several big questions need to be answered:

  • Why is engagement important? How does it link to the overall business objectives?
  • What should engagement seek to achieve?
  • What do we mean by engagement (actions? emotions?)? What don’t we mean by engagement (satisfaction? advocacy?)?
  • Are we thinking about tactical engagement (engagement per interaction) or strategic engagement (overall engagement)?
  • Are we more interested in engagement with content, channels, platforms, individual brands or the overall masterbrand?
  • How can engagement be measured and reported upon? Is our conception of engagement resulting from what is possible to be measured, or is it based on what is most important to us?

If these questions can be answered, then the organisation in question is already pretty advanced. However, there are many more questions that then need to be considered, such as:

  • Does engagement have degrees, or is it binary engaged/not engaged?
  • Can engagement be negative as well as positive?
  • Is engagement averaged, or is the audience segmented?
  • Is our definition of engagement unique to our organisation, or can it be benchmarked against competitors?
  • Is engagement a single metric or a collection? If a collection, are they combined and weighted into a single score?
  • Does engagement mean the same thing across different screens, platforms, audiences, products?
  • How does engagement vary by need state (e.g. browsing vs habit)?
  • Should different types of customer/user be conceived differently?
  • Can the engagement metrics be gamed? How can this be avoided?

These 15 or so questions only reference part of the challenge of measuring engagement, and don’t even touch upon how it can be built into strategies. It is a very complex area, and as yet I’m not aware of anyone that has a definitive answer.

sk

Image credit: http://www.flickr.com/photos/thecaucas/2232897539/

Notes from MRG Conference 2011

A couple of weeks ago I took part in a short session at the 2011 Media Research Group Conference, which took place in London. I took some notes during the day (mainly with the earlier speakers). They are below and in chronological order, though firstly a quick exec summary:

The four papers I enjoyed most were

Synthesising these talks, my key take-aways were:

  • Run lots of prototypes and versions
  • Ask audiences what they think, rather than just infer from behaviour
  • Set up the tests in such a way to drive people towards the behaviours/answers you desire
  • Be aware of contextual reasons that might provide counter-intuitive answers

And now for the detail…

 

Tim Harford – Problem Solving In a Complex World

Tim Harford, author of books such as The Undercover Economist, initially walked through examples of problem solving such as

  • Archie Cochrane – a Prisoner of War who conducted experiments to find out what was making people ill in the camp
  • Thomas Thwaites – a student who took 9 months and spent over £1,000 trying to make a toaster from scratch, and even when cheating largely failed
  • César Hidalgo – who has mapped 5,000 product categories. But Wal-Mart has 100,000 types of product in a store, and in New York there are probably 10bn

His point was around the God Complex – the conviction that no matter how complex something is or how little data is available, you know the answer. It is dangerous and yet you see it everywhere.

We need to step away from the god complex as we can’t solve things in one step. Instead, we gradually learn over time through trial and error.

For instance, Unilever wanted to create a new nozzle for their detergent production. They hired a mathematician who failed to sufficiently improve it. Instead, they created ten random computer-generated models and picked the best. They then created ten variations of this. They repeated this process twenty times. Ultimately the nozzle was much improved, although they don’t know why.
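
The nozzle story describes a basic evolutionary search: generate variations of the current best, keep the fittest, repeat. A toy sketch – the one-dimensional “design” and its fitness function are illustrative stand-ins for the real nozzle geometry:

```python
import random

def evolve(fitness, mutate, seed_design, generations=20, children=10, rng=None):
    """Trial-and-error search in the spirit of the nozzle story: each
    generation, create variations of the current best and keep the fittest."""
    rng = rng or random.Random(0)
    best = seed_design
    for _ in range(generations):
        candidates = [mutate(best, rng) for _ in range(children)]
        best = max(candidates + [best], key=fitness)
    return best

# toy stand-in: the "design" is a single number and fitness peaks at 7
fitness = lambda x: -(x - 7) ** 2
mutate = lambda x, rng: x + rng.uniform(-1, 1)
result = evolve(fitness, mutate, seed_design=0.0)  # climbs towards 7
```

Note that, as with the nozzle, the loop finds a good design without ever explaining why it is good.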

Business successes are random processes – there is no silver bullet for the perfect CEO or strategy. However, instilling a start-up culture allows experimentation to see what is best. Google has a target failure rate of 80%, but this failure has to be quick, rather than being too big to fail. In order to do this, we have to overcome loss aversion.

In the BBC documentary about Fermat’s last theorem, Goro Shimura said in reference to his colleague Yutaka Taniyama:

Taniyama was not a very careful person as a mathematician. He made a lot of mistakes, but he made mistakes in a good direction so eventually he got the right answers. I tried to imitate him but I found out that it is very difficult to make good mistakes.

Tim fielded a couple of questions relating to popular business books

  • Tom Peters’ In Search of Excellence profiled many companies to see what made them successful, but three years after the book was published around one third of them were in trouble (e.g. Wang, Atari). Were they actually excellent, or is excellence fleeting?
  • James Surowiecki’s Wisdom of Crowds is often misunderstood, as he himself said that it only works in specific situations – when expert judgement is no help and where the crowd can be polled independently (Duncan Watts has shown how randomness becomes important when things are dependent)

Claire McAlpine – Mediacom – How are you integrating behavioural economic thinking into your work?

Inspired by thinkers such as Steven Johnson (Where Good Ideas Come From) and Chip & Dan Heath in addition to Thaler & Sunstein etc.

Hunches are where we collide ideas – these could be our ideas over time, or our ideas with other people’s. For instance, the Gutenberg printing press was inspired by the wine press.

We need to overcome cognitive biases (such as picking the second cheapest wine on the list) and recognise things such as information deficit and availability bias. We are more Homer Simpson than Spock – we are not rational agents. We may have good intentions but these can quickly be forgotten if we are in a “hot state”.

There are three stages to integrating behavioural economics

  • Identifying the behavioural context
  • Identifying the behavioural journey
  • Identifying choice context and ultimately creating choice architecture

Claire gave the example of Special Constable recruitment. By identifying two choice contexts – career and inspiration – Mediacom were able to frame their media strategy (both in terms of creative and placement) for two separate audiences.

By understanding how behaviours differ, we can seek out how to encourage the desirable ones to be replicated. The ultimate goal is to be able to switch the default behaviour, which we often resort to as a mental shortcut.

 

Mark Barber (RAB) and Jamie Allsopp (Sparkler) – Media & the Mood of the Nation

Mark and Jamie went through the findings of this research, which covered 3,500 smartphone survey responses from 1,000 people, qualitative depth interviews and diaries, and EEG brain scan experiments.

The research came about from the general move in advertising from systematic (logical) to heuristic (emotional) processing, and observations that advertising works better in mood-enhancing environments.

The findings were framed using James Russell’s Circumplex Model of Affect, which places results on two -5 to +5 scales of arousal (energy) and valence (happiness).
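
As a rough sketch of how a mood reading sits on those two axes – the quadrant labels here are my own shorthand, not part of Russell’s model:

```python
def circumplex_quadrant(valence, arousal):
    """Place a mood reading on the two -5..+5 circumplex axes:
    valence (happiness) and arousal (energy). Quadrant labels are
    illustrative shorthand, not Russell's terminology."""
    if not (-5 <= valence <= 5 and -5 <= arousal <= 5):
        raise ValueError("both scores are on -5..+5 scales")
    if arousal >= 0:
        return "energised-positive" if valence >= 0 else "energised-negative"
    return "calm-positive" if valence >= 0 else "calm-negative"

mood = circumplex_quadrant(valence=3, arousal=4)  # a happy, energetic reading
```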

Radio was compared to both TV and online. While all displayed rises in happiness and energy, radio showed the highest average increases in total and across the most dayparts. While this may be caused by other activities people are doing while they listen to the radio, it nevertheless means that people are in a more receptive frame of mind when it comes to processing advertising messages.

 

Becky McQuade (Sky) and Anne Mollen (Cranfield School of Management) – Online Engagement: We might be getting there

Anne said that there are two schools of thought with engagement

  • It is bankrupt as it is not a metric since it is too abstract and not credible (unlike retention and acquisition)
  • It is viable (she is in this camp)

The academic studies in this area have been focused on perceived interactivity and telepresence (her paper is here), but it hasn’t as yet properly been joined up to commercial requirements.

Her definition of engagement is “cognitive and affective commitment to an active relationship” and requires

  • Utility/relevance
  • Pleasure/enjoyment
  • Dynamic and sustained cognitive process

Using Survey Interactive, they ran an online pop-up survey with 60 engagement statements (reduced from an original list of 150) on 12-point scales across 14 Sky websites (and on a NetMums panel), resulting in over 12,000 responses. This found four drivers of engagement. From the largest correlation to the smallest, these are:

  • Cognitive processing e.g. enjoyment
  • Temporal needs e.g. hedonic and utilitarian value (what we need and want)
  • Self-congruence (identity with the brand)
  • Social identity (context, environment, peer to peer communication)

Conversely, engagement isn’t

  • A measure of human behaviour – there was low correlation between engagement and time spent, frequency and recency
  • Behavioural footprints (actions such as subscriptions or likes) – there was only a small positive correlation among a subset of those engaged
  • Activism (such as loyalty) – engagement is context dependent and not a behavioural type
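
Correlation claims like the ones above can be sanity-checked with a plain Pearson coefficient – whether the study used exactly this statistic is my assumption, and the example numbers below are made up:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# e.g. hypothetical engagement scores vs hours spent on site
r = pearson([7, 8, 3, 9, 4], [2.0, 1.5, 2.5, 1.0, 3.0])  # near zero or negative: low correlation
```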

The study was specific to advertising, and found those engaged had higher ad recall, improved core message delivery, more favourable opinions towards the brand and a higher likelihood to purchase (but not higher purchase intent).

Becky and Anne closed by saying that for engagement to be viable it has to have a close relation to ROI and KPIs. Their NetMums study showed engagement has an impact on trust, satisfaction, loyalty and ad responsiveness, and has a high positive correlation with the Net Promoter Score.

Anne isn’t linked exclusively to Sky and will talk to others on a confidential basis around her engagement scale, but given academic competition to publish there is only a limited amount she can say publicly.

 

Stuart McDonald (News International) and Euan Mackay (Kantar Media) – Show Me the Money: Proving the value of tablets

Given that the results of the research are being used to inform News International’s commercial strategy, they didn’t really go into how value was proved. The research was conducted among News International’s subscriber base, and tested interactive advertising on a beta app (The Times app doesn’t yet have advertising) against a premium engagement index, comprising perceptions of an ad being

  • Memorable
  • Relevant
  • Engaging
  • Trustworthy
  • Premium

 

Richard Curling (Google) – YouTube Skippable Pre-Rolls: Measuring the power of choice

Given “Hurry sickness” – the malaise where people feel short of time so perform tasks faster and get flustered by any delays – we’re increasingly looking for shortcuts.

YouTube “true view” means that users get to choose their adverts – if they don’t like an advert, they can skip it. Advertisers only pay for adverts that are viewed all the way through. Google interpret a high view rate as a high quality score, and this will factor in alongside price when bidding in an auction for advertising space. Thus, high quality ads are rewarded (though arguably very low quality advertising can benefit from a lot of free, interrupted views).
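
The bid-times-quality ranking described above can be sketched as follows – the field names, numbers and the use of view-through rate as the quality proxy are illustrative assumptions, not Google’s actual auction formula:

```python
def rank_ads(candidates):
    """Order candidate ads by bid x quality, where quality is proxied here
    by view-through rate (views to completion / impressions)."""
    def score(ad):
        return ad["bid"] * ad["completed_views"] / ad["impressions"]
    return sorted(candidates, key=score, reverse=True)

# made-up numbers: a cheap, well-watched ad beats a pricier, oft-skipped one
ads = [
    {"name": "A", "bid": 2.00, "completed_views": 100, "impressions": 1000},
    {"name": "B", "bid": 1.00, "completed_views": 400, "impressions": 1000},
]
winner = rank_ads(ads)[0]["name"]  # "B": 1.00 * 0.40 beats 2.00 * 0.10
```

This is the sense in which high quality ads are rewarded: a higher view rate lets an advertiser win the auction with a lower bid.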

Using Ipsos MediaCT, Google tested the effectiveness of these ads using biometrics (heart rate, respiratory rate, skin conductance, motion – via Innerscope), depth interviews and eye-tracking. These found that both skipped and “true view” ads scored higher on their engagement metrics, though the true view ads scored highest. However, this wasn’t as clear-cut as you might expect – people opting in might have higher expectations and so could be harder to please. Conversely, the engagement of people forced to watch an ad might pick up towards the end as they get ready for their content to start.

Richard’s recommendations for advertisers were to

  • Entertain the user, since you are the content
  • Be clear, and support user choice
  • Embrace “natural” targeting

 

Afternoon sessions

I was paying less attention to these, since I was mentally rehearsing my speech

  • Ross Williams and Becky from Ipsos MediaCT presented their “Big Brother Research – Who’s Watching Who?”, which combined social media monitoring of Big Brother properties with Facebook polls. While Big Brother wasn’t as big as other properties, it had an 80:20 ratio of comments to likes on Facebook (indicating an engaged audience), while alternative programmes had the opposite ratio
  • Steve Cox of JC Decaux presented “Airport Live” – following a small number of passengers at both their departure and arrival airports to see what they were noticing
  • Matthew Dodds of Nielsen and Nick Metcalfe of the Telegraph presented “Telegraph Print + Net Online Multiplier study” which took 5 groups of people (Telegraph print readers, Telegraph online readers, readers of both, non-print readers with matched demographics, non online readers with matched demographics) from UKOM to test uplift in advertising measures
  • The Good The Bad & The Ugly of Media Research was hosted by Max Willey and featured myself, Dave Brennan, David Fletcher, John Fryer, Stef Hrycyszyn and Loraine Cordery talking about whatever we wanted to for three minutes. David Fletcher won the prize, for his tale of why people think they want online dashboards but don’t

 

Industry Updates

  • BARB is looking into a non-linear database that would report on archive programmes on demand, and catch-up from longer than seven days after transmission. They will also evaluate, and possibly publish topline results of, the TV+online data
  • POSTAR – now have tube and bus data, and are looking at GPS devices to see how people move around. This is being validated and they hope to get it into a reporting system soon
  • NRS – concentrating on fusion with UKOM data, but hope to get more granular data and move online in future
  • RAJAR – moving the diary online, and continuing to explore the viability of passive meters
  • IPA – bedding down touchpoints. Touchpoints 3 included word of mouth, mobile internet, social media, gaming and on-demand. Touchpoints 4 will bring in tablets and apps, and change from a device-first structure to a content-first structure. It now has 60 subscribers (including each of the top 20 agencies) and has launched in the US. They are also piloting an app to go alongside the diaries
  • UKOM – the past year has been about stabilisation after some data issues. The contract is currently out to tender and whoever is successful (they would take over in January 2013) would look to measure all devices and locations (i.e. beyond home/work fixed internet to include mobile and video)

sk

My MRG Conference 2011 speech

At the MRG Conference 2011 (pdf link to programme here) I was given a three-minute slot to talk about anything I wanted under the banner “Six industry speakers share the good, the bad and the ugly from our industry”.

This is (roughly) what I said:

Good afternoon everyone. As Research Manager for Mobile, Social and Syndication at the BBC I’m understandably enthusiastic about these areas. So today I’m going to take the first area I mentioned – mobile – and explain how its characteristics make it appropriate as a research platform.

The first is universality – mobile has more coverage than any other research method. A big claim, maybe, but Ofcom stats say that

  • 77% of households have PC-based internet
  • 85% of adults have a landline
  • 91% of adults have a mobile, and this rises to 98% among 16-54 year olds

More than 91% of the UK might have a home and so could in theory be reached door to door, but realistically, once you factor in accessibility and interviewer safety, mobile will have the largest potential audience for research – though the key word there is potential; there is still the small hurdle of getting the audience’s contact details.

The second characteristic I want to mention relates to proximity. More than any other platform, mobile is our go-to device. It is nearly always turned on, it is nearly always on our person, and so when we have some free time or are bored it is the first thing we turn to – in fact I can see a few phones in the audience now. This captive audience on mobile has massive potential for research purposes, though we need to ensure what we ask them to do is both interesting and relevant. Easier said than done, perhaps.

But, this is predicated on the notion that we need our respondent to interact. We can do many great things on mobile – video diaries, photos, status updates etc and in real-time. But one of the real strengths of mobile is its latency. Why ask people what media they are consuming when mobile sensors can match sound to TV and radio; record web browsing, use GPS to plot outdoor reach and time spent; and soon use near field communication to record sales of newspapers and magazines. Admittedly, not all phones can do this just yet, and privacy is obviously an issue, but again, there is big potential.

The young will drive this, for mobile is a youthful medium – 16-24s say that mobile is the medium they would miss the most. These behaviours might not be mainstream yet, but a dozen years ago owning a mobile wasn’t mainstream, and look where we are now. But there is also a second aspect to this point around youth, and that is that the medium is nascent. We’re still learning all the time – no one can say they’ve cracked mobile in terms of capturing and utilising data. This is a huge opportunity for research agencies both big and small to move into.

This is an opportunity because it doesn’t yet exist. There is plenty of innovation at the edges, but the market isn’t yet mature. So while I’ve identified several benefits to mobile research, they come with caveats and are more theoretical than practical. So as much as I want to say mobile is good, I can’t really. I’ve talked about the universality, the go-to nature, the latency and the youthfulness. That’s U.G.L.Y and it ain’t got no alibi, it’s ugly.

sk

People like people

Senior business folk like numbers. Facts and statistics to base decisions on and to evaluate performance. It’s both rational and sensible.

But occasionally, it is beneficial not to be rational or sensible, as the Apple “Think Different” campaign so memorably reminded us.

Organisations should have plenty of talented members capable of coming up with creative and innovative strategies to immediate and potential business concerns.

But when you want the opposite to rational or sensible, the best thing might be to consult the public. Whether consumers, users, viewers, prospects, advocates, rejecters, indifferents, promoters, lovers, haters or otherwise, each person will have a unique take on a situation.

Each person has their own behaviours, needs, habits, lifestyle, attitudes, hopes, fears and opinions which can relate directly or indirectly to an organisation, market or industry.

And every so often it is beneficial for senior business folk to hear these. To be reminded, inspired, provoked, amused, horrified, informed, affirmed or corrected.

What they hear will be

  • Something they already knew, and should respond to
  • Something they already knew, but shouldn’t respond to
  • Something they didn’t know, and should respond to
  • Something they didn’t know, but shouldn’t respond to

All are valuable. Whether delivered through ethnographic videos, photo logs, social media listening, user-generated content competitions or through other means, each new piece of stimulus helps evolve the thinking of those making the key decisions.

Facts and numbers are powerful. But people are also powerful. Even an opinion heard many times before, when delivered by a different voice in an unusual situation, takes on new context and new meaning.

Therefore, we should strive to complement our rational decision-making with the creative expression that comes from voices that may not be found in the board room.

sk

NB: Inspiration for the post’s title is from the Riz MC song of the same name (who, to my knowledge, is the first and thus far only one of my university peers to achieve public success – measured by having a Wikipedia page). The lyrics have nothing to do with the content above, but the title led me to start thinking in this direction.

 

Learning from Steve Jobs

Steve Jobs' fashion choices over the years

Understandably, technology news over the past week has been dominated by Steve Jobs’ resignation as Chief Executive from Apple. While he will stay on as Chairman, Tim Cook – former Chief Operating Officer – will take the helm.

There have been many wonderful pieces on Jobs (though some do read like obituaries) – these from Josh Bernoff and John Gruber being but two – which cover many angles – whether personal, professional, industry or other. I’m neither placed nor qualified to add anything new but I have enjoyed synthesising the various perspectives. Yet invariably, the person saying it the best was Jobs himself:

  • He knew what he wanted – “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking” (Stanford commencement speech)
  • He felt he knew better than anyone else – “The only problem with Microsoft is they just have no taste. They have absolutely no taste. And I don’t mean that in a small way, I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their products.” (Triumph of the Nerds)
  • He, along with empowered colleagues, relentlessly pursued this – “You have to trust in something — your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.”(Stanford commencement speech)
  • He was a perfectionist – “When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.” (Playboy)

NB: The quotes above were taken from this Wall Street Journal article.

In Gruber’s words, “Jobs’s greatest creation isn’t any Apple product. It is Apple itself.”

In 14 years he took Apple from near-bankruptcy to – briefly – the biggest company in the world by market capitalisation. He has been enormously successful. And while possibly unique – his methods run counter to textbook advice on how to run an organisation – a lot can be learned from him.

The thing I have taken most from this is Jobs’ uncompromising nature. If people weren’t on board with him, then to hell with them. This of course led to his dismissal from Apple in 1985. And his dogged focus on his preferences has informed his fashion choices over the years, as the above picture illustrates.

It might seem strange for a market researcher to take this away, particularly since research is stereotyped as decision-making by committee – something which Jobs despised:

  • “We think the Mac will sell zillions, but we didn’t build the Mac for anybody else. We built it for ourselves. We were the group of people who were going to judge whether it was great or not. We weren’t going to go out and do market research. We just wanted to build the best thing we could build.” (Playboy)
  • “For something this complicated, it’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” (BusinessWeek)

Unfortunately, this stereotype is often true, and I have been guilty of perpetuating it on occasion.

One example was when trying to get a project up and running (on a far smaller scale than rescuing Apple, admittedly). With a lot of stakeholders, I tried to include as many of their wishes and requests as possible. The end result was bloated, incoherent, unfocused and over deadline. It wasn’t one of my finer moments.

Rather than bolt everything on, I should have appraised all the input and only included that which remained pertinent to the core objective. I lost authorship of the project, and it suffered.

While there will be counter-arguments, many public failures do seem to be the result of committee-made decisions. Two bloated, incoherent examples that immediately spring to mind are Microsoft Office 2003 and the Nokia N96. Conversely, there are many examples of visionary micro-managing leaders that have driven a company to success – Walt Disney, Ralph Lauren and Ron Dennis to name but three.

I am a researcher rather than a consultant, and so don’t intend to fully adopt this approach. However, it appears that there is a greater chance of success when primary research or stakeholder input informs, rather than dictates, the final decision.

Steve Jobs knew this. His flagship products weren’t revolutionary (IBM, Microsoft, Nokia and the like were the primary innovators). But his genius was in refining a variety of inputs and stimuli, and moulding them into an expertly designed final product.

And that is something to aspire to.

sk

Overhauling the agency pricing model

Agencies are potentially losing out on beneficial and worthwhile commissions due to a fundamentally flawed approach to pricing their work.

(Note: My experience with pricing is almost exclusively tied to research agencies but I think this is broadly applicable to all industries).

Projects are commissioned when there is agreement between what an agency is willing to offer, and what a client is willing to pay.

My issue is that both of these components are based on cost.

Instead, they should be based on value.

£1 price tag

The agency side

The current model

Looking at the agency side first, it is clear that the focus upon cost makes the process far more transactional than it should be.

Using a dodgy equation (channelling John V. Willshire, who does this sort of thing far better):

P = d + αi + βt + p, where P ≤ B

In English: price = direct costs + a proportion of indirect costs/overheads + an estimate of the time spent + profit, where the price is less than or equal to the client budget.

(Alpha has arbitrarily been assigned to mean a proportion, and beta an estimate.)

d + αi + βt can be simplified to C for costs. Thus:

P = C + p, where P ≤ B

Explaining the equation (this can be skipped if you trust me)

Of course, this is an oversimplification (though if agencies don’t use timesheets then the equation loses the time segment and becomes even simpler), but it does capture the majority of the considerations.
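The cost-plus equation above can be sketched as a toy calculation. Every figure below is hypothetical, purely to show how the components combine (note how the price lands exactly on the stated budget, as discussed further down):

```python
# Toy cost-plus pricing, following P = d + αi + βt + p, with P ≤ B.
# All figures are invented for illustration.

direct_costs = 12_000          # d: fieldwork, incentives, travel
indirect_costs = 40_000        # i: total overheads for the period
alpha = 0.05                   # α: share of overheads allocated to this project
hours = 120                    # estimated hours on the project
hourly_rate = 75               # blended hourly rate
beta_t = hours * hourly_rate   # βt: estimated time cost
profit = 2_000                 # p: desired profit

costs = direct_costs + alpha * indirect_costs + beta_t  # C = d + αi + βt
price = costs + profit                                  # P = C + p

budget = 25_000  # B: the client's stated budget
assert price <= budget, "price must not exceed the client budget"
print(f"C = £{costs:,.0f}, P = £{price:,.0f}")
```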

Competitor pricing will be a factor. Market rates are to an extent set by those that have offered the service – an agency will seek to match, undercut or add a premium to this depending on its relative positioning. This is reflected in the equation through time (premium agencies will generally spend longer on the delivery) and in desired profit.

An agency’s price will miraculously match the stated client budget (or, in some instances, come in £500 under, which I don’t understand since a) I thought psychological pricing had been phased out and b) that spare £500 is not going to cover any contingencies, expenses or VAT that aren’t included in the cost).

However, there are (at least) two things that aren’t yet factored in:

  • Opportunity cost – the cost in terms of alternatives foregone. This isn’t included since the only time you can really be sure that new requests for proposals appear is at the end of the financial year. Otherwise – for ad-hoc project work at least – there is no way to accurately predict the flow of work.
  • Competitive bidding – where profit is multiplied by the expected success rate to give expected profit. While guesses can be informed by previous success rates, I don’t rate this approach, as a) closed bidding processes mean competitor bidding strategies are unknown and b) perceived favourites are just that – perceptions (for instance, an incumbent may be secretly detested)

So what does this mean?

Ultimately, an agency will only submit a proposal if they think the profit they will make is worthwhile. The above equation can be reframed to reflect this:

p = P – C, where P ≤ B

Or profit is price minus cost.

And this is where my main problem is with agency pricing. Profit is expressed purely financially.

Undoubtedly, finance is crucial. An agency requires cashflow to operate; it cannot survive solely on kudos. But it shouldn’t be the sole consideration.

What I think should be included

Value should be added to the equation.

An agency should think not only about the financial margin, but about the business margin.

In addition to revenue, an agency can receive:

  • Knowledge – will the project increase knowledge of markets, industries, processes or methodologies that can be applied to other projects in future? This can be used to improve the relevance of business proposals, or be incorporated into frameworks of implementation
  • Skills – is the process repeatable, which can create future efficiencies? Does the project offer opportunities for junior staff to train on the job? If so, savings in training and innovation can be made
  • Reputation – will the results of the project be shared publicly, whether in testimonials, the trade press, on the conference circuit or otherwise? If the agency is fully credited, there is PR value in raising its profile and attracting new business
  • Follow-up sales – will the project lead to additional work, either repeating the process for another aspect of the business or in up-selling follow-on work? Again, this can save on business development and can offer some future financial assurances (which will influence the amount of money borrowed and subsequent interest paid)
  • Social good – perhaps not as relevant for those in commercial sectors, but will the project create real and tangible benefits for a community – referencing Michael Porter’s concept of shared value

Thus, project gains are far more than financial. These intangible benefits should be applied as a discount to financial profit.

Dodgy algebra (this can be skipped unless you want to pick holes in my logic)

The net gain would be:

N = p + β(k + s + r + f + g)

That is, the net gain from a project is profit plus the estimated gains in knowledge, skills, reputation, follow-up sales and social good (note that these factors can be negative or zero as well as positive). These can be simplified as intangibles, I:

N = p + I
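One way to make the intangible term concrete is to put a notional monetary value on each benefit and discount the lot by a confidence estimate. The scores and the β value below are entirely invented, just to show the arithmetic:

```python
# Net gain N = p + I, where I = β(k + s + r + f + g).
# All intangible values and the confidence factor are illustrative guesses.

profit = 2_000  # p: financial profit from the project

# Estimated intangible gains (notional £ values; can be negative or zero)
knowledge = 1_500    # k: new sector knowledge reusable in future proposals
skills = 1_000       # s: repeatable process, on-the-job training for juniors
reputation = 500     # r: public case study / PR value
follow_up = 2_000    # f: likely repeat commission
social_good = 0      # g: not relevant for this client

beta = 0.5  # β: confidence that the estimated gains will materialise

intangibles = beta * (knowledge + skills + reputation + follow_up + social_good)  # I
net_gain = profit + intangibles  # N = p + I
print(f"I = £{intangibles:,.0f}, N = £{net_gain:,.0f}")
```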

These intangibles offer alternatives to financial profit. Increasing the amount that can be gained effectively increases the budget:

P = C + p, where P ≤ B + I

Assuming that an agency won’t offer psychological pricing, it will price at the top of this effective budget, so:

B + I = C + p

Substituting the budget back in for the price (P = B) and rearranging gives:

P = C + p – I

However, this assumes that the entire intangible surplus is passed on to the client as a lower price. That shouldn’t be the case, but equally the agency shouldn’t keep all of the surplus. Instead, I propose that a proportion of the benefit is passed on to the client via a discount (in order to make the agency more competitive and improve its chances of success).

Value is therefore a function of profit and discounted intangible gain:

V = fn(p – ɣI), where gamma (ɣ) is the proportion of the intangible gain passed on as a discount
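Putting the pieces together, the discounted-value price can be sketched as below. The choice of ɣ is a judgment call; 0.4 here is arbitrary, and all figures are illustrative:

```python
# Value-based price: V = p − γI, so P = C + V = C + p − γI.
# A project the agency is keen on (positive I) is priced below cost-plus;
# an undesirable project (negative I) needs a higher price to be worthwhile.

def value_price(costs, profit, intangibles, gamma=0.4):
    """Price under P = C + p − γI, where γ is the share of
    intangible gain passed to the client as a discount."""
    value = profit - gamma * intangibles  # V = p − γI
    return costs + value                  # P = C + V

# Desirable project: intangibles discount the price
print(value_price(costs=23_000, profit=2_000, intangibles=2_500))   # → 24000.0

# Undesirable project: negative intangibles push the price up
print(value_price(costs=23_000, profit=2_000, intangibles=-2_500))  # → 26000.0
```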

What this means – the conclusions bit

All of this long-winded (and probably incorrect) algebra effectively means that the equation

P = C + p

becomes

P = C + V

Value is substituted for financial profit.

I believe that the price an agency charges should be a reflection of its costs and the overall value it receives, both in tangible revenue and intangible benefits. Some of these benefits should be passed on to the client in the form of a price reduction, in order to make the bid more competitive and improve the chances of success.

This also works in reverse. If there is a project that an agency isn’t enthusiastic about – it might be laborious or for an undesirable client – then the intangibles are negative, and so profit needs to increase in order to make the project worth undertaking (in a purely financial equation, this means costs will need to fall within a fixed price/budget).

I should also make it explicit that I am not advocating a purely price-driven approach to bidding. Other factors – communicable skills and expertise, vision and so forth – are still vital. The reality is that markets are highly competitive, and price (or more accurately, the volume of work that can be delivered within a fixed budget) will be a large factor on scorecards used to rate bids.

The client side

This section doesn’t require algebra (fortunately).

My main issue with client budgeting is that it concentrates only on purchasing outputs. While tangible, these outputs (at least in research) are a means to an end. A client may want eight groups and transcripts, or a survey and a set of data tables, but the client doesn’t want these for their own sake. They are purchased to provide evidence to validate or iterate a business process.

Therefore, I believe the client budget should be split into two.

  • The project budget – the amount that a client is willing to pay for the tangibles – the process required to complete the delivery of the project. These outputs are outcome-independent.
  • The implementation budget – which is outcome-dependent. The complexity or implications of a project are often unknown until completion. A project could close immediately, or it could impact critical business decisions in nuanced ways. If the latter, additional resource should be assigned to ensure the business can best face any challenges identified.
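The two-part split could be represented as a simple structure, with the implementation portion only drawn down once the outcome is known. The names and the complexity scale below are my own invention, purely to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class ClientBudget:
    """Hypothetical two-part budget: a fixed project budget for the
    tangible outputs, plus an implementation budget drawn down in
    proportion to the complexity of the outcome."""
    project: float         # outcome-independent: fieldwork, analysis, reporting
    implementation: float  # outcome-dependent ceiling

    def total_committed(self, outcome_complexity: float) -> float:
        """outcome_complexity in [0, 1]: 0 = project closes immediately,
        1 = full consultancy support needed."""
        return self.project + self.implementation * outcome_complexity

budget = ClientBudget(project=25_000, implementation=10_000)
print(budget.total_committed(0.0))   # project closes immediately → 25000.0
print(budget.total_committed(0.75))  # significant implementation → 32500.0
```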

The majority of costs are incurred in the project, but the real value to the client comes in the implementation. This needs to be properly reflected; it currently isn’t.

Effectively, I propose that a client should commission an “agency” to manage the project and a “consultancy” to manage the implementation. They may or may not be the same organisation.

Wrapping up

There are undoubtedly things I have overlooked, and I’m pretty sure my algebra is faulty.

However, I believe my underlying hypothesis is valid. The current agency pricing model is flawed and needs overhauling because:

  • Agencies ignore non-financial benefits
  • Clients ignore implementation requirements

Both of these are easily correctable, and these corrections can only improve the process.

sk

Image credit: http://www.flickr.com/photos/chrisinplymouth/3222190781