  • About the blog

    This is the personal blog of Simon Kendrick and covers my interests in media, technology and popular culture. All opinions expressed are my own and may not be representative of past or present employers.
The selective truth

There are two sides to every coin, but nuance is difficult to convey in a headline or summary. A clear and decisive statement is far likelier to catch the eye. It is important to question the motives of both the source of the information and the reporting when judging its veracity.

I’ve noted this during my experiment to alternate my news sources. Similarly, I’ve tracked the early responses to a recent project I’ve worked on with interest.

SIDENOTE: The project is Brandheld – an extended study into consumer perceptions of the mobile internet, and both their current and intended behaviour. The press release is here and a topline slide deck will be released shortly. If you want more information about the report, contact me at [firstname]@essentialresearch.co.uk [/sales pitch]

The press release for the project can basically be split into two sections. The first section is a reality check, noting that adoption of the technology is perhaps lower than those in the London-centric media sector might think. The second section is a call to arms, suggesting a pathway to make the mobile internet seem more relevant to the mainstream.

SIDENOTE: The comments on The Register article nicely illustrate the reason for our first section. Most comments seem to fall into the “I do this, therefore everyone else must be doing it as well” category.

Several of the outlets picking up the story (to date) are only reporting or emphasising one of these sections. The reality check grabs the attention, and the call to arms supports the relevant sectors.

There’s nothing wrong with this – reporting a single side makes it easier for readers to digest, while many of us have an agenda we seek to push and any supporting evidence we can get is gratefully received and promoted.

This is fine for external communications and reporting. But for internal knowledge, it can be dangerous to be reliant on one side of the story.

The best clients I have worked with are those that recognise that while research may be commissioned in the hope of proving something, it is necessary to start with the unbiased and unvarnished truth, even if that might be difficult to hear. Even if only half the findings are externally reported, the other half should still be included in internal briefings.

This requires a strength of conviction if there is pressure coming down the chain of command for a particular result, but there is clearly a need to avoid self-delusion. If the results are “bad”, it should be made clear why. If the desired outcome is achieved, there will almost certainly still be caveats. And these caveats are important to understand when designing or promoting a strategy.

A similar principle is required when collating secondary research. Even if the findings are sourced or quoted as evidence in external communications, it is important to understand the biases or reliability of the data for your own internal knowledge. Recognising the nuances or limitations of something can only assist your efforts to improve it.

News articles remain a fantastic way to distribute information, and are often the first place that research or data is discovered. Nevertheless, it is vital to go back to the original source if you plan to do something with the findings. That way, an informed decision can be made about the accuracy or reliability of the information (for what it’s worth, Brandheld is an independent study conducted with no prior agenda aside from us thinking the mobile internet would be an interesting area to research). Even if this doesn’t affect the way the information is collated, it is still an important facet to consider.

sk

Image credit: http://www.flickr.com/photos/colin-c/200867665/


Putting PR research in its place

Ben Goldacre’s Bad Science column frequently exposes the less-than-stellar research findings inflicted upon the general populace. He’s always worth reading, and in the past has even inspired me to vent at some of the ridiculous claims.

But his latest column on PR survey data has, surprisingly, caused me to reconsider my stance.

In it, he compares the recent advertorials that the Express ran (which were outside of the rules) to these sorts of surveys. This led to a response from one of the main perpetrators – One Poll – and they (sort of) agreed.

In their own words, “We’ve been providing branded, stat-based news copy to the nationals for more than ten years now. Why do you think we do it? Everyone is aware this is a branding exercise…”

Unlike the advertorial row, the transaction is non-monetary and thus legal. The journalist is essentially outsourcing part of their responsibility. In some ways, it is like a landlord taking on a lodger, with the lodger earning their keep through “chores” rather than paying rent.

Does it matter? I don’t disagree with One Poll when they say one of these survey stories can be entertaining and provoke discussion – though I might replace the word “entertaining” with “momentarily diverting”.

But I disagree when they say that the surveys are valid. In terms of validity, I would subjectively rank the approaches as follows (moving from least to most valid):

  • Made-up data
  • Straw poll of close friends
  • Accumulating opinion from one source (e.g. comments from one news story)
  • PR survey
  • Hypothesis testing survey
  • Exploratory survey
  • Census

I have issues with their validity not so much because of the financial motivations of respondents, but because of an obvious inability to replicate the answers. They don’t have an objective truth.

To use one of their most recent press releases as an example: Man City fans are among the poorest in Britain. If the survey were repeated tomorrow, I wouldn’t be at all surprised to see a totally different result.

SIDENOTE: This post isn’t meant to disparage this sort of research (I’ve done that already), but for supposed experts they can surely create better copy than “It’s no surprise to see Chelsea up at the top and the other big London clubs. They have a loyal ban base with supporters around the country and obviously have some money to spend.” Aside from spelling and grammar, perhaps they should have made the correlation between geographic disparity and affluence a bit more explicit…

While I still think this approach (to both research and marketing) is nonsense, I’m no longer against it appearing in newspapers. What I would like to see, however, is an explicit admission that it is a one-off accumulation of non-representative opinion at a single point in time (or something slightly catchier). It is not fact that Man City fans are among the poorest – it is just the result from a single, dodgy survey among people that live on the Money Saving Expert forums. These surveys are views, not news.

And with that caveat, I have no issue with this method. While labelled as market research, it is not something I, nor my company, would participate in and so we aren’t in competition with these practitioners. There is evidently a market for this sort of product, so good luck to those pursuing it.

One Poll have worked with some big names so they must be doing a good job in their niche. That niche is nicely summed up by their “No coverage no fee” results-driven model. The legal equivalents have been stigmatised as “ambulance chasers” – I wonder whether this type of service can avoid a similar slur…

sk

Image credit: http://www.flickr.com/photos/daveknapik/

PS I’d be interested in hearing from anyone who has commissioned a PR survey of this nature. Outside of the value of the column inches being greater than the cost of the survey, has there been any tangible, noticeable benefit to engaging in this sort of activity?


Bad research: Compromising the value of PR

NB: The inspiration for this post is Ben Goldacre’s excellent Bad Science column in the Guardian. The book came out this week (here is the Amazon link), and the Guardian has serialised chapters on the MMR jab and miracle pills.

I particularly enjoy the columns where he remorselessly picks apart a PR piece containing some level of human interest, but one that is entirely based on spurious research. This deconstruction of Jessica Alba having the perfect wiggle is a fine example of the method.

A doctorate-for-hire is commissioned to create a formula out of thin air. The story generated then gets picked up by the press. The sponsor is named in the article, and they conveniently have some tenuous link to the subject. The PR pays back the research cost.

The most depressing thing about this endeavour is that there is no conclusion in sight – they just keep on coming.

I work in a Research & Insight department. More often than not – and I include myself in this – the insight is dropped and it is simply referred to as the research department.

This is fine, but it is important to note that the two functions are distinct. Insight is about finding new things out and threading pieces together to form fresh conclusions or intelligence. Research is about providing evidence to support a theory or hypothesis.

(Note that these are my own personal definitions, informed for the most part by the way in which my job operates. Wikipedia defines them differently.)

Both are necessary, but both can be compromised. Problems with insights tend to be more innocent – flights of fancy where the new findings (intended or otherwise) don’t justify the expenditure invested in producing them.

Problems with research are more sinister. The answer is already known. The end truly justifies the means. The research design, the wording of the questions and the data cuts providing the analysis are contorted to ensure that the correct answer is given.

In reality, this doesn’t (well, shouldn’t) happen. To take a form of research I am familiar with: advertising effectiveness studies don’t always produce positive results. If results are bad, the client and agency are informed (albeit with any silver linings accentuated), and the study is swept under the carpet.

The major problem is when the research resembles, but doesn’t match, the pre-ordained conclusions. Then the temptation seems to be too great to resist. So, the results are tidied up. The supporting evidence is hidden behind hyperbolic headlines and the announcement is made.

For all intents and purposes, the evidence may as well be removed. It only gets in the way of a good story.

Canon – “world-leader in office imaging solutions” – recently came up with a doozy. As you may be aware, the Beijing Olympics recently occurred. Did it inspire office workers to emulate their sporting heroes and get fit and healthy? Of course not, and Canon has data to support this claim. Apparently office workers spend “the equivalent of a staggering 34 working weeks per year” at their desks.

Fortunately for us, “Canon has teamed up with health professionals from the fields of dietetics and ergonomics to develop an ‘Office Olympian’ guide. The guide includes independent expert advice on a range of topics such as keeping active in the office, healthy nutrition advice and perhaps most importantly, correct posture for employees who spend long periods working at a computer”.

Phew!

And for just the cost of a few questions on an omnibus survey, press coverage is acquired.

But let’s take a closer look:

  • The 34 weeks number comes from office workers spending, on average, five and a half hours of work per day at their desks. In what conceivable way is this staggering? On average, people only have 90 minutes’ worth of meetings and toilet breaks per day?
  • Time with friends and family, exercising and chores are sacrificed when two thirds of office workers work beyond their contracted hours. The frequency and length of this overtime isn’t elaborated upon. I can only assume it is regular and extensive.
  • The work-life balance is destroyed because a fifth of workers spend 7-8 hours a day in the office. Because with the two-hour each-way commute and the need to get 12 hours of sleep a night, there really is no time to have a life outside of work.
  • 20% of workers don’t consider their health when in the office, despite spending the majority of their time there. Firstly, health is generally only considered when there are negative repercussions, so that would imply 4 in 5 are healthy. Secondly, how does five and a half hours a day for five days a week over 47 weeks a year constitute the majority of my time?
  • This is my favourite one: only 19% use the tea run as an opportunity to take a break and just 28% regularly leave their desk to pick up documents from the printer – an ideal opportunity to stretch and exercise. See how Canon’s world-class imaging solutions help improve your life? Because I don’t drink tea and have no need for my rubbish non-Canon printer, I have no need to leave my desk. Outside of the 90 minutes I spend away from it, of course. I think Canon missed a trick here. Consider the downtime involved in going to the toilet – surely some stretches and exercises could be incorporated into that?
  • A couple of rent-a-quotes are then wheeled out for the coup de grâce.
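
The arithmetic behind the headline number is easy to reproduce. As a sketch (assuming a standard 37.5-hour working week and 16 waking hours a day, neither of which the press release states):

```python
# Sanity-checking Canon's "34 working weeks per year at a desk" claim.

hours_per_day = 5.5    # average time at desk, per the survey
days_per_week = 5
weeks_per_year = 47    # working year after holidays

desk_hours = hours_per_day * days_per_week * weeks_per_year  # 1292.5 hours

working_week = 37.5    # assumed standard full-time week
print(desk_hours / working_week)   # roughly 34.5 working weeks -- the "staggering" figure

waking_hours = 16 * 365            # assumed waking time in a year
print(desk_hours / waking_hours)   # roughly 0.22 -- about a fifth of waking time
```

The figure itself checks out; it is the framing – “staggering”, “the majority of their time” – that doesn’t survive the numbers.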

Just writing the above has actually made me quite angry.

Perhaps I am too cynical? There are no doubt some good intentions burrowed beneath the marketing effort, and some people may genuinely gain benefit from the tips on diet and ergonomics.

But when the advice is packaged up in such a moronic fashion, it completely destroys any appeal that the campaign may have instilled.

At some point, either the press must resist publishing these “stories” or the sheer ridiculousness of their claims must be exposed. But in the meantime, there appears to be no respite.

I eagerly await the next release on anger management.

sk

Hype machine

The Gutter Twins – Mark Lanegan and Greg Dulli

Image credit: http://www.flickr.com/photos/evillorelei/

Last night I saw The Gutter Twins play at Koko. Opinion on the evening varied.

Person A: A long-time fan of Greg Dulli who had listened to the new tracks many times and participated in online chatter in the lead-up to the show. Person A went to the gig with very high expectations and was severely disappointed when the performance didn’t live up to the expectations built on past and current form.

Person B: A huge – some might say obsessive – fan of both acts who travelled a great distance to be at the show and who is highly immersed in the band’s culture. Person B went in with big expectations and had a brilliant time at the concert.

Person C: A casual fan of previous acts but completely unexposed to Gutter Twins material. Person C went in with no expectations and had a pleasant, if unmemorable, evening.

Why was it that Person A went away so disappointed while Person B was so pleased? They both had similar expectations. But while Person A was knowledgeable, perhaps Person B’s immersion meant that their expectations were more accurate. Person C ended up having a more enjoyable time than Person A, entirely due to lower expectations.*

So hype can have a negative impact. Hype may well get consumers to participate, but if the product or service doesn’t live up to it, then their loyalty has been tested. Person B will still listen to The Gutter Twins, but the reverence for the band has dwindled.

By definition, hype is exaggerated and excessive. In the post-Cluetrain world, information asymmetries should be reduced but rationality does not always overpower emotion. Appealing to rationality by imparting accurate information may be safe, but it is also limiting. Tapping into emotion is the key to success but the stakes are high if the product or service doesn’t deliver.

Hype might work as a tactic but it is not an effective strategy unless there really is a product worthy of all the praise lying beneath the exposition. Which possibly explains why the talk of a Snakes On a Plane sequel has dried up.

sk
*Incidentally, I am Person C