A guide to corporate blogging (beta)

We’ve recently set up our Essential Research blog. It’s started well, albeit a little slowly. Go check it out.

The main reason for the slow start is that we are currently crazy busy. However, a second reason is that the majority of us have never blogged before. And, as those who have their own blogs know, it is a little scary to begin with.

What do I write about? Who will read it? What if it is rubbish?

I’m quite happy with how this blog has evolved. But the number of blogs I’ve had is in double figures (I think), and it has taken me 6 or 7 years to get into a position of (relative) confidence.

So, drawing on a combination of my past experiences and the advice of others who are quite proficient in the space, I've created a little guide to blogging.

See below – it is a draft, and particular to research, but I’d be interested to know where it could be improved.

Essential Research blogging guide

Click on the picture for a larger (and readable) version.

Yes, I like mnemonics.

Incidentally, the further reading list is:

All images are taken without credit. Sorry. If one of the images is yours and you’re not cool with my use then let me know and I’ll change it.

sk


A second set of eyes

In my last post, I attempted a few calculations around the return on conversation. Rather embarrassingly, I suffered a temporary mind-freeze over the definition of a percentage, and so my calculations were out by a factor of 100.

This is my blog, and – unless I am directly linking to someone else – everything here is the work of me and me alone. This has its upsides and drawbacks.

One of the obvious drawbacks is my idiosyncratic quality control. Sometimes I dwell on a post and its formulation for an age; other times I quickly bash something out without duly checking grammar, logic and facts.

For the most part, the reader may know no different. Some posts may be perceived to be of better quality than others but, unless there is a really obvious error – like yesterday's – there is little indication of how long a post took, or how much effort was put in. As Mark Twain is often credited with observing, it takes longer to craft something succinct.

In my day job, this doesn't happen. There are project leads, but no project has only one person working on it. It may take slightly longer to coordinate between different individuals, but ideas are bounced off one another, different perspectives are compared, and details are checked. Nothing leaves the office until at least two people – one of whom is normally at a senior level – are happy with it. This is a crucial component of our approach: we require absolute conviction in what we are doing.

Quality control is absolutely vital. Without it, there is no trust.

So, mea culpa – the quality control on this blog has been found wanting. I’ve relearned an important lesson, and I hope this doesn’t affect your impressions of this blog too negatively.

sk

Image credit: http://www.flickr.com/photos/hotcherry/

Return on conversation

EDIT: As has been pointed out, I made a rather embarrassing miscalculation in the original post, which made me seriously underestimate the CTR. I evidently need to re-evaluate my quantitative credentials.

My previous post on conversation monitoring was tweeted and retweeted by several individuals. Firstly, I’m grateful that people both read this blog and are motivated to share something I’ve written.

However, the additional traffic that this Twitter activity generated has left me wondering how valuable this social activity is to individuals or organisations that look to spread their message through this sphere.

What follows are some rough numbers given that:

  1. WordPress.com stats are pretty basic
  2. I’ve left it two weeks to do the maths, and so follower numbers will have changed
  3. Follower overlap and actual exposures are unknown

Nevertheless:

  • To my knowledge, the post was tweeted/retweeted 10 times
  • The combined number of people following those who linked to the post is 10,354 as of today
  • The post probably got 100 additional hits as a result of Twitter activity

A couple of guesstimated calculations (sketched in code after this list):

  • At an absolute level, this represents a click-through rate (CTR) of roughly 1%
  • If I assume that 5,000 of those followers are unduplicated (the largest follower count for a single retweeter is over 3,000), the CTR changes to 2%
  • How many of the followers would have seen the tweet? A fifth? That changes the CTR to 10%
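
For transparency, here is a minimal sketch of those guesstimates in Python. The inputs are the assumptions from the list above – the follower totals and the one-in-five exposure rate are guesses, not measured values – and the factor of 100 is the very step I fluffed in the original post.

    # Rough CTR guesstimates for the Twitter-driven traffic.
    # All inputs are assumptions from the post, not measured values.
    clicks = 100              # estimated additional hits from Twitter
    followers_total = 10354   # combined followers of everyone who tweeted the link
    followers_unique = 5000   # guess at the unduplicated follower count
    exposure_rate = 0.2       # guess: one in five followers actually saw a tweet

    def ctr_percent(clicks, audience):
        """Click-through rate as a percentage (don't forget the factor of 100)."""
        return 100.0 * clicks / audience

    print(ctr_percent(clicks, followers_total))                   # ~0.97, i.e. roughly 1%
    print(ctr_percent(clicks, followers_unique))                  # 2.0
    print(ctr_percent(clicks, followers_unique * exposure_rate))  # 10.0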

10% is OK for a CTR, but it isn’t spectacular. The best ad campaigns with a strong call to action (e.g. competition entry) would achieve that.

The argument is that these 10% are going to be of a much higher quality than random visitors – they have acted upon a social recommendation and are likely to be engaged and interested in the content.

But that argument should work for the click through itself. If someone you follow and trust is recommending something, shouldn’t you be more likely to click through than if it were a random link or ad?

There are a few issues at play here that are keeping the CTR at this level:

  • Noise – Twitter is popular; there are a lot of tweets and links to browse and skim
  • Ambient intimacy – often, it is enough for me to know that person X has linked to a post on conversation monitoring by @curiouslyp. I may prefer to browse the remaining tweets rather than click through to this post
  • Power laws – if the post on conversation monitoring was by @jowyang or @chrisbrogan I may click through since they are renowned experts. Who is @curiouslyp and what would he know about this topic?
  • Nature of followers – my prior post was relevant to the PR community, which is very active on Twitter. I suspect posts on different subject matter are unlikely to be spread and consumed to the same degree

It is nice to think that the future is social, and that these networks will power traffic in future. But those perpetuating this – in my opinion – myth are those whom the power laws benefit, and who spend an inordinate amount of time on social networks (most likely because it is their job to do so). The average person has neither the time nor the inclination to follow through on many, let alone all, posts or links.

So, in my opinion, the return on conversation is pretty minimal. Nevertheless, I did find it interesting to map how my post spread through Twitter via social graphs and, to repeat, I am grateful to the few that took the time to read and pass on my post.

sk

Image credit: http://www.flickr.com/photos/ironmonkey480/


Should we listen to every conversation?

Over on the Essential Research blog, I have responded to a post by a social media conversation monitor who celebrated the death of focus groups.

In that post, I outlined why focus groups themselves aren't the issue; rather, it is their shoddy application. Here, I want to expand on that a bit. It is my contention that conversation monitoring is more flawed than traditional research, and should not be used for major corporate decisions.

Alan Partridge once declared himself to be a homosceptic, and in a not dissimilar way I am doubtful of the efficacy of social media monitoring.

In terms of numbers signing up, the social space is still increasing. However, the number of active users within this universe will remain limited – the late arrivals will be the more passive and occasional users. This space is increasingly asymmetric, with network effects and power laws distorting the flow of information.

Topics of conversation will by nature revolve around the major players – whether individuals, blogs or organisations. The larger the hub, the weaker the signal-to-noise ratio.

As a small example, consider blog commenting. Aside from the odd spam comment, the contributions I get here are all genuinely helpful. Because this is a relatively small blog, few people comment out of self-interest. On the larger sites, comments are filled with spam, self-promotion and unquestioning advocacy or contrariness. Genuine debate and discussion still exist, but they are diluted by the inanity surrounding them. This on its own creates difficulties for sentiment analysis, but clever filters can overcome this.
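
To illustrate the sort of filtering I have in mind – and this is a toy sketch of my own, not any vendor's actual method – a monitor might strip out duplicates, link-stuffed comments and obvious self-promotion before attempting any sentiment scoring. The heuristics and phrases below are invented for illustration:

    import re

    # Invented heuristics for illustration - real filters are far more sophisticated.
    PROMO_PHRASES = ("check out my", "visit my site", "follow me at")

    def looks_like_noise(comment, seen):
        """Flag comments that are duplicates, link-stuffed or self-promotional."""
        text = comment.lower().strip()
        if text in seen:                                     # verbatim duplicate
            return True
        if len(re.findall(r"https?://", text)) >= 2:         # stuffed with links
            return True
        if any(phrase in text for phrase in PROMO_PHRASES):  # self-promotion
            return True
        seen.add(text)
        return False

    comments = [
        "Interesting take on focus groups, though I disagree about panels.",
        "Great post! Follow me at http://example.com and http://example.com/offers",
        "Interesting take on focus groups, though I disagree about panels.",
    ]

    seen = set()
    signal = [c for c in comments if not looks_like_noise(c, seen)]
    print(signal)   # only the first, genuine comment survives

Even a crude pass like this improves the signal, but it also shows why sentiment read off the large, noisy hubs should be treated with caution.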

But despite the internet being open, we cluster around like minds. Groupthink creates an echo chamber. danah boyd has pointed out that teenagers network with pre-existing friends, and it is my observation that the majority of adults network with those in their pre-existing spheres. Planners chat to planners. Cyclists to cyclists. Artists to artists. Mothers to mothers. These categories aren't mutually exclusive, but the crossover is minimal compared to the clustering among like minds.

Remember the Motrin outrage? The mainstream majority remained blissfully ignorant. This may have been because it was resolved before it had a chance to escalate to the mainstream media, but it nevertheless shows the limited reach of social media echoes.

Of course, some products or services target the early-adopting, tech-savvy ubergeeks, and these companies should obviously engage where their audience is.

But for the rest? Despite my assertions above, I do view monitoring as useful, but only as a secondary tool. Tracking conversations as they happen is a useful feedback mechanism, but few companies are going to be nimble enough to act on that feedback immediately (once they have separated the meat from the gristle and verified that an opinion is indeed the consensus).

Surveys and groups are indeed limited by taking place at a single point in time, and it is difficult to extrapolate long-term reactions from them – the Pepsi taste test being one notorious example.

But there are plenty of longitudinal research methodologies that are suitable. Long-term ethnographic or observational studies can track whether attitudes or behaviour do in fact change over time. These can be isolated in pilots or test cases, so that any negative feedback can be ironed out before the product or service is unleashed to the general public.

This is where traditional research still prevails: the controlled environment. Artificiality can be a benefit if it means shielding a consumer base from something wildly different from what they are used to.

This takes time though, and some companies may prefer to iterate as they go and "work in beta". Facebook is an example of this – they have encountered hostility over news feeds, Beacon, redesigns and terms of service. Each time, they have ridden out the storm and come back stronger than ever.

Is this a case study for the effectiveness of conversation monitoring? Not really. They listened to feedback, but only implemented it when it didn't affect their core strategy. So the terms of service changed back, but the news feed and the redesign – features intrinsic to Facebook's success – stayed.

Should Syfy have gone back to being the Sci-Fi Channel due to the initial outrage? Perhaps. Personally, I think it is a rather silly name, but a silly name didn't do Dave any harm. If they have done their research properly, they should remain confident in their decision.

Conversation monitoring can be useful, but it should remain a secondary activity. A tiny minority have a disproportionately loud voice, and their opinions shouldn’t be taken as representative of the majority. When iterating in public, there is a difficult balance between reacting too early to an unrepresentative coalition, and acting too late and causing negative reaction among a majority of users/customers.

Because of this, major decisions should be taken before going to market. Tiny iterations can be implemented after public feedback, but the core strategy should remain sound and untouched. Focus groups and other research methodologies still have an important place in formulating strategy.

sk

Image credit: http://www.flickr.com/photos/jeff-bauche/
