
Ten things I learned from the New MR Virtual Festival

My previous post included all of the notes I took while listening into the New MR Virtual Festival. This post contains the key things I took away from the day, and have subsequently been mulling over in the 2 months (!) since.

New MR Virtual Festival header

NB: All slides shown below are taken entirely without permission. If a creator objects to their use, please contact me and I’ll remove them.

1. The boundaries between participation and observation can (and, in some circumstances, should) be blurred

Although Ray Poynter touched on the range of methods by which research can overtly or covertly be conducted online, Netnography (cf. Robert Kozinets) is – to me – one of the more interesting. To be effective, it requires the researcher to both participate in and observe the environment of interest.

Erica Ruyle argues that observation (or lurking) is fine in the initial stage, since the norms and cultures need to be understood and respected. But active participation is vital in order to gain that “insider” knowledge and to be able to read between the lines of the interactions.

This is a difficult proposition to promote commercially as a) the time investment (and thus cost) required will be huge and b) the researcher will need to have a personal as well as professional interest in the topic in order both to be accepted by the community and to accept the community. For instance, how many researchers would turn their noses up at being asked to take part in World of Warcraft for 6 months?

Nevertheless, in the right circumstances it could prove to be a fascinating and rewarding exercise.

2. Convenience samples can still be worthwhile

It is true that all non-census research incorporates a convenience sample to some extent. But some methods require more convenience (and thus are less representative) than others.

Annelies Verhaeghe highlighted some of the issues to be aware of when conducting social media research – particularly that we should reconcile ourselves to not always knowing who we are speaking to or monitoring.

Furthermore, something I had not considered but which makes sense is that even though companies trumpet the volume of data they scrape and collect, only a sub-sample of it will be analysed, due to the diminishing returns of going deeper into a very large data set.

If we’re able to augment social media research with other techniques or data sources – Annie Pettit mentioned some of the benefits of combining social media research with surveys – then it can be a very valuable and insightful method of getting real-world information on past actions and thoughts.

3. Respondents shouldn’t necessarily be viewed equally

Both Rijn Vogelaar and Mark Earls talked about premises realised more thoroughly in their books – The SuperPromoter and Herd respectively.

Segmenting the audience isn’t a new phenomenon – we often restrict our universes to the people we are interested in – but within these universes perhaps we should pay more attention to some individuals than others, particularly given the complex social interactions that cause ideas and opinions to spread. I’m not clever enough to incorporate full network theories into any of my research – in the manner of Duncan Watts, for instance – but perhaps there is an argument for applying simple weights to some projects, to account for some opinions being more important than others. Or perhaps it is too contentious to implement without proper academic grounding and proof.
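To illustrate what I mean by simple weights, here is a rough sketch – the segments, scores and weight values are entirely invented, and this is not a method any of the speakers proposed:

```python
# Hypothetical illustration: weighting some respondents' opinions more heavily than others.
# The segment labels, scores and weight values are invented for the sake of the example.

responses = [
    {"respondent": "A", "segment": "super-promoter", "score": 9},
    {"respondent": "B", "segment": "general public", "score": 6},
    {"respondent": "C", "segment": "general public", "score": 5},
    {"respondent": "D", "segment": "super-promoter", "score": 8},
]

# Simple weights: count highly connected or influential respondents twice as heavily.
weights = {"super-promoter": 2.0, "general public": 1.0}

weighted_total = sum(weights[r["segment"]] * r["score"] for r in responses)
total_weight = sum(weights[r["segment"]] for r in responses)

unweighted_mean = sum(r["score"] for r in responses) / len(responses)
weighted_mean = weighted_total / total_weight

print(f"Unweighted mean: {unweighted_mean:.2f}")  # 7.00
print(f"Weighted mean:   {weighted_mean:.2f}")    # (9*2 + 6 + 5 + 8*2) / 6 = 7.50
```

Whether the extra complexity is worth it would depend entirely on being able to justify the weight values – which is precisely the academic grounding I mention above.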

4. More of an effort needs to be made to meet respondents on their terms

Betty Adamou joined the likes of Stowe Boyd in saying that email is on the decline among younger people. This is problematic for online surveys, given that online panels are predominantly administered via email. Given the trends, perhaps we should look to Facebook, Twitter, instant messaging and the like for the initial recruitment of these audiences, and then allow them to dictate how we contact them to alert them to new surveys. I’m not sure whether a note on a Facebook group could be as effective as an email, but it is certainly worth exploring.

5. Survey structures can do more to take advantage of the online environment

Our media and communications channels have fragmented but the content providers retain a centralised hub of controlling activity. Why can’t the research industry do this? John Dick talked through Civic Science’s modular research methodology, whereby questions are asked in chunks of two or three at a time, but combined at the back-end to build up extensive profiles of respondents.

This approach makes intuitive sense. In face-to-face research, the main cost was in finding people to speak to. Thus, once they were located, it was efficient to collect as much information as possible. The web is abundant with people, but they are time-poor. The cost isn’t in finding them; it is in keeping them. People could easily answer three questions a day if there was the possibility of a prize draw. They would be less willing to spend 30 minutes going through laborious and repetitive questions.
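To make the back-end idea concrete, here is a toy sketch of how answer chunks might be stitched together over time – this is purely illustrative, not how CivicScience actually implements it, and the respondent IDs and questions are invented:

```python
# Toy sketch of a modular survey back-end: respondents answer two or three
# questions at a time, and the chunks are merged by respondent ID into a
# cumulative profile. Not any vendor's actual implementation.

from collections import defaultdict

# Each chunk is (respondent_id, {question: answer}), captured on a different day.
daily_chunks = [
    ("user_123", {"age_band": "25-34", "owns_car": "yes"}),
    ("user_456", {"age_band": "45-54", "owns_car": "no"}),
    ("user_123", {"main_newspaper": "Guardian", "reads_online": "daily"}),
    ("user_123", {"favourite_brand": "Brand X"}),
]

profiles = defaultdict(dict)
for respondent_id, answers in daily_chunks:
    profiles[respondent_id].update(answers)  # each later chunk extends the profile

print(profiles["user_123"])
# {'age_band': '25-34', 'owns_car': 'yes', 'main_newspaper': 'Guardian',
#  'reads_online': 'daily', 'favourite_brand': 'Brand X'}
```

Over weeks and months, three questions a day quietly builds a profile that a 30-minute survey would struggle to match.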

There are clearly downsides to this method and plenty of issues to overcome regarding data quality assurance, but the notion of Facebook users answering a couple of questions a day sounds like a feasible way to collect information from people who might be unwilling to sign up to an online survey.

6. Surveys don’t have to be boring or static…

Another aspect of the online world that should be explored further is the level of interactivity it allows. Many online surveys are straight ports of face-to-face surveys – a shame when there is so much more that a web survey can, in theory, do.

Jon Puleston of GMI highlighted several of their experiments in this area. Interestingly, although interactive surveys take longer, respondents are more engaged, enjoy them more and give “better” answers. I particularly like the idea of timing respondents to give as many answers as possible within a given timeframe. This appeals to people’s competitive nature, and means they’d spend far longer on it than they normally would.

Jon Puleston of GMI at New MR Virtual Festival

The King of Shaves case study was very interesting. Rather than a 1-minute introduction for a 10-minute survey, this example reversed the process: people were given a detailed briefing on the role of a copywriter, and asked to come up with a creative slogan. In subsequent testing, seven “user-generated” ideas scored better than the advertising agency’s.

7. But we should be aware of the implications of survey design on data capture…

Jon’s examples showed how framing questions can improve data collection. But Bernie Malinoff warned us that even minor superficial changes to a survey can have a big impact on how people answer questions. For instance, the placement of the marker on a slider scale can heavily impact the distribution of answers.

Bernie Malinoff at the New MR Festival

Bernie also had some important lessons in survey usability – ranging from the wordiness of the questions (something I’ve been guilty of in the past) to the placement of error messages and how they can influence subsequent responses.

Surprisingly, his research found that survey enjoyment was comparable among people who did traditional “radio button” style surveys versus richer experiences, and that people were less willing to take part in future after having completed a Flash-based survey.

It acts as a sobering counterpoint to Jon’s presentation, but I inferred some caveats to this research (or perhaps I am only seeing what I want to see). I suspect some of the resistance to Flash might be down to the “newness” of the survey design rather than a genuine preference for radio-button surveys. Similarly, design iterations aren’t neutral – I wouldn’t mind different results so long as I felt they were “better” (and any methods to discourage survey cheaters are welcome). Nevertheless, it is an important reminder that a better-designed survey is only an improvement if it makes the survey more usable and easier to understand, and I completely agree with the final point that the industry should reinforce best practices for interface design.

8. …And whether they are suitable for the audience

Tom Ewing’s talk on gaming and research covered many interesting points, but the one that stuck with me is that it isn’t for everyone. As he points out, Foursquare holds little appeal for him (unless he wanted to be Mayor of his child’s nursery). Similarly, while the number of gamers is rising, it doesn’t include everyone, so we cannot assume that introducing interactive, exploratory or personalised experiences will automatically make respondents enjoy our surveys more.

Particularly since games design is pretty hard – Angry Birds and Farmville may look simple, but I wouldn’t expect any research agency to devise and incorporate something as addictive into their research methodologies. The latter in particular seems purely to encourage the completion of repetitive, monotonous tasks – not something that would benefit the quality of research outputs.

9. There is plenty of scope to improve beyond the debrief

John Clay talked about ways in which researchers can improve the way that debriefs are communicated. This is an area that many (too many) researchers need to improve upon, but an even more pressing area of improvement is what occurs after the debrief. Spencer Murrell’s presentation on insight translation covered this.

Spencer Murrell at New MR Virtual Festival

Summary slides and executive summaries are important in debriefing research, but it is important to go beyond the report/presentation into a framework that can be referenced in future. Whether it is a model that can be stuck on a wall, or a cheat sheet that can be put on a post-it note, there are many creative ways in which the core findings of a project can be transformed into an ongoing reminder. Evidently, this could easily descend into a gimmicky farce, but it is important to remember that the debrief isn’t the end of the project. In many ways, it is only the end of the beginning. The next phase – actually using that information to improve the organisation – is the most important. Any ways in which researchers can add value to this stage can only improve their standing with their clients.

10. Online conferences can work

On the whole, I think the event can be viewed as a huge success. For an affordable fee ($50), I listened to many intelligent speakers on a variety of topics, as shown by both this post and the previous post.

There was also plenty of excellent discussion around the talks and the content on Twitter, using the #NewMR hashtag. I’m usually reluctant to tweet during events, but given the lack of face-to-face contact and the fact that I was facing my computer at the time, Twitter worked excellently as a forum to amplify the usefulness of the content presented.

An idea is one thing, but executing it is something very different. Aside from the odd technical hitch (inevitable given the volume of speakers from across the globe), the day ran impeccably. So Ray Poynter and his board deserve huge congratulations for not only the concept, but also the organisation and output of the event. I would wholeheartedly recommend people with an interest in research investigate the New MR site and list of upcoming events.

sk


Cutting the current

I’m not big on New Year’s resolutions, particularly since I never actually seem to keep them. But I start with good intentions, so I suppose that is at least something.

In 2009, I vowed to read less, but better. That sort of happened, but the mass of information makes it difficult to resist.

In 2010, I attempted to widen my reading sources, by rotating my online sources of news. I lasted for about a fortnight, but more pressing priorities meant it quickly fell by the wayside.

Nevertheless, I return once again with a resolution for 2011.

It is quite similar to the 2009 resolution in that it is another attempt to combat information overload. But rather than simply saying I will try to read less but better, this is hopefully a process that will help me achieve it.

In 2011, I will take a conscious step-back from real-time content consumption, and intentionally read (most) news and commentary much later than their time of publishing.

I’m not going to be as prescriptive as saying it will be 12 hours, or 48 hours, a week or a month – particularly since posts on MediaGuardian will be more time-sensitive than those in the New Yorker. But I’m going to avoid the regular refreshing of Google Reader, and let links build up.

The last couple of months have proven the efficiency of this approach to me. An incredibly busy November and December meant I had to cut down my reading and surfing. Over the Christmas break, I have largely caught up on my RSS feeds and bookmarks. Google Reader’s trends page tells me that in the last 30 days I’ve read circa 2,500 items. That would previously have been circa 3,500, while the current figure also includes items over a month old.

But there are many other benefits.

In keeping with the theme of Cs, here are five interrelated reasons why I think this approach will suit me. No fancy graphics. Sorry.

SIDENOTE: I’ve exaggerated it for the purpose of this post, but what is with the proliferation of lists built around Cs – is it the most alliterative letter for media and technology related content? Whether it’s Brian Solis’ 5 Cs of community or Srinivasan et al’s eight factors that influence customer e-loyalty, its popularity is clear.

1. Concentration through centralisation and classification

What I found most striking in my catch-up of links was that I was far more selective in what I chose to read. When caught in the fire-hose, I may have read the same story four times from four different sources, not knowing who else would be picking up on the topic. Now, I’m able to select from a complete list of sources on my radar. A more discriminating selection process will also free up more time to do other important things. Like sleep.

It also benefits long-form content consumption, since I’m no longer in a hurry to steam through articles. Recently, I’ve been enjoying Vanity Fair, Atlantic and New Yorker pieces courtesy of services such as Give Me Something to Read – here is their best of 2010.

2. Curation through collation and compilation

I’m not totally sold on curation – services like paper.li just annoy me. But trusted editors can make a difference. I don’t necessarily need to scour every link looking for the most interesting pieces, when people such as Neil Perkin crowdsource recommendations or people like Bud Caddell point to interesting things.

Incidentally, I may once again resurrect my link updates. I may not. It depends how this experiment goes.

3. Conversation through community and comments

Although the number of comments might be dwindling (or merely refocusing on the biggest sites with an active community), they can still be incredibly valuable.

Initial comments tend to be from sycophants or – in the case of social media monitoring blogs – companies such as Alterian or Radian6 proving that their scraping technology works, but later comments can be insightful in critiquing or extending the author’s points. Helpfully, Disqus now sorts comments by popularity (I should really start voting).

4. Context through critique and connections

Whether it is through comments or from myself connecting different commentaries or posts, different items can be combined or juxtaposed for context and additional understanding. And often it is the connectors that are more interesting than the nodes themselves.

5. Contemplation through consideration and cogitation

Finally, moving away from real-time motivates reflection and critical thinking. The need to rush into a response has been marginalised. I can ponder and muse before I decide whether to write a response to something or not. Nicholas Carr would be proud.

To make this work, each person will need a system that suits them. Mine is Read It Later – a bookmarking service that syncs across devices. It also works within Google Reader, though I suspect I may need to use stars as well if the volume of bookmarks requires an extra way of distinguishing items (by time-sensitivity, if not topic).

Of course, there are drawbacks to this approach.

  • It effectively makes me a lurker rather than an active contributor, so I’ll be taking more than giving.
  • I will continue to link, comment and blog but most likely after the fact, once people have moved on and the topic has lost some relevance. A balance will undoubtedly need to be struck.
  • I’ll have lower visibility through not being an early commenter or tweeter, and not link-baiting my wares – though Twitter does seem to have made blog commenting and responding far more infrequent anyway. I think I can live with a lower Klout score, since I’m not doing this to reach an arbitrary number of undifferentiated people.

Let’s see how I get on.

sk

Image credit: http://www.flickr.com/photos/36593372@N04/5198073390/


Predictions for 2011

In the grand tradition of December blog posts, here are seven predictions for 2011:

<sarcasm filter>

  • A niche technology will grow
  • Businesses will focus less on the short-term bottom line and more on consumer needs for a long-term sustainable relationship
  • Traditional media/methods will take several more steps closer to their death
  • Social media will become more important within organisations
  • Companies will banish silo thinking and restructure around a holistic vision with multi-skilled visionaries at the helm
  • The product will be the only marketing needed
  • A company will launch a new product with these precise specifications…

</sarcasm filter>

1999 A.D. / Predictions From 1967

I think the tone and style of my predictions are about right. They run the spectrum from bland tautology to wild guesswork with plenty of jargon and generalisation thrown in.

Given how utterly useless predictions are, why do people persist? I presume they pander to people’s love of lists while gambling on their inherent laziness in not checking the accuracy of previous predictions, and hoping that, as with horoscopes, people will read their own truths into open statements.

I’ve had the displeasure of running across numerous offenders in the past month. I won’t name check them all but, unsurprisingly perhaps, the tech blogs are the worst offenders. This example from Read Write Web and these two examples from Mashable are particularly mind-numbing in both their blandness and unlikeliness.

Living on the bleeding edge can massively skew perspective. I’m sure Cuil (remember them?), Bebo and Minidiscs have all featured in predictions of game-changing technology. In other past predictions, you can probably swap “virtual reality” for “augmented reality” or “geo-location”, or Google for Facebook or Twitter, and recycle old predictions for different time periods.

The basic truth is that the future is unpredictable. We are micro participants trying to define macro trends. A reliance on logical step-progression completely overlooks the serendipity and unanticipated innovation that characterises long-term trends, which constantly ebb and flow as tastes change and rebound against the status quo.

Take popular music as an illustration. The most popular acts of one year don’t predict the most popular acts of the following year. Tastes evolve (and revolve), with pop, rock, urban (I intensely dislike that word but can’t think of a better one), electronic and dance being in the ascendancy at different points in the past twenty years.

With honourable exceptions, business and technological breakthroughs are revolutionary rather than evolutionary (note I have quite a wide definition of revolutionary). To give some examples:

  • 2 years ago, how many people would have predicted that an online coupon site would be one of the fastest-growing companies of all time?
  • 5 years ago, how many people would have predicted that a social network would be the most visited website in the UK?
  • 7 years ago, how many people would have predicted that company firewalls would be rendered obsolete by internet-enabled phones?
  • 10 years ago, how many people would have predicted that Apple would change the way mobile phones are perceived?
  • 15 years ago, how many people would have predicted that a search engine would dominate advertising revenues?
  • 20 years ago, how many people would have predicted that every business would need a presence on the internet?

Undoubtedly, some people would have made these predictions. But to use the well-worn cliché, even a stopped clock is right twice a day.

Despite my negativity, I recognise that there are some benefits to offering predictions. It opens up debate around nascent movements and trends and adds to their momentum, and provides a forum for authors to say where they’d like things to be in addition to where they think things will be.

If only so many weren’t so badly written.

(NB: I recognise by saying that I open myself up to accusations of poor writing, to which I fully admit)

sk

Image credit: http://www.flickr.com/photos/blile59/4707767185/

Polls are taxing my patience

Polls are everywhere at the moment.

They’ve been around for a long time, but for me they’ve jumped the shark/nuked the fridge. Use has been superseded by overuse.

The US elections are an obvious, recent cause. I am amazed by the number of polls taking place. Yesterday’s FiveThirtyEight poll update shows that there were 11 national polls and 25 state polls. For that one day alone.

All the polls will be using different samples, methodologies and weighting factors, and will be producing different results. How useful can all these be? Nate Silver doesn’t think much of them, hence his predictive model.

On the one hand polls can be incredibly misleading. Look at the 1992 British election, where people were ashamed to admit they voted Tory. Labour were well ahead in the polls, yet the Conservatives won. And there are still concerns that the Bradley effect could hand the current election to McCain despite Obama’s current lead.

And on the other hand, they can also be influential. A candidate may move ahead in a poll. This is reported as a surge in popularity. People gravitate towards the likely winner (whether it’s Rupert Murdoch or Mondeo Man), and so a temporary surge can be converted into a substantial lead. All without the candidate doing anything of substance.

However, I believe that while these polls are overused, they do at least serve a purpose. I’m less convinced by the glut of polling options appearing online at the moment.

WordPress, for instance, has incorporated Polldaddy into the service. So, I could choose to serve a poll to my readers if I wished to.

However, I do not.

I’m 100% in favour of developing systems and introducing new options, but I see little use in polls. They are a vague nod to interactivity, but they will produce little utility.

I can see how they can be of use to some blogs with a large readership who spend a lot of time on the site constructing thoughtful arguments.

But for the majority of blogs (mine included), people skim and pass through. If they see a box, they might tick it. But how would that be useful to me? I would be grateful to my readers for participating, but I wouldn’t trust any results that come out of it.

Polls give an unwarranted aura of science or respectability. Ticking a box is no better than making a comment. In fact, it is worse, since it requires less effort to think. Simply choose a pre-defined option and on you go.

Take the BBC’s polls for instance. The BBC are in a constant battle to maintain relevance (for the record, I love the BBC), and interactivity is a way in which they try to do so. But as a result, you end up with a surfeit of pointless noise.

I’m thinking less about the 33,000 comments and counting regarding “Sachsgate” (spEak You’re bRanes must think it has gone to heaven) and more about the fluffy questions of the day or the completely illogical player-rater on football games (I can go and rate Ashley Cole 1/10 on every match even though I won’t be following it). What benefit is being accrued here – either to the user or to the BBC?

So I am making a one person stand against polls. I won’t be using them, and I won’t be participating in them. May they rest in pieces.

sk

Image credit: http://www.flickr.com/photos/cfox74/


The internet lasts forever*

* Well, unless the Internet Archive and the mirror at the Library of Alexandria both melt down.

I’ve been crazy busy the last week, hence the lack of real updates. So, a quick observation and a couple of jumbled thoughts to keep things ticking over here. (As you may tell from my archive, I fall more into the “post often” than the “post well” category – my blog is a work-in-progress collection of unedited thoughts and observations rather than the finished article, so to speak.)

My observation is thus:

In the month of July, according to Comscore, the 95th most popular domain in the UK – with almost 2m unique users and 10m page impressions – was… Geocities.

My first thought was – huh? Geocities is still going?

After visiting the site, I can see that it still functions. Barely. But it has seen better days.

Yet it is still there. And still collecting more traffic than asda.co.uk, travelsupermarket.com and hmv.co.uk – the 3 sites directly below it in the July rankings.

Site owners rarely pull the plug online – though hosting companies might. What we publish online lasts forever. From Google Cache to the Wayback Machine via tools I am not savvy enough to know about, we will always have a digital footprint.

And that footprint will soon become footprints. I must have created a hundred site profiles over the years using a variety of pseudonyms. Most of those are collecting dust in various corners of the Internet. Ghost towns are alive and well – but only in terms of users, not necessarily visitors.

But one day, whether it is through OpenID, FriendFeed or whatever interoperability Google decide to bless us with, we will eventually become joined up.

Do I want that link to the past? Things I publish under the curiouslypersistent name (or derivatives thereof) form part of a coherent persona. Do I want that to be linked to things I have forgotten about and things I would rather forget from my past that are completely contrary to who I am now?

I notice that some of the newer sites allow you to change your username. This allows people to align their personal brand by porting over to a new name and removing certain content. But just because something no longer exists in the present doesn’t mean that it isn’t there. And what if a prospective employer decided to carry out a thorough online sweep of an interviewee?

Can there truly be a separation from work and life? Business and pleasure? Church and state?

The Internet is always on. And there is no escape.

sk

NB: As a sidenote, I am planning to redesign this blog. When I finally get around to it, I will be incorporating feeds to my other online footprints.

Image credit: http://www.flickr.com/photos/deia/

Linkbaiting is a tactic, not a strategy

Will blogging eat itself?

While taking into account the existential question of what a blog actually is, and the gamut of prose that it encapsulates, the trend for ever-increasing noise does seem apparent (this blog being but one example). From microblogging to reblogging via splogs and linkrolls, are we reducing ourselves to inanity repeated endlessly? And does this degrade the wider media environment? Two excellent posts have brought these questions to my attention.

Warren Ellis argues that we have come to the end of the age of blogging that he calls “The patchwork years”. Does this mean original content will make a comeback?

Possibly, but Jason Calacanis points to a wider, potentially damaging, effect of this era. Jason may have quit blogging for private mail-outs, but his presence is still felt (well, perhaps reblogging isn’t always so bad). I recommend that you read the entire entry, but this quote grabbed my attention (spotted via A VC).

The life of a startup CEO dealing with the rabid but sometime naive blogosphere is one of extremes. You’re killing or you’re killed, you’re the shinny new object or yesterday’s news. You can couple the link-bait based blogosphere with main-stream media journalists who, instead of acting like the voice of reason and “sticking to what got them there,” have taken the link-baiting bait. The MSM has had to incorporate the flame warring, rumor mongering and link-baiting ethos in order to keep up in the page-view cold war.

This is either the shot in the arm MSM needs to compete, or they’re chasing the blogosphere Thelma and Louise-style off a cliff. Time will tell, I suppose.

This harks back to my earlier post on the problems with auditing online metrics. Page views and unique users are not the complete answer and we risk cheap stunts overpowering quality content. Trivia may be hugely popular on the Internet, but MSM risks damaging their brands if they try to compete.

Perhaps I am being utopian, but if a work of genius like The Wire can survive on 38,000 viewers, then surely websites can survive with a commercially orientated but BBC-inspired mindset – by providing us with a useful service. Content should remain King.

sk

Photo credit: http://www.flickr.com/photos/leecullivan/

BigChampagne and measuring piracy


Photo by http://www.flickr.com/photos/sharynmorrow/

Through this Economist article on Internet piracy, I came across the company BigChampagne. Among the data they compile are statistics on p2p downloads.

I can’t fathom from their website exactly how they measure this activity (I presume they crunch the IP addresses of seeders and leechers), but it is certainly an area worth monitoring. I expect that some very useful findings can be accrued.

So far the talk seems to be about music, but I see there being potentially more scope for TV producers and networks. Music, like radio, is try before you buy. You may hear a track by a band that you like on a compilation, and want to check out some more before you decide. As such, you cannot guarantee that a downloader is a fan of that artist, and it brings into doubt the insights that can be leveraged from the location of the IP address or the other simultaneous downloads.

As a sidenote, the Internet should be thanked for minimising the record labels’ ability to con the public into buying the albums of one-hit wonders. Though a bit too late for the people who have Babylon Zoo, Eagle Eye Cherry or OMC albums gathering dust in their attics.

However, if someone is downloading episode 6 of a show, it is safe to assume that they are invested in the show as a fan. While the geographic and cross-taste analysis is interesting, the key is to dig into the reasons why people are watching programmes this way. Is it because of the length of time it takes for shows to be broadcast in their territory? Are legal video players sub-standard? Do people prefer to episode-stack, and not want to wait for the DVD?

BigChampagne offers a potential aid to explore these issues. And these learnings can then go on to help companies improve their content offerings to the benefit of all involved – producers, distributors and consumers.

sk

Is too much information a good thing?

sensory overload

Well, no. By definition. But despite occasional thoughts that I am suffering from sensory overload, I’m grateful for the sheer amount of information available to us – TMI or not. I believe it makes me a better researcher.

However I can fully understand the concern some have over the sheer volume of knowledge available to us. Articles on the subject are appearing all of the time. We are infomaniacs. We now squeeze 31 hours into a single day. Google is making us stupid.

The root of this trend is of course the Internet. The democratisation of information means that our sources have multiplied. This is undoubtedly a good thing, but it becomes a challenge to distinguish the signal from the noise.

Extending the sources of our knowledge can widen our understanding, but the returns are diminishing. At what point do we reach the optimum? When is the incremental benefit of an additional piece of information outweighed by the costs?

I’ve recently experienced this dilemma on a report I have been writing. After the first few pieces of research, the key themes begin to emerge. But rather than write up my findings, I continued to delve deeper into the data. My report was ultimately more thorough, but the key themes remained the same. Was this additional time spent worthwhile? Or would I have made better use of my time by moving onto the next project?

Ultimately, I believe it was worthwhile. There may be specific reasons when this isn’t the case, but generally I would argue that all information available should be considered because of NEEDS:

  • Nuance: Comparing and contrasting different sources allows you to put findings into a better context
  • Expertise: The more you take in, the more knowledgeable you are. It builds a solid platform for further work to emerge from. More work at the first stage can reduce the workload at later stages – in a similar way to new teachers writing lesson plans from scratch in their first year, and then honing existing plans in subsequent years
  • Experience: Following on from expertise, greater knowledge allows a greater understanding of both normative and emerging trends. In verbal debriefs, this informed opinion is often as important as the data itself.
  • Depth: My themes may have remained, and so the breadth of my report remained the same. But I was able to expound on each with much greater depth of detail and understanding
  • Simplicity: This final point is counter-intuitive but also crucial. Accumulating information is easy; synthesising and condensing it isn’t. More information may make this task longer, but it will ensure greater quality and accuracy. For instance, the Net Promoter Score may be only one question, but a lot of work (and a 210-page book) went into the formulation of that question. As the line often attributed to Mark Twain goes, “I didn’t have time to write a short letter, so I wrote a long one instead”

If ignorance is bliss, does that make knowledge miserable? In my opinion, no. The best insights come from a complete assessment of the available information. This requires focus, dedication, excellent time management and an eye for detail, but the effort will be rewarded with the results.

sk

Photo credit: http://www.flickr.com/photos/biancaprime/

ABCe and the difficulties of auditing online metrics

measurement

As the recent influx of links has shown, I have struggled to keep my blog updated in recent weeks. This post has been saved in my drafts for close to a month now. While it may no longer be current news, the principles underlying the issues are still, and will continue to be, pertinent.

So, please cast your minds back to May 22nd, when it was announced that the Telegraph had overtaken the Guardian in terms of monthly unique users, and with it took the crown of the UK’s most popular newspaper website.

The figures were according to the ABCe – as close as the UK gets to officially audited web statistics. However, close is a relative term. The ABCes are still far from universally accepted and, as can be inferred from the FAQs on their website, there are still many challenges to overcome. It will be some time before we can even approach the accuracy of audience figures for other above-the-line media (outdoor excepted).

To my eyes, the main issues surrounding effective online measurement can be boiled down to 3 broad categories.

Promoting the best metric(s)

metric hairclip

The biggest and most intractable obstacle. Which measure should be given most credence?

TV – the area I am most familiar with – also has a variety of measures. But average audience (across a programme, series or a particular timeslot) and coverage (the total number of people exposed to a programme/series/timeslot for a given time, usually 3 minutes) tend to be used most often.

Unfortunately, neither of these is fully appropriate for the web. So what are the alternatives? The main three are:

  • How many (unique users) – but how unique is a unique user? Each visitor is tracked by a cookie, but each time a user empties his or her cache, the cookie is deleted. On the next visit, a new cookie is assigned. If I clean my cache once a week, I am effectively counted as 4 unique users a month. Plus there is my office computer, my BlackBerry, my mobile and my games console. I could easily be counted ten times across a month if I use a variety of touchpoints.
  • How busy (page impressions) – but how important is my impression? I may have accidentally clicked through a link, or I may continually refresh a page to update it. And what about automated pages, such as the constantly refreshing Gmail or MySpace? Is each page refresh counted as a new impression? Furthermore, if page impressions are used to calculate advertising rates, what happens to the impressions made with an ad blocker in place?
  • How often (frequency – page impressions divided by unique users) – as this relies on the two metrics above, it is heavily compromised (a rough worked example follows below)
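To put rough numbers on the problem, here is a back-of-the-envelope sketch of how cookie churn can inflate unique users and understate frequency – every figure is invented for illustration:

```python
# Back-of-the-envelope illustration (invented numbers) of how cookie deletion
# and multiple devices inflate "unique users" and therefore understate frequency.

page_impressions = 1_200_000   # impressions reported for the month
actual_people = 100_000        # real individuals behind those impressions

# If a typical person clears their cache weekly and browses on two devices,
# they can be issued around 4 cookies x 2 devices = 8 "unique users" a month.
cookies_per_person = 4 * 2
reported_uniques = actual_people * cookies_per_person

reported_frequency = page_impressions / reported_uniques  # pages per "unique user"
actual_frequency = page_impressions / actual_people       # pages per real person

print(reported_uniques, round(reported_frequency, 1), round(actual_frequency, 1))
# 800000 1.5 12.0 – the site looks eight times bigger, and eight times shallower, than it is
```

The real ratios will vary wildly by site and audience, but the direction of the distortion is the point.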

What about other measures?

  • Average time spent can be massively skewed by people leaving their browsers open while they aren’t at their PC
  • Average number of visits would give a decent measure of engagement, but the cookie issue means it would be understated.
  • Measuring subscriptions would be interesting, but these may be inactive, sites offer multiple feeds, and take-up is far from universal. As people become more adept with web browsing, RSS may gain more traction, but websites such as Alltop are showing viable alternatives to the feed-reading system.

And beyond these concerns, there is still one crucial question that remains unanswered. Who are these people?

TV, radio and press use a representative panel of people to estimate the total population. For TV, the BARB panel consists of around 11,000 people who represent the 60m or so individuals in the UK. But we are seeing that, as the number of channels increases, a panel of this size isn’t able to accurately capture data for the smaller channels.

So what hope is there for the web, with the multitude of sites and sub-sites with tiny audiences? Not to mention the fact that these audiences are global.

Of course, online panels do already exist. But these only sample the top x number of websites and, as it stands, the differing figures each of them produces are treated with caution and, on occasion, suspicion. Witness the argument between Radiohead and Comscore, to give one example.

So I’m no closer to figuring out how we measure. How about what we measure?

Determining the content to be measured

greenshield stamps

If we are looking to determine advertising rates, then the easy answer is to measure any page that carries inventory. But should the quality or relevancy of the content be considered?

Sticking with UK newspaper sites, questions over what material should be audited include:

  • If we are looking at UK sites, should we only look at content orientated towards a UK audience? Should this content or audience be considered “higher quality”?
  • If we are considering the site as a newspaper, should we only look at current content? For instance, the Times has opened up its archive for perusal. Should all of this content be counted equally?
  • How relevant to the contents of the news do the stats have to be? Newspapers have employed tricks from crosswords to bingo to free DVDs in order to boost their readership, but should newspaper websites be allowed to host games, social networking spaces or rotating trivia (to give one example) as a hook for the floating link-clickers or casual browsers?
  • How does one treat viral content, which can be picked up and promoted independently of the proprietor? See the story of the Sudanese man marrying the goat, which remained a popular story on BBC News for years, or the story about Hotmail introducing charges, which is brought up to trick a new batch of gullible people every year or so.
  • What about if the internal search is particularly useless, and it takes several attempts to get to the intended destination?
  • And a tricky question to end on – can we and should we consider the intentions of the browser? For instance, my most popular post on this blog is my review of a Thinkbox event. Is it because it is particularly well written or interesting? No, it is because my blog appears when people search for a picture of the brain. Few of the visitors will even clock what the post is about; they will simply grab the picture and move on.

All of this makes me wonder how much of a false typology “UK Newspaper site” is in this environment. What proportion of visitors could actually be identified as being there for the news, and not because they clicked a link about the original Indiana Jones, or a funny review of the new Incredible Hulk movie?

Could those articles have been approved purely for link-bait? As they also appear in the print editions, I think not. But I’m sure it does happen.

Accounting for “performance enhancers”

the incredible spongebob hulk

In the same way as certain supplements are permitted in athletics but others are banned, should some actions that can be used to artificially boost stats be regulated?

  • Should automated pages be omitted?
  • If the New Yorker splits out a lengthy article across 12 pages, can it really be said that it is 12 times more valuable than having it appear on one page?
  • Many sites now have “see also” or “related” sidebars. Should sites that refer externally be penalised for offering choice, against those that only refer within the site itself?
  • Search engine optimisation is a dark art, but there can ultimately only be one winner. While there are premium positions in-store and on the electronic programming guide, search engines have much more of a “winner take all” system in place where the first link will get the majority of the click-throughs. Should referrals be weighted to account for this?

There are a lot of questions above, and no real answers. No measurements are perfect, but we look to be a long way off approaching acceptability in the online sphere.

This is by no means my area of expertise, and I would love to hear from anyone with their own thoughts, suggestions or experiences on the topic. I will happily be corrected on any erroneous details in this post.

sk

Photo credits:
Measurement: http://www.flickr.com/photos/spacesuitcatalyst/
Metric hairclip: http://www.flickr.com/photos/ecraftic/
Greenshield Stamps: http://www.flickr.com/photos/practicalowl/
The Incredible Spongebob-Hulk: http://www.flickr.com/photos/chris_gin/

Spread Firefox

firefox dog

Today is D-Day for Firefox 3. It comes out of beta testing, and is officially released.

Mozilla have come up with a brilliant launch campaign for it, harnessing the affinity and advocacy of its users through excellent word of mouth.

Today is their attempt to get into the Guinness Book of Records for the most downloads in a 24-hour period. There are currently 1.6m pledges (which you can see split by geography on the webpage), which matches the number that downloaded Firefox 2 in 2006. But now that the BBC, among others, are running the story, this is bound to rise.

I think Firefox is a fantastic product and this is a great way to launch it. I shall be downloading my version later.

If you are not yet convinced, there are write-ups here (Wired), here (Webmonkey) and here (Webware).

It will be available to download here from 1100 PDT / 1800 GMT.

sk

EDIT: All downloaded smoothly. The only issue so far is that some of my add-ons aren’t yet compatible with the new version.