Links – 26th October 2008

A selected list of links below. The recent paucity of posts, along with this going out on a Sunday, should indicate that my recent schedule hasn’t been too forgiving.

Blog-related

Jeremiah Owyang on the 7 tenets of the connected analyst. There is a balance between utility, leadership and a commercial outlook

NBC has begun releasing its TAMIs – total audience measurement index across all platforms. It will be interesting to compare how different genres perform across the different media. Cross-media reach is the holy grail – this isn’t that as it takes no account of audience duplication, but it is a step in the right direction (TV Week)

10 reasons why newspapers won’t reinvent the news in the 21st century. Quite a pessimistic outlook. (Xark)

What great marketers do well – well worth reading (Wikibranding)

Thoughts on the semantic web and the future of advertising from the Web 3.0 Expo (Read Write Web)

Hugh MacLeod interviews Mark Earls – both extremely interesting and intelligent thought-leaders

10 Internet stats for sceptics – the This is Herd blog has been on fire in the last week. This is an extremely useful post that I will be referencing again and again.

Nicholas Carr on Google and the Centripetal web – a very interesting notion. Google is moving from purely facilitating search, to providing unique content through its “First Click Free” method of moving around subscription firewalls.

Doc Searls’ elegant response to the borderline troll post on Wired where the author opined that Facebook, Flickr and Twitter had killed blogging. I don’t blog for fame and money. I blog to ruminate, to share and to learn. And will continue to do so.

Random

Paul Graham writes a typically brilliant essay on… writing an essay

Vice has an interesting profile on a former large-scale heroin dealer

http://librivox.org/ is the place to go for user-created audio books of out-of-copyright works

17 interesting facts about doctors and patients (E-med Expert)

NY Mag has a great feature on Nate Silver – the statistical genius behind the brilliant Five Thirty Eight website

I can’t really single any particular post out for praise this week – they are all well worth reading and re-reading

sk


Nielsen launch TV/Online convergence panel

Nielsen have announced the official launch of their convergence panel. The panel of 1,000 homes and 2,800 individuals in the United States (I presume this is a pilot sample size) will have both their TV viewing and online surfing measured. The TV viewing is captured using the official audience measurement system, with the online element recorded using Nielsen’s proprietary technology.

This is a much needed development as we seek to overcome the prevailing 20th century methods of treating activities distinctly and in silos.

(NB: I recognise that this siloed approach was most probably adopted for practical purposes, but with Touchpoints and now this I am encouraged that we are moving towards a unified and more realistic approach)

With more activity moving online, the potential for this panel is incredibly exciting. Achievable insights can include

  • Total TV viewing audiences/reach across TV, DTR and online catch-up (across all sites)
  • The precise relationship between TV viewing hours and online surfing hours
  • The effects of on-air continuities on online activity in real-time
  • The relationship between viewing a programme on TV and accessing supplementary content or information online
  • Simultaneous and solus usage. In theory, one could also see whether the frequency of visiting sites in simultaneous usage is affected as a TV programme draws to a conclusion
  • Whether certain genres are conducive to solus viewing, and others to simultaneous
  • Assuming advertising data is recorded, one can measure the call to action in prompting those exposed to the TV ads to search or purchase online
  • Cross-visiting between channels/programmes and websites
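To make the simultaneous/solus point concrete, here is a toy sketch of how overlap between TV and web sessions could be classified from single-source panel data. The session records and figures below are invented for illustration – a real convergence panel feed would look quite different.

```python
# Hypothetical sketch: classifying simultaneous vs. solus media use for one
# panelist. Times are minutes past 6pm; all values are invented.

def overlap_minutes(a_start, a_end, b_start, b_end):
    """Minutes two sessions overlap (0 if they don't)."""
    return max(0, min(a_end, b_end) - max(a_start, b_start))

tv_sessions = [(0, 60), (90, 150)]     # two TV viewing spells
web_sessions = [(30, 75), (160, 180)]  # two online spells

# Sum the overlap of every TV spell against every web spell
simultaneous = sum(
    overlap_minutes(ts, te, ws, we)
    for ts, te in tv_sessions
    for ws, we in web_sessions
)
total_web = sum(we - ws for ws, we in web_sessions)
solus_web = total_web - simultaneous

print(simultaneous, solus_web)  # 30 minutes simultaneous, 35 minutes solus
```

Aggregated across the panel, the same arithmetic would show whether certain genres pull usage towards solus or simultaneous viewing.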

ESPN have already benefited from cross-platform research with Nielsen, with data showing that those invested in the online offering had higher levels of TV viewing. But thus far, this is only looking at one dimension of convergence.

If the panel proves to be successful, it could theoretically be extended to the entire media landscape. Digital/Internet isn’t a straightforward medium in that it overlaps with everything – audio, video and text.

Could we therefore see an online panel merged with the official radio or print audience currencies? By fusing the data to a consumer lifestyle survey such as TGI, we would create a real-time, ongoing record of behaviour that goes way beyond Touchpoints’ one-time snapshot. We would be getting closer to the holy grail of media planning with cross-platform reach, frequency and (perhaps) effectiveness all attainable.

However, we shouldn’t get too excited just yet as there are several objections and obstacles to overcome

  • Mediapost notes a concern that increased data collection will put people off, making the panel less representative. This is a valid criticism, but it applies equally to all current measurement panels. The size of the establishment survey and the length of commitment are already major deterrents. Adding a second – passive – measurement isn’t going to make much difference, in my opinion
  • Similarly, people may be more concerned with their privacy online. When I worked at a research agency, our online tracking tool didn’t record secure sites and also had an opt-out on certain behaviour. Over time, this behaviour can be modelled and factored into assumptions. While imperfect, it can become a known limitation and worked around
  • While the TV ratings are official, the online traffic numbers aren’t. There are still many problems with recording online behaviour – the long tail of websites, home vs. work vs. mobile access, bias to power users – and all these concerns will be transferred over to the convergence panel
  • Related to the above, the panel size for online measurement needs to be far greater than TV due to the multitude of niche sites (and TV-related activities online are a minority interest, albeit growing). This means the convergence panel is going to need to be much larger than the TV panel and therefore more expensive. Will it be viable?
  • And a very minor point, but as always one can get carried away with online behaviour and overlook the significant number that aren’t online (the “left behinds”). This should be avoided.

These benefits and obstacles are top-of-mind, and I’m sure they have already been considered by those involved. But as always, the proof is in the pudding and so I eagerly anticipate the data releases and word on how the panel is being used by the participating clients. A successful convergence panel can only be good news for the media industry.

sk

Image credit: http://www.flickr.com/photos/jmtimages/

The five questions that need to be answered about online video

I’ve recently put together a presentation deck on the state of the online video market. It consists of both the primary research that we have been conducting here, and the secondary research I have been able to source through subscription services, press releases and generous folk who put their work online – such as Ofcom and Universal McCann.

I’ve been taking the presentation around our various agency clients in order to spread the love (and of course use the face time to sell in opportunities). Thus far (touch wood) the presentation has been well received.

The general feedback I’ve been getting is that people are willing to experiment with online video, but the paucity of data makes it difficult to justify a long-term investment. Any data showing the efficacy (or otherwise) of online video is therefore valuable.

Below are the five major questions regarding online video. I’ve tried to give a steer on them but currently there are no definitive answers.

1. Who is watching?

And indeed how many. This is the crucial question for agencies. With many (but not all) moving to online video from TV, the gap where BARB audience ratings should be is extremely conspicuous. Alas, JICIMS don’t have an immediate solution and so for the interim one or a combination of the below is used

  • Internal site stats. But even if they are externally validated (e.g. by ABCe), there is still a lack of transparency over whether the stats have been gamed, and comparisons with other sites won’t necessarily be like for like. And of course the major problem is that they give you numbers, but not demographics. I posted about many of the problems around online measurement here.
  • Users could be forced to register and share information before being allowed to view videos. But this contravenes the “openness” of the web
  • Comscore and Net Ratings give basic demographic details for video views and offer a competitive context. However, their methodologies aren’t universally accepted. All impressions are treated as equal – whether an auto-roll, or each ad in an ad break tagged as a separate impression
  • If none of these methods are viable, then we are forced to rely (at least partly) on survey data. And that opens up a whole new can of worms

2. Which advertising model is best?

Or, which is your favourite child?

There are a multitude of ad models and formats to choose from. I detailed many of them in a previous post here. Since writing that, I have read about two further types of ad format which add to the choice available.

Inskin Media wrap an interactive banner/border around content

MirriAd digitally incorporate brands and products into a show in post-production – whether a logo on an object, or a car driving off in the background.

Pre-rolls are the most common format (from memory, I read a stat saying two thirds of European video providers use pre-rolls). While people may have initially been turned off, familiarity has bred acceptance. The public generally agree that advertising is a reasonable exchange for free content.

However, not all content sites are equal. Ipsos have shown that people are less likely to find advertising reasonable on user-generated content than on professional content. This is why I am eager to find out whether YouTube’s post-roll experiment works. Despite ads only being placed after partner content, YouTube has the stigma of UGC and I believe people will have trouble accepting ads on the site, no matter what content they are placed around.

3. How effective is my advertising?

This is the big question in all forms of media, and online is no exception. However, because audiences can’t be so easily identified, it is also a big problem. To my knowledge, there are three routes to go down

  • Pop-up/overlay advertising. We know respondents have just seen the ad (unless they are in the control group), but this immediacy is also the downside. The questionnaire is served at the point of exposure, so there is no window to allow the ad to embed into people’s consciousness.
  • Behavioural tracking. Whichever company does this first (and effectively) will become very, very rich. Obstacles include coverage (it is unlikely that all websites will participate, and even the most popular ad networks only cover a fraction of the web) and the sheer volume of data that would be generated. Analysis of longitudinal data would require a cloud/farm of Google or Amazon proportions
  • Surveys on online panels – again the can of worms of claimed, rational, after-the-fact responses among members of a panel
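Whichever route is taken, the underlying arithmetic is a control/exposed comparison. A minimal sketch is below – all of the figures are invented for illustration, not real campaign data.

```python
# Hypothetical sketch of the control/exposed uplift calculation behind
# ad-effectiveness surveys. All counts are invented.

exposed = {"respondents": 500, "aware_of_brand": 210}
control = {"respondents": 500, "aware_of_brand": 150}

exposed_rate = exposed["aware_of_brand"] / exposed["respondents"]  # 0.42
control_rate = control["aware_of_brand"] / control["respondents"]  # 0.30

lift = exposed_rate - control_rate      # absolute uplift in awareness
relative_lift = lift / control_rate     # uplift relative to the baseline

print(f"{lift:.0%} point uplift, {relative_lift:.0%} relative")
```

The hard part isn’t the sums; it is ensuring the control group genuinely matches the exposed group, which is exactly where the audience-identification problem bites.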

4. Why should I add online video to my media plan?

Indeed. But why should I add radio, or press, or cinema? Each medium brings a different audience and a different experience – if they are used in conjunction effectively then a stellar transmedia campaign can be executed.

My argument regarding online video is that it dovetails very nicely with television. Rather than cannibalising, it complements. TV has the mass reach and epoch-defining moments. Online video offers the shared experience asynchronously, allowing the attentive audience to interact on their terms. People tend to watch TV programmes online when they have missed them on TV, while short-form extras can deepen the experience (look at Heroes for example), and increase engagement among the TV-viewing audience.

For those that are interested in numbers, creating an accurate measure of incremental reach is vital. Touchpoints offers it at a platform level but isn’t granular enough for most situations. A tool that highlights incremental reach and frequency across a multitude of platforms and channels therefore needs to be developed.
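As a rough sketch of what such a tool would compute: incremental reach falls out of simple set arithmetic on single-source panel data. The panelist IDs and panel size below are invented for illustration.

```python
# Hypothetical sketch of incremental reach: each set holds the panelist IDs
# reached on a platform. All IDs and sizes are invented.

tv_reached = {1, 2, 3, 4, 5, 6, 7}
online_video_reached = {5, 6, 7, 8, 9}
panel_size = 12

combined = tv_reached | online_video_reached     # union de-duplicates people
incremental = online_video_reached - tv_reached  # reached by online video only

combined_reach = len(combined) / panel_size      # 9 of 12 panelists reached
incremental_reach = len(incremental) / panel_size

print(len(combined), len(incremental))  # 9 reached in total, 2 added by online video
```

Note that simply adding the two platform audiences (7 + 5) would overstate reach by counting the duplicated viewers twice – which is why single-source data matters.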

5. Where is this going?

I don’t understand financial markets but I do understand the dangers of speculation. And that’s what any answer to this question would be. Forrester, eMarketer and so on may predict future audiences and revenues. But who knows what the situation will be like next year, let alone in five. How long did it take YouTube to change the market? Or iPlayer/Hulu? And what effect will Kangaroo/SeeSaw have?

And as for the unified home entertainment TV/Internet experience? I’m not even going there…

sk

ABCe and the difficulties of auditing online metrics


As the recent influx of links has shown, I have struggled to keep my blog updated in recent weeks. This post has been saved in my drafts for close to a month now. While it may no longer be current news, the principles underlying the issues are still, and will continue to be, pertinent.

So, please cast your minds back to May 22nd, when it was announced that the Telegraph had overtaken the Guardian in terms of monthly unique users, and with it took the crown of the UK’s most popular newspaper website.

The figures were according to the ABCe – as close as the UK gets to officially audited web statistics. However, close is a relative term. The ABCes are still far from universally accepted and as can be inferred from the FAQs on their website, there are still many challenges to overcome. It will be some time before we can even approach the accuracy in audience figures for other above the line media (outdoor excepted).

To my eyes, the main issues surrounding effective online measurement can be boiled down to three broad categories.

Promoting the best metric(s)

The biggest and most intractable obstacle. Which measure should be given most credence?

TV – the area I am most familiar with – also has a variety of measures. But average audience (across a programme, series or a particular timeslot) and coverage (the total number of people exposed to a programme/series/timeslot for a given time, usually 3 minutes) tend to be used most often.

Unfortunately, neither of these are fully appropriate for the web. So what are the alternatives? The main three are

  • How many (unique users) – but how unique is a unique user? Each visitor is tracked by a cookie, but each time a user empties his or her cache, the cookie is deleted. On the next visit, a new cookie is assigned. If I clean my cache once a week, I am effectively counted as 4 unique users a month. Plus there is my office computer, my blackberry, my mobile and my games console. I could easily be counted ten times across a month if I use a variety of touchpoints.
  • How busy (page impressions) – but how important is my impression? I may have accidentally clicked through a link, or I may continually refresh a page to update it. And what about automated pages, such as the constantly refreshing Gmail or Myspace? Is each refresh counted as a new impression? Furthermore, if page impressions are used to calculate advertising rates, what happens to the impressions made with an adblocker in place?
  • How often (frequency – page impressions divided by unique users) – as this relies on the above metrics, it is heavily compromised
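A quick worked example of how the cookie problem distorts all three metrics at once – the figures below are hypothetical, but the mechanism is exactly the one described above.

```python
# Hypothetical sketch: one person visits a site daily for four weeks and
# clears their cache weekly, so a fresh cookie (a "new" unique user) is
# issued each week. All figures are invented.

page_impressions = 28          # one visit per day for four weeks
actual_unique_users = 1
measured_unique_users = 4      # one fresh cookie per week

actual_frequency = page_impressions / actual_unique_users      # 28.0
measured_frequency = page_impressions / measured_unique_users  # 7.0

# Uniques are overstated four-fold, so frequency is understated four-fold
print(measured_unique_users, measured_frequency)
```

Add a work PC, a mobile and a games console and the inflation multiplies further, which is why frequency inherits every flaw of the two metrics it is built from.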

What about other measures?

  • Average time spent can be massively skewed by people leaving their browsers open while they aren’t at their PC
  • Average number of visits would give a decent measure of engagement, but the cookie issue would mean it would be understated.
  • Measuring subscriptions would be interesting, but these may be inactive, sites offer multiple feeds, and take-up is far from universal. As people become more adept with web browsing, RSS may gain more traction, but websites such as Alltop are showing viable alternatives to the feed-reading system.

And beyond these concerns, there is still one crucial question that remains unanswered. Who are these people?

TV, radio and press use a representative panel of people to estimate the total population. For TV, the BARB panel consists of around 11,000 people who represent the 60m or so individuals in the UK. But we are seeing that as the number of channels increases, this size of panel isn’t able to accurately capture data for the smaller channels.

So what hope is there for the web, with the multitude of sites and sub-sites with tiny audiences? Not to mention the fact that these audiences are global.

Of course, online panels do already exist. But these only sample the top x number of websites, and, as it stands, the differing figures each of them produces are treated with caution and, on occasion, suspicion. Witness the argument between Radiohead and Comscore, to give one example.

So I’m no closer on figuring out how we measure. How about what we measure?

Determining the content to be measured

If we are looking to determine advertising rates, then the easy answer is to measure any page that carries inventory. But should the quality or relevancy of the content be considered?

Sticking with UK newspaper sites, questions over what material should be audited include:

  • If we are looking at UK sites, should we only look at content orientated towards a UK audience? Should this content or audience be considered “higher quality”?
  • If we are considering the site as a newspaper, should we only look at current content? For instance, the Times has opened up its archive for perusal. Should all of this content be counted equally?
  • How relevant to the contents of the news do the stats have to be? Newspapers have employed tricks from crosswords to bingo to free DVDs in order to boost their readership, but should newspaper websites be allowed to host games, social networking spaces or rotating trivia (to give one example) as a hook for the floating link-clickers or casual browsers?
  • How does one treat viral content that can be picked up and promoted independently of the proprietor? See the story of the Sudanese man marrying the goat, which remained a popular story on BBC News for years, or the story about Hotmail introducing charges, which is brought up to trick a new batch of gullible people every year or so
  • What about if the internal search is particularly useless, and it takes several attempts to get to the intended destination?
  • And a tricky question to end on – can we and should we consider the intentions of the browser? For instance, my most popular post on this blog is my review of a Thinkbox event. Is it because it is particularly well written or interesting? No, it is because my blog appears when people search for a picture of the brain. Few of the visitors will even clock what the post is about; they will simply grab the picture and move on.

All of this makes me wonder how much of a false typology “UK Newspaper site” is in this environment. What proportion of visitors could actually be identified as being there for the news, and not because of clicking a link about the original Indiana Jones, or a funny review of the new Incredible Hulk movie?

Could those articles have been approved purely for link-bait? As they also appear in the print editions, I think not. But I’m sure it does happen.

Accounting for “performance enhancers”

In the same way as certain supplements are permitted in athletics but others are banned, should some actions that can be used to artificially boost stats be regulated?

  • Should automated pages be omitted?
  • If the New Yorker splits out a lengthy article across 12 pages, can it really be said that it is 12 times more valuable than having it appear on one page?
  • Many sites now have “see also” or “related” sidebars. Should sites that refer externally be penalised for offering choice, against those that only refer within the site itself?
  • Search engine optimisation is a dark art, but there can ultimately only be one winner. While there are premium positions in-store and on the electronic programming guide, search engines have much more of a “winner take all” system in place where the first link will get the majority of the click-throughs. Should referrals be weighted to account for this?

There are a lot of questions above, and no real answers. No measurements are perfect, but we look to be a long way off approaching acceptability in the online sphere.

This is by no means my area of expertise, and I would love to hear from anyone with their own thoughts, suggestions or experiences on the topic. I will happily be corrected on any erroneous details in this post.

sk

Photo credits:
Measurement: http://www.flickr.com/photos/spacesuitcatalyst/
Metric hairclip: http://www.flickr.com/photos/ecraftic/
Greenshield Stamps: http://www.flickr.com/photos/practicalowl/
The Incredible Spongebob-Hulk: http://www.flickr.com/photos/chris_gin/