  • About the blog

    This is the personal blog of Simon Kendrick and covers my interests in media, technology and popular culture. All opinions expressed are my own and may not be representative of past or present employers.
New MR Virtual Festival notes

I’m breaking my longest-to-date blogging absence (work-life parity should soon be restored) with two versions of the same post. This is the first.

They are related to the New MR global online conference that ran on 9th December 2010, featuring speakers and moderators across Europe, the Americas and Asia-Pacific. The event was created and organised by Ray Poynter – a long-standing, committed and energetic member of the international market research community – and his management board. In addition to the core event, various “fringe” events also took place. More info on them can be found at the 2010 Festival pages on the site.

This is the larger of the two posts, where I’ve reformatted all of the notes I made on the day and supplemented them with some additional thoughts (I’ve not yet caught up on the presentations I missed for reasons such as being asleep/exhausted, while some presentations weren’t as relevant to me and so I skipped them). So while this isn’t exhaustive, there will still be plenty of words to keep you occupied for a short while.

I’ll follow it up with a shorter post outlining my key takeaways from the day, and my overall thoughts on the event.

As this is the single longest post on this blog (circa 5,000 words), I’m taking the rare step of putting the bulk of it behind a cut (its size also means it is not fully proof-read). Click through to continue (unless you are reading via RSS).


The five questions that need to be answered about online video

I’ve recently put together a presentation deck on the state of the online video market. It consists of both the primary research that we have been conducting here, and the secondary research I have been able to source through subscription services, press releases and generous folk who put their work online – such as Ofcom and Universal McCann.

I’ve been taking the presentation around our various agency clients in order to spread the love (and of course use the face time to sell in opportunities). Thus far (touch wood) the presentation has been well received.

The general feedback I’ve been getting is that people are willing to experiment with online video, but the paucity of data makes it difficult to justify a long-term investment. Any data showing the efficacy (or otherwise) of online video is therefore valuable.

Below are the five major questions regarding online video. I’ve tried to give a steer on them but currently there are no definitive answers.

1. Who is watching?

And indeed how many. This is the crucial question for agencies. With many (but not all) moving to online video from TV, the gap where BARB audience ratings should be is extremely conspicuous. Alas, JICIMS don’t have an immediate solution, so in the interim one, or a combination, of the below is used:

  • Internal site stats. But even if they are externally validated (e.g. by ABCe) there is still the lack of transparency about whether the stats have been gamed and comparisons with other sites won’t necessarily be like for like. And of course the major problem is that it gives you numbers, but not demographics. I posted about many of the problems around online measurement here.
  • Users could be forced to register and share information before being allowed to view videos. But this contravenes the “openness” of the web
  • Comscore and Net Ratings give basic demographic details for video views and offer a competitive context. However, their methodologies aren’t universally accepted, and all impressions are treated as equal – whether an auto-roll or each ad in an ad break tagged as a separate impression
  • If none of these methods are viable, then we are forced to rely (at least partly) on survey data. And that opens up a whole new can of worms

2. Which advertising model is best?

Or, which is your favourite child?

There are a multitude of ad models and formats to choose from. I detailed many of them in a previous post here. Since writing that, I have read about two further types of ad format which add to the choice available.

Inskin Media wrap an interactive banner/border around content

MirriAd digitally incorporate brands and products into a show in post-production – whether a logo on an object, or a car driving off in the background.

Pre-rolls are the most common format (from memory, I read a stat saying two thirds of European video providers use pre-rolls), and their prevalence has driven familiarity. While people may initially have been turned off, familiarity has bred acceptance. The public generally agree that advertising is a reasonable exchange for free content.

However, not all content sites are equal. Ipsos have shown that people are less likely to find advertising reasonable on user-generated content than on professional content. This is why I am eager to find out whether YouTube’s post-roll experiment works. Despite ads only being placed after partner content, YouTube has the stigma of UGC and I believe people will have trouble accepting ads on the site, no matter what content they are placed around.

3. How effective is my advertising?

This is the big question in all forms of media, and online is no exception. However, because audiences can’t be so easily identified, it is also a big problem. To my knowledge, there are three routes to go down:

  • Pop-up/overlay advertising. We know respondents have just seen the ad (unless they are in the control group), but that immediacy is also the downside: the questionnaire is served at the point of exposure, so there is no window to allow the ad to embed into people’s consciousness.
  • Behavioural tracking. Whichever company does this first (and effectively) will become very, very rich. Obstacles include coverage (it is unlikely that all websites will participate, and even the most popular ad networks only cover a fraction of the web) and the sheer volume of data that would be generated. Analysis of longitudinal data would require a cloud/farm of Google or Amazon proportions
  • Surveys on online panels – again the can of worms of claimed, rational, after-the-fact responses among members of a panel
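Whichever route is taken, the arithmetic at the end is the same control-versus-exposed comparison. A minimal sketch of that calculation, using made-up figures (no real campaign data), might look like this:

```python
def brand_lift(exposed_positive, exposed_total, control_positive, control_total):
    """Compare a brand metric (e.g. ad recall) between respondents who saw
    the ad and a matched control group who did not."""
    exposed_rate = exposed_positive / exposed_total
    control_rate = control_positive / control_total
    absolute_lift = exposed_rate - control_rate   # percentage-point difference
    relative_lift = absolute_lift / control_rate  # uplift versus the baseline
    return absolute_lift, relative_lift

# Hypothetical survey counts: 120 of 400 exposed respondents recalled the ad,
# versus 90 of 450 in the control group.
absolute, relative = brand_lift(120, 400, 90, 450)
print(f"{absolute:.2f} points, {relative:.0%} uplift")
```

The hard part, of course, is not the sum but ensuring the control group genuinely matches the exposed group – which is exactly where the pop-up, behavioural and panel approaches differ.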

4. Why should I add online video to my media plan?

Indeed. But why should I add radio, or press, or cinema? Each medium brings a different audience and a different experience – if they are used in conjunction effectively then a stellar transmedia campaign can be executed.

My argument regarding online video is that it dovetails very nicely with television. Rather than cannibalising, it complements. TV has the mass reach and epoch-defining moments. Online video offers the shared experience asynchronously, allowing the attentive audience to interact on their terms. People tend to watch TV programmes online when they have missed them on TV, while short-form extras can deepen the experience (look at Heroes for example), and increase engagement among the TV-viewing audience.

For those that are interested in numbers, creating an accurate measure of incremental reach is vital. Touchpoints offers it at a platform level but isn’t granular enough for most situations. A tool that highlights incremental reach and frequency across a multitude of platforms and channels therefore needs to be developed.
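The calculation such a tool would need to make is simple once single-source (or fused) audience data exists. A toy sketch, with hypothetical respondent IDs standing in for a real panel:

```python
# Hypothetical single-source panel: which respondents each channel reached.
universe = set(range(1, 11))        # ten panellists in total
tv_reach = {1, 2, 3, 4, 5, 6}       # reached by the TV campaign
online_reach = {5, 6, 7, 8}         # reached by the online video campaign

combined = tv_reach | online_reach      # de-duplicated campaign reach
incremental = online_reach - tv_reach   # audience online video added on top of TV

print(f"TV reach: {len(tv_reach) / len(universe):.0%}")
print(f"Combined reach: {len(combined) / len(universe):.0%}")
print(f"Incremental reach from online: {len(incremental) / len(universe):.0%}")
```

The obstacle is not the set arithmetic but obtaining respondent-level exposure data across every platform on the plan – which is precisely what current tools lack.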

5. Where is this going?

I don’t understand financial markets but I do understand the dangers of speculation. And that’s what any answer to this question would be. Forrester, emarketer and so on may predict future audiences and revenues. But who knows what the situation will be like next year, let alone in five. How long did it take Youtube to change the market? Or iPlayer/Hulu? And what effect will Kangaroo/SeeSaw have?

And as for the unified home entertainment TV/Internet experience? I’m not even going there…

sk

How representative are Online surveys?

To answer the above question in three words: I don’t know.

Generally, I have been sceptical about the relative veracity of Online surveys. Working for a media owner, there is a general concern that moving surveys online may reduce the strength of TV and increase that of the Internet. But I am being won around.

After all, no methodology is perfect. In fact, it could be argued that all are inadequate. Even if one were able to take a census of the entire population (even the UK census only has a 94% response rate), how accurately are people able to express their unconscious thoughts, desires and opinions?

Which is why I was pleasantly surprised to read that YouGov correctly predicted the results of the London Mayoral election. Accurate polling always requires a bit of luck. When I worked at a research agency, the weighting factors for the forthcoming election were changed at the last moment, and fortunately improved the prediction. But even when considering the fluctuations, it does represent a significant victory for the online method.
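For readers unfamiliar with the mechanics, the weighting referred to here is, in its simplest cell-based form, just the ratio of a group’s population share to its sample share. A hypothetical sketch (the age bands and proportions are invented for illustration):

```python
def cell_weights(sample_counts, population_shares):
    """Simple post-stratification: weight each cell so the weighted
    sample matches the known population profile."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (sample_counts[cell] / n)
            for cell in sample_counts}

# Hypothetical online panel skewed young: 60/40 in the sample
# versus an assumed 50/50 split in the population.
weights = cell_weights({"18-34": 60, "35+": 40}, {"18-34": 0.5, "35+": 0.5})
# Each under-35 respondent now counts for ~0.83 of a person and each 35+
# respondent for 1.25, so the weighted profile matches the population.
```

Real pollsters weight on several interlocking variables at once (via raking or similar), which is why a late change to the factors can visibly move a prediction.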

There are far better resources than this blog debating the relative merits and drawbacks of research methodologies, but my softening of opinion has come about for two main reasons to which I have recently given more thought:

Societal changes are making the traditional methodologies less accurate over time: The rise of the one-person household makes it more difficult for face-to-face interviewers to catch people at home at a time when they are willing to participate. Telephone research is becoming less representative thanks to the rise of mobile phones at the expense of landlines, and the popularity of the TPS. Even if mobile numbers are included in the sample, people are far less willing to participate, since mobile phones are more personal and the call is therefore more intrusive. And while the TPS doesn’t cover market research, some companies voluntarily clean TPS numbers from their samples, since the public perception is that research is no different to telemarketing. And as online penetration increases, one would expect survey representativeness to follow suit.

Online research is more conducive to considered opinion: Online surveys produce more honest responses thanks to the anonymity provided. Without an interviewer waiting for an answer, the respondent can also give a more considered answer (if they so desire). Combined, these will produce more accurate data.

Of course, these points aren’t uniformly positive. Even though Internet access is increasing, the proportion of people actively on a research panel will still be quite small. Gritz (2004) achieved an 8.4% sign-up rate for an online survey, and I wouldn’t be surprised if this figure were lower if the experiment were repeated now. And online surveys may allow for more considered responses, but without an interviewer probing, the answers may be ambiguous and thus meaningless. But, for me at least, the benefits far outweigh the drawbacks.

I do still have one major concern with online surveys. Without any proof, I have the perception that the attitudinal differences between those that take part in online surveys and those that don’t are greater than the equivalent differences between responders and non-responders in other methodologies. Those that join online panels are self-selecting, and will tend to spend more time online than the average person.

Sticking with YouGov, their Brand Index (which, in general, I like) ranked Google and Amazon as the top two brands in 2007. Would they still come out on top in an offline survey, factoring in the third that don’t use the Internet and those that spend more time with traditional media? I’m not so sure.

But for me at least, I have far fewer reservations about moving research online than I had a year ago.

I’d be interested in hearing other people’s thoughts on the benefits and drawbacks of the shift. Am I late to the party in accepting online, or do others still hold reservations?

sk