Mediatel Media Playground 2011

My previous blog post covered my notes on Broadcast in a Multi-Platform World, which I felt was the best session of the day. Below are my notes from the other 3 sessions (I didn’t take any notes during the bonus Olympics session).

The data debate

Chaired by Torin Douglas, Media Correspondent for the BBC

Speakers:
Andrew Bradford, VP, Client Consulting, Media at Nielsen
Sam Mikkelsen, Business Development Manager at Adalyser

Panellists:
David Brennan, Research & Strategy Director at Thinkbox
Kurt Edwards, Digital Commercial Director at Future
Nick Suckley, Managing Director at Agenda21
Bjarne Thelin, Chief Executive at BARB

Some of the issues touched upon in this debate were interesting, but I felt they were dealt with too superficially (though as a researcher, I guess it was inevitable I’d say that).

David Brennan thinks we need to take more control over data and how we apply it. There is a dumb acceptance that anything created by a machine must be true, and we’ve lost the ability to interrogate the data.

Nick Suckley thinks the main issue is the huge productivity problem with manual manipulation of data from different sources (Google has been joined by Facebook, Twitter and the mobile platforms), but this also represents a huge opportunity. He thinks the fight is not about who owns the data, but about who puts it together.

Torin Douglas posited whether our history of currencies meant that we weren’t so concerned with data accuracy, since everyone had access to the same information. Bjarne Thelin unsurprisingly disagreed with this, pointing out the large investment in BARB shows the need for a credible source.

David Brennan said his 3 Es of data are exposure (buying), engagement (planning) and effectiveness (accountability).

Nick Suckley thinks people would be willing to give up information in exchange for clear benefits, but most don’t realise what is already being collected about them.

Kurt Edwards thinks social media is a game-changer from a planning point of view, as it hands power back to the client. There is real-time visibility, but the challenge is not to react to a few negative comments.

David Brennan concurred, and worried about conclusions drawn from social media data not being corroborated by other channels. You need to go out of your way to augment social media data with other sources to get the fuller picture.

Bjarne Thelin gave the example of the BBC’s +7 viewing figures to show that not all companies are focusing purely on real-time. He also underlined the fact that inputs determine outputs, so you need to know what goes in.

David Brennan concluded by saying that in the old days you knew what you were getting. Now it is overblown, with journalists confused as to what is newsworthy or significant.

Social media and gaming

Chaired by Andrew Walmsley, ex i-Level

Speakers:
Adele Gritten, Head of Media Consulting at YouGov
Mark Lenel, Director and senior analyst at Gamesvison

Panellists:
Henry Arkell, Business Development Manager at Techlightenment
Pilar Barrio, Head of Social at MPG
Toby Beresford, Chair, DMA Social Media Council at DMA
Sam Stokes, Social Media Director at Punktilio

The two speakers gave a lot of statistics on gaming and social gaming, whereas the panel focused upon social media. This was a shame, as the panel could have used more variety. All panel members were extolling the benefits of social media, and so there was little to no debate.

There was discussion about the difficulty in determining the value of a fan, the privacy implications, Facebook’s domination across the web and the different ways in which social media can assist an organisation in marketing and other business functions.

Mobile advertising

Chaired by Simon Andrews, Founder of addictive!

Speaker:
Ross Williams, Associate Director at Ipsos MediaCT

Panellists:
Gary Cole, Commercial Director at O2
Tamsin Hussey, Group Account Director at Joule
Shaun Jordan, Sales Director at Blyk
Will King, Head of Product Development at Unanimis
Will Smyth, Head of Digital at OMD

Ross Williams gave an interesting case study on Ipsos’ mobi app, which tracked viewer opinion during the Oscars.

Simon Andrews’ approach to chairing the debate was in marked contrast to the previous sessions. He was less a bystander and more a provocateur: he clearly stated his opinions and asked the panel to follow up. He was less tolerant of bland sales-speak than the previous chairs, but he was also less even-handed, with the majority of panel time taken up by his exchanges with Will Smyth.

Will King thinks m-commerce will boost mobile in the way e-commerce boosted digital. Near field communication will move mobile into the real world.

Gary Cole pointed out that mobile advertising is only a quarter of a percent of ad spend, but that clients should think less about display advertising and less of mobile as a distinct channel. Instead, mobile can amplify other platforms in a variety of ways.

Tamsin Hussey said that as there isn’t much money in mobile, there is no finance to develop a system for measuring clicks and effectiveness across all channels. Currently, this has to be done manually.

Will Smyth said the app store is the first meaningful internet experience on the mobile. Mobile is still young, and there is a fundamental lack of expertise at middle-management level across the industry. Social is currently getting all the attention (“Chairman’s wife syndrome”) but mobile has plenty to offer.

sk


The gamification of surveys

How can gaming principles be used in research? This is a fascinating area that I know Tom Ewing has been spending some time thinking about.

I haven’t, but a combination of some frustrations on a project and reading this excellent presentation, entitled “Pawned. Gamification and its discontents”, got me thinking specifically about how gaming principles could contribute to data quality in online (or mobile) surveys.


The problem

There are varying motivations for respondents to answer surveys, but a common one is economic. The more surveys completed, the more points accrued and money earned.

In its basic sense, this itself is a game. But like a factory production line team paid per item, it promotes speed over quality.

As such, survey data can be poorly considered, with minimal effort going into open-ended questions (rendering deliberative questions pointless) and the risk of respondents “straight-lining” or, more subtly, randomly selecting answer boxes without reading the questions.

The solution

Some of these issues can be spotted during post-survey quality checks, but I believe simple gaming principles could be used (or at least piloted) to disincentivise people from completing surveys poorly.

Essentially, it involves giving someone a score based on their survey responses. The measures and weights of a scoring system would evidently require tweaking over time, but it could consist of metrics such as:

  • Time taken to complete the survey (against what time it “should” take)
  • Time taken on a page before an answer is selected
  • Consistency in time taken to answer similar forms of questions
  • Length of response in open-ended answers
  • Variation in response (or absence of straight lines)
  • Absence of contradictions (a couple of factual questions can be repeated)
  • Correct answers to “logic” questions
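
To make the idea concrete, here is a minimal sketch of how such metrics might be combined into a single score. Everything in it (the metric names, the weights, and the assumption that each signal is pre-normalised to the 0–1 range) is a hypothetical illustration, not a description of any real panel system.

```python
# Hypothetical weighted quality score for one survey response.
# Metric names, weights and normalisation are illustrative only.

def quality_score(metrics: dict) -> float:
    """Combine per-response signals (each normalised to 0-1) into a 0-100 score."""
    weights = {
        "time_ratio": 0.25,        # actual vs "should take" completion time
        "open_end_length": 0.20,   # effort in open-ended answers
        "answer_variation": 0.25,  # absence of straight-lining
        "consistency": 0.15,       # repeated factual questions agree
        "logic_checks": 0.15,      # "logic" trap questions answered correctly
    }
    # Missing signals count as zero rather than raising an error.
    total = sum(weights[name] * metrics.get(name, 0.0) for name in weights)
    return round(total * 100, 1)

example = {
    "time_ratio": 0.9,
    "open_end_length": 0.6,
    "answer_variation": 0.8,
    "consistency": 1.0,
    "logic_checks": 1.0,
}
```

Keeping the weights server-side (and opaque) matters here: as noted below under the challenges, a published formula would be easier to game.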

A score can be calculated and shared with the respondent at the end of the survey. Over time, this could seek to influence the quality of responses via:

  • Achievement – aiming to improve a quality score over time
  • Social effects – where panels have public profiles, average and cumulative quality scores can be publicly displayed
  • Economic – bonus panel points/incentives can be received for achievements (such as a high survey quality score, or an accumulation of a certain number of points)
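
As an illustration of the economic lever, a hypothetical bonus scheme might award extra panel points on top of the standard per-survey reward. The thresholds and point values below are invented for the sketch:

```python
# Hypothetical bonus scheme: extra panel points on top of the standard
# per-survey reward. Thresholds and point values are illustrative only.

def bonus_points(quality_score: float, surveys_completed: int) -> int:
    points = 0
    if quality_score >= 80:
        points += 50   # reward for a high single-survey quality score
    if surveys_completed > 0 and surveys_completed % 10 == 0:
        points += 100  # milestone bonus for accumulated completions
    return points
```

For example, a respondent scoring 85 on their tenth survey would collect both bonuses, while a low-quality mid-run response would collect neither.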

The challenges

For this to work successfully, several challenges would need to be overcome:

  • Gaming the system – there will always be cheats, and cheats can evolve. Keeping the scoring system opaque would mitigate this to an extent. But even with some people cheating the system, I contend the effects would be smaller with these gaming principles than without
  • Shifting focus – a danger is that respondents spend more time trying to give a “quality” answer than giving an “honest” answer. Sometimes, people don’t have very much to say on a subject, or consistently rate a series of attributes in the same manner
  • Alienating respondents – would some people be disinclined to participate in surveys due to not understanding the mechanics or feeling unfairly punished or lectured on how best to answer a survey? Possibly, but while panels should strive to represent all types of people, quality is more important than quantity
  • Arbitrariness – a scoring system can only infer quality; it cannot actually get into the minds of respondents’ motivations. A person could slowly and deliberately go through a survey while watching TV and not reading the questions. As the total score can never be precise, a broad scoring system (such as A-F grading) should be used rather than something like an IQ score.
  • Maintaining interest – this type of game doesn’t motivate people to continually improve. The conceit could quickly tire for respondents. However, the “aim of the game” is to maintain a minimum standard. If applied correctly, this could become the default behaviour for respondents with the gaming incentives seen as a standard reward, particularly on panels without public profiles.
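
On the arbitrariness point: since the score can only ever be a rough inference, the feedback shown to respondents could use broad bands rather than a precise number. A minimal sketch, with invented cut-offs:

```python
# Hypothetical mapping from an imprecise numeric score to a broad A-F band,
# so the feedback never implies more precision than the system can justify.

def grade(score: float) -> str:
    bands = [(90, "A"), (75, "B"), (60, "C"), (45, "D"), (30, "E")]
    for cutoff, letter in bands:
        if score >= cutoff:
            return letter
    return "F"
```

A respondent then sees only “B” rather than, say, 78.3, which also makes small scoring-system tweaks invisible to panellists.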

Would it work? I can’t say with any certainty, but I’d like to see it attempted.

sk
