The future of mobile at the RSA

I’m a big fan of the RSA, and should really attend their excellent events more often than I do. However, I did attend the Future of Mobile event last night.

Sadly, it was the least interesting event I’ve attended (though the standard is exceptionally high). The keynote – billed as an insight into what the next few years may bring in terms of new products and practices, new opportunities for creativity, collaboration and economic growth, the role of new communications in shaping social norms and behaviours, and the effect this will have on individuals, organisations and societies – was little more than a corporate sales pitch. However, I did make some notes and the event was, on balance, worth attending.

The keynote

The keynote speaker was Lee Epting (Director of Content Services at Vodafone). In her talk she referenced three Vodafone initiatives in the developing world, namely

  • M-Pesa – the mobile banking/money transfer service set up in Kenya
  • An M-Health initiative
  • Future Agenda – driving towards sustainability, such as machine-to-machine communications around load capacities or metering of utilities

Each initiative had its own uplifting video. Perhaps I’m being an overly sensitive Western liberal apologist, but I found the tone of the videos quite patronising and demeaning. The African people in the videos may well have independently said things like “I feel like a real businessman now” or “And now I even know the real medical terms”, but those quotes didn’t need to be included. If the videos had been about Western initiatives, they would have focused on the tangible benefits, not on trying to give us a warm fuzzy feeling about helping those less fortunate.

The majority of the talk was based upon initiatives in the developing world (which makes sense, since Vodafone and mobiles can bypass computers there, while in developed nations they run the risk of commoditisation into “dumb pipes”), but Lee Epting did finish on a few trends visible in our own society:

  • People tracking – she claimed that acceptance of this is accelerating. This may be true, but accelerating from a very small base. A minority may opt into location-based services, and ticketless transport may prove popular, but I’d say it would infiltrate by stealth. She also mentioned vehicle tracking and how it will help pricing for toll roads
  • Choice editors – we are becoming reliable news sources, so it is about curation as much as consumption

The responses

The speech was followed by two responses. The second was from Ralph Simon (CEO of The Mobilium International Advisory Group), whom I’d previously criticised when he chaired the Harold Evans lecture on innovation. He’s much better as a panellist, since he basically just tells loosely connected anecdotes. He also has excellent enunciation. In his brief response he talked about Couch Surfing and how communications can amplify lives, but also about shared clutter and the need for curators to navigate it.

A longer response was from Christian Lindholm (Partner and Director at the convergence design agency Fjord), who made some fascinating provocations and was by far the best thing about the event.

  • Choice quotes include “The future is always here and now but somehow we ignore it” and “Humans are obsessed with objects”
  • He believes the Nokia 2110 simplified the phone and became the first hit phone, and that the iPad is the Nokia 2110 of computing. The iPad gives a power of mobility that the IT department can’t control. There is also a significant difference between the Wi-Fi and 3G models: the Wi-Fi model comes from the PC industry and drains the battery, while the 3G model is energy-efficient and gives ubiquitous communications
  • A big thing in the future will be the digitisation of the wallet. It needs a big disruption, as elastic process innovation – adding chips to everything – won’t work, since proper digitisation requires screens, profiles and so on. The current “two-handed” analogue wallet is “retarded”, but it makes sense for incumbent companies who are invested in producing cheap thin strips of plastic. In the question and answer session, he speculated that Amazon might make a play in this area
  • He sees the next megabrand as Foursquare, what with every classroom at Harvard Business School already mapped onto it.
  • We need a new vocabulary for next-generation communications. It is not multimedia, video, smart or other industry jargon, but will come from the users. This seems to be “facebooking”, which is aggregating all forms of content and creating an internet of people.
  • An internet of people means everyone will be on Facebook, since everyone will want to communicate. He sees Facebook negotiating privacy in the same way Google negotiates copyright – move the boundaries two steps forward, apologise and take one step back, and gradually monetise it. He sees the openness of the web as the counterweight to Facebook or Google dominance, and believes it should be preserved.

Due to a late start, there was little time for questions. In the introduction to the questions, the chair Luke Johnson made a barbed comment about people playing on their mobile devices rather than listening to the speakers, and rather unfairly picked out one person in the front row. Personally, I was on Twitter throughout (don’t RSA hashtags encourage this sort of thing?) and the general tone of chat was similar to my thoughts – mild disappointment.

Final thoughts

Christian Lindholm and Ralph Simon both seemed to disagree with Luke Johnson’s contention that split attention is a bad thing. Simon quoted Brian Eno’s idea that the genius is being replaced by the “scenius”. Though, as Steven Johnson has recently written, perhaps the idea of a lone genius is a myth.

We may end up doing things less well than if we concentrated solely on them, but split attention and mass collaboration provide other benefits, such as broader scope and more rounded influences. Technological advancement has reached the point where it is almost impossible for a single person to know everything about a particular topic – we need specialists and teams working together. I’m in favour of our new hyperlinked working practices – those arguing against them are analogous to Socrates hating the written word because he thought it reduced the quality of discourse and dialogue.

So, in summary, it wasn’t an unmissable event and there wasn’t a whole lot on the future of mobile and its effects on society (at least in areas directly relevant to my job or my interests), but there were a couple of interesting nuggets I took away.

However, if you are interested in hearing more, an audio recording of the event is available here.

sk

Image credit: http://www.flickr.com/photos/gibbons/343384475

The battle of big versus small

EDIT: An updated version of this article can be found here

Précis

Particularly in research, but also in other marketing disciplines, big agencies and small agencies will compete for tenders against one another. Normally, a single agency is successful. I find this strange, as the benefits of several agencies specialising appear, to me at least, to be greater than those of consolidation.

Introduction

Having worked at both a large research agency (GfK NOP, though it was NOP World when I joined) and a small agency (Essential Research, where I’m currently employed), I read with interest the “Is Bigger Better?” article in October’s Research magazine (EDIT: “Is Bigger Better?” is now online).

The article took the form of a debate, with Paul Edwards (Chairman of TNS-RI) representing “big” and Jan Shury (Joint Managing Director of IFF Research) representing “small”. Their main arguments are summarised below

The initial debate

Why bigger is better – Paul Edwards:

  • Doing everything in-house, worldwide, affords a consistent standard
  • There are big, validated products that are economical to use
  • There is a wealth of diverse talent for more bespoke requirements
  • The size of the company means clients are more likely to find a suitable contact with the right frame of mind
  • There is a wider range of people with different training, sector experience and tenure
  • The resources are available to be proactive in thought leadership, conference attendance and so on
  • Investment in IT can be made to fuse different techniques or data sets together
  • It is safer than a small agency – efficient, economical, fast, financially secure and properly audited
  • “For me, it is about playing the odds”

Why smaller is better – Jan Shury:

  • Thinking small makes for a more bespoke and friendly experience
  • Whoever runs the business owns the business – there is no plc board to report to
  • There is one building with one culture
  • People live the brand by getting more involved and having an entrepreneurial spirit
  • Management are involved and can apply their knowledge of running a business
  • There is high visibility and high reward
  • There is no “One Size Fitz Hall” career progression
  • There is no separate sales team – the people pitching are the team that will work on the project
  • They are more adaptable to client needs
  • “The client views us as the brains of the operation, and the large research companies as the data factories”

My criticisms of the initial debate

Why bigger isn’t always better

  • I question the existence of a consistent standard. There may be consistent processes, but a team and its output are only as good as the weakest link – and the more people involved, the weaker the link
  • The need for many staff (and high staff turnover in general) means that recruitment isn’t as careful or deliberate as at a smaller agency
  • Do projects really get assigned based on personality? Surely workloads and specialisms are the more pragmatic considerations
  • Furthermore, the high staff turnover means relationships are lost and working cultures are rarely maintained
  • Big companies do offer great training programmes, but there is rarely the opportunity to apply learnings. Graduates get trained up and leave. When I left GfK after 3 years, only 5 of my graduate intake of 20 people remained. Furthermore, we were all promoted at similar rates for “equality”
  • Big companies may have more resource to sink into being pro-active, but small companies are also able to do this, if marketing is viewed as an investment. I’ll be speaking at a conference in a few weeks, for instance.
  • Big companies may be fast in turning things around, but how agile are they when it comes to experimentation?
  • Large agencies have departments to meet all research needs, but are they masters of these trades or merely jacks of all?
  • “Playing the odds” makes it sound like a science, when I think the interpretation and implementation of research is very much an art

Why smaller isn’t always better

  • Small agencies need to pay the bills and so their high morals may be compromised in order to keep the business afloat
  • Small agencies have very particular cultures and personalities – it can take buyers a long time to find the company with the right fit
  • Personality and quality of work flow from the owner – but being able to run a company doesn’t necessarily make someone a good researcher
  • There is less inherent experience in specialised requirements, so there can be elements of experimentation and failure on projects
  • Workloads for small agencies vary to a greater degree as there is less ability to spread jobs around – this creates uneven working hours for staff and can mean slightly more variable quality for clients
  • Staff do take on more responsibility but can also burn out, or seek other employment opportunities with more forgiving schedules (particularly when children enter the picture)
  • Credit can often be centred on the owner, who is the company figurehead. I note that there is a small note at the bottom of Jan’s article saying that Mark Speed also contributed (though admittedly they are both joint MDs)

Does it have to be a zero sum game?

This was something I’d already been considering, and Jan’s quote above is very telling: “The client views us as the brains of the operation, and the large research companies as the data factories”.

This quote isn’t necessarily asserting that small agencies are better than big agencies; just that they are different. So why can’t both be employed for different aspects of a research project? It is no different to a brand employing both a media agency and a creative agency, or a sommelier not serving the bread at the dinner table.

My solution

The set-up

As outlined above, there are advantages and drawbacks to working with both big and small agencies. So why not try to leverage the benefits of both while minimising the drawbacks?

  • Big agencies have scale, security and the resources to standardise processes. They should focus upon data collection, management and administration, perhaps extending into training on systems and processes. Reliability is prioritised.
  • Smaller agencies have stricter recruitment criteria, generally employing more driven people who get involved in a broad range of tasks. They can use their experience and immersion to hypothesise on research findings and assist in implementation. Creativity is prioritised.

A big agency is thus employed for data management. A small agency for consultancy.

Evidently, employing two very different agencies to work alongside each other on a project (whether ad hoc or continuous) is problematic.

The challenges

  • It has the potential to be an uneven relationship. The small agency is the driver of the project, which effectively makes the larger agency the car. The driver steers and provides direction and control; the car uses its horsepower to get the job done
  • The extra investment in finding agencies that meet client requirements and also work well with each other requires a more long-term strategic approach to research (not dissimilar to advertising accounts). In media at least, research budgets tend to be set on an annual basis as a response to strategic objectives (which can differ vastly, year on year)
  • The client may need to get more involved in mediating between the two agencies, ensuring a fair division of labour on a project that continues to focus on the end objectives
  • Communication becomes more difficult as it becomes more open. A small agency may have previously sub-contracted to a large agency and hidden all the processes (which, let’s face it, clients don’t always care about). Now they are given equal exposure
  • The goal of most small agencies is to grow (they don’t seem to believe in Seth Godin’s Dip) – at what point do they become too large to continue offering the benefits of being a smaller agency?
  • The model is a bit unkind to larger agencies – it is possible to have a boutique presence within a larger infrastructure. I know QMedia have been pretty successful at this in the past
  • The goal is to leverage the benefits of both types of agency, but it is also possible to get the worst of both worlds – inflexible data collection poorly implemented

The way to get it to work

  • The big agency would probably be recruited first – there is a smaller number to shortlist, and it is important to get a partner that can be trusted across the gamut of potential methodologies required
  • The small agency is harder to recruit due to the sheer number available, and the importance of finding people with the right skills and culture. Extreme care should be taken in selection
  • Before finalising partnerships, chemistry meetings should take place and “rules of engagement” clearly established
  • The client needs to be very clear about their priorities in project management (quality, time, cost or scope), the logistics of the relationships and the anticipated lines of communication. Will it be a three-way relationship, will the client act as a central point of contact between both agencies, or will the smaller agency sit between the client and the larger agency?
  • Piloting. There will inevitably be teething issues. The first project commissioned using this method shouldn’t be of critical importance with strict time-sensitive outputs

Conclusions

Could this system work? I think it could for companies that operate long-term strategic research programmes requiring both consistency of practice and nuanced interpretation. While there are many challenges in getting competitors to work together (though, arguably, they are no longer competitors if the industry fragments into consultancy and data management), I see it as potentially more beneficial than employing a single agency that can effectively perform some, but not all, of the requirements.

sk

Image credit: http://www.flickr.com/photos/hand-nor-glove/2240709916

The gamification of surveys

How can gaming principles be used in research? This is a fascinating area that I know Tom Ewing has been spending some time thinking about.

I haven’t, but a combination of some frustrations on a project and reading this excellent presentation, entitled “Pawned. Gamification and its discontents”, got me thinking specifically about how gaming principles could contribute to data quality in online (or mobile) surveys.

The presentation is embedded below.

The problem

There are varying motivations for respondents to answer surveys, but a common one is economic. The more surveys completed, the more points accrued and money earned.

In its basic sense, this itself is a game. But like a factory production line team paid per item, it promotes speed over quality.

As such, survey data can be poorly considered, with minimal effort going into open-ended questions (making deliberative questions pointless) and the risk of respondents “straight-lining” or, more subtly, randomly selecting answer boxes without reading the questions.

The solution

Some of these issues can be spotted during post-survey quality checks, but I believe simple gaming principles could be used (or at least piloted) to disincentivise people from completing surveys poorly.

Essentially, it involves giving someone a score based on their survey responses. A scoring system would evidently require its measures and weights to be tweaked over time, but it could consist of such metrics as the following (a rough sketch of a scoring function appears after the list):

  • Time taken to complete the survey (against what time it “should” take)
  • Time taken on a page before an answer is selected
  • Consistency in time taken to answer similar forms of questions
  • Length of response in open-ended answers
  • Variation in response (or absence of straight lines)
  • Absence of contradictions (a couple of factual questions can be repeated)
  • Correct answers to “logic” questions
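To make this concrete, here is a rough sketch of how such a scoring function might be put together. All of the metric names, weights and thresholds below are hypothetical – they would need calibrating against real panel data – but it shows how signals like those above could be combined into a single score.

```python
# Hypothetical sketch of a survey "quality score".
# All metric names, weights and thresholds are illustrative assumptions,
# not taken from any real panel system.

from dataclasses import dataclass
from typing import List

@dataclass
class Response:
    total_seconds: float           # time taken to complete the survey
    expected_seconds: float        # how long the survey "should" take
    open_end_lengths: List[int]    # character counts of open-ended answers
    grid_answers: List[List[int]]  # answers to each grid/matrix question
    logic_checks_passed: int       # "logic" questions answered correctly
    logic_checks_total: int

def straight_lined(grid: List[int]) -> bool:
    """Treat a grid as straight-lined if every item received the same answer."""
    return len(set(grid)) == 1

def quality_score(r: Response) -> float:
    """Combine simple signals into a 0-100 quality score (weights are guesses)."""
    score = 0.0

    # 1. Speeding: full marks if at least ~60% of the expected time was taken.
    pace = min(r.total_seconds / r.expected_seconds, 1.0)
    score += 30 * min(pace / 0.6, 1.0)

    # 2. Open-ended effort: reward longer answers, capped at 100 characters each.
    if r.open_end_lengths:
        effort = sum(min(n, 100) for n in r.open_end_lengths)
        score += 25 * effort / (100 * len(r.open_end_lengths))

    # 3. Straight-lining: credit only the grids that show some variation.
    if r.grid_answers:
        varied = sum(1 for g in r.grid_answers if not straight_lined(g))
        score += 25 * varied / len(r.grid_answers)

    # 4. Logic and consistency checks (repeated factual or trap questions).
    if r.logic_checks_total:
        score += 20 * r.logic_checks_passed / r.logic_checks_total

    return round(score, 1)

# Example: a respondent who rushed slightly but otherwise answered carefully.
example = Response(total_seconds=480, expected_seconds=600,
                   open_end_lengths=[60, 120],
                   grid_answers=[[3, 4, 2, 5], [4, 4, 4, 4]],
                   logic_checks_passed=2, logic_checks_total=2)
print(quality_score(example))  # -> 82.5
```

Per-page timings and consistency in time taken across similar question types could be folded in as further weighted terms in the same way.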

A score can be collected and shared with the respondent at the end of the survey. Over time, this could seek to influence the quality of response via

  • Achievement – aiming to improve a quality score over time
  • Social effects – where panels have public profiles, average and cumulative quality scores can be publicly displayed
  • Economic – bonus panel points/incentives can be received for achievements (such as a high survey quality score, or an accumulation of a certain number of points)

The challenges

For this to work successfully, several challenges would need to be overcome

  • Gaming the system – there will always be cheats, and cheats can evolve. Keeping the scoring system opaque would mitigate this to an extent. But even with some people cheating the system, I contend the effects would be smaller with these gaming principles than without
  • Shifting focus – a danger is that respondents spend more time trying to give a “quality” answer than giving an “honest” answer. Sometimes, people don’t have very much to say on a subject, or consistently rate a series of attributes in the same manner
  • Alienating respondents – would some people be disinclined to participate in surveys due to not understanding the mechanics or feeling unfairly punished or lectured on how best to answer a survey? Possibly, but while panels should strive to represent all types of people, quality is more important than quantity
  • Arbitrariness – a scoring system can only infer quality; it cannot actually see into respondents’ minds or motivations. A person could slowly and deliberately go through a survey while watching TV and not really reading the questions. As the total score can never be precise, a broad scoring system (such as A-F grading) should be used rather than something like an IQ score (a sketch of such banding follows this list).
  • Maintaining interest – this type of game doesn’t motivate people to continually improve. The conceit could quickly tire for respondents. However, the “aim of the game” is to maintain a minimum standard. If applied correctly, this could become the default behaviour for respondents with the gaming incentives seen as a standard reward, particularly on panels without public profiles.
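Picking up on the arbitrariness point, the numeric score is probably best reported as a deliberately coarse band rather than a precise figure, with the bonus incentives mentioned earlier hanging off the band rather than the raw number. A minimal sketch, again with invented thresholds and bonus values:

```python
# Hypothetical A-F banding of the quality score, with bonus panel points
# awarded only for the top bands. Thresholds and values are assumptions.

def grade(score: float) -> str:
    """Map a 0-100 quality score onto a coarse A-F band."""
    for threshold, letter in [(90, "A"), (75, "B"), (60, "C"), (45, "D"), (30, "E")]:
        if score >= threshold:
            return letter
    return "F"

def bonus_points(score: float) -> int:
    """Award a small panel-point bonus only for A or B grades."""
    return {"A": 50, "B": 20}.get(grade(score), 0)

print(grade(82.5), bonus_points(82.5))  # -> B 20
```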

Would it work? I can’t say with any certainty, but I’d like to see it attempted.

sk


Launching a publishing career via Tumblr

A lot of ink has been spilt, and even more keys have been bashed, on the topic of the changing publishing industry. A good starting point for those less familiar with the movements would be my review of the “From Hardbacks to Hot Bytes” event, with talks from Gerd Leonhard and Dominic Pride.

It is clear the internet has led to a greater democratisation of the book industry, with new economies and a changing role of publishers.

But it is also interesting to note how people are using the internet to leverage traditional book deals.

This isn’t a new phenomenon – Dickens serialised his work before it was published in a single volume, and many a newspaper columnist or cartoonist has subsequently earned a book deal.

The newspaper analogy is pertinent, as there are erudite and thoughtful bloggers who use their blogs as a platform to showcase their writing abilities or specialist knowledge, in order to publish a related book. Think Chris Brogan or Cory Doctorow, to give but two examples. Similarly, there are comics such as Freakangels and xkcd that have been converted into trade format. (Note: all links go to the blogs, which in turn link to the books)

But the nature of the topics mined for the paperbacks appears to have shifted. Simpler, more immediate books. Effectively, web content transferred to paper format.

Again, these types of books aren’t new. Think of titles such as “The little book of complete bollocks” – the sort you’d find next to the counter at HMV for £2.99. The sort of books that are always bought as gifts for others, never for oneself.

But the balance of power in this genre appears to be shifting to the web. And Tumblr appears to be at the centre of this.

Not all of these books originate on Tumblr (Stuff White People Like didn’t), but the directory of Tumblr books is continuing to expand. Think Garfield Minus Garfield, Look At This Fucking Hipster or This Is Why You’re Fat. Slaughterhouse 90210 can’t be far off.

Why is this? I’m not entirely sure, but I suspect it is because the barriers to entry are now so low.

  • Tumblr is one of the easiest blogging platforms to use (It is effectively an online scrapbook)
  • The content is fairly low involvement – after the initial moment of inspiration, adding content is pretty easy. No long, thought-out posts; just a picture and a pithy comment
  • While high quality will hopefully, eventually, rise to the top, the nature of distribution is to an extent random. Being reblogged, retweeted or even getting a mainstream media mention is largely uncontrollable – two identical pieces of content could have very different audiences depending on the serendipity of who happened to check their feeds at a certain point in time

I think this randomness is quite important. There is a huge number of blogs of this nature, and thus the quality is of course highly variable. Over the past month, I’ve seen Hungover Owls, Sad Don Draper, Rosa DeLauro is a fucking hipster and Fuck Yeah Prancing Cera. Clearly, not all of these (if any) are aspiring to book deals but the rules of the game appear to be set.

I wonder whether this will be a passing trend, or something that will continue. Good ideas will always come along – whether on Tumblr or in a literary agent’s office – but the economics are changing substantially. For instance, I have no idea how copyright works on this type of blog; it’s highly unlikely that all the images used have received clearance or a Creative Commons licence.

sk

Image credit: http://www.flickr.com/photos/andyi/2369617357

Mobile internet adoption isn’t an inevitability

To tie in with the MRG conference, Mediatel is running a series of opinion pieces from the speakers.

Mine is on the diffusion of innovation with regard to the mobile internet (I’ll be speaking about Essential’s Brandheld mobile internet project at the conference). I’m not sure if it will eventually go behind a paywall or not, but the article can be found here.

In it, I say that the majority of people will eventually have powerful internet-enabled phones, but that adoption of the mobile internet isn’t guaranteed as

  • Ownership doesn’t equate to usage
  • The mobile shouldn’t seek to replicate the computer
  • Needs and behaviours vary across the adoption curve
  • Usage does not always correspond to value
  • Seek to surprise

Each of these points is explained within the article, which even has a photo of me adorning the page.

sk
