  • About the blog

    This is the personal blog of Simon Kendrick and covers my interests in media, technology and popular culture. All opinions expressed are my own and may not be representative of past or present employers.

Social media dichotomies

I’ve been thinking about the different types of social media service. Structurally, services can differ greatly from one another. Below are a few bipolar scales on which different services can find themselves.

Structure

Public <——————> Private

Permanent <——————> Transient

Centralised <——————> Decentralised

Hierarchical <——————> Non-hierarchical

Automatic <——————> Manual

Exclusive <——————> Inclusive

Zero-sum <——————> Shared gain

Single focus <——————> Multiple focus

Push <——————> Pull

Usage

Personal <——————> Professional

Active <——————> Ambient

Flexible <——————> Fixed

Expert <——————> Amateur

Give <——————> Take

Create <——————> Consume

I realise that, without proper definitions, several of these scales overlap with one another. Nevertheless, this is my starting point, and I’d be interested to know if I’ve overlooked any important dimensions.

sk


Connected: The amazing power of social networks and how they shape our lives

Nicholas Christakis speaks at the RSA on the power of social networks

“Connected: The amazing power of social networks and how they shape our lives” was the title of the talk given by Dr Nicholas Christakis at the RSA earlier. Due to rather poor time management, I didn’t make it to the event itself, but followed it online. This link should eventually have the video and downloadable audio of the event.

I’d recommend checking out the full talk, as Christakis is an engaging speaker and his theories make a lot of sense. Rather than recap the full session here, I’ll instead focus on a few areas.

The talk

The hypothesis of the talk (and book) is that social context plays an important part in our behaviour and attitudes, and that our ties tend to form groups of like minds. Things ultimately spread through networks.

In his data visualisations, he displayed his theories by using nodes to represent people, with lines acting as connectors.

The number three was a dominant theme throughout the talk.

Christakis noted that there are three theories of how things cluster:

  • Induction – Person A’s behaviour directly affects Person B, who then mimics Person A
  • Homophily – Person A and Person B both have the pre-existing condition independently, and group together because of this
  • Confounding – Person A and Person B are proximate, and share an exposure to an external factor

The confounding theory refutes the idea of network effects. Yet for network effects to be proven, the nature of the connections needs to be understood:

  • Mutual friendship – Person A and Person B each consider the other a friend
  • Ego-perceived friendship – Person A befriends Person B, but Person B ignores them
  • Alter-perceived friendship – Person B befriends Person A, who ignores Person B

Christakis argues that different relationships will have different effects. He notes that if we were to map our relationships, they wouldn’t form a uniform pattern like a regular lattice, but would instead vary across three dimensions:

  • The number of friends/connectors per person/node
  • The interconnectedness of friends – are the nodes I am connected to also connected to one another?
  • The position within the overall network – is my node in the centre or towards one of the edges?
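As a rough illustration (my own sketch, not from the talk), the first two of these dimensions – the number of connections, and the interconnectedness of friends (the local clustering coefficient) – can be computed for a small, entirely hypothetical friendship network:

```python
# Hypothetical friendship network: nodes are people, adjacency sets are
# mutual connections (all names and links invented for illustration).
network = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

def degree(node):
    """Dimension 1: the number of friends (connections) per node."""
    return len(network[node])

def clustering(node):
    """Dimension 2: are the nodes I am connected to also connected to
    one another? The fraction of possible friend-pairs that are linked."""
    friends = list(network[node])
    k = len(friends)
    if k < 2:
        return 0.0
    linked = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if friends[j] in network[friends[i]]
    )
    return linked / (k * (k - 1) / 2)

print(degree("A"))      # A has 3 friends
print(clustering("A"))  # only the B-C pair of A's friends is linked: 1/3
```

The third dimension – position within the overall network – requires shortest-path measures such as closeness centrality, which graph libraries like networkx provide directly.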

The final “three” of his talk concerns degrees of influence. Christakis posits that we are influenced not only by our friends, but also by their friends and their friends’ friends. Three degrees of influence.

He believes that we should look at the networks, rather than the individuals, when formulating policies and strategies, because properties aren’t understandable when just looking at individual components. He used the (excellent) example of carbon. When carbon atoms are linked together in one way, they form graphite. When linked in another way, they form diamond. Two very different structures, with very different properties (And the one with more connections is more valuable).

Thus, he believes we live connected lives (even though he talked about part of it being a genetic trait) because the benefits outweigh the costs. We break off bad connections, and strengthen good ones. We create networks to spread and sustain good and desirable things – things we couldn’t as individuals.

My thoughts

I enjoyed the talk immensely, and would recommend people watch/listen to the full 75 minutes. I appreciated the depth he went into when attempting to determine causation, rather than just correlation. His argument was quite persuasive, and of course it has repercussions for how we frame our objectives.

It’s got me thinking about whether the value of people within a network differs. Christakis claimed a network could shed its bad apples – I’m not convinced, since breaking a first-order tie doesn’t necessarily break the second-order tie, through which influence can still flow. If we were able to break our ties and influence our networks, then surely only good things would spread, and not things like viruses or unhappiness. But regardless, are some apples “better” than others?

Whether through Berry & Keller’s Influentials, Gladwell’s Tipping Point typologies or another example, people have attempted to segment the population in order to harness the spread of messages. But does the number, strength and position of connections affect the value of that person, or is a person only as valuable as his or her network?

Instead, could it be analogous to Belbin’s team role functions? A balanced team needs the whole range of roles and contributions in order to be successful. Would a network comprised purely of influentials become less valuable, due to the absence of other types of people to influence?

And so, when devising sampling structures or STP strategies based on attitudes or behaviour, should we be attempting to create a proxy of individual positioning within a relevant network in order to predict the dynamic interplay of ideas and actions? I’m not even sure this would be possible, but it would certainly aid our predictions of whether something is sustainable or not.

sk


Can social media become a mass media?

My short answer is “Yes, if it continues to evolve”.

But there are numerous challenges to overcome within this evolution process.

SIDENOTE: Throughout this blog, I’ll be referring to social media in the singular. I know that technically “media” denotes plurality but, to me at least, phrases such as “social media aren’t mass” sound weird. Well, weirder than “social media isn’t mass”, anyway.

Isn’t social media mass already?

I may have already lost a few readers by this point – those who refuse to believe my basic premise that social media isn’t mass.

And they will have numbers to back up their spluttering, incredulous rage:

These numbers sound big. They are big. But they aren’t mass.

This is where semantics get involved.

Firstly, for social media I’m referring to platforms or websites whose primary aim is to connect people and facilitate communication – such as social networks, blogs and forums. I’m not considering websites with social widgets or functionality added – such as the comments section on a newspaper website.

Secondly, I believe there is a big difference between a popular media and a mass media. The definition used on Wikipedia is “a section of the media specifically designed to reach a large audience”.

From that definition, and from my general perceptions, I infer that for media to be mass it requires inclusiveness.

And despite the large numbers, social media is not inclusive.

The Diffusion of Innovations

I referenced Everett Rogers’ Diffusion of Innovations model in a previous post on the iPad. I will go into it in a bit more detail here.

Rogers posited that, within the population, there are five types of person, each with a different relationship with, and attitude towards, new innovations. The five types progress along a time series:

  • The innovator will try something for the sake of it being new
  • The early adopter will try something before most people, but only when he or she is confident that it will be worth it
  • The early majority (or mainstream) come into the frame when they see a new innovation is gaining in popularity and thus must be worth adopting
  • The late majority see a new innovation has proved itself to be worthwhile, and thus they try it
  • The laggards are resistant to new technologies, but will try something when there is little or no alternative

Rogers estimated the proportions in the population to be as laid out in the diagram below:

At Essential Research, we have measured the population in order to calibrate and weight our data for the Essential Eye, our ongoing exploration of digital media usage and attitudes. Our figures are:

  • Innovators – 6%
  • Early adopters – 11%
  • Early majority – 26%
  • Late majority – 32%
  • Laggards – 25%

If Facebook has 23m UK users in a population of 62m people, that would place it firmly into the early majority stage of diffusion.
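A quick sketch of that arithmetic, using the Essential Research proportions (the segment shares and the 23m/62m figures are from this post; treating the whole population as the addressable base is a simplifying assumption):

```python
# Place a penetration figure on the diffusion curve by accumulating
# segment shares until the cumulative total covers it.
segments = [
    ("innovators", 0.06),
    ("early adopters", 0.11),
    ("early majority", 0.26),
    ("late majority", 0.32),
    ("laggards", 0.25),
]

penetration = 23 / 62  # Facebook UK users / UK population, roughly 37%

cumulative, stage = 0.0, None
for name, share in segments:
    cumulative += share
    if penetration <= cumulative:
        stage = name
        break

print(f"{penetration:.0%} penetration falls within the {stage} stage")
# → 37% penetration falls within the early majority stage
```

The first two segments together cover only 17% of the population, so a 37% figure necessarily reaches into the early majority (which extends the cumulative total to 43%).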

Leaving aside my doubts that this figure constitutes 23m unique individuals within the UK, and the fact that as marketers or researchers we usually (but not always) confine ourselves to adults, I believe social media take-up will shortly plateau unless some big changes are made.

Why majority take-up isn’t inevitable

Few, if any, innovations ever reach 100% penetration. There will always be rejectors that go out of their way to avoid certain technologies.

Digital media has the additional hurdle of scepticism among a minority – whether through cost, fear over privacy, shame at incomprehension or a belief that they can live their lives quite happily without the internet, thank you, there is a significant minority that never has, and perhaps never will, use the internet.

Anyway, I digress. The main point to note is that early adopters are DIFFERENT to late adopters.

How are they different? They tend to:

  • Be younger
  • Be of a higher social class, and more educated
  • Have more disposable income
  • Have a greater interest in, and proximity to, science and technology
  • Exhibit greater opinion leadership
  • Be more social

This may seem obvious, but it is vitally important to reflect upon.

The demographic differences aren’t such a huge deal, since people age and earn more over time, and they mean the user base will always skew towards the more commercially attractive audiences. Essential Research Brandheld data bears this out: 64% of all users of social networks via a mobile are aged 16-34, and three quarters are on a contract phone.

However, the attitudinal differences could be a major barrier to social media uptake.

Later adopters advocate things to a lesser degree and are less social. They have smaller friendship groups and are less likely to want to meet new people.

The network effects become less powerful. Latecomers see less benefit. Their investment into the software will bear less reward.

And this is assuming that later adopters can be sold on the idea to begin with. This is not guaranteed.

The mainstream prioritise different benefits

The proposition that convinced the earlier users to adopt social networks will probably not work for the latecomers.

Earlier adopters saw their friends on the site. They saw the software made it easier to keep track of their large and disparate friendship groups. They got to grips with the technology quickly, and found it easy to adapt as the social networks change to accommodate a larger user base.

Yet even in the early majority, we are witnessing problems with adoption. Examples include loud protests over redesigns to Facebook or people getting confused by a simple error within Google’s algorithm.

Two things need to fundamentally change in order to convince the mainstream to trial, let alone permanently adopt, social media.

  1. A return to simplicity: Feature creep is a well-documented problem with iterated software. The earlier adopters – the more vocal power users – may appreciate greater customisation, but it raises the barriers to entry for newcomers. The longer they leave it, the harder it is for them to figure out how to use these services. And the greater the chance they abandon them. Apple may have beautifully designed products, but the simple and intuitive interface is the most important part of the design. The core social media service should be simple, with additional functionality optional for those comfortable with it – the Firefox model, if you will. A quick fix would be for Facebook to offer its lite version as a default.
  2. A realignment in the promotion of benefits: Mainstream and late adopters are less inclined to experiment. The benefits of using social networks need to be immediately obvious and tangible. A benefit either gives you something – entertainment or information – or lets you save something – time, money or effort. The more averse to new technology someone is, the harder these benefits are to communicate. Currently, there is plenty of room for platforms, developers and marketers to improve in this area.

How social networks can become more attractive

Conventional wisdom might say that the less affluent among us have more time on their hands. Eight-hour shifts, no skiing holidays in Chamonix, etc. They have the social surplus that Clay Shirky talks about. And we’re not expecting them to create a new Wikipedia, just to engage in the social media space.

But, given the current platform, not gonna happen. They will be more likely to stick to their gin and television.

What do people gain from social media that they cannot get elsewhere? Why should they divert their time from their favourite TV shows, or from housework or other chores, in order to “join the conversation”?

Where are the tangible benefits?

Well, they may already be there. They just need to be communicated:

  • Facebook and Twitter are building on the fact that they are increasingly responsible for traffic directed to major news sites. Conversely, despite being unfashionable, the portals are still popular. This is primarily because they offer a single place to get all desired information. If Facebook or another social network desires to become a portal, it needs to contain, or at least link to, all relevant information for that person, in a similar manner to the portals.
  • Even if people aren’t social themselves, they may still like to read or hear the opinions of others on topics or areas that interest them. A comparison could be made to radio phone-ins, but with a criterion of entry based on interest rather than geography.
  • Vouchering schemes are highly discriminatory, but cost savings are eye-catching. People will gravitate towards the discounts.
  • I really like Doc Searls’ idea of Vendor Relationship Management, where potential customers recruit providers instead of companies advertising to potential consumers. This clearly represents an easier route to deciding upon a major purchase, and is far preferable to disruptive or poorly performing display advertising.

The final point brings us on to the business model.

The challenges for a successful business

It is one thing to succeed in bringing in an audience. It is another to run that business successfully. To my mind, there are three major challenges to overcome before this space can be fully monetized:

  1. Competition – in this instance, competition can be a bad thing. Maintaining a presence on a social network requires a major investment of time and effort, and people are reluctant to duplicate this needlessly. I believe that the low distribution barriers and start-up costs in the digital space mean that there should be no concerns over monopoly activity. Google, Amazon and eBay have all succeeded in this position to date, and there is no reason why Facebook cannot. I see no issue with it maintaining that position (sound business strategy permitting, which is where Myspace fell down), with specialist networks operating in its orbit. If I am right, Google Buzz will swiftly fail.
  2. Evolution without natural selection – I have quite a large problem with Google Buzz. Dumping a new social network on a group of people, without it evolving from innovators downwards, is a recipe for rejection. Without any proven benefits among even a minority of users, there is no reason for the average user to adopt it. It could be argued that the average Gmail user is savvier than those of competitor services, but there are as yet no clear benefits to using it. I’ve personally removed it from my Gmail, and it will remain turned off until these benefits become apparent. Throughout the evolution of social networks, there will always be tension between placating the current users and reaching out to the sceptics. This requires a careful balancing act between keeping pace with the ambitions and needs of the power users, and accommodating the more conservative use of the later adopters.
  3. The commercial model – there are many potential routes to take – basic display/interruptive advertising, VRM, subscription or integration with search, to give just four examples – but the commercial model for ensuring the success of the social media space is still unclear. There may be a growing number of social media agencies in the space, but until they offer real, workable proposals for a) monetizing the current user base and b) attracting a mass audience, the prospects for mainstream success remain limited. It is therefore in their interests to do this; otherwise they will remain a niche proposition, at threat from integrated campaigns from digital agencies, not to mention full-service agencies.

Conclusion

Will social media become mass? Ultimately, I think so. But not in its current guise.

Social media is currently geared towards the technologically savvy. This is fine. But if the platform wishes to mature, then it needs to change.

The focus needs to move away from the exploration of something new towards the benefits people receive. This is achieved through highlighting the gains – information and entertainment – and the savings – in time, effort and money. Running alongside this is the need to identify and promote a sustainable commercial model – not an easy task.

Yet, to revisit Rogers’ model, an individual needs to trial something before they can fully adopt it. While social networks are free to join, the registration page still represents a barrier. Keeping most of the functionality behind the log-in is analogous to a paywall. It is hidden away. It is exclusive to users.

This isn’t a trait of a mass media. Social media needs to evolve further before it can be considered one.

sk

Image credit: http://www.flickr.com/photos/ejpphoto/2633923684/


Twitter, unlike Facebook, is socially mobile

The reciprocity of relationships is, in my opinion, the most fundamental difference between Facebook and Twitter. On Facebook, both sides need to agree before the connection is made. On Twitter, people can follow whoever they like.

Does this make Twitter more “social”? I think it might.

I’m writing in broad terms, since different people use the services in different ways, but this makes Twitter aspirational. The more socially mobile, to reuse the pun from my title.

Facebook is who you know. Twitter is who you want to know.

Facebook reinforces social conventions. Twitter does not.

Facebook maintains the status quo. Twitter breaks it.

Facebook is about the past. Twitter is about the future.

Facebook is a constant reminder of our past actions and relationships. Nostalgic for both the recent and distant past. As Don Draper points out in this scene (embedding is disabled, but I’d recommend watching or rewatching it), nostalgia literally means “the pain from an old wound”. This is powerful, but also static.

It is about who we know and what we did.

The good moments but also the bad.

The people we’re glad we’ve stayed in touch with, but also those we’d rather keep in our past.

Yet the social pressure is there to accept these reconnections and intermingle the different worlds and circles of our past (I’m sure Don wouldn’t appreciate that). These relationships are hugely powerful, but they’re not the whole story.

Twitter is about the future. It is social networking in terms of forging new connections, rather than maintaining old ones.

We seek out people who we perceive to have similar interests or ideas to our own.

We recommend people to one another.

We follow macro and micro celebrities, whether to vicariously bask in the reflected glow or to learn from them.

Whatever our motivations, we are able to do this. There is no requirement to justify the people we follow. Likewise, there is no pressure to reciprocate when an individual (Or organisation. Or bot) follows us.

This fluidity of Twitter is a major advantage it has over Facebook. And if Facebook is seeking to keep more of our browsing behaviour within its network, it is something it needs to address.

It’s not just about who we are. It’s also about where we want to be.

sk

Image credit: http://www.flickr.com/photos/eyermonkey/2842941601/


Return on conversation

EDIT: As has been pointed out, I made a rather embarrassing miscalculation in the original post, which made me seriously underestimate the CTR. I evidently need to re-evaluate my quantitative credentials.

My previous post on conversation monitoring was tweeted and retweeted by several individuals. Firstly, I’m grateful that people both read this blog and are motivated to share something I’ve written.

However, the additional traffic that this Twitter activity generated has left me wondering how valuable this social activity is to individuals or organisations that look to spread their message through this sphere.

What follows are some rough numbers given that:

  1. WordPress.com stats are pretty basic
  2. I’ve left it two weeks to do the maths, and so follower numbers will have changed
  3. Follower overlap and actual exposures are unknown

Nevertheless:

  • To my knowledge, the post was tweeted/retweeted 10 times
  • Combined number of people following those who linked the post is 10,354 as of today
  • The post probably got 100 additional hits as a result of Twitter activity

A couple of guesstimated calculations:

  • At an absolute level, this represents a click-through rate (CTR) of 1%
  • If I made the assumption that 5,000 followers are unduplicated (the largest follower count for a retweeter is over 3,000), the CTR changes to 2%
  • How many of the followers would have seen the tweet? A fifth? That changes the CTR to 10%
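The three guesstimates above reduce to simple division. As a sketch: the follower and hit counts are the figures listed above, while the 5,000 unduplicated followers and the one-in-five exposure rate are this post’s stated assumptions.

```python
# The three CTR guesstimates as arithmetic. Follower and hit counts are
# from the post; the deduplication and exposure figures are assumptions.
hits = 100
combined_followers = 10_354   # sum of followers across the 10 tweets
unduplicated = 5_000          # assumed unique followers
exposure_rate = 0.2           # assumed share who actually saw a tweet

ctr_absolute = hits / combined_followers             # roughly 1%
ctr_unduplicated = hits / unduplicated               # 2%
ctr_exposed = hits / (unduplicated * exposure_rate)  # 10%

print(f"absolute: {ctr_absolute:.1%}")          # absolute: 1.0%
print(f"unduplicated: {ctr_unduplicated:.1%}")  # unduplicated: 2.0%
print(f"exposed: {ctr_exposed:.1%}")            # exposed: 10.0%
```

Each scenario keeps the same numerator (the ~100 extra hits) and shrinks the denominator, which is why the CTR climbs as the assumptions tighten.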

10% is OK for a CTR, but it isn’t spectacular. The best ad campaigns with a strong call to action (e.g. competition entry) would achieve that.

The argument is that these 10% are going to be of a much higher quality than random visitors – they have acted upon a social recommendation and are likely to be engaged and interested in the content.

But that argument should work for the click through itself. If someone you follow and trust is recommending something, shouldn’t you be more likely to click through than if it were a random link or ad?

There are a few issues at play here which are causing this level of CTR:

  • Noise – Twitter is popular; there are a lot of tweets and links to browse and skim
  • Ambient intimacy – often, it is enough for me to know that person X has linked to a post on conversation monitoring by @curiouslyp. I may prefer to browse the remaining tweets rather than click through to this post
  • Power laws – if the post on conversation monitoring was by @jowyang or @chrisbrogan I may click through since they are renowned experts. Who is @curiouslyp and what would he know about this topic?
  • Nature of followers – my prior post was relevant to the PR community – very active on Twitter. I suspect posts of a different subject matter are unlikely to be spread and consumed to the same degree

It is nice to think that the future is social, and that these networks will power traffic in future. But those perpetuating this – in my opinion – myth are the ones whom power laws benefit, and who spend an inordinate amount of time on social networks (most likely because it is their job to do so). The average person has neither the time nor the inclination to follow through on many, let alone all, posts or links.

So, in my opinion, the return on conversation is pretty minimal. Nevertheless, I did find it interesting to map how my post spread through Twitter via social graphs and, to repeat, I am grateful to the few that took the time to read and pass on my post.

sk

Image credit: http://www.flickr.com/photos/ironmonkey480/


Should we listen to every conversation?

Over on the Essential Research blog, I have responded to a post by a social media conversation monitor who eulogised the death of focus groups.

In that post, I outlined why focus groups themselves aren’t the issue; rather, it is their shoddy application. Here, I want to expand on that a bit. It is my contention that conversation monitoring is more flawed than traditional research, and should not be used for major corporate decisions.

Alan Partridge once declared himself to be a homosceptic, and in a not dissimilar way I am doubtful of the efficacy of social media monitoring.

In terms of numbers signing up, the social space is still increasing. However, the number of active users within this universe will remain limited – the late arrivals will be the more passive and occasional users. This space is increasingly asymmetric, with network effects and power laws distorting the flow of information.

Topics of conversation will by nature revolve around the major players – whether individuals, blogs or organisations. The larger the hub, the lower the signal-to-noise ratio.

As a small example, consider blog commenting. Aside from the odd spam comment, the contributions I get here are all genuinely helpful. Because this is a relatively small blog, there are few people commenting out of self-interest. Moving to the larger sites, comments are filled with spam, self-promotion and unquestioning advocacy/contrariness. Genuine debate and discussion still exists, but it is diluted by the inanity surrounding it. This on its own creates difficulties for sentiment analysis, but clever filters can overcome this.

But despite the internet being open, we will cluster around like minds. Group think creates an echo chamber. danah boyd has pointed out that teenagers network with pre-existing friends. It is my observation that the majority of adults network with those in their pre-existing spheres. Planners chat to planners. Cyclists to cyclists. Artists to artists. Mothers to mothers. These categories aren’t mutually exclusive, but the crossover is minimal compared to that among like minds.

Remember the Motrin outrage? The mainstream majority remain blissfully ignorant. This may have been because it was resolved before it had a chance to escalate to the mainstream media, but it nevertheless shows the limited reach of social media echoes.

Of course, some products or services target the early-adopting, tech-savvy ubergeeks, and so these companies should obviously engage where their audience is.

But for the rest? Despite my assertions above, I do view monitoring as useful, but only as a secondary tool. Tracking conversations as they happen is a useful feedback mechanism, but few companies are going to be nimble enough to implement it immediately (once they have separated the meat from the gristle and verified that this opinion is indeed consensus).

Surveys and groups are indeed limited by taking place in a single point in time, and through these it is difficult to extrapolate long-term reaction. The Pepsi taste test being one notorious example.

But there are plenty of longitudinal research methodologies that are suitable. Long-term ethnographic or observational studies can track whether attitudes or behaviour do in fact change over time. These can be isolated in pilots or test cases, so that any negative feedback can be ironed out before the product or service is unleashed to the general public.

This is where traditional research still prevails: the controlled environment. Artificiality can be a benefit if it means shielding a consumer base from something wildly different from what they are used to.

This takes time though, and some companies may prefer to iterate as they go and “work in beta”. Facebook is an example of this – they have encountered hostility over news feeds, Beacon, redesigns and terms of service. Each time, they have ridden out the storm and come back stronger than ever.

Is this a case study for conversation monitoring effectiveness? Not really. They listened to feedback, but only implemented it when it didn’t affect their core strategy. So the terms of service changed back, but the news feed and redesign – features intrinsic to its success – stayed.

Should Syfy have gone back to being the Sci-Fi Channel due to the initial outrage? Perhaps. Personally, I think it is a rather silly name, but it didn’t do Dave any harm. If they have done their research properly, they should remain confident in their decision.

Conversation monitoring can be useful, but it should remain a secondary activity. A tiny minority have a disproportionately loud voice, and their opinions shouldn’t be taken as representative of the majority. When iterating in public, there is a difficult balance between reacting too early to an unrepresentative coalition, and acting too late and causing negative reaction among a majority of users/customers.

Because of this, major decisions should be taken before going to market. Tiny iterations can be implemented after public feedback, but the core strategy should remain sound and untouched. Focus groups and other research methodologies still have an important place in formulating strategy.

sk

Image credit: http://www.flickr.com/photos/jeff-bauche/
