Eight challenges to measuring off site social media performance

I’ve written my first post for the BBC Internet blog, around the challenges of social media measurement. It is intended as a primer on some of the issues we face in assessing our activity, before I go on to look at some of our actual figures.

Some of the challenges are specific to the BBC (i.e. a public-sector UK media company), but many are universal.

The first challenge is the lack of an official measurement source. TV has BARB and radio has RAJAR – two well-established bodies, with consensus on the most appropriate metrics to use. Within digital, there is the relatively new UKOM – while it offers a range of measures, it does not break social media down into specific accounts (such as @BBCSport on Twitter or BBC One on Facebook). Social networks may offer useful insight tools themselves, but only top-level information is made public. It can therefore be difficult to place performance in the context of other accounts or organisations.

You can read the full blog post, outlining all eight challenges, here.

sk

From a digital cradle to a digital grave

There have been many debates about privacy and user data recently, not least given the rewriting of Google’s terms of service and questions over who is generating the value in Facebook’s IPO price. I’ve tangentially been thinking about the legacy of data as a result.

1. Can a person reinvent themselves in the social media era?

Human lives are wonderfully random and unpredictable, but within them unfortunate or regrettable things can happen. Structured data can maintain links between different stages of life (and death) – do we want that? And if we don’t, how can organisations that hold such data respond? Or indeed should they, given the question marks over data ownership?

On the first question, I remember reading a while back someone (though I can’t remember who) saying that Bob Dylan couldn’t exist in the 21st century, since people would quickly find out that he was just Robert Zimmerman from Minnesota, and the mystique would be destroyed. To some extent, this has been borne out, with Lady Gaga still connected to Stefani Germanotta and Lana Del Rey to Lizzy Grant. Yet, despite the backlashes and meta commentary, the majority of people don’t seem to care about the past. The first viewing of Video Games on YouTube is day zero, as far as Lana Del Rey’s fans are concerned (personally, I haven’t gotten further than the Lana Del Rey dancing Tumblr).

Yet the stages of Lizzy Grant’s existence wouldn’t be so disconnected in a Facebook timeline. People would easily be able to browse the gradual (or not) evolution – even with a scorched-earth policy, some recorded activity will be outside of the individual’s control. Does this matter? In some instances, it does. There have been reported instances of companies demanding prospective employees’ Facebook passwords. The sheer idiocy of it makes it sound absurd, but it could potentially be destructive. A 22-year-old graduate could have been on Facebook since he or she was 15 or 16. Facebook will keep a record of all that individual’s actions and misdemeanours through school, university and work, charting the progress of that person growing up – actions which could otherwise have been long since forgotten or disregarded. For instance, when I was 16 I thought that Significant Other was a better album than OK Computer. And that would be just the least of my concerns.

One could argue that people should have a mental filter in place when deciding what to put on the internet. But this ignores the fact that you can’t control what other people put online about you (cf. Rick Santorum), and that future-proofing social media is hardly at the forefront of people’s minds. Even anonymous Tumblrs can be linked back to people’s Facebook or LinkedIn accounts, should the investigator be willing to try hard enough.

There is no specific answer I am trying to get to, or position I’m trying to stake out, with the above line of thinking. It is merely an observation of the data trail our evolving personalities leave. Google+ Circles are hard enough to maintain when the information is static; longitudinal information will be even more difficult.

2. What happens to someone’s data in the long-term?

Twitter, as many other services do, offers recommendations on who to connect with. This is a screenshot of a recent recommendation.

The middle account – Martin Skidmore – hasn’t been updated since July 21st 2011. Now, I’ve never met Martin Skidmore, but I’ve heard his name via friends of mine who have very nice things to say about him. I also know that he died last year.

Now Martin’s Twitter profile has remained static (as, I assume, have his other profiles). Is that right? Should we keep a person’s profile as it is, and enable others to interact with it (even with the potential of notifications suggesting you reconnect)? Or should profiles be frozen and turned into a shrine or memoriam (some final entries might be elegiac, but I suspect most would be fairly banal, and there is also the potential of being greeted with a suicide note)? Or should these profiles be removed completely (deleting interactions with friends or contacts in the process)?

I don’t know the answer. It is a highly emotive issue, and people will have different opinions. It also opens the question of who has the right to decide – should there be a digital will? Though of course final wishes aren’t always observed, as in the case of Vladimir Nabokov and his final manuscript. And of course there are practical questions over how we deal with multiple accounts and fragmented online activity. Facebook might be the central point for most people, but there are also blogs, microblogs, social networks and website profiles all in existence, potentially all linked to the same email address. Should there be a responsibility on the email provider to track down all of these profiles, to allow the family to settle the affairs?

(I have a vague recollection of a service existing whereby if an account wasn’t accessed for a period of time then it would automatically notify everyone in the address book that the person was dead. However, I can’t recall where I heard of it, or what it was called).

The amount of information being put on the internet appears to be growing exponentially, with nodes linked together in many ways. Structurally, data trails are a mess. Do we need to reach a place where “we” (as bottom-up individuals, or top-down conglomerated organisations) start to clean and tidy this environment and avoid the equivalent of space debris harming our online experience? Or is it something we just accept, as the more things change the more we want things to stay the same?

sk

Image credit: Errr… there was about a 2 month gap between me drafting and publishing and I didn’t save the link. Apologies – if it is yours then let me know and I’ll credit accordingly

Technology changes quickly, people change more slowly

[Image: Manny from Black Books says “I’m a prostitute robot from the future”]

Last February I asked whether social media could become a mass media. It’s one of the more considered posts on this blog, so if you haven’t yet read it I would recommend it. The general crux was that social media could only become mass when it moved away from super-serving the tech savvy and identified more closely with the needs and desired benefits of the average person.

In it, I reference the adoption curve. Adoption curves work extremely well for technology, whereby regular innovations boost capabilities and cut costs. New (and, in retrospect, successful) technologies see the eager niche pay the premium to be first, with the rest following once the benefits have been clearly established and the price has fallen.

But when it comes to behaviours, or services that manifest these, we see much slower shifts. This is because our selves and our needs remain consistent as the world changes around us.

Social media may be new but many of us have the inherent need to be social. Is it mainstream yet?

As of writing, there are 29.9m Facebook users in the UK. Mark Zuckerberg recently announced that its 750m members worldwide are sharing over 4 billion items each day.

Those are big numbers. While far from universal – and indeed there are suggestions that usage in some areas might be plateauing – it does appear that Facebook is mainstream.

One of the main reasons for this is Facebook’s sheer versatility. It can be something different to each person.

  • It can be a layer across the web or a portal
  • It can be a referral engine or an aggregator
  • It can be an extension of real life or a stand-alone identity
  • It can be a place to chat to friends, to the public or to brands
  • It can be a photo album, a games site, a store, an events manager, a blog, etc.

This has been vital to Facebook’s growth. Rather than enforce a particular type of behaviour, or a particular set of social norms, it has enabled the userbase to transfer their unique preferences within the existing infrastructure. Facebook lets people do what they want, but in a place where network effects can enhance the experience.

It is familiar, but also better. A killer combination.

SIDENOTE: Partially due to the above, I’m reserving judgement on Google+ until it becomes apparent how it will integrate into all of the other Google services. In its current guise, it isn’t more than a shiny new object.

But while Facebook is mainstream, there is no adoption curve for behaviour. This has three implications.

  1. It is not inevitable that everyone will use Facebook in future – Technologies change and services evolve, but needs and beliefs are stable. Behaviours, the manifestation of needs that may be facilitated by services, thus change fairly slowly.
  2. Facebook users do not share or use the service in the same way – The Zuckerberg law of sharing ranks up there, in its idiocy, with his proclamation that a person with multiple personas is dishonest. More people may join Facebook, but the volume, frequency and breadth of their participation will vary massively. Power laws will persist – for instance, I wouldn’t be surprised if 80% of those 4bn items shared a day are generated by 20% of the userbase (see the sketch after this list).
  3. Not every type of participation that Facebook facilitates will become widespread – to use the well-worn mother analogy: my mother is on Facebook. She might “like” a photo or even comment on it, but she isn’t going to suddenly become a social media specialist. My mother will not become a blogger, no matter how easy Tumblr makes it. And that is because she has no real need to be one.
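As an aside, the 80/20 intuition behind point 2 is easy to sanity-check. Below is a minimal sketch assuming, purely for illustration, that per-user sharing follows a heavy-tailed Pareto distribution – none of this is a documented Facebook statistic:

```python
# Sanity-checking the hedged 80/20 claim: if per-user sharing follows
# a Pareto distribution with the textbook shape parameter (~1.16, the
# value that yields the classic 80/20 split), what share of items
# comes from the most active fifth of users?
import numpy as np

rng = np.random.default_rng(0)

n_users = 100_000
# Hypothetical per-user sharing rates (items per day); illustrative only.
rates = rng.pareto(1.16, n_users) + 1

sorted_rates = np.sort(rates)[::-1]          # most active users first
top_fifth = sorted_rates[: n_users // 5]

share = top_fifth.sum() / sorted_rates.sum()
print(f"Top 20% of users account for {share:.0%} of shared items")
```

Nothing hangs on the exact distribution; the point is simply that a mainstream userbase is entirely compatible with a small minority doing most of the sharing.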

New services grow in two stages – displacing old behaviours and activities, and extending them. Social media has succeeded in the first stage, but the second stage is still in progress.

Some activities have been displaced because the benefits of shifting them online are clear. Why show the holiday photos to ten groups of people separately, when you can put them online for everyone to see at once? Why wait until tomorrow to talk to people about what you watched on TV tonight when you can do it immediately after, or even during the programme?

The benefit can also be in simplicity. While my mum may never Tumblr, its simplicity will convince others who possess the need to share or opine to try it out. Ten to fifteen years ago, a prospective blogger would have had to know a programming language and hard-code everything. I’m sure those GeoCities sites took ages.

Extending or creating new behaviours has proven more difficult. For instance, Foursquare (the concept of checking in is still pretty new and alien to most) has struggled to get beyond a niche. Conversely, services like WhatsApp, which concentrate on displacement, are thriving.

Again, making things simple can help create new behaviours. Going down the 1-9-90 model, it is also far easier to contribute to content than to create it. No longer does one have to master the intricate social norms of a community before venturing a cautious post. Now there are buttons, gestures and suchlike to get people going. The next step beyond that is automation. I’ll never need to compile a list of my top artists, for example, because Last.fm does that for me.

SIDENOTE: Last.fm could be an example of a service that automates too well. The only time I seem to visit the website is to download the plug-in for a new computer.

Overall, given Facebook’s growth, it is fair to say that social media is mainstream. But possessing a social media profile is very different to creating content, or to exponentially increasing the number of items of one’s life that are shared. Some social media activities – sharing photos, writing status updates – may be widespread, but the industry still has a long way to go to convince the mainstream of the ways in which social services can help fulfil these longstanding needs.

sk

Image credit: http://www.flickr.com/photos/strangefrontier/5771431767

Mediatel Media Playground 2011

My previous blog post covered my notes on Broadcast in a Multi-Platform World, which I felt was the best session of the day. Below are my notes from the other three sessions (I didn’t take any notes during the bonus Olympics session).

The data debate

Chaired by Torin Douglas, Media Correspondent for the BBC

Speakers:
Andrew Bradford, VP, Client Consulting, Media at Nielsen
Sam Mikkelsen, Business Development Manager at Adalyser

Panellists:
David Brennan, Research & Strategy Director at Thinkbox
Kurt Edwards, Digital Commercial Director at Future
Nick Suckley, Managing Director at Agenda21
Bjarne Thelin, Chief Executive at BARB

Some of the issues touched upon in this debate were interesting, but I felt they were dealt with too superficially (though as a researcher, I guess it is inevitable I’d say that).

David Brennan thinks we need to take more control over data and how we apply it. There is a dumb acceptance that anything created by a machine must be true, and we’ve lost the ability to interrogate the data.

Nick Suckley thinks the main issue is the huge productivity problem of manually manipulating data from different sources (Google has been joined by Facebook, Twitter and the mobile platforms), but that this also represents a huge opportunity. He thinks the fight is not about who owns the data, but who puts it together.

Torin Douglas posited whether our history of currencies meant that we weren’t so concerned with data accuracy, since everyone had access to the same information. Bjarne Thelin unsurprisingly disagreed with this, pointing out the large investment in BARB shows the need for a credible source.

David Brennan said his 3 Es of data are exposure (buying), engagement (planning) and effectiveness (accountability).

Nick Suckley thinks people would be willing to give up information in exchange for clear benefits, but most don’t realise what is already being collected on them.

Kurt Edwards thinks social media is a game-changer from a planning point of view, as it sends the power back to the client. There is real-time visibility, but the challenge is to not react to a few negative comments.

David Brennan concurred, and worried about the possibility of conclusions from social media data not being supported by other channels. You need to go out of your way to augment social media data with other sources to get the fuller picture.

Bjarne Thelin gave the example of the BBC’s +7 viewing figures to show that not all companies are focusing purely on real-time. He also underlined the fact that inputs determine outputs, and so you need to know what goes in.

David Brennan concluded by saying that in the old days you knew what you were getting. Now it is overblown, with journalists confused as to what is newsworthy or significant.

Social media and gaming

Chaired by Andrew Walmsley, ex i-Level

Speakers:
Adele Gritten, Head of Media Consulting at YouGov
Mark Lenel, Director and Senior Analyst at Gamesvision

Panellists:
Henry Arkell, Business Development Manager at Techlightenment
Pilar Barrio, Head of Social at MPG
Toby Beresford, Chair, DMA Social Media Council at DMA
Sam Stokes, Social Media Director at Punktilio

The two speakers gave a lot of statistics on gaming and social gaming, whereas the panel focused upon social media. This was a shame, as the panel could have used more variety. All panel members were extolling the benefits of social media, and so there was little to no debate.

There was discussion about the difficulty in determining the value of a fan, the privacy implications, Facebook’s domination across the web and the different ways in which social media can assist an organisation in marketing and other business functions.

Mobile advertising

Chaired by Simon Andrews, Founder of addictive!

Speaker:
Ross Williams, Associate Director at Ipsos MediaCT

Panellists:
Gary Cole, Commercial Director at O2
Tamsin Hussey, Group Account Director at Joule
Shaun Jordan, Sales Director at Blyk
Will King, Head of Product Development at Unanimis
Will Smyth, Head of Digital at OMD

Ross Williams gave an interesting case study on Ipsos’ mobi app, which tracked viewer opinion during the Oscars.

Simon Andrews’ approach to chairing the debate was in marked contrast to the previous sessions. He was less a bystander and more a provocateur – he clearly stated his opinions and asked the panel to follow up. He was less tolerant of bland sales-speak than the previous chairs, but was also less even-handed in approaching the panel, with the majority of panel time filled with Simon speaking to Will Smyth.

Will King thinks m-commerce will boost mobile like e-commerce did with digital. Near field communication will move mobile into the real world.

Gary Cole pointed out that mobile advertising is only a quarter of a percent of ad spend, but said that clients should think less about display advertising and less of mobile as a distinct channel. Instead, mobile can amplify other platforms in a variety of ways.

Tamsin Hussey said that as there isn’t much money in mobile, there is no finance to develop a system for measuring clicks and effectiveness across all channels. Currently, it has to be done manually.

Will Smyth said the app store is the first meaningful internet experience on the mobile. The mobile is still young and there is a fundamental lack of expertise at the middle management level across the industry. Social is currently getting all the attention (“Chairman’s wife syndrome”) but mobile has plenty to offer.

sk

Can our opinions exist without us?

This article from Jeff Jarvis got me thinking about the evolution of content and opinion over time. Extrapolating past patterns could lead to some bizarre scenarios in future.

(NOTE: the remainder of this blog post is incoherent speculation).

Broadly, the evolution of storytelling covers four ages:

  • Oral stage
  • Hand-written stage
  • Printed stage
  • Multimedia stage

Within this, there have been many trends and patterns in the types of content, the means of production and the methods of consumption.

Many arguments focus on the dumbing down of culture. But rather than retread that ground, the article got me thinking about content length.

I’m no historian but the following spring to mind:

  • Oral accounts would take pretty long to recount and spread
  • Hand-writing/scribing has similar scalability issues
  • The printing press achieves scalability but also encourages verbosity
  • Newspapers and magazines encourage serialisation and the consumption of articles rather than full-length tracts (I suppose pamphlets come under this heading)
  • The computer age propagates articles, blog posts and shorter-form content
  • Social media reduces creation and consumption time further – currently down to 140 characters

How might this be extrapolated further? Two scenarios – one a logical progression, one a step-change – come to mind.

Progression

A logical extension would be to reduce opinion down to its underlying sentiment – why use 140 characters when a single word or gesture will do? (Thumbs-ups, retweets and the like all fulfil this function, but alongside other forms of opinion.)

It is conceivable that a social media service in future could be a single spectrum of opinion, from like to dislike. Links, names, words etc could be placed on that spectrum. Our contacts would take something we like as a recommendation and consume it, and avoid things we dislike.

Would this work? Probably not, since it has no nuance. It would further encourage the balkanisation of online opinion and, even with a potential velocity measure to capture the trajectory of opinion, it would make it difficult for new content to rise upwards.
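For concreteness, here is a hypothetical sketch – with invented names and numbers – of what such a single-spectrum service might look like as data, including the crude velocity measure mentioned above:

```python
from dataclasses import dataclass, field

@dataclass
class OpinionItem:
    """A link, name or word placed on a single like/dislike spectrum."""
    name: str
    ratings: list = field(default_factory=list)   # each in [-1.0, +1.0]

    def position(self) -> float:
        """Current point on the spectrum: -1 = dislike, +1 = like."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def velocity(self) -> float:
        """Naive trajectory of opinion: recent ratings versus earlier ones."""
        half = len(self.ratings) // 2
        if half == 0:
            return 0.0
        early = sum(self.ratings[:half]) / half
        late = sum(self.ratings[half:]) / (len(self.ratings) - half)
        return late - early

item = OpinionItem("Video Games (song)")
for rating in (-0.2, 0.1, 0.6, 0.9):    # opinion warming over time
    item.ratings.append(rating)

print(item.position(), item.velocity())  # ≈ 0.35 and 0.8
```

Even in this toy form, the lack of nuance is plain: everything about an opinion has to be squeezed into one number and its direction of travel.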

Step-change

As the shortening of opinion can’t evolve beyond a single word, an obvious revolution would be to move from active to passive.

In other words, once I input some parameters or past behaviour, a service could automatically generate my sentiment towards new content that crosses my digital path. With refinement over time, this would become more accurate.
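A toy sketch of how that might work – the data, tags and scoring rule below are all invented for illustration:

```python
# Hypothetical past behaviour: tags of an item -> my rating of it,
# expressed on the same -1.0 (dislike) to +1.0 (like) spectrum.
past_reactions = {
    ("music", "folk"): 0.8,
    ("music", "pop"): 0.3,
    ("politics", "us"): -0.6,
}

def predict_sentiment(tags: set) -> float:
    """Guess my reaction to new content by averaging my past ratings
    of items that share at least one tag with it."""
    matches = [score for item_tags, score in past_reactions.items()
               if tags & set(item_tags)]
    return sum(matches) / len(matches) if matches else 0.0

print(predict_sentiment({"music", "indie"}))   # ≈ 0.55
print(predict_sentiment({"sport"}))            # 0.0 – no history to go on
```

“Refinement over time” would then simply mean the service updating those stored reactions as it observed more of my behaviour.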

We already have digividuals, based on composites of others. Could we have digi-extensions? Possible, but again unlikely. Still, it raises some interesting questions about the nature of digital personas. Once my online persona starts acting independently, does it still fully represent my real-world self? If someone died, their digital persona could continue to exist without them, although it would cease to evolve.

If you’ve read this far down, then congratulations. This post doesn’t really have a point, or any obvious application, but I wanted to write this down to help formalise my speculation (my thoughts on this were even more jumbled before I started writing). And, on the off-chance that something similar happens around the time of the singularity, I can go to the Wayback Machine and gloat over a small victory.

sk

Social media dichotomies

I’ve been having a think about the different types of social media services. Structurally, services can differ enormously. Below are a few bipolar scales upon which different services can find themselves.

Structure

Public <——————> Private

Permanent <——————> Transient

Centralised <——————> Decentralised

Hierarchical <——————> Non-hierarchical

Automatic <——————> Manual

Exclusive <——————> Inclusive

Zero-sum <——————> Shared gain

Single-focus <——————> Multiple focus

Push <——————> Pull

Usage

Personal <——————> Professional

Active <——————> Ambient

Flexible <——————> Fixed

Expert <——————> Amateur

Give <——————> Take

Create <——————> Consume

I realise that, without proper definitions, several of these scales overlap with one another. Nevertheless, it is my starting point, and I’d be interested to know if I’ve overlooked any important dimensions.
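If the scales were properly defined, one could imagine scoring services along each of them, say from -1 (the left-hand pole) to +1 (the right-hand pole), and then comparing structures numerically. A hypothetical sketch, with placeholder scores rather than measurements:

```python
# Invented positions on three of the scales above, from -1.0
# (left-hand pole) to +1.0 (right-hand pole). Scores are placeholders.
services = {
    "Twitter":   {"public/private": -0.9, "permanent/transient": -0.2, "push/pull": -0.6},
    "Facebook":  {"public/private":  0.1, "permanent/transient": -0.6, "push/pull": -0.4},
    "This blog": {"public/private": -1.0, "permanent/transient": -0.9, "push/pull":  0.7},
}

def structural_distance(a: str, b: str) -> float:
    """Crude structural dissimilarity: the sum of per-scale gaps."""
    return sum(abs(services[a][scale] - services[b][scale])
               for scale in services[a])

print(structural_distance("Twitter", "Facebook"))   # ≈ 1.6
print(structural_distance("Twitter", "This blog"))  # ≈ 2.1
```

On numbers like these, Twitter would sit structurally closer to Facebook than to a blog, which feels about right.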

sk


Cutting the current

I’m not big on New Year’s resolutions, particularly since I never actually seem to keep them. But I start with good intentions, so I suppose that is at least something.

In 2009, I vowed to read less, but better. That sort of happened, but the mass of information makes it difficult to resist.

In 2010, I attempted to widen my reading sources, by rotating my online sources of news. I lasted for about a fortnight, but more pressing priorities meant it quickly fell by the wayside.

Nevertheless, I return once again with a resolution for 2011.

It is quite similar to the 2009 resolution, in that it is another attempt to combat information overload. But rather than simply saying I will try to read less but better, this is hopefully a process that will help me achieve it.

In 2011, I will take a conscious step-back from real-time content consumption, and intentionally read (most) news and commentary much later than their time of publishing.

I’m not going to be as prescriptive as saying it will be 12 hours, or 48 hours, a week or a month – particularly since posts on MediaGuardian will be more time-sensitive than those in the New Yorker. But I’m going to avoid the regular refreshing of Google Reader, and let links build up.

The last couple of months have proven the efficiency of this approach to me. An incredibly busy November and December meant I had to cut down my reading and surfing. Over the Christmas break, I have largely caught up on my RSS feeds and bookmarks. Google Reader trends tells me that in the last 30 days I’ve read circa 2,500 items. That would previously have been circa 3,500, while the current figure also includes items over a month old.

But there are many other benefits.

In the character of Cs, here are five interrelated reasons why I think this approach will suit me. No fancy graphics. Sorry.

SIDENOTE: I’ve exaggerated it for the purpose of this post, but what is with the proliferation of lists built around Cs – is it the most alliterative letter for media and technology-related content? Whether it is Brian Solis’ 5 Cs of community or Srinivasan et al’s eight factors that influence customer e-loyalty, its popularity is clear.

1. Concentration through centralisation and classification

What I found most striking in my catch-up of links was that I was far more selective in what I chose to read. When caught in the fire-hose, I may have read the same story four times from four different sources, not knowing who else would be picking up on the topic. Now, I’m able to select from a complete list of sources on my radar. A more discriminating selection process will also free up more time to do other important things. Like sleep.

It also benefits long-form content consumption, since I’m no longer in a hurry to steam through articles. Recently, I’ve been enjoying Vanity Fair, Atlantic and New Yorker pieces courtesy of services such as Give Me Something to Read – here is their best of 2010.

2. Curation through collation and compilation

I’m not totally sold on curation – services like paper.li just annoy me. But trusted editors can make a difference. I don’t necessarily need to scour every link looking for the most interesting pieces, when people such as Neil Perkin crowdsource recommendations or people like Bud Caddell point to interesting things.

Incidentally, I may once again resurrect my link updates. I may not. It depends how this experiment goes.

3. Conversation through community and comments

Although the number of comments might be dwindling (or merely refocusing on the biggest sites with an active community), they can still be incredibly valuable.

Initial comments tend to be from sycophants or – in the case of social media monitoring blogs – companies such as Alterian or Radian6 proving their scraping technology works, but later comments can be insightful in critiquing or extending the author’s points. Helpfully, Disqus now sorts comments based on popularity (I should really start voting).

4. Context through critique and connections

Whether it is through comments or from myself connecting different commentaries or posts, different items can be combined or juxtaposed for context and additional understanding. And often it is the connectors that are more interesting than the nodes themselves.

5. Contemplation through consideration and cogitation

Finally, moving away from real-time motivates reflection and critical thinking. The need to rush into a response has been marginalised. I can ponder and muse before I decide whether to write a response to something or not. Nicholas Carr would be proud.

To make this work, each person will need a unique system that suits them. Mine is Read It Later – a bookmarking service that syncs across devices. It also works within Google Reader, though I suspect I may also need to use stars if the volume of bookmarks requires additional ways to distinguish information (on time-sensitivity, if not topic).

Of course, there are drawbacks to this approach.

  • It effectively makes me a lurker rather than an active contributor, so I’ll be taking more than giving.
  • I will continue to link, comment and blog but most likely after the fact, once people have moved on and the topic has lost some relevance. A balance will undoubtedly need to be struck.
  • I’ll have lower visibility through not being an early commenter or tweeter, nor link-baiting my wares – though Twitter does seem to have made blog commenting and responding far more infrequent anyway. I think I can live with a lower Klout score, since I’m not doing this to reach an arbitrary number of undifferentiated people.

Let’s see how I get on.

sk

Image credit: http://www.flickr.com/photos/36593372@N04/5198073390/


Predictions for 2011

In the grand tradition of December blog posts, here are seven predictions for 2011:

<sarcasm filter>

  • A niche technology will grow
  • Businesses will focus less on the short-term bottom line and more on consumer needs for a long-term sustainable relationship
  • Traditional media/methods will take several more steps closer to their death
  • Social media will become more important within organisations
  • Companies will banish silo thinking and restructure around a holistic vision with multi-skilled visionaries at the helm
  • The product will be the only marketing needed
  • A company will launch a new product with these precise specifications…

</sarcasm filter>

1999 A.D. / Predictions From 1967

I think the tone and style of my predictions are about right. They run the spectrum from bland tautology to wild guesswork with plenty of jargon and generalisation thrown in.

Given how utterly useless predictions are, why do people persist? I presume they pander to people’s love of lists, while gambling on readers’ inherent laziness in not checking the accuracy of previous predictions, and hoping that, as with horoscopes, people read their own truths into open statements.

I’ve had the displeasure of running across numerous offenders in the past month. I won’t name check them all but, unsurprisingly perhaps, the tech blogs are the worst offenders. This example from Read Write Web and these two examples from Mashable are particularly mind-numbing in both their blandness and unlikeliness.

Living on the bleeding edge can massively skew perspective. I’m sure Cuil (remember them?), Bebo and Minidiscs have all featured in predictions of game-changing technology. In other past predictions, you can probably swap “virtual reality” for “augmented reality” or “geo-location”, or Google for Facebook or Twitter, and recycle old predictions for different time periods.

The basic truth is that the future is unpredictable. We are micro participants trying to define macro trends. A reliance on logical step-progression completely overlooks the serendipity and unanticipated innovation that characterises long-term trends, which constantly ebb and flow as tastes change and rebound against the status quo.

Take popular music as an illustration. The most popular acts of one year don’t predict the most popular acts of the following year. Tastes evolve (and revolve), with pop, rock, urban (I intensely dislike that word but can’t think of a better one), electronic and dance being in the ascendancy at different points in the past twenty years.

With honourable exceptions, business and technological breakthroughs are revolutionary rather than evolutionary (note I have quite a wide definition of revolutionary). To give some examples:

  • 2 years ago, how many people would have predicted that an online coupon site would be one of the fastest-growing companies of all time?
  • 5 years ago, how many people would have predicted that a social network would be the most visited website in the UK?
  • 7 years ago, how many people would have predicted that company firewalls would be rendered obsolete by internet-enabled phones?
  • 10 years ago, how many people would have predicted that Apple would change the way mobile phones are perceived?
  • 15 years ago, how many people would have predicted that a search engine would dominate advertising revenues?
  • 20 years ago, how many people would have predicted that every business would need a presence on the internet?

Undoubtedly, some people would have made these predictions. But to use the well-worn cliché, even a stopped clock is right twice a day.

Despite my negativity, I recognise that there are some benefits to offering predictions. It opens up debate around nascent movements and trends and adds to their momentum, and provides a forum for authors to say where they’d like things to be in addition to where they think things will be.

If only so many weren’t so badly written.

(NB: I recognise by saying that I open myself up to accusations of poor writing, to which I fully admit)

sk

Image credit: http://www.flickr.com/photos/blile59/4707767185/

My avatar is my digital face

Throughout my digital career (in both amateur and professional capacities), I’ve used a multitude of personalised avatars.

I’ve pasted ten of the more prominent (in my mind, if not in digital footprint) examples below.


[Image: evolution of avatars]

There is a noticeable continuity, as my projected self has evolved. I never really wanted my face to be all over the internet, so after the first iteration and a couple of poor attempts at humour I settled (largely) on popular culture icons. I started with random “cult” characters before progressing to avatars that reflected my mood, look (when I had bigger hair, there was a resemblance to Hugh Jackman’s Wolverine) or personality.

And so Columbo is where I am now. Although choices can be frivolous, the icons or avatars that we use are pretty important. They create a first impression, and become the image others associate with you – often even after they’ve met you in person.

I’m fairly consistent in my use of Columbo now – the only place I actively and publicly use that doesn’t have Columbo as my avatar is LinkedIn, due to its insistence that the avatar has to be of you (so I have the default shaded outline there). It could be argued that different sites should have different avatars, since they represent separate parts of a distributed digital personality. But while I don’t side with Mark Zuckerberg in thinking that people who have more than one identity are fraudulent, I do prefer the consistency of recognition across sites and platforms.

The beginning

The reason I’m posting about this is that I’m changing my policy on having my face on the internet. This is partly down to my bylines on Mediatel and Research having a picture, but it also reflects the number of contacts I’ve made over the past few years through blogging and through the research industry (and would like to continue making).

When I started this blog, I was wilfully anonymous. That was partly because I wasn’t sure what my employer at the time (ITV) would think of me writing about video content and marketing in a public forum, but also because of my relatively lowly status. When I set this blog up, I was a 24-year-old fairly junior market researcher. The blogs I enjoyed reading and commenting on were written by far more intelligent and experienced people who were mainly in the marketing and comms industries. I felt (rightly or wrongly, you decide) that being anonymous would allow my thoughts and ideas to stand up for what they were, rather than be coloured by perceptions of my relative inexperience.

Anyway, I eventually started writing under my full name, and I put a small bio (I hate bios) up. But going under an avatar means that when I go to public events, people who I interact with online won’t recognise me, and so it is my prerogative to seek them out. Unfortunately, I’m not the most observant person, so I’ve missed several opportunities to meet and greet.

The present and future

So, I’m rectifying this by putting a picture of myself on the blog’s about page.

As you can see, it is not a “corporate” picture. I still think corporate pictures are grotesque – whether in their “sexy execs” cringeworthiness or their overly conscious attempts at kookiness. Fortunately, I’ve managed to avoid this at Essential (for the time being) by having a Wii Mii avatar. I’m not particularly photogenic, but the picture nicely captures two of my interests (music and beer), and so could be considered “authentic”. At least, it is more authentic than me sitting on a stool at a 45-degree angle, forcing a smile at a guy with a huge flash on his camera.

I’m not planning to use my real face as my avatar, even though I’ve read many blogs and articles saying that this is a barrier to properly “connecting” (I suspect this is slightly more of an issue on the other side of the Atlantic) – not least because of the aging process. The blurry avatar that I use was taken when I was 21, yet I still use it in some places. In the four years I’ve used Twitter (I had my anniversary on Tuesday), I’ve seen some people retain the same image of their face. Surely over four years they’ve changed their hairstyle, or gained a few character lines.

While there may be many benefits to using your real face as an avatar, the main drawback is vanity. 70s-era Columbo will live forever, and I will continue to use him as long as his personality is consistent with what I want to project.

sk
