Big Data, Data Analytics and AI


I was asked by the Managing Partners Forum (MPF) recently to give a brief overview of the current status and industry trends in Big Data and Data Analytics, topics I’ve been keeping an eye on for several years. The slides are available on Slideshare. The following is a shortened abstract from the presentation.

One of the issues I have with Big Data is just that – the term “Big Data”. It’s fairly abstract and defies precise definition. I’m guessing the name began as a marketing invention, and we’ve been stuck with it ever since. I’m a registered user of IBM’s Watson Analytical Engine, and its free plan has a dataset limit of 500 MB. So is that ‘Big Data’? In reality it’s all relative: to a small accountancy firm of 20 staff, the payroll spreadsheet is probably big data, whereas the CERN research laboratory in Switzerland probably thinks in units of terabytes or more.

Eric Schmidt (Google) was famously quoted in 2010 as saying “There were 5 exabytes of information created between the dawn of civilisation through 2003, but that much information is now created in 2 days”. We probably don’t need to understand precisely what an ‘exabyte’ is to get a sense that it’s very big. More importantly, we get a sense of the velocity of information: according to Schmidt, civilisation’s entire pre-2003 output is now produced every two days – and the interval has presumably shrunk further in the six years since his original statement.
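For a sense of scale, here’s a quick back-of-the-envelope calculation (a minimal sketch; it simply takes Schmidt’s figure at face value and assumes the decimal definition of an exabyte):

```python
# Schmidt's claim: 5 exabytes of information created every 2 days.
EXABYTE = 10**18  # bytes, decimal definition

volume_bytes = 5 * EXABYTE
seconds_in_two_days = 2 * 24 * 60 * 60

rate_tb_per_second = volume_bytes / seconds_in_two_days / 10**12
print(f"~{rate_tb_per_second:,.0f} TB of new information per second")  # ~29 TB/s
```

In other words, roughly 29 terabytes every second – and that was at 2010 rates.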

It probably won’t come as a surprise to anyone that most organisations still don’t know what data they actually have, or what they’re creating and storing on a daily basis. Some are beginning to realise that these massive archives of data might hold useful information that could potentially deliver business value. But it takes time to access, analyse, interpret and act on the results of this analysis, and in the meantime the world has moved on.

According to the “Global Databerg Report” by Veritas Technologies, 55% of all information is considered to be ‘dark’ – in other words, of unknown value. The report goes on to say that where information has been analysed, 33% is considered to be “ROT”: redundant, obsolete or trivial. Hence the ‘credibility gap’ between the rate at which information is being created and our ability to process and extract value from it before it becomes ROT.

But the good news is that more organisations are recognising that there is some potential value in the data and information that they create and store, with growing investment in people and systems that can make use of this information.

The PwC Global Data & Analytics Survey 2016 emphasises the need for companies to establish a data-driven innovation culture – but there is still some way to go. Those using data and analytics tend to focus on the past, looking back with descriptive (27%) or diagnostic (28%) methods. The more sophisticated organisations (a minority at present) use a forward-looking predictive and prescriptive approach to data.

What is becoming increasingly apparent is that C-suite executives, who have traditionally relied on instinct and experience to make decisions, now have the opportunity to use decision-support systems driven by massive amounts of data. Sophisticated machine learning can complement experience and intuition. Today’s business environment is not just about automating business processes – it’s about automating thought processes. Decisions need to be made faster to keep pace with a rapidly changing business environment, so decision making based on a mix of mind and machine is now coming into play.

One of the most interesting by-products of this Big Data era is machine learning, mentioned above. Machine learning’s ability to scale across the broad spectrum of contract management, customer service, finance, legal, sales, pricing and production is attributable to its ability to continually learn and improve. Machine learning algorithms are iterative in nature, constantly learning and seeking to optimise outcomes. Every time a miscalculation is made, the algorithm corrects the error and begins another iteration of the data analysis. These calculations happen in milliseconds, which makes machine learning exceptionally efficient at optimising decisions and predicting outcomes.
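To make that ‘predict, measure the error, correct, repeat’ loop concrete, here is a minimal sketch of an iterative learning algorithm – a toy gradient-descent line-fit in plain Python, not any particular vendor’s product:

```python
# Toy illustration of the iterative machine-learning loop described above:
# make a prediction, measure the miscalculation, correct the model, repeat.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (x, y) observations

w, b = 0.0, 0.0        # model parameters: predict y as w*x + b
learning_rate = 0.01

for iteration in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y      # the miscalculation on this example
        grad_w += 2 * error * x      # how the error responds to changes in w
        grad_b += 2 * error          # ...and in b
    # Correct the parameters, then begin another iteration of the analysis
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(f"learned model: y = {w:.2f}x + {b:.2f}")  # converges to roughly y = 2x
```

Each pass is trivially cheap – milliseconds, as noted above – which is why running thousands of corrective iterations is no obstacle.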

So, where is all of this headed over the next few years? I can’t recall the provenance of the quote “never make predictions, especially about the future”, so treat these predictions with caution:

  1. Power to business users: Driven by a shortage of big data talent and the ongoing gap between the people who need business information and the analysts and data scientists who unlock it, there will be more tools and features that expose information directly to the people who use it. (Source: Information Week 2016)
  2. Machine-generated content: Content based on data and analytical information will be turned into natural-language writing by technologies that can proactively assemble and deliver information through automated composition engines. Content currently written by people – shareholder reports, legal documents, market reports, press releases, white papers – is a prime candidate for these tools. (Source: Gartner 2016)
  3. Embedding intelligence: On a mass scale, Gartner identifies “autonomous agents and things” as one of the up-and-coming trends, already marked by the arrival of robots, autonomous vehicles, virtual personal assistants and smart advisers. (Source: Gartner 2016)
  4. Shortage of talent: Business consultancy A.T. Kearney found that 72% of market-leading global companies reported a hard time hiring data science talent. (Source: A.T. Kearney 2016)
  5. Machine learning: Gartner says that an advanced form of machine learning called deep neural nets will create systems that can autonomously learn to perceive the world on their own. (Source: Gartner 2016)
  6. Data as a service: IBM’s acquisition of the Weather Company – with all its data, data streams, and predictive analytics – highlighted something that’s coming. (Source: Forrester 2016)
  7. Real-time insights: The window for turning data into action is narrowing. The next 12 months will be about distributed streaming alternatives built on open source projects like Kafka and Spark. (Source: Forrester 2016)
  8. Roboboss: Some performance measurements can be consumed more swiftly by smart machine managers aka “robo-bosses,” who will perform supervisory duties and make decisions about staffing or management incentives. (Source: Gartner 2016)
  9. Algorithm markets: Firms will recognize that many algorithms can be acquired rather than developed – “just add data”. Examples of services available today include Algorithmia, DataXu and Kaggle. (Source: Forrester 2016)

The one thing I have taken away from the various reports, papers and blogs I’ve read as part of this research is that you can’t think about Big Data in isolation. It has to be coupled with cognitive technologies – AI, machine learning, or whatever label you want to give them. Information is being created at an ever-increasing velocity, and the window for decision making is getting ever narrower. These demands can only be met by coupling Big Data and Data Analytics with AI.




Communities of Practice – Planning For Success

My experience of knowledge sharing in organisations stems mainly from my involvement in setting up Communities of Practice (CoPs) for UK local government. This was part of a broader Knowledge Management strategy that I was commissioned to deliver for the Improvement and Development Agency (now part of the Local Government Association – LGA). An online collaboration platform was launched in 2006 to support self-organising, virtual communities of local government and other public sector staff. The purpose was to improve public sector services by sharing knowledge and good practice.

Over the past 10 years, the community platform has grown to support over 1,500 CoPs, with more than 160,000 registered users. This has led to many service improvement initiatives, from more efficient procurement and project planning to more effective inter-agency collaboration in delivering front-line services such as health and social care. It has also provided some useful information on the dynamics of social collaboration and community management, e.g. the factors that influence the success of a community.

What does a successful CoP look like?

Success will of course depend on the purpose of the community. Some CoPs have been set up as networks for learning and sharing; others have a defined output, e.g. developing new practice for adult social care. It is clearly more difficult to establish success criteria for a CoP dedicated to knowledge sharing than for one with a defined output; success for the former will rely on more subjective analysis, whereas the latter will probably produce concrete evidence, e.g. a policy document or case study.

However, rather than debate the criteria for assessing the “success” of a CoP (or any other organisational learning system), I’d prefer to consider how we monitor and assess its “health”. For this approach I think we have to consider the CoP as analogous to a living, breathing organism.

A healthy CoP will show clear signs of life; this can be assessed using various quantitative indicators, such as:

  • Number of members
  • Rate of growth of the community
  • Number and frequency of documents uploaded
  • Number and frequency of documents read or downloaded
  • Number and frequency of new blog posts
  • Number and frequency of forum posts
  • Number and frequency of comments
  • Number of page views per session
  • Time spent on the CoP per browser session

…etc.

No single indicator in isolation will demonstrate the good health of a CoP, but taken together they can give a general perspective on how vibrant and active the community is.
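By way of illustration, indicators like these can be rolled up into a simple composite ‘health check’. This is only a sketch – the metric names, weights and thresholds below are my own invention, and would need tuning to each community’s normal rhythm:

```python
# Hypothetical composite health check for a CoP: compare this month's
# activity against the community's usual monthly rhythm.
weights = {
    "new_members": 2.0,
    "documents_uploaded": 1.5,
    "blog_posts": 1.5,
    "forum_posts": 1.0,
    "comments": 1.0,
}

def health_score(this_month: dict, usual_month: dict) -> float:
    """Weighted activity relative to baseline; 1.0 means 'normal rhythm'."""
    score = 0.0
    for metric, weight in weights.items():
        baseline = usual_month.get(metric, 0) or 1   # avoid divide-by-zero
        score += weight * this_month.get(metric, 0) / baseline
    return score / sum(weights.values())

this_month  = {"new_members": 3, "documents_uploaded": 2, "blog_posts": 1,
               "forum_posts": 12, "comments": 20}
usual_month = {"new_members": 4, "documents_uploaded": 5, "blog_posts": 2,
               "forum_posts": 15, "comments": 25}

print(f"health score: {health_score(this_month, usual_month):.2f}")
# A score well below 1.0 for several months running may warrant intervention.
```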

Continuing with the analogy of a living, breathing organism, different CoPs will have different metabolisms: some may be highly active, others fairly sedate. Understanding the community ‘rhythm’ is a key part of knowing when intervention is required to maintain it. Not all CoPs are going to be vibrant and active all of the time; there may be periods of relative inactivity as a natural part of the CoP lifecycle. But it’s important to know the difference between a CoP that is going through a regular quiet period and one that is moribund.

A point to note: inactive CoPs are not necessarily a cause for concern. One reason for inactivity could be that the CoP has served its purpose and its members have moved on, in which case the knowledge assets of the CoP should be published and celebrated, and the CoP either closed or (with the agreement of the members) re-purposed to a new topic or outcome.

So, understanding the vital life-signs and metabolism of a CoP is fundamental to ensuring its continued good health, and thereby making it more likely to achieve its goals. And the key to that continued good health is knowing how and when to intervene when one or more of the life-signs begins to falter. Without wishing to labour the living-organism analogy too much, it’s the equivalent of knowing when someone is unwell and administering the appropriate medicine. [See the concluding section for symptoms and potential cures for an ailing CoP.]

The Online Facilitator

Where does the CoP facilitator or e-moderator come into all of this? Well, I mentioned earlier that over the 10 years since its inception, the local government CoP strategy has provided some useful information on the dynamics of social collaboration and community management. For example, there is clear evidence that CoPs with full- or part-time facilitation/e-moderation are much more likely to succeed and be self-sustaining than those that rely entirely on self-organisation, or community networks with no clearly defined roles or responsibilities.

The most successful CoPs (and I should clarify here that I’m using ‘success’ to mean ‘in good health’) are those where there is more than one facilitator/e-moderator and where interventions by the facilitators/e-moderators are frequent and predictable. These may take various forms: regular polls of the CoP members; regular e-bulletins or newsletters; a schedule of events (face-to-face or virtual); regular input to forum posts and threads; seeding new conversations; back-channeling to make connections between members of the CoP; etc.

In other words, show me a good and effective CoP facilitator/e-moderator and I can show you – in all probability – a healthy and successful CoP (or similar organisational knowledge sharing community).

Attributes Of A Good Facilitator

I’ve often been asked “what makes a good community facilitator/e-moderator?” This is a difficult one, and I’m of the opinion that it is more of an art than a science. The technical administration functions of the role can be taught, but the good facilitators/e-moderators I have met bring another dimension to the role: empathy with, and understanding of, human behaviours and personalities – something I suspect comes with experience rather than formal teaching. What I do think is important is having some knowledge of (not necessarily ‘expert’ status in), and enthusiasm for, the topic or theme of the CoP (also referred to as the ‘domain of knowledge’). This will help where interventions are necessary, and the community members are more likely to regard the facilitator/e-moderator as one of their own.

There have been various papers and blogs published about the role and responsibilities of an online CoP facilitator, but maybe the following diagram captures the essence of the role.

Facilitator Role

(Reworked from an original by Dion Hinchcliffe)

Conclusion

The conclusion? Based on a significant body of evidence, coupled with personal experience: if you want to ensure the success of your Community of Interest or Practice, make sure you’ve invested in a team of good, experienced community facilitators.



How to fail with Twitter

I’ve been using Twitter since 2007, and though I’m not in the same league as celebrities (or z-listers) who count their followers in hundreds of thousands, I’m comfortable knowing that my following has grown organically, that I’ve never paid for new followers, and that I know and recognise most of them in the virtual world we populate.

Much has been written about how to use social media – most of it crap, and most of it aimed at marketing, brand promotion or people with massive egos.

Since I don’t fall into any of these categories – I use Twitter mainly for engaging with people who have something useful to say, picking up on news and ideas, and sharing stuff I’ve learnt (even the useful stuff!) – feel free to ignore the following tips, all of which are aimed at those who use their Twitter statistics to massage their overblown egos:

  • Make sure you auto-reply to new follows with a link to your free (but crap) ebook.
  • Provide an obscure description of who you are and what you do, or…
  • Have a completely blank bio.
  • Have a nice pose showing that six-pack or gawky grin.
  • Have a profile photo or an image that only makes sense to you and your imaginary friends.
  • Attract like-minded followers by posing with a gun, a knife or a swastika flag in the background.
  • Always refer to yourself as an “expert”, “ninja” or “blackbelt”.  You’re in a much better position to judge this than anyone else.
  • Never add a link to a great resource you’ve cited.
  • Have big gaps (e.g. days) between posts.
  • Try and follow thousands of random people. They’re bound to follow you back.
  • Write about the cat/hamster/holiday over and over again, and don’t forget to include the photos.
  • Fill your tweet with obscure abbreviations and hashtags.
  • Send an auto-DM to every new follower suggesting you connect on Facebook or LinkedIn.
  • Retweet EVERYTHING!
  • Follow everyone and everything – even those with zero tweets.
  • Say whatever comes into your head – no need to think (this one is a bit of a challenge for politicians, elected councillors and footballers!)
  • Use Twitter as your primary marketing plan.
  • Try and find an idiot to have an argument with. See who wins.
  • Take credit for tweets that did not originate from you.
  • Tweet on every piece of news you can get your hands on.
  • Tweet about your need for coffee or what you had for breakfast.
  • Be emotional and let off steam.
  • Always remember that your follower count is far more important than the content of your tweets.
  • Pay for followers (most of them will be bots anyway) – quantity trumps quality.
  • Make up new hashtags and try to avoid using ones that are already in use to categorise information.
  • Look out for anyone who has only tweeted a few times but has many thousands of followers. This is a mark of ‘awesome’ – the followers can’t all be wrong, can they?

I’m sure this is not an exhaustive list. If you have any more tips for growing your ego – sorry, your Twitter following – let me know and I’ll post an updated list.



Knowledge Management – Don’t Forget The SMEs!


The research paper by Cheng Sheng Lee and Kuan Yew Wong in the December issue of Business Information Review raises a number of interesting points that deserve wider discussion. The abstract is as follows:

Knowledge management (KM) is recognized as an important means for attaining competitive advantage and improving organizational performance. The evaluation of KM performance has become increasingly vital, as it provides the direction for organizations to enhance their performance and competitiveness. A survey was carried out to test the applicability of 14 constructs based on knowledge resources, KM processes, and KM factors in measuring the KM performance for small and medium enterprises (SMEs) in Malaysia. This article intends to further explore the effects of company size (micro, small, and medium) and KM maturity on knowledge management performance measurement (KMPM). Two-way analysis of variance results indicate that company size and KM maturity do affect some aspects of KMPM in SMEs.

The research focused on the effectiveness of knowledge management techniques in Small to Medium Enterprises (SMEs) in Malaysia. Though the scope of the research is limited to one geographic region, the findings could – and should – be tested against a wider and more international cohort.

According to the research paper, SMEs in Malaysia account for up to 98.5 percent of the total number of businesses and contribute up to 33.1 percent of GDP. They employ 57.5 percent of the total workforce.

To offer some comparison, in the UK SMEs account for over 99.8 percent of the total number of businesses; they contributed over half of UK output (GVA) in 2013 and employ 48 percent of the total private sector workforce.

The EU average SME contribution to GDP is 55 percent.

It is clear from this data that SMEs make a significant, and growing, contribution to the UK and European economies. It seems quite odd, therefore, that so little research has been undertaken into how knowledge management strategies and techniques have been utilized within and across this sector.

The Cheng Sheng Lee/Kuan Yew Wong research gives us some insights that could be tested against a wider geographic sample of SMEs. Some key points from the research are as follows:

  • The literature review identified that the size of an organization affects its behaviour and structure (Edvardsson, 2006; Rutherford et al., 2001) and influences the adoption and implementation of KM (Zaied et al., 2012).
  • SMEs should not be perceived as a homogeneous group. They can themselves be categorized by relative size – micro, small and medium – which can influence the way that KM is implemented.
  • In terms of human capital, medium-sized SMEs focus more on codification strategies (explicit knowledge), whereas micro-sized SMEs depend more on socialization strategies.
  • An obvious point, but reinforced by the research: the need for better infrastructure, such as tools, office layout, rooms etc., increases as the organization grows.
  • Knowledge maturity is a key attribute that should be monitored and measured. The value of employees, in terms of their contribution to the success of the organization, increases as they progress through the beginner, intermediate and advanced stages of KM maturity. Clearly the impact of an employee leaving without an effective knowledge transfer process will be more keenly felt by a small organization. [NB: this is not an excuse for large organizations to treat it as a lower priority!]
  • Company size does make a difference to KM performance measurement. A number of explanatory factors are proposed, e.g. the impact of high staff turnover, limited resource redundancy in smaller organizations, and the likelihood that smaller organizations will prioritize implementation processes over performance measurement.
  • KM performance measurement (KMPM) is still new for SMEs, as the majority of analyst reports and case studies remain focused on large organizations, with a mindset that SMEs do not need, or are not ready for, KMPM.

Overall, this is an excellent piece of research and highly recommended reading which, despite its limited sample size and geographic boundary, gives some very useful insight into how KM is being implemented across SMEs. Reassuringly, it shows that a growing number of SMEs see KMPM as vital to the growth and success of their business.

The paper is also a wake-up call to academic, research, analyst and consultancy organizations: we need far more definitive and comprehensive studies in this field, embracing the UK, Europe and other key industrial and economic zones.

To finish with a quote from the authors:

“Enough with large organizations; SMEs should not be neglected as they play a major role in a country’s economic growth.”

On this evidence, who could disagree?

Image source: http://www.denisflorent.fr/small-is-beautiful/



Watson Analytics

I recently had an introductory presentation to IBM’s Watson Analytical Engine and was mightily impressed by what I saw.

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. Unstructured data typically includes news articles, research reports, social media posts and enterprise system data.

You can set up a freemium account on Watson and get immediate access to the full range of features. As with most freemium services there are some limits; these come in the form of file size and data storage restrictions. You can only upload flat files of no more than 100,000 rows and 50 columns, and there is a data storage limit of 500 MB. If you want more than this, you have to consider the Personal or Professional editions.

To get started you will need to set up an IBM id (e.g. your email address) and agree to the Ts & Cs. Nothing ominous here, and you can opt out of any IBM emails. Once your email is validated, sign in to your newly created account and you’ll see the main Watson interface.


To get started, I recommend watching the video. There is a temptation to dive straight in and work your way through the various tools and features. However, not everything is intuitive, and it’s well worth spending some time with the various tutorials and help files first.

I had a few problems when uploading some of my own “test” datasets, which as I mentioned earlier are limited to 100,000 rows, 50 columns and 500 MB on the free account. If you just want to have a play with the various features, it’s probably better to use one of the tried and tested datasets available from the Watson Analytics community.
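Before uploading your own data, it’s worth a quick pre-flight check against those free-tier limits. The helper below is my own convenience sketch (not an IBM tool), assuming a plain CSV file:

```python
import csv
import os

# Free-tier limits mentioned above: 100,000 rows, 50 columns, 500 MB.
MAX_ROWS, MAX_COLS, MAX_BYTES = 100_000, 50, 500 * 1024 * 1024

def check_dataset(path: str) -> list:
    """Return a list of problems that would block a free-tier upload."""
    problems = []
    if os.path.getsize(path) > MAX_BYTES:
        problems.append("file exceeds 500 MB")
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader, [])
        if len(header) > MAX_COLS:
            problems.append(f"{len(header)} columns (limit {MAX_COLS})")
        data_rows = sum(1 for _ in reader)
        if data_rows > MAX_ROWS:
            problems.append(f"{data_rows} rows (limit {MAX_ROWS})")
    return problems

issues = check_dataset("my_test_data.csv")  # hypothetical file name
print("ready to upload" if not issues else f"fix first: {issues}")
```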

A word of warning – you can get totally immersed in the Watson environment, and I’ve probably lost a day or two somewhere in trying out the technology. However, if your job involves data and decision making, I recommend giving it a go.

Remember too, this is a decision-support tool, not a decision-making tool. You still have to engage your brain when looking at the visualisations, and you do need some understanding of your data. And don’t go away thinking that the “Predictions” facility is going to give you the winning numbers for this week’s lottery – but by all means try!



Connecting Knowledge Communities


The forthcoming NetIKX event “Connecting Knowledge Communities“, scheduled for Wednesday 23rd September, is shaping up to be one of those ‘must attend’ events for anyone who is confused (or bemused) by the plethora of different groups and communities dedicated to the support of knowledge and information professionals.

To quote an abstract from the event promotion:

If you want to consider how membership organisations work and gather ideas and tips for your personal networking, this will be a good meeting to attend. You may also get information (and possibly knowledge) about the organisations that are concerned with knowledge and information!

It does appear to be something of a paradox that, on the one hand, knowledge professionals eulogise and promote the benefits of knowledge sharing, while on the other they fragment into multiple organisational domains that – for a variety of reasons – operate more or less independently, with few opportunities for inter-organisation collaboration. We tend to overcome some of these problems by joining multiple membership organisations in the hope that our personal knowledge integration will act as the ‘sum of the parts’. However, we can’t hope to join them all, and we’ve probably found that each organisation has a particular (and possibly unique) focus.

For details of the organisations appearing (in no particular order), see NetIKX75 – Connecting Knowledge Communities (PDF file).

So, here then is an event which brings some of the organisations in the “knowledge” landscape together, in one place and at one time: an opportunity to learn about the different KM/IM communities, what they do and what they offer; perhaps also an opportunity to reflect on our own professional development and the direction we want to travel; and, not least, a chance to meet new people and grow our personal networks.

Can you really afford to miss this event? If not, register your attendance on the NetIKX website soon – numbers are limited by the size of the room. I’ll be there!



12 Principles Of Knowledge Management


I recently came across a paper by respected author, consultant and keynote speaker Verna Allee on the 12 principles of Knowledge Management. Reading the paper, two thoughts occurred to me:

  1. The principles appear to be so simple and obvious
  2. Why didn’t I think of them!

I asked myself whether these statements meet the strict definition of ‘principles’, which is:

a fundamental truth or proposition that serves as the foundation for a system of belief or behaviour or for a chain of reasoning.

and I firmly believe they do.

I’ve reproduced the principles below, with due attribution to Verna Allee. I think they should be imprinted on the mind of anyone aspiring to be a competent and successful knowledge manager:

  1. Knowledge is messy. Because knowledge is connected to everything else, you can’t isolate the knowledge aspect of anything neatly. In the knowledge universe, you can’t pay attention to just one factor.
  2. Knowledge is self-organizing. The self that knowledge organizes around is organizational or group identity and purpose.
  3. Knowledge seeks community. Knowledge wants to happen, just as life wants to happen. Both want to happen as community. Nothing illustrates this principle more than the Internet.
  4. Knowledge travels via language. Without a language to describe our experience, we can’t communicate what we know. Expanding organizational knowledge means that we must develop the languages we use to describe our work experience.
  5. The more you try to pin knowledge down, the more it slips away. It’s tempting to try to tie up knowledge as codified knowledge: documents, patents, libraries, databases, and so forth. But too much rigidity and formality regarding knowledge lead to the stultification of creativity.
  6. Looser is probably better. Highly adaptable systems look sloppy. The survival rate of diverse, decentralized systems is higher. That means we can waste resources and energy trying to control knowledge too tightly.
  7. There is no one solution. Knowledge is always changing. For the moment, the best approach to managing it is one that keeps things moving along while keeping options open.
  8. Knowledge doesn’t grow forever. Eventually, some knowledge is lost or dies, just as things in nature. Unlearning and letting go of old ways of thinking, even retiring whole blocks of knowledge, contribute to the vitality and evolution of knowledge.
  9. No one is in charge. Knowledge is a social process. That means no one person can take responsibility for collective knowledge.
  10. You can’t impose rules and systems. If knowledge is truly self-organizing, the most important way to advance it is to remove the barriers to self-organization. In a supportive environment, knowledge will take care of itself.
  11. There is no silver bullet. There is no single leverage point or best practice to advance knowledge. It must be supported at multiple levels and in a variety of ways.
  12. How you define knowledge determines how you manage it. The “knowledge question” can present itself many ways. For example, concern about the ownership of knowledge leads to acquiring codified knowledge that is protected by copyrights and patents.

Reading through these principles I’m reminded of a famous quote by Mahatma Gandhi:

Truth is by nature self-evident. As soon as you remove the cobwebs of ignorance that surround it, it shines clear.

Amen to that.



Content Curation Needs Humans After All!

As I ponder my forthcoming session on the topic of “Content Curation” at the CILIP Conference in Liverpool this Friday 3rd July, I’m aware that the slides I was asked to prepare and submit to the organisers last month are already out of date. Unsurprising, I guess, given the rapidly changing business environment that underpins this discipline. My notes did include mention of the emerging growth of fully automated content curation tools and platforms, and the inherent problems (as I see them) in thinking that technology alone will help us make sense of the relentless streams of raw, unfiltered, context-free data and information that pervade our senses during our working days.

I was therefore both surprised and encouraged by the recent announcements, coming in quick succession from the likes of Facebook, Apple, Twitter, Google and Yahoo!, that humans are in fact better than machines for sense-making and finding relevance. Facebook has announced a return to what Chris Cox, its chief product officer, calls “the qualitative”. This is an acknowledgement that real artificial intelligence needs humans at both ends of the input-output spectrum.

Facebook has hired several hundred people to rate the content that appears on its users’ news feeds. The music services offered by Apple and Google now offer their customers playlists assembled by human beings. Apple is also hiring a team of editors to work on the Apple News app unveiled during the company’s recent WWDC event, before the app’s launch as part of its iOS 9 software later in the year.

Twitter announced details of “Project Lightning”, which will provide collections of tweets curated from key events and trending discussions. It is recruiting a new team of editors who will use data tools to comb through events, recognise emerging trends, and pluck the best content for republishing from the ocean of updates flowing across Twitter’s servers.

So what does all of this tell us? I think it’s the dawning realisation that algorithmic systems (including AI) are not sufficiently advanced (and will they ever be?) to understand the realities of modern life – its politics, its rapidly changing cliques, boundaries, rules and religions. The basic qualities of thought and reflection still elude the logic gates of even the smartest computers.

Though I started this post with a concern that maybe my month-old slides were out of date, on reflection they’re not. Maybe they don’t include incisive commentary about the latest updates from Apple, Facebook etc., but my session does focus significantly on the human elements of content curation, and the need for us to develop the disciplines, skills and competencies to be able to make sense of the world we live in.

Content curation is done by people – information professionals, editors, writers, me, and perhaps you. It is NOT performed by tools, algorithms, robots or software. These can help us through the process, but we can’t rely on them fully.

It’s a difficult job, but one which is in increasing demand by businesses the world over – as the evidence from the likes of Apple and Facebook demonstrates.

 





Knowledge Management – Measuring Return on Investment


A common and recurrent theme that I keep coming across is how to measure the value of knowledge management, e.g. the return on investment (ROI) of implementing a knowledge management strategy. This may cross over into having a social media strategy where the goal is to support knowledge sharing, so I’ll use the terms ‘KM strategy’ and ‘social media strategy’ interchangeably in this context.

I don’t doubt the importance of being able to measure results, and it’s the job of managers to ensure they get value out of any investment in training, technology, organisational development or whatever. However, these things are notoriously difficult to measure – for example, how do you put a price on a conversation? This got me thinking about turning all of this on its head and considering how we should measure the cost of NOT having a knowledge management or social media strategy, or of NOT making any change.

Using this approach we can at least examine the status quo and determine whether business processes, capacity, staff knowledge etc. are fit for purpose. So, rather than spending time and effort creating a business case for a KM or SM strategy, ask managers to justify why things should stay as they are.

Some pertinent questions for managers might be:

  1. Are your staff currently motivated and inspired?
  2. Do your staff have all the relevant information to do their jobs effectively?
  3. Do your staff have the right tools for the work they are being asked to do?
  4. Do your staff understand their place in the wider organisation and their input and output dependencies for the business processes they contribute to?
  5. Do your staff have adequate opportunities to share knowledge and information with other parts of the organisation? Are they encouraged to do so?
  6. Are you confident that you can react to rapidly changing demands on your staff?
  7. Do you have sufficient knowledge and information to consider the impact of external events on you and your staff and to plan accordingly?
  8. Do you know what your customers are saying about you (within and external to your organisation)?
  9. Do current policies and guidelines support or hinder you and your staff in their work?
  10. Does your manager fully understand what you and your staff do?

There are probably other questions that could be asked, but the key point is that any question which triggers a negative response is potentially a catalyst for change.  This also means it could become a performance indicator if change is agreed, i.e. using qualitative or quantitative techniques.

So, we have the beginnings of a measurable approach to change; we know where we are now and we should know what the desired outcomes are. The difference is what we need to measure.

Of course, the problem remains that not all changes can be measured in strictly cash-value terms, which is what many people consider to be the true meaning of ROI. I go back to the point I made earlier: how do you measure the value of a conversation, or of some information shared? The answer is, you don’t, and the sooner everyone recognises this the better. Measuring impact can be just as important as measuring value. The impact might be improved customer satisfaction (measured using surveys), less time to complete a task, or improved staff morale (again, measured using surveys). Any of these can – and probably will – have an effect in terms of cash value to the organisation, but I firmly believe that converting impact to cash value is an exercise in futility, since more often than not the formulae and algorithms have too many variables.
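To illustrate what a ‘Return on Impact’ report might look like in practice (the indicators and numbers below are invented for the example), the arithmetic is nothing more exotic than percentage change against a baseline:

```python
# Invented before/after indicators for a hypothetical KM intervention.
baseline = {"customer_satisfaction": 6.2, "avg_task_hours": 4.5, "staff_morale": 5.8}
after    = {"customer_satisfaction": 7.1, "avg_task_hours": 3.6, "staff_morale": 6.9}

lower_is_better = {"avg_task_hours"}  # for these, a fall is an improvement

for indicator, before in baseline.items():
    change = (after[indicator] - before) / before * 100
    if indicator in lower_is_better:
        change = -change
    print(f"{indicator}: {change:+.1f}% impact")
```

No cash conversion, no speculative formulae – just measured movement in the things that were judged to matter.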

So, in terms of ‘ROI’, think ‘Return on Impact’ rather than ‘Return on Investment’ when considering knowledge management strategies, and develop the strategy from the starting point of getting staff to justify the status quo. After all, change is part of life and, as the saying often attributed to Darwin goes:

It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change.

(Originally published by Stephen Dale, June 2010)



Murdermap Mashup


Spotted originally by my colleague Conrad Taylor: a geospatial application that plots more than 400 homicide cases drawn from court reports and the Old Bailey’s archives. Something for the ‘gruesome violence’ mashup category, maybe. You can even run deep-dive queries by type of murder weapon, e.g. ligature, knife, gun, etc.

According to the website, the ‘murdermap’ project is dedicated to covering every single case of murder and manslaughter in London from crime to conviction. It aims to create the first ever comprehensive picture of homicide in the modern city by building a database stretching from the era of Jack the Ripper in the late 19th Century to the present day and beyond.

Information is obtained from the police, media coverage, court records and original reporting – and by making the map freely available the site’s owners hope to reveal the stories behind the crime figures.

I’m not quite sure of the utility of this data, other than to criminology researchers, though I guess it might be useful for the housing market, e.g. “am I moving to/living in an area where I’m more likely to be shot or stabbed?” Come to think of it, I’ll check that out!
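As it happens, this kind of mashup is easy to prototype. The sketch below is purely illustrative: it assumes a hypothetical cases.csv with lat, lon, weapon and year columns (murdermap doesn’t, as far as I know, publish its data in this form) and uses the folium mapping library:

```python
import csv

import folium  # pip install folium

# Plot knife cases from a hypothetical extract on an interactive map of London.
london = folium.Map(location=[51.5074, -0.1278], zoom_start=11)

with open("cases.csv", newline="") as f:
    for case in csv.DictReader(f):
        if case["weapon"] == "knife":  # the 'deep dive' filter by weapon type
            folium.Marker(
                location=[float(case["lat"]), float(case["lon"])],
                popup=f"{case['weapon']}, {case['year']}",
            ).add_to(london)

london.save("murder_map.html")  # open in a browser to explore
```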

“Maybe it shows there is a fate worse than death – being mashed up afterwards.” CT

http://www.murdermap.co.uk/murder-map.asp

