Managing Knowledge on Slack 2.0

Summary

The proliferation of Slack into the workplace has been remarkable. While the jury is still out on whether Slack can replace email, there is no questioning the important place it has come to occupy in communication and collaboration at many businesses. But while Slack has many advantages over previous enterprise messaging and collaboration tools, managing knowledge on Slack is still a challenge. This article explores the importance of knowledge management on Slack, some of the challenges, and why we need a tool built specifically for Slack to actually enable knowledge management on it.


We are all in love with Slack. Slack now has over 4 million users and continues to grow at a rapid pace, turning the enterprise communication industry on its head. A survey conducted by Hiten Shah of CrazyEgg in 2015 reveals the reasons why people use Slack, the significant ones being a reduction in email volume, a better interface and lots of integrations.

Slack wasn’t the first messenger service that entered the enterprise arena. Yammer, Lync, and HipChat are some of the other chat and messaging services for business and enterprise.

Slack User Growth

Slack has a few unusual features that make it perfectly suited for work, including automatic archiving of all your interactions, a good search engine and the ability to work across just about every device you use. Another reason is that Slack is fun to use. Part of this is the helpful Slackbot, which guides users and provides assistance with a playful yet helpful personality, as well as the myriad of other bots that are available to add in. Slack also brings a feeling of intimacy with co-workers on the other side of the country.

When email first started out it was still a luxury; not many organizations had it. Over time, it has become an indispensable means of communication. Team messaging is heading in the same direction, and as it takes center stage in business communication, other enterprise tools need to adjust and build on the new workplace normal. One such tool is knowledge management: how we capture, organize and share knowledge within teams.

Should We Care About Knowledge Management in a Slack Setting?

As a recent report from the Society for Marketing Professional Services (SMPS) notes, as we “transition from the Information Age to the Knowledge Era . . . continued training of both marketing and technical staff is vital to a firm’s longevity. So while ignorance may be bliss, knowledge is indeed power.”

Knowledge sharing is probably the most common type of interruption at any company. Team members frequently have to share their knowledge with other team members. This is where it can become quite costly, certainly in terms of employee productivity. A lot of companies don’t have a robust enough process and lose knowledge when employees move on or change roles. They lose their team’s deep smarts: the skills and know-how that have taken a lot of time and effort to cultivate. The cost of this loss is high.

Email, by design, has an inherent filter built into it. To put something down in an email and send it out to people (and have it stay in their inboxes), it has to be sufficiently important. By contrast, chat-based tools such as Slack simply do away with this filter. While this may result in more noise, it also results in more conversations and more sharing of data and files. With a more intimate team, more conversations can happen in channels, which anyone on the team can join. Those conversations in Slack are what create that magical sense of “ambient awareness” of what’s happening, as well as an archive of organizational knowledge over time. Hence an increasing need to better capture, organize and share all this knowledge.

Challenges of managing knowledge in Slack

Slack uses a product architecture based on streams of data ordered by time. That means, by its very nature, things will get lost as new stuff comes up in an endless waterfall of information. For group chatting and social networking, this is extremely useful. But for managing knowledge and making it accessible, it can become a nightmare.

Here are some of the reasons why managing knowledge on Slack can be a challenge:

Information Overload

New knowledge is organically created and shared everyday on Slack, but it quickly moves out of sight in the constant stream of new updates. This sometimes makes it challenging to find, record and share that fleeting knowledge.

Take one look at any team’s Slack channel, and you’ll find people having casual conversations, sharing everything that they would share in an email, including pieces of information that they want their co-workers to have easy access to (like the parts of an email you bold or italicize to draw attention): an important link, a piece of code that needs feedback, a file that needs to be viewed, a process document, an important topic that needs discussion. Since Slack moves fast, most of these pieces of information, or knowledge, are lost in the thread.

Users shouldn’t have to always be present just so that they don’t miss out on the important things shared. The chat history becomes far too big for users to mine for all the important things they’ve missed.

Repetitive Questions

A challenge that several teams face with Slack is repetitive questions that clutter Slack channels. For team members, repetitive questions are annoying and reduce productivity. Slack is great at preserving conversations but not so great at surfacing answers.


Search in Slack is actually pretty good. Not only is Slack good at retrieving past messages and conversations, but anything that is linked to in Slack or attached as a shared object (text-based or with text metadata) also becomes searchable. The challenge here is not the search engine itself but the fact that the platform generates so much conversation that getting to the right knowledge takes a lot of time. Also, finding related threads and discussions across channels can be cumbersome when different terms (synonyms or interchangeable technical terms) are being used, even if search is good.
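To make the synonym problem concrete, here is a minimal sketch (the vocabulary and helper names are invented, and this is not how Slack's own search works) of how a knowledge tool could expand a query term with synonyms before matching it against archived messages:

```python
# Hypothetical synonym table; a real tool would curate domain vocabulary.
SYNONYMS = {
    "bug": {"bug", "defect", "issue"},
    "deploy": {"deploy", "release", "ship"},
}

def expand(term):
    """Return the term plus any known synonyms."""
    return SYNONYMS.get(term, {term})

def search(messages, query):
    """Match messages containing the query term or any of its synonyms."""
    terms = expand(query.lower())
    return [m for m in messages if terms & set(m.lower().split())]

messages = [
    "we found a defect in the login flow",
    "lunch at noon?",
    "ready to ship the new build",
]
print(search(messages, "bug"))     # matches the "defect" message
print(search(messages, "deploy"))  # matches the "ship" message
```

A plain keyword search for "bug" would miss the "defect" message entirely; the expansion layer is what bridges the vocabulary gap described above.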

There are also situations where you know a specific person uploaded a file but you can’t remember what it was called, or someone talked about a particular subject but you can’t remember who. This makes the information particularly hard to find using Slack’s existing search, and the information gets lost in the ‘noise’ of the channel. This problem is compounded by the high numbers of messages that Slack processes.

Slack Search


It’s often hard to find specific things (documents especially) and even harder to aggregate bits of information to make sense of what’s going on in the environment. Slack lacks a way to unlock what’s going on at a “higher level”: aggregating conversational data to find trends that would otherwise go unnoticed and remain lost in the noise of the conversation.

An important feature of knowledge management is to elicit not just the explicit knowledge shared by people but also the tacit knowledge that can be built by analyzing user behavior and actions. This can be immensely beneficial for organizations to improve their productivity.

Knowledge Management framework

We can apply the model of knowledge activities based on Probst’s building blocks of knowledge management (Probst 2002) to understand how Slack plays a role with respect to a knowledge management framework.

Probst KM building blocks

Probst knowledge activities

If we focus on the application of knowledge within the activities of business process, we see that:

Knowledge generation

Knowledge generation can happen:

  • Internally i.e. knowledge is created within the organization by employees or
  • Externally i.e. knowledge is created together with partners or customers

Knowledge generation includes both the creation of new knowledge and the construction of knowledge from what already exists. Slack does really well at generating knowledge, especially given the collaborative processes of knowledge building.

Knowledge transfer

Knowledge transfer is essentially the sharing of knowledge, which also happens on Slack but with its own limitations. For example, although knowledge in Slack channels can be searched, knowledge in direct messages can get lost. Similarly, sharing knowledge with an external audience, e.g. customers or channel partners, can be a challenge.

Knowledge organization

Organizing knowledge means building the relevant metadata and taxonomies so that its categorization and access can be improved and secured. The only knowledge organization we can do in Slack is associating content with different channels.

Knowledge Saving

Although Slack maintains a log of all conversations, there is no way to distribute this log, refine it or perform any intelligent operations on it.

Does this mean Knowledge Management Cannot Happen on Slack?

Absolutely not. Slack cannot do everything for everyone, and this is why they have created an app marketplace to allow others to build applications that plug these gaps. Slack’s APIs are also very well documented, and they actively support the community in developing helpful extensions to the Slack environment.

The early adopters of Slack were developers, and we can take some cue from them on how they managed their knowledge. The organization of conversation into channels combined with integration of tools such as Trello, GitHub, SVN etc. really helped to efficiently access the needed information and reduce redundancies.

These tools helped users identify relevant or needed knowledge, follow the progress of a task or project and be aware of dependencies and responsibilities through the tools’ own notifications. In fact, integrating these tools increased awareness of what others are doing and what is expected of each person, because there is more synchronization: each time a card moves in Trello, for example, users get a notification. At KnoBis, we use Trello a lot, and the Trello integration has been incredibly useful to us. It automatically posts to our #engineering channel every time a team member adds an update to the product backlog board.

In this way, Slack supported the identification of knowledge that was stored elsewhere: Slack is used as a central contact point to summarize knowledge that exists on other platforms.

As Slack extends its usage to other cross-functional teams, the need arises for a broader knowledge management system to enable similar knowledge sharing and capturing.

Knowledge Management for Slack needs to be thought differently

Slack’s features and uniqueness, which of course make it more popular, also mean that knowledge management for Slack needs to be thought of differently. Most existing knowledge base software was developed before the era of enterprise messaging and cannot latch on to the unique characteristics of these platforms, such as:

Conversations as Knowledge

More often than not, knowledge in Slack gets built as casual conversations rather than long-form rich text articles or documents. With conversations, the context and history are there to be seen and can be incredibly valuable for someone trying to understand the background. This is very different from traditional systems, which approached knowledge mostly as rich text articles.

Introduction of Bots

While bots have long lived in the quieter corners of the Internet, Slack is pushing them into the mainstream. Bots are great at making sense of lots of different types of information (schedules, meeting notes, documents, notifications from other business applications) and at making all of that data more useful by letting people interact with it as they would in a conversation with a person.

Slack bots range from the obvious—bots for recognizing good work, posting photos, translating text—to the utterly inane, like playing poker. Another tells you who’s talking too much, seemingly to shut them up. There’s one to notify you each time your startup is mentioned somewhere online, streamlining that whole wasting time on the Internet thing. They absolutely can save you time.

This of course presents a very exciting opportunity for knowledge management, as a “knowledgeable bot” can answer many questions for team members without them needing to disturb their teammates.
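The core of such a bot can be surprisingly simple. Below is a hedged sketch (the FAQ entries and function names are invented, and a production bot would sit behind Slack's Events API rather than a local function) of the matching step: fuzzy-compare an incoming question against a small store of known answers.

```python
import difflib

# Hypothetical FAQ store; a real bot would pull this from a knowledge base.
FAQ = {
    "how do i reset my password": "Use the self-service portal at /reset.",
    "where is the deployment checklist": "Pinned in #engineering, also on the wiki.",
    "who approves expense reports": "Your direct manager, via the finance app.",
}

def answer(question, cutoff=0.6):
    """Return the closest FAQ answer, or None if nothing matches well enough."""
    matches = difflib.get_close_matches(
        question.lower().strip("?! "), FAQ.keys(), n=1, cutoff=cutoff
    )
    return FAQ[matches[0]] if matches else None

print(answer("How do I reset my password?"))
```

Repetitive questions get answered instantly, and only the questions the bot cannot match (returning `None`) need to interrupt a human.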

Text Editor

Most traditional knowledge management systems use WYSIWYG editors that do not support Markdown, while Slack uses its own Markdown-style formatting. This can create challenges when capturing content from Slack or posting it to a Slack channel.
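A knowledge base that bridges the two worlds therefore needs a translation layer. Here is a deliberately minimal sketch (handling only bold, italic and links; a real converter needs a proper HTML parser) of turning WYSIWYG-style HTML into Slack's formatting, where `*bold*`, `_italic_` and `<url|text>` links are the documented conventions:

```python
import re

def html_to_slack(html):
    """Convert a few common HTML tags into Slack-style markup."""
    text = re.sub(r"</?(b|strong)>", "*", html)            # <b>x</b> -> *x*
    text = re.sub(r"</?(i|em)>", "_", text)                # <i>x</i> -> _x_
    text = re.sub(r'<a href="([^"]+)">([^<]+)</a>',        # links -> <url|text>
                  r"<\1|\2>", text)
    return text

print(html_to_slack('Read the <b>updated</b> <a href="https://example.com/doc">guide</a>'))
# → Read the *updated* <https://example.com/doc|guide>
```

The reverse direction (Slack markup back into rich text for the knowledge base) needs a similar mapping, which is exactly the friction this section describes.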

Slack APIs

Slack doesn’t allow integrations to create any custom views, instead limiting apps to plain or lightly formatted text. As a result, complex integrations generally have a pseudo-command-line interface, requiring one command to display information and yet another to act upon it. This can make it a bit of a challenge for knowledge bases, which often depend on a lot of multimedia and metadata for each piece of knowledge content.
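To illustrate the pseudo-command-line pattern, here is a sketch of the handler behind a hypothetical `/kb` slash command (the command name and knowledge store are invented; the `response_type`/`text` reply shape follows Slack's slash-command response format):

```python
# Hypothetical knowledge store; a real app would query a database or API.
KB = {"onboarding": "See the onboarding checklist on the wiki."}

def handle_slash_command(text):
    """Parse the text payload of a '/kb' command and build a text-only reply."""
    parts = text.split(maxsplit=1)
    if len(parts) == 2 and parts[0] == "search":
        return {"response_type": "ephemeral",
                "text": KB.get(parts[1].lower(), "No article found.")}
    # Anything else falls back to usage help, the classic CLI pattern.
    return {"response_type": "ephemeral", "text": "Usage: /kb search <topic>"}

print(handle_slash_command("search onboarding")["text"])
```

Everything the user sees, including search results, has to be flattened into this kind of text reply, which is why rich, metadata-heavy knowledge content sits awkwardly inside Slack.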


It is important to note that Slack doesn’t replace everything. Dave Teare, founder of Agile Bits (developers of 1Password), recently wrote that his company’s “Slack Addiction” led to “using it over all the other tools at our disposal,” which meant that employees posted support issues and development issues into Slack instead of ticketing systems and knowledge bases. This is a classic example of what happens when we try to substitute Slack for everything.

Slack does well to sit alongside those services, handling the conversational interactions and the sharing of results out of them. It isn’t going to replace a social search, a document management service or a collective aggregation service like KnoBis. Slack not only integrates things into itself but can also feed what is in it outward, so conversations and things shared in Slack can be honed, framed and considered more deeply in other services, and the results and outcomes of those considerations can be shared back into Slack. A partner service can add context and easily drop relevant documents into Slack. But Slack isn’t going to replace document management: even if its search is good, the versioning, permissions and access controls needed for compliance and other valid purposes aren’t there. Your document management service could become more pleasurable to use, though. And therefore Slack users need a “Knowledge Network”: a place where Slack users can post things that others need to know, preferably integrated with Slack so that you can post once and show everywhere.

About Rajat

Rajat is the founder of KnoBis. KnoBis is a knowledge base software for Slack and Google Apps teams. Powered by a strong search, KnoBis makes it easy to capture and share knowledge in any format: conversations, rich text articles, multimedia documents etc. Use cases of KnoBis include sales enablement, customer support enablement, intranet/internal team knowledge base and self support module for customers.

Rajat has close to 12 years of experience in the computer software industry in engineering, product management and marketing roles. Rajat is a graduate from IIT BHU.


How a knowledge map can help to identify knowledge gaps and needs, against all odds

Without understanding their knowledge needs, organizations can hardly decide on relevant KM activities to strengthen the necessary knowledge domains and advance in their business. Knowledge mapping can be an optimal way to address the challenge.

Knowledge is undoubtedly the trickiest organizational asset: possessing it alone is never enough for organizations to use it in a proper and advantageous way. That’s why, when put into the knowledge management context, organizations often look like antique shops: there are zillions of articles, but it’s absolutely unclear which ones are the most valuable – maybe, a golden statue that shines brightly or a rusty chandelier?

In this context, companies do possess a certain knowledge wealth but, unfortunately, can hardly understand what knowledge they really have and what knowledge they need in order to foster business development. What’s even worse, organizations may overlook critical knowledge gaps, which leaves them an arm’s length away from disappointing and recurrent business mistakes. In this respect, the only way for organizations to clearly understand the value of each knowledge item is to go for knowledge mapping.

5 steps to a better understanding of knowledge needs

Knowledge mapping is a knowledge management technique that helps organizations to inventory the explicit and tacit knowledge residing within different departments, business units or the entire organization. As part of a knowledge management solution, a knowledge map shows companies what knowledge they have, where it is located and who owns it, and then allows them to understand whether the available knowledge is sufficient to cover business needs.

The knowledge mapping process can be divided into 5 logical steps:

Step 1. Outline a general approach to knowledge mapping. This includes:

  • Defining a knowledge map type (for example, strategic, functional, process-based, etc.).
  • Choosing the map’s scope (for example, departmental, cross-departmental, organization-wide map).
  • Identifying key elements of the map, such as knowledge items, knowledge assets, knowledge domains, knowledge owners.
  • Creating relevant questionnaires that will help knowledge managers to inventory knowledge of each particular employee and assess its depth.
  • Bringing questionnaires to a knowledge management system (for example, companies can leverage SharePoint’s capabilities to create surveys of various complexity).

Step 2. Carry out knowledge overview, assessment and structuring through questionnaires and face-to-face meetings.

Step 3. Evaluate available knowledge and benchmark it with both the minimum required and desirable knowledge levels.

Step 4. Reveal knowledge needs and prioritize them to define those that affect business processes and hinder organizational development.

Step 5. Define relevant knowledge management activities to meet critical knowledge needs and patch severe knowledge gaps.
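Steps 3 and 4 above, benchmarking available knowledge and prioritizing the gaps, can be made concrete with a small schematic sketch (the domains, owners and numeric levels below are invented for illustration):

```python
# Toy inventory from step 2: each item records where knowledge sits, who owns
# it, the assessed level and the minimum level the business requires.
inventory = [
    {"domain": "CAD tooling", "owner": "Ann",  "level": 4, "required": 3},
    {"domain": "Bid writing", "owner": "Bob",  "level": 2, "required": 4},
    {"domain": "Site safety", "owner": "Cara", "level": 1, "required": 5},
]

def knowledge_gaps(items):
    """Return items below their required level, worst shortfall first."""
    gaps = [i for i in items if i["level"] < i["required"]]
    return sorted(gaps, key=lambda i: i["required"] - i["level"], reverse=True)

for gap in knowledge_gaps(inventory):
    print(gap["domain"], "short by", gap["required"] - gap["level"])
```

The sorted output is effectively the prioritized list of knowledge needs that step 5 then turns into concrete KM activities.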

Although these 5 steps seem easy, in reality each of them requires significant effort from knowledge managers and employees in general. As part of our knowledge management consulting practice, we’ve analyzed the effort required to create a comprehensive knowledge map and found that the initial organization-wide knowledge mapping is one of the most time-consuming and complicated KM tasks.

However, time is not the only possible stumbling block on the way to knowledge mapping. Stakeholder collaboration can also get burdensome, and here is why.

Why do employees block knowledge mapping?

Knowledge mapping means that human-to-human interaction is inevitable. This naturally leads to possible pitfalls since employees may resist it.

  • Line managers resist knowledge mapping because they claim to understand the knowledge needs of their departments better than knowledge managers do. This can be a result of managers’ protective behavior and their wish to prevent other employees from interfering in departmental life.
  • Key knowledge owners confront knowledge mapping because they are busy with their daily routine and have no time for KM-related interviews and continuous collaboration.
  • Employees can become hostile to knowledge mapping because they aren’t ready to face knowledge gaps and admit to them. Accepting a knowledge gap can be difficult psychologically, as it reveals employees’ shortcomings and forces them to take on additional self-learning or training activities.
  • Top managers can be skeptical of knowledge mapping because the process itself requires substantial effort. What’s more, it brings no benefit to an organization if nobody takes further improvement steps.

Fortunately, knowledge managers can change such an unfavourable organizational climate if they act according to one of the following scenarios.

2 scenarios to overcome human resistance and map knowledge

To break the resistance, knowledge managers can apply two feasible approaches to bring knowledge mapping into an organization. The main difference between these approaches is how quickly organizations accomplish knowledge mapping and how fast they get decent outcomes.

Scenario 1. Slow and organic knowledge mapping. While opting for this scenario, knowledge managers should look for devoted and engaged employees who are ready to participate in knowledge mapping willingly. This scenario will definitely be slow, and the first positive results won’t come quickly. However, fulfilled by voluntary enthusiasts, knowledge mapping can bring much better outcomes than if enforced. Engaged ‘mappers’ will also spread their positive experience among other employees and will incite them to participate in the mapping process.

Scenario 2. Quick and forced knowledge mapping. Unlike the first option, this scenario requires accomplishing knowledge mapping without waiting for employees’ consent. This is a suitable model when a company starts an important new business program (for example, entering a new market, launching a new product category or implementing a new development strategy). In this case, knowledge needs should be defined without delay, so that managers can clearly understand whether the planned initiatives are reasonable and can succeed. The mapping scope also decreases in this case, which helps to win top management’s support and create a KM success story that can be reproduced during enterprise-wide mapping.

Knowledge needs unveiled… what’s next?

Regardless of which model organizations choose, knowledge mapping will lead them directly towards their knowledge needs. This has great strategic value for any company that considers further business-oriented KM activities. With a clear understanding of their knowledge strengths and weaknesses, companies find it much easier to define what KM steps to take and when, and to align the defined KM course with the general business development plan so that knowledge works to the enterprise’s advantage.

By Sandra Lupanava

Sandra Lupanava is SharePoint Evangelist at ScienceSoft, a software development and consulting company headquartered in McKinney, Texas. With her 5+ years in marketing, Sandra voices SharePoint’s strengths to contribute to the platform’s positive image as well as raise user adoption and loyalty. Today Sandra advocates harnessing SharePoint’s non-trivial capabilities to create business-centric, industry-specific innovation and knowledge management solutions.

Data & Information Design

I’m not sure why it’s taken me so long to find Giorgia Lupi. Fortunately, serendipity came to my aid and I stumbled across her almost by accident. And what a find! Anyone who does anything with data and information should read her postings, starting with this one:…

I’ve picked out a few nuggets:

Embrace complexity. What made cheap marketing infographics so popular is probably their biggest contradiction: the false claim that a couple of pictograms and a few big numbers have the innate power to “simplify complexity.”

One size does not fit all. Business intelligence tools and dataviz tools for marketers have led many to believe that the ideal way to make sense of information is to load data into a tool, pick from among a list of suggested out-of-the-box charts, and get the job done in a couple of clicks. This common approach is actually nothing more than blindly throwing technology at the problem, sometimes without spending enough time framing the question that triggered the exploration in the first place. This often leads to results that are not only practically useless, but also deeply wrong, because prepackaged solutions are rarely able to frame problems that are difficult to define, let alone solve.

Sketching with data. In a way, removing technology from the equation before bringing it back to finalize the design with digital tools introduces novel ways of thinking, and leads to designs that are uniquely customized for the specific type of data problems we are working with.

What a refreshing perspective on data and information design. It’s a fairly long article, about a 10-minute read, but well worth it; in fact, it’s worth reading at least twice because there are so many insightful ideas here. If there’s an underlying message, it’s that we should devote time to enhancing our human knowledge and skills for understanding complexity, rather than relying on technology to do it all for us.


Organisational Knowledge in a Machine Intelligence era

Artificial Intelligence

A preamble to the KIN Winter Workshop 2016, 7th December 2016.

According to Narrative Science, 62 per cent of organisations will be using Artificial Intelligence (AI) by 2018.

If you asked most people when they last encountered something that used artificial intelligence, they’d probably conjure up a mental image of robots, and might be hard pressed to think of something in everyday use. Machine intelligence and machine learning – the new synonyms for “artificial intelligence” – are on the rise and are going to be pervasive. Anyone using a smartphone is already using some sort of machine intelligence with Google Now’s suggestions, Siri’s voice recognition, or Windows Cortana personal assistant. We don’t call these “artificial intelligence”, because it’s a term that alarms some people and has earned some ridicule down the years. But it doesn’t matter what you call it; the ability to get computers to infer information that they aren’t directly supplied with, and to act on it, is already here.

But what does all this mean in a practical sense? Can – or should we –  rely on intelligent machines to do the heavy (physical and cognitive) lifting for us, and if so, what does the future hold for knowledge and information professionals?

The rise of the chatbot

It’s taken about 10 years, but social media has finally been accepted as a business tool, rather than just a means for people to waste time. If you look at any contemporary enterprise collaboration system, you’ll find social media features borrowed from Facebook or Twitter embedded into the functionality. Organisations have (finally) learnt that the goal of social technology within the workplace is not simply to maximize engagement or to facilitate collaboration, but rather to support work activities without getting in the way. Having said that, we still can’t extract ourselves from email as the primary tool for doing business. Email is dead, long live email!

Some progress then. But technology never stands still, and there’s more disruption on the way, led as usual by the consumer society. Early in 2016, we saw the introduction of the first wave of artificial intelligence technology in the form of chatbots and virtual assistants. This is being heralded as a new era in technology that some analysts have referred to as the “conversation interface”. It’s an interface that won’t require a screen or a mouse to use. There will be no need to click, swipe or type. This is an era when a screen for a device will be considered antiquated, and we won’t have to struggle with UX design. This interface will be completely conversational, and those conversations will be indistinguishable from the conversations we have with work colleagues, friends and family.

Virtual assistants are personalised, cross-platform assistants that work with third-party services to respond instantly to users’ requests, which could include online searching, purchasing, monitoring and controlling connected devices, and facilitating professional tasks and interactions.

Will it be another 10 years before we see this technology accepted as a business tool? I think not, because the benefits are so apparent.  For example, given the choice of convenience and accessibility, would we still use email to get things done, or would we have a real-time conversation? Rather than force workers to stop what they’re doing and open a new application, chatbots and virtual assistants inject themselves into the places where people are already communicating. Instead of switching from a spreadsheet to bring up a calendar, the worker can schedule a meeting without disrupting the flow of their current conversations.

Companies like Amazon and Google are already exploring these technologies in the consumer space, with the Amazon Echo and Google Home products; these are screenless devices that connect to Wi-Fi and then carry out services.  This seamless experience puts services in reach of the many people who wouldn’t bother to visit an App Store, or would have difficulty in using a screen and keyboard, such as the visually impaired.

We’ll be looking at some examples of how chatbots and virtual assistants are being used to streamline business processes and interface with customers at the workshop.

Machine Learning

It is worth clarifying here what we normally mean by learning in AI: a machine learns when it changes its behaviour based on experience. It sounds almost human-like, but in reality the process is quite mechanical. Machine learning began to gain traction when the concept of data mining took off in the 1990s. Data mining uses algorithms to look for patterns in a given set of information. Machine learning does the same thing, but then goes one step further: it changes its program’s behaviour based on what it learns.

One application of machine learning that has become very popular is image recognition. These applications first must be trained – in other words, humans have to look at a bunch of pictures and tell the system what is in the picture. After thousands and thousands of repetitions, the software learns which patterns of pixels are generally associated with dogs, cats, flowers, trees, etc., and it can make a pretty good guess about the content of images.
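The train-then-guess loop described above can be shown with a toy sketch (a nearest-neighbour classifier over tiny invented 3x3 pixel "images", nothing like a production vision system): labelled examples are the training, and a new image gets the label of its closest match.

```python
# Labelled training "images": flat lists of 9 pixels (3x3 grid) with a label.
TRAINING = [
    ([1, 1, 1, 0, 0, 0, 0, 0, 0], "top-bar"),
    ([0, 0, 0, 0, 0, 0, 1, 1, 1], "bottom-bar"),
    ([1, 0, 0, 1, 0, 0, 1, 0, 0], "left-bar"),
]

def classify(pixels):
    """Label an image by its closest training example (fewest differing pixels)."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(TRAINING, key=lambda ex: distance(pixels, ex[0]))[1]

# A slightly noisy top bar is still recognised as "top-bar".
print(classify([1, 1, 0, 0, 0, 0, 0, 0, 0]))
```

Real image recognition replaces the pixel-difference count with learned features over millions of labelled photos, but the principle is the same: the software's "guess" comes from which trained patterns the new input resembles most.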

This approach has delivered language translation, handwriting recognition, face recognition and more. Contrary to the assumptions of early research into AI, we don’t need to precisely describe a feature of intelligence for a machine to simulate it.

Thanks to machine learning and the availability of vast data sets, AI has finally been able to produce usable vision, speech, translation and question-answering systems. Integrated into larger systems, those can power products and services ranging from Siri and Amazon to the Google car.

The interesting, or worrying, depending on your perspective, aspect of machine learning is that we don’t know precisely how the machine arrives at any particular solution. Can we trust the algorithms that the machine has developed for itself? So much can affect accuracy, e.g. data quality, interpretation and biased data. This is just one facet of a broader discussion we will be exploring at the KIN Winter Workshop, specifically deployments of machine learning for decision making and decision support.

Jobs and Skills

The one issue that gets most people agitated about AI is the impact on jobs and skills. A recent survey by Deloitte suggested 35% of UK jobs would be affected by automation over the next two decades. However, many counter this by saying the idea is to free up people’s time to take on more customer-focused, complex roles that cannot be done by machines.

I think this video from McKinsey puts the arguments into perspective by differentiating between activities and jobs. Machines have a proven track record of being able to automate repetitive, rule driven or routine tasks. That’s not the same as replacing jobs, where routine processes are only part of a wider job function.  According to McKinsey, taking a cross section of all jobs, 45% of activities can be automated, and we’re not just talking about predominantly manual labour. They go on to say that up to a third of a CEO’s time could be automated.

Other research by the Pew Research Center has said 53% of experts think that AI will actually create more jobs.

The question we need to be asking ourselves is: what knowledge and skills do we need to develop now in order to make the most of this technology revolution happening around us and ensure we remain relevant? If organisations don’t find out more about these technologies and how they can be used to improve efficiency or productivity, they can be sure their competitors are!

If you haven’t yet registered for the KIN Winter Workshop (KIN Member’s Link) – “Knowledge Organisation in the ‘Machine Intelligence’ Era” – do so soon! If you’re not currently being affected by AI technology, you soon will be. Make sure you’re ready!

Steve Dale
KIN Facilitator



Connecting Knowledge Communities: Approaches to Professional Development

Continuing Professional Development

From the NetIKX website, details of the next NetIKX Seminar on 21st September.

A year ago, NetIKX, with the cooperation of a number of other organisations in the field of knowledge and information management, ran a meeting called “Connecting Knowledge Communities”, at which representatives of a number of professional membership organisations, including NetIKX, talked about their membership, their focus and their mode of operation.

The organisations were: Henley Forum for Organisational Learning & Knowledge Strategies, the Knowledge and Innovation Network (KIN), IRMS (the Information and Records Management Society), ISKO UK (the UK Chapter of the International Society for Knowledge Organization) and KIDMM (the Knowledge, Information, Data and Metadata Management online forum).

The forthcoming NetIKX seminar (21st September 2016) is intended to take that relationship one stage further by examining an area that is likely to be of interest to all these groups. Speakers will be Luke Stevens-Burke from CILIP, who will talk about CPD at CILIP and the PKSB (Professional Knowledge and Skills Base), and Christopher Reeves and Karen Thwaites from the Department for Education, who will also talk about CPD, particularly focusing on the new Government KIM framework and how it was produced.

Further details in the attached flier (PDF). Go to the NetIKX website to register for the event.

Connecting Knowledge Communities: Approaches To Professional Development



Big Data, Data Analytics and AI


I was asked by Managing Partners Forum (MPF) recently to give a brief overview of the current status and industry trends in Big Data and Data Analytics, topics I’ve been keeping an eye on for several years. The slides are available on Slideshare. The following is a shortened abstract from the presentation.

One of the issues I have with Big Data is just that – the term “Big Data”. It’s fairly abstract and defies precise definition. I’m guessing the name began as a marketing invention, and we’ve been stuck with it ever since. I’m a registered user of IBM’s Watson Analytical Engine, and its free plan has a dataset limit of 500 MB. So is that ‘Big Data’? In reality it’s all relative. To a small accountancy firm of 20 staff, their payroll spreadsheet is probably big data, whereas the CERN research laboratory in Switzerland probably works in units of terabytes.

Eric Schmidt (Google) was famously quoted in 2010 as saying “There were 5 exabytes of information created between the dawn of civilisation through 2003, but that much information is now created in 2 days”. We probably don’t need to understand what an ‘exabyte’ is, but we can get a sense that it’s very big, and what’s more, we begin to get a sense of the velocity of information: according to Schmidt, the volume that once took all of civilisation up to 2003 to produce is now created every 2 days – and probably in even less time, since six years have passed since his original statement.

It probably won’t come as a surprise to anyone that most organisations still don’t know what data they actually have, or what they’re creating and storing on a daily basis. Some are beginning to realise that these massive archives of data might hold useful information that could potentially deliver business value. But it takes time to access, analyse, interpret and act on the results of this analysis, and in the meantime, the world has moved on.

According to the “Global Databerg Report” by Veritas Technologies, 55% of all information is considered to be ‘Dark’, or in other words, of unknown value. The report goes on to say that where information has been analysed, 33% is considered to be “ROT” – redundant, obsolete or trivial. Hence the ‘credibility’ gap between the rate at which information is being created and our ability to process and extract value from it before it becomes “ROT”.
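As a back-of-the-envelope check on what those two figures leave behind – assuming (our reading, not necessarily the report’s) that the 33% ROT applies to the analysed portion – the share of information with known, usable value turns out to be surprisingly small:

```python
# Back-of-the-envelope reading of the Databerg figures quoted above.
# Assumption (ours, not the report's): the 33% ROT figure applies only
# to the portion of information that has actually been analysed.

dark = 0.55                 # value unknown ("dark" data)
analysed = 1.0 - dark       # the remaining 45% has been analysed
rot = 0.33                  # of the analysed portion: redundant/obsolete/trivial

usable = analysed * (1.0 - rot)
print(f"Information of known, non-ROT value: {usable:.0%}")  # roughly 30%
```

On that reading, less than a third of a typical organisation’s information estate has any established value at all.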

But the good news is that more organisations are recognising that there is some potential value in the data and information that they create and store, with growing investment in people and systems that can make use of this information.

The PwC Global Data & Analytics Survey 2016 emphasises the need for companies to establish a data-driven innovation culture – but there is still some way to go. Those using data and analytics are focused on the past, looking back with descriptive (27%) or diagnostic (28%) methods. The more sophisticated organisations (a minority at present) use a forward-looking predictive and prescriptive approach to data.

What is becoming increasingly apparent is that C-suite executives who have traditionally relied on instinct and experience to make decisions now have the opportunity to use decision-support systems driven by massive amounts of data. Sophisticated machine learning can complement experience and intuition. Today’s business environment is not just about automating business processes – it’s about automating thought processes. Decisions need to be made faster in order to keep pace with a rapidly changing business environment. So decision making based on a mix of mind and machine is now coming into play.

One of the most interesting by-products of this Big Data era is ‘machine learning’ – mentioned above. Machine learning’s ability to scale across the broad spectrum of contract management, customer service, finance, legal, sales, pricing and production is attributable to its ability to continually learn and improve. Machine learning algorithms are iterative in nature, constantly learning and seeking to optimise outcomes. Every time a miscalculation is made, the algorithm corrects the error and begins another iteration of the data analysis. These calculations happen in milliseconds, which makes machine learning exceptionally efficient at optimising decisions and predicting outcomes.
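That iterate-measure-correct loop can be sketched in a few lines. This is an illustrative toy – plain gradient descent fitting a straight line – not what a production machine-learning system looks like, but the rhythm (predict, measure the miscalculation, correct, repeat) is the same:

```python
# A minimal sketch of the iterate-measure-correct loop described above:
# gradient descent fitting y = w*x + b. Illustrative only; real systems
# use far richer models and vastly more data.

def fit_line(xs, ys, lr=0.05, iterations=5000):
    """Iteratively fit y = w*x + b by correcting the error on each pass."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iterations):
        # Measure the current miscalculation (prediction error) per point
        errors = [(w * x + b) - y for x, y in zip(xs, ys)]
        # Correct the parameters in proportion to the error (gradient step)
        grad_w = sum(e * x for e, x in zip(errors, xs)) / n
        grad_b = sum(errors) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Usage: recover y = 2x + 1 from five example points
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)  # w converges towards 2, b towards 1
```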

So, where is all of this headed over the next few years? I can’t recall the provenance of the quote “never make predictions, especially about the future”, so treat these predictions with caution:

  1. Power to business users: Driven by a shortage of big data talent and the ongoing gap between needing business information and unlocking it from the analysts and data scientists, there will be more tools and features that expose information directly to the people who use it. (Source: Information Week 2016)
  2. Machine generated content: Content that is based on data and analytical information will be turned into natural language writing by technologies that can proactively assemble and deliver information through automated composition engines. Content currently written by people, such as shareholder reports, legal documents, market reports, press releases and white papers are prime candidates for these tools. (Source: Gartner 2016)
  3. Embedding intelligence: On a mass scale, Gartner identifies “autonomous agents and things” as one of the up-and-coming trends, which is already marking the arrival of robots, autonomous vehicles, virtual personal assistants, and smart advisers. (Source: Gartner 2016)
  4. Shortage of talent: Business consultancy A.T. Kearney reported that 72% of market-leading global companies reported that they had a hard time hiring data science talent. (Source: A.T Kearney 2016)
  5. Machine learning: Gartner said that an advanced form of machine learning called deep neural nets will create systems that can autonomously learn to perceive the world on their own. (Source: Ovum 2016)
  6. Data as a service: IBM’s acquisition of the Weather Company — with all its data, data streams, and predictive analytics — highlighted something that’s coming. (Source: Forrester 2016)
  7. Real-time insights: The window for turning data into action is narrowing. The next 12 months will be about distributed, open source streaming alternatives built on open source projects like Kafka and Spark. (Source: Forrester 2016)
  8. Roboboss: Some performance measurements can be consumed more swiftly by smart machine managers aka “robo-bosses,” who will perform supervisory duties and make decisions about staffing or management incentives. (Source: Gartner 2016)
  9. Algorithm markets: Firms will recognize that many algorithms can be acquired rather than developed. “Just add data”. Examples of services available today include Algorithmia, Data Xu, and Kaggle. (Source: Forrester 2016)

The one thing I have taken away from the various reports, papers and blogs I’ve read as part of this research is that you can’t think about Big Data in isolation. It has to be coupled with cognitive technologies – AI, machine learning or whatever label you want to give it. Information is being created at an ever-increasing velocity. The window is getting ever narrower for decision making. These demands can only be met by coupling Big Data and Data Analytics with AI.


Communities of Practice – Planning For Success

My experience of knowledge sharing in organisations stems mainly from my involvement in setting up Communities of Practice (CoPs) for UK local government. This was part of a broader Knowledge Management strategy that I was commissioned to deliver for the Improvement and Development Agency (now part of the Local Government Association – LGA). An online collaboration platform was launched in 2006 to support self-organising, virtual communities of local government and other public sector staff. The purpose was to improve public sector services by sharing knowledge and good practice.

Over the past 10 years, the community platform has grown to support over 1,500 CoPs, with more than 160,000 registered users. This has led to many service improvement initiatives, from more efficient procurement and project planning to more effective inter-agency collaboration in delivering front-line services, such as health and social care. It has also provided some useful information on the dynamics of social collaboration and community management, e.g. the factors that influence the success of a community.

What does a successful CoP look like?

Success will of course depend on the purpose of the community. Some CoPs have been set up as networks for learning and sharing; others have a defined output, e.g. developing new practice for adult social care.  It is clearly more difficult to establish success criteria for a CoP dedicated to knowledge sharing than it is for – say – a CoP that has a tangible output. Success for the former will rely on more subjective analysis than for the latter, where there will probably be more tangible evidence of an output, e.g. a policy document or case study.

However, rather than argue and debate the criteria for assessing the “success” of a CoP (or other organizational learning system), I’d prefer to consider how we monitor and assess the “health” of a CoP. For this approach I think we have to consider the analogy of a CoP to a living and breathing organism.

A healthy CoP will show clear signs of life; this can be assessed using various quantitative indicators, such as:

  • Number of members
  • Rate of growth of the community
  • Number and frequency of documents uploaded
  • Number and frequency of documents read or downloaded
  • Number and frequency of new blog posts
  • Number and frequency of forum posts
  • Number and frequency of comments
  • Number of page views per session
  • Time spent on the CoP per browser session


Not that any one of these indicators in isolation will indicate the good health of a CoP, but taken together they can give a general perspective of how vibrant and active the community is.
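To make the “taken together” point concrete, here is a minimal sketch of how several of the indicators above might be rolled into a single health signal by comparing each against the community’s own recent baseline. The metric names, the equal weighting and the capping rule are entirely hypothetical – an illustration, not a published methodology:

```python
# Illustrative only: combine several activity indicators into a simple
# CoP "health" signal by comparing this period against the community's
# own recent baseline. Metric names and weighting are hypothetical.

def cop_health(current: dict, baseline: dict) -> float:
    """Return a 0..1 score: 1.0 means every metric is at or above baseline."""
    ratios = []
    for metric, base in baseline.items():
        if base <= 0:
            continue  # no meaningful baseline for this metric; skip it
        # Cap each ratio at 1.0 so one booming metric can't mask dead ones
        ratios.append(min(current.get(metric, 0) / base, 1.0))
    return sum(ratios) / len(ratios) if ratios else 0.0

# Usage: a CoP whose membership is growing but whose forum has gone quiet
baseline   = {"new_members": 10, "forum_posts": 40, "documents": 8, "comments": 60}
this_month = {"new_members": 12, "forum_posts": 10, "documents": 8, "comments": 15}

score = cop_health(this_month, baseline)  # 0.625: alive, but losing rhythm
```

The capping choice matters: without it, a surge in one indicator (say, a flood of document uploads) could hide the fact that conversation has stopped, which is exactly the kind of false reassurance a facilitator needs to avoid.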

Continuing with the analogy of a living, breathing organism, different CoPs will have different metabolisms, some may be highly active; others may be fairly sedate. Understanding the community ‘rhythm’ is a key aspect of knowing when any intervention is required in order to maintain this rhythm.  Not all CoPs are going to be vibrant and active all of the time; there may be periods of relative inactivity as a natural part of the CoP lifecycle. But it’s important to know the difference between a CoP that is going through a regular period of inactivity and a CoP that is moribund.

A point to note: inactive CoPs may not necessarily be a cause for concern. One reason for inactivity could be that the CoP has served its purpose and its members have moved on. In which case the knowledge assets of the CoP need to be published and celebrated and the CoP either closed, or (with the agreement of the members) re-purposed to a new topic or outcome.

So, understanding the vital life-signs and metabolism of a CoP is a fundamental part of ensuring the continued good health of the CoP, and therefore more likely to achieve its goals.  And the key to the continued good health of a CoP is knowing how and when to intervene when one or more of the life-signs begins to falter.  Without wishing to labour my analogy of the living, breathing organism too much, it’s the equivalent of knowing when someone is not feeling too well and administering the appropriate medicine. [See concluding section for symptoms and potential cures for an ailing CoP.]

The Online Facilitator

Where does the CoP facilitator or e-moderator come into all of this? Well, I mentioned earlier that over the 10 years since its inception, the Local Government CoP strategy has provided some useful information on the dynamics of social collaboration and community management. For example, there is clear evidence that CoPs that have full or part-time facilitation/e-moderation are much more likely to succeed and be self-sustaining than those that rely entirely on self-organisation or community networks where there are no clearly defined roles or responsibilities.

The most successful CoPs (and I should clarify here that I’m using “success” to mean “in good health”) are those where there is more than one facilitator/e-moderator and where interventions by the facilitator/e-moderator are frequent and predictable. This may take various forms, such as regular polls of the CoP members; regular e-bulletins or newsletters; a schedule of events (face-to-face or virtual); regular input to forum posts and threads; seeding new conversations; back-channelling to make connections between members of the CoP; etc.

In other words, show me a good and effective CoP facilitator/e-moderator and I can show you – in all probability – a healthy and successful CoP (or similar organisational knowledge sharing community).

Attributes Of A Good Facilitator

I’ve often been asked “what makes a good community facilitator/e-moderator?” This is a difficult one, and I’m of the opinion that it is more of an art than a science. The technical administration functions of the role can be taught, but the good facilitators/e-moderators that I have met bring another dimension to the role, i.e. empathy with, and understanding of, human behaviours and personalities. Something that I suspect comes with experience rather than a pedagogical approach. What I do think is important is having some knowledge (not necessarily ‘expert’ status) and enthusiasm for the topic or theme of the CoP (also referred to as the ‘domain of knowledge’).  This will help where interventions are necessary, and the community members are more likely to appreciate the facilitator/e-moderator as one of their own.

There have been various papers and blogs published about the role and responsibilities of an online CoP facilitator but maybe the following diagram captures the essence of the role. Click to enlarge.

Facilitator Role

(Reworked from an original by Dion Hinchcliffe)


The conclusion? Based on a significant body of evidence, coupled with personal experience: if you want to ensure the success of your Community of Interest or Practice, make sure you’ve invested in a team of good, experienced community facilitators.


How to fail with Twitter

I’ve been using Twitter since 2007, and though I’m not in the same league as celebrities (or z-listers) who count their followers in hundreds of thousands, I’m comfortable knowing that my following has grown organically, I’ve never ever paid for new followers, and I do know and recognise most of them in the virtual world we populate.

Much has been written about how to use social media – most of it crap, and most of it aimed at marketing, brand promotion or people with massive egos.

Since I don’t fall into any of these categories, and use Twitter mainly for engaging with people who have something useful to say, picking up on news and ideas, and sharing stuff I’ve learnt (even the useful stuff!), feel free to ignore the following tips, all of which are aimed at those who use their Twitter statistics to massage their overblown egos:

  • Make sure you auto-reply to new follows with a link to your free (but crap) ebook.
  • Provide an obscure description of who you are and what you do, or…
  • Have a completely blank bio.
  • Have a nice pose showing that six-pack or gawky grin.
  • Have a profile photo or an image that only makes sense to you and your imaginary friends.
  • Attract like-minded followers by posing with a gun, a knife or a swastika flag in the background.
  • Always refer to yourself as an “expert”, “ninja” or “blackbelt”.  You’re in a much better position to judge this than anyone else.
  • Never add a link to a great resource you’ve cited.
  • Have big gaps (e.g. days) between posts.
  • Try and follow thousands of random people. They’re bound to follow you back.
  • Write about the cat/hamster/holiday over and over again, and don’t forget to include the photos.
  • Fill your tweet with obscure abbreviations and hashtags.
  • Send an auto-DM to every new follow suggesting you connect on Facebook or LinkedIn.
  • Retweet EVERYTHING!
  • Follow everyone and everything – even those with zero tweets.
  • Say whatever comes into your head – no need to think (this one is a bit of a challenge for politicians, elected councillors and footballers!)
  • Use Twitter as your primary marketing plan.
  • Try and find an idiot to have an argument with. See who wins.
  • Take credit for tweets that did not originate from you.
  • Tweet on every piece of news you can get your hands on.
  • Tweet about your need for coffee or what you had for breakfast.
  • Be emotional and let off steam.
  • Always remember that your follower count is far more important than the content of your tweets.
  • Pay for followers (most of them will be bots anyway) – quantity trumps quality.
  • Make up new hashtags and try to avoid using ones that are already in use to categorise information.
  • Look out for anyone that has only tweeted several times but has many thousands of followers. This is a mark of ‘awesome’ – the followers can’t all be wrong, can they?

I’m sure this is not an exhaustive list. If you have any more tips for growing your ego Twitter following, let me know at @stephendale and I’ll post an updated list.


Knowledge Management – Don’t Forget The SMEs!

Small is beautiful

The research paper by Cheng Sheng Lee and Kuan Yew Wong in the December issue of Business Information Review raises a number of interesting points that deserve wider discussion. Abstract as follows:

Knowledge management (KM) is recognized as an important means for attaining competitive advantage and improving organizational performance. The evaluation of KM performance has become increasingly vital, as it provides the direction for organizations to enhance their performance and competitiveness. A survey was carried out to test the applicability of 14 constructs based on knowledge resources, KM processes, and KM factors in measuring the KM performance for small and medium enterprises (SMEs) in Malaysia. This article intends to further explore the effects of company size (micro, small, and medium) and KM maturity on knowledge management performance measurement (KMPM). Two-way analysis of variance results indicate that company size and KM maturity do affect some aspects of KMPM in SMEs.

The research focused on the effectiveness of knowledge management techniques in small to medium enterprises (SMEs) in Malaysia. Though the scope of the research is limited to one geographic region, the findings could – and should – be tested against a wider and more international cohort.

According to the research paper, SMEs in Malaysia account for up to 98.5 percent of the total number of businesses and contribute up to 33.1 percent of GDP. They employ 57.5 percent of the total workforce.

To offer some comparison, in the UK SMEs account for over 99.8 percent of the total number of businesses; they contributed over half of UK output (GVA) in 2013 and employ 48 percent of the total private sector workforce.

The EU average SME contribution to GDP is 55 percent.

It is clear from this data that SMEs make a significant, and growing, contribution to the UK and European economies. It seems quite odd, therefore, that so little research has been undertaken into how knowledge management strategies and techniques are utilized within and across this sector.

The Cheng Sheng Lee/Kuan Yew Wong research gives us some insights that could be tested against a wider geographic sample of SMEs. Some key points from the research as follows:

  • The literature review identified that the size of an organization affects its behaviour and structure (Edvardsson, 2006; Rutherford et al., 2001) and influences the adoption and implementation of KM (Zaied et al., 2012).
  • SMEs should not be perceived as a homogeneous group. They can themselves be categorized by relative size, e.g. micro, small and medium, which can influence the way that KM is implemented.
  • In terms of human capital, medium-sized SMEs focus more on codification strategies (explicit knowledge), whereas micro-sized SMEs are more dependent on socialization strategies.
  • An obvious point, but reinforced by the research – the need for better infrastructure, such as tools, office layout, rooms etc. increases as the organization grows.
  • Knowledge maturity is a key attribute that should be monitored and measured. The value of an employee, in terms of their contribution to the success of the organization, will increase as they progress from beginner, through intermediate, to advanced stages of KM maturity. Clearly the impact of an employee leaving without an effective knowledge transfer process will be more keenly felt by a small organization. [NB. This is not an excuse for large organizations to treat this as a lower priority!]
  • Company size does make a difference to KM performance measurement. A number of factors are proposed, e.g. the impact of high turnover, limited resource redundancy in smaller organizations, and the likelihood that smaller organizations will prioritize implementation processes over performance measurement.
  • KM performance measurement (KMPM) is still new for SMEs, as the majority of analyst reports and case studies remain focused on large organizations, with a mindset that SMEs do not need or are not ready for KMPM.

Overall, this is an excellent piece of research and highly recommended reading, which despite its limited sample size and geographic boundary gives some very useful insight into how KM is being implemented across SMEs. Reassuringly, it shows that a growing number of SMEs see KMPM as vital to the growth and success of their business.

The paper is also a wake-up call to academia and to research, analyst and consultancy organizations: we need far more definitive and comprehensive studies in this field, embracing the UK, Europe and other key industrial and economic zones.

To finish with a quote from the authors:

“Enough with large organizations; SMEs should not be neglected as they play a major role in a country’s economic growth.”

On this evidence, who could disagree?



Watson Analytics

I recently had an introductory presentation to IBM’s Watson Analytical Engine and was mightily impressed by what I saw.

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. Unstructured data could typically include  news articles, research reports, social media posts and enterprise system data.

You can set up a freemium account on Watson and get immediate access to the full range of features. As with most freemium services, there are some limits; these come in the form of file size and data storage restrictions. You can only upload flat files of no more than 100,000 rows and 50 columns, and there is a data storage limit of 500 MB. If you want more than this, you have to consider the Personal or Professional editions.
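If you plan to upload your own data, it’s easy to check a file against those free-plan limits locally before trying. A minimal sketch, assuming the limits as quoted above (IBM may change them, and the function name here is ours, not part of any Watson API):

```python
# Quick pre-flight check of a CSV against the free-plan limits quoted
# above (100,000 rows, 50 columns, 500 MB). Limits may change; verify
# against IBM's current documentation before relying on them.
import csv
import os

MAX_ROWS, MAX_COLS, MAX_BYTES = 100_000, 50, 500 * 1024 * 1024

def check_watson_limits(path: str) -> list:
    """Return a list of limit violations (an empty list means OK to upload)."""
    problems = []
    if os.path.getsize(path) > MAX_BYTES:
        problems.append("file exceeds 500 MB")
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader, [])
        if len(header) > MAX_COLS:
            problems.append(f"{len(header)} columns exceeds {MAX_COLS}")
        rows = sum(1 for _ in reader)  # data rows, excluding the header
        if rows > MAX_ROWS:
            problems.append(f"{rows} rows exceeds {MAX_ROWS}")
    return problems
```

A failed upload after a long wait is far more frustrating than a one-second local check.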

To get started you will need to set up an IBM id (e.g. your email) and agree to the Ts & Cs. Nothing ominous here, and you can opt out of any IBM emails. Once your email has been validated, sign in to your newly created account and you’ll see the main Watson interface:

Watson 3



To get started I recommend watching the video.

There is a temptation to dive straight in and work your way through the various tools and features. However, not everything is intuitive, and it’s well worth spending some time looking at the various tutorials and help files.  I recommend:

I had a few problems when uploading some of my own “test” datasets, which as I mentioned earlier are limited to 100,000 rows, 50 columns and 500 MB for the free account. If you just want to have a play with the various features, it’s probably better to use one of the tried and tested datasets available from the Watson Analytic Community.

A word of warning – you can get totally immersed in the Watson environment, and I’ve probably lost a day or two somewhere in trying out the technology. However, if your job involves data and decision making, I recommend giving it a go.

Remember too, this is a decision-support tool, not a decision-making tool. You still have to engage your brain when looking at the visualisations, and you do need some understanding of your data. And don’t go away thinking that the “Predictions” facility is going to give you the winning numbers for this week’s lottery – but by all means try!
