
KM Uncertainty Principle

During a Knowledge Management (KM) working group meeting at the Federal Government of Canada, I could not help but notice the lengths to which KM initiatives must go to justify their existence. KM initiatives will typically seek to improve how knowledge is acquired, shared and evolved within the organization. It is true that all organizational investments, or so one would hope, have to follow some manner of formal process to declare their objectives and how they will know they have arrived at success. These processes usually travel, in larger institutions, under the name of an ‘accountability framework’. In smaller organizations, where the tolerance for missteps is necessarily smaller, we would call it the ‘fitting your head into the noose’ framework.

[Image: Map of Knowledge Management]

So we will take it as a given that there is a cultural and, at times, an operational need to position initiatives within the accepted ‘accountability framework’ of the organization expected to make the investments. That just seems fair. But in some circumstances, this essential step may in fact eliminate the conditions for success. In these cases, it becomes a form of Catch-22. Upon hearing about a KM initiative studying the relative merits of, and selecting from amongst, various accountability frameworks, it occurred to me that KM may be such an instance.

The premise here is that if steps are taken to formalize a KM initiative specifically to sustain an accountability obligation, then that formalization, and the resulting investment of energy in implementing it, will in fact work against the initiative's potential for success.

For those who must live in this type of organizational world, or those who, in general, embrace the ‘formalization’ of business monitoring (results-based management), this premise will seem at best odd and at worst ill-considered.

But if, as I have argued in my paper on the anatomy of knowledge, there is no deterministic causal link between knowledge and the domain of judgment and action (and therefore results), then any results-based accountability model will be, essentially, missing the point (or, to be more precise, creating a fictitious one). If there are no causal links, again in a directly deterministic way, then managing initiatives by causal outcomes will be impossible. If such a management regime is forcefully adopted, then all energies directed to its implementation will be energies directed away from the presumed focus of KM: the creation and advancement of shared knowledge. The attempts to align with an accountability framework would, in this light, become tangible impediments to success.

So to summarize: if a KM initiative is squarely focused upon cultivating improved knowledge within a community, then it will not be possible, directly, to measure outcomes in terms of business performance changes. Yet it is with reference to such changes that a business justification, in the form of a Return-on-Investment (ROI) calculation, would be made. If a KM initiative is soundly structured and can be monitored with reference to these types of measurable outcomes, then it is – by definition – not directed towards knowledge but rather towards some other operational goal. It would thus cease to be a KM initiative, at least insofar as it accommodates this redirection. So, echoing the argument demonstrated by Heisenberg for sub-atomic particles, we have the “KM Uncertainty Principle”.
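For reference, the physical relation being echoed here (and only echoed, not claimed as an equivalence) is Heisenberg's inequality for position and momentum:

$$ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} $$

The more precisely one quantity is pinned down, the less precisely its conjugate partner can be known. The loose parallel is that the more precisely an initiative is structured around measurable operational outcomes, the less it remains an initiative about knowledge.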

Mercifully, and again by recourse to scientific models, we can highlight the complexity of the relationship between knowledge and the emergent operational phenomena, and we can refocus our measurement activities in the domain of judgment and action in ways that help to illuminate impacts occurring across a broader spectrum. If changes in knowledge ultimately do represent a change in potentiality, then the consequences of KM interventions will indeed become observable and will register in measurements that are tuned to the right frequency. But many other factors will also influence those observable outcomes, so categorically tracing any one measurable consequence, whether good or ill, back to a specific KM investment is still not possible, strictly speaking.

Ideally, this should simply remind us that the prudent method for advancing a KM initiative is to position it as a longer term, strategic investment that, like hiring and developing good people or upholding sustainability as a key business consideration, will bear fruit over time. It can also remind us that it can be a prudent tactic to combine KM initiatives with other, perhaps more grounded, investments. This way we can make progress on many levels.

Comments


Heimo Hänninen

Hello Joe, you are right on the topic again!

It is crucial to find more business-oriented metrics for KM productivity. I gathered the following findings from an MS Press book on information worker productivity:

In a study of over 250 companies, Microsoft found certain correlations between information-work (I-work) improvement areas and company-level business indicators:

Business indicator                       | Correlating I-work improvement area
-----------------------------------------|---------------------------------------------------------------------------------
Changes to budget expense                | Reduction in cycle times
Improvements in operating excellence     | Higher percentage of work completed on time
Improvements in return on assets (ROA)   | Increased work quality
Changes to cost of production            | Project cycle times, project costs and percentage of projects completed on time

cheers, Heimo ex-XIA

Joe Gollner

Hi Heimo

While it may not be possible to determine, exactly, how a benefit was spawned when one or more layers of indirection exist, it can certainly be determined that benefits have occurred. For example, I was thinking about you only yesterday and, by some cosmological ministrations, here you are. (:-]

I like the general orientation of these types of measurements (business indicators). It is one of the merits of computing and communication technology becoming so ubiquitous and of the fact that it is usually, and increasingly, possible to pool data into useful "puddles" for analysis. This means that the more simplistic measures, like the financial bottom line, need not be the only tool for evaluating the effects of interventions and foreseeing outcomes.

I am reminded of some of the arguments emerging from the late 1980s about what was called the "Information Technology (IT) deficit". It was highlighted, repeatedly, that after a decade of heavy investment in IT in North America (the predominant deployment venue at the time), "white collar" productivity had actually gone down. Using simpler measures, and also acknowledging the fledgling nature of IT at the time, it looked like those businesses investing heavily in this area were making a mistake.

The question that could then be raised, and which I could not resist raising on more than one occasion, was "So how did the companies fare that did not make these investments?" The people who were raising these objections to IT investments, and adopting the rhetorical posture of the sage business-minded manager armed with binders of financial measurements, would mumble an answer that was barely audible - "Oh. Those companies are no longer in business." There would seem to have been effects from those IT investments that escaped the measurement strategies that were designed for evaluating the building of factories, the acquisition of machinery and the purchase of office furniture.

Kuan

Hi Joe, I agree with the point that the firm ought to ask: had this KM initiative not been in place, would the firm be better off?

A KM initiative will always be a long-term investment and has to be viewed as such; the ROI is oftentimes indirect.

For the sake of accountability and justifying a KM project, the accountability framework will have to stretch over a longer period of time, say five years or more, and perhaps beyond the tenure of any long-serving manager. This is not something that can be accurately reflected in an annual report, or meaningfully compared across two consecutive annual reports, or any year-to-year report for that matter.

On the other hand, for advocates of KM (and the KM manager himself), there have to be measurable metrics in place to gain buy-in. This is important for formally institutionalizing KM in the larger domain of the firm (and also if the KM team wants to keep their jobs). Remember: what you can't measure, you can't manage.

Joe Gollner

Hello Kuan,

Thanks for your note. It is very timely. I have been thinking a lot about justifications & accountability lately because I have been working on a conference presentation that addressed these very topics. The conference was quite specific in its focus (XML-in-Practice 2009) and my presentation was cryptically called "The Reason and Passion of XML".

In this talk, I set out to describe how people can promote technology innovations that seek to improve how we codify, share and capitalize upon explicit knowledge. Very often, these types of innovations depend upon the openness and extensibility of XML – and almost as often without the promoters or beneficiaries of these innovations even realizing it. My talk was partly about crafting an effective business case (of which I have written many) and partly about designing projects and solutions so that they deliver a range of benefits, some short term and some long term. To distill my premise to its shortest form, I would say that you must always address the short-term (probable), mid-term (plausible) and long-term (possible) benefits. I continue with the claim that, somewhat surprisingly, of the three (assuming all three are present) it is the last one, the realm of the possible, that is most important in giving an initiative the institutional fortitude to persevere when the implementation proves challenging.

So yes, it is important that KM initiatives attend to some of these practicalities.

That said, my post was grounded in my own peculiar definition of knowledge and its relation to other key concepts such as data, information and management. In this framework, it is an important and, I would argue, useful thing to see knowledge as the potential for action, but not in any deterministic way. My whitepaper on the Anatomy of Knowledge and my recent post on The Truth about Content lay out my framework in more detail, and this post on the KM Uncertainty Principle should be understood in that context.

And finally, I do know the phrase "what you can't measure, you can't manage" and that it is widely accepted as an article of faith. I don't think it is true, however, and even if it were, it is easier to find cases where believing it has led to problems than it is to find cases that categorically prove its merit.

Joe Gollner

This blog entry spawned an exchange on a blog that has since vanished from the ether. As it was an interesting exchange, I thought I would include it here for the sake of completeness.

The original comment cited this post as an example of a non-physicist misusing the Uncertainty Principle, with this misuse stemming from a misunderstanding of the Uncertainty Principle. The comment follows in italics.


The Power of Getting it Wrong

Concrete to abstract
We constantly take the concrete from one discipline and make it abstract in another. A physicist formulates a theory, say the Uncertainty Principle, and then a non-physicist applies it to something else, say Knowledge Management.

This can annoy physicists.

I guess you could argue that the physicists shouldn't have borrowed 'our' words in the first place. We had first dibs on both 'uncertainty' and 'principle'.

Capital letters are silent
Knowledge Management suffers from a similar problem. I understand what 'knowledge' is and I understand what 'management' is - hey, I know what Knowledge Management is! The capital letters are silent.

Other candidates: Service Architecture, Information Architecture, Anything-other-than-buildings Architecture, Design Thinking

Vagueness is a vacuum
I wonder if a little misunderstanding here is a good thing? These compound nouns are often more resonant than descriptive; is that necessarily a weakness?

What if you were to sit down with a group of people and ask them to think about what Knowledge Management and Service Architecture would mean at their workplace? Would they come up with something less valuable than if you prescribed definitions of what the two terms meant?

Abracadabra!
Is it a bad thing that very few of the non-physicists using the term Uncertainty Principle actually understand what it means?

Ambiguity can be a powerful tool to encourage participation.


Needless to say, I simply had to post a response, which included an honest request for enlightenment as to how I was misusing the Uncertainty Principle (sadly, this was not provided, as I was genuinely interested in learning more). Of course, I could not resist the opportunity to turn the tables on my critic to see if this physicist was willing to "dance". My response follows in italics:


Although I am being deployed as the example of the non-physicist misappropriating terms, I do endorse your premise. Your final point reminds me of William Empson's Seven Types of Ambiguity which now takes us crashing over into, of all things, literary criticism. And if you are going to be pegged with a comparison from that domain you could do an awful lot worse than the venerable William Empson.

As an aside, not being a physicist, I would actually like to understand how far afield I was in "echoing", as I called it, the uncertainty principle into a discussion of knowledge management. I used the term "echo" precisely because I sometimes object to the overly cavalier, or unconscious, redeployment of terms. In some circumstances, this practice can in my experience lead to missteps, sometimes on a grand scale, so some caution is advisable.

Now, in keeping with your theme here, some might point out that the handling given to the concept of language, such as the claim that terms come with clear correspondences to meanings that some understand and some manifestly do not, is somewhat out of step with most thinking on the nature of language (yet another specialization). These same reviewers might suggest that this itself constitutes a misuse of that root concept and then declare that this further illustrates and endorses your premise. Perhaps your post is its own most complete proof.

Simon Bostock

Hi Joe

I've only just found this - the blog didn't vanish into the ether, although I'm in the process of deleting it and moving bits and bobs that I don't hate over to more permanent residences.

I don't hate this piece :)

With regard to dancing, it was merely that I didn't recognise the invitation. I couldn't parse the final paragraph and certainly couldn't tease an honest request for enlightenment out of the response.

I think I know why, having found this. Evidently, I wasn't as clear as I thought I was being. This is often the case, sadly.

So, to be clear: I've met physicists who get annoyed by people appropriating their terminology. What's not at all clear from my post is that this came from a physicist friend who used to rage that 'no matter how apposite the analogy, it's still an analogy; you can only describe uncertainty through maths!' and other such stuff.

I don't get annoyed by this, nor do I think it 'misuse'. In fact, as I tried – and failed – to say, I think this is a good thing.

Summary:
An abstraction (i.e. 'physicists', based on a single friend who perhaps was a little too keen on showing off his PhD) might possibly think you 'misuse' the term 'Uncertainty Principle'.

I don't. And I think that the abstraction is being precious and discounting the value of resonance over formal definition.

Phew. Hope that's clearer.

Joe Gollner

Hi Simon

And I am most pleased to see that your blog has not vanished into the ether. The blogosphere is packed with lots of things, but there is always room for interesting ideas.

The pursuit of clarity is as tricky as it is laudable. This is true in all domains, and sometimes the only things we have at hand are analogies. It is interesting when more specialist domains make use of the very technique of importing terminology and leveraging analogies, only to then declare their usage more precise and as taking precedence. Despite claims to the contrary, the more common occurrence is that the specialized reuse of a concept is itself misled by the broader meanings that have been imported and unwittingly embraced.

I still remember a Cambridge logician speaking at my undergraduate university. It was a fascinating talk because it recounted the experiences of a government-sponsored science project that had burned through millions of pounds (back when the pound converted to a significant handful of dollars) and achieved nothing. Although this alone is not unusual, what was unusual was the reason why they had veered off track and how they came to recognize this fact. The project had become such a complete failure that someone thought it was time to bring in some philosophers (they must have been desperate indeed). This gesture in fact worked, because the philosophers were able, in short order, to determine that the project was mobilized to pursue an impossible goal. The project was chartered to study the "language of the brain", and the breadth of meanings associated with the term "language" was such that the project was effectively tilting at windmills – chasing a chimera. No amount of industry or investment was going to change the fact that its charter didn't make sense. The scientists who so relished "precision" were themselves unable to see that they were being "fuzzy headed" in their use of terminology and sloppy in their deployment of analogies.
