KM Writ Large

Sometimes, if you only look at a small facet of KM, you can come up with competing theories of what to do. When that happens, it helps to look at a hard example to test those theories.

Let's take a really hard example - Homeland Security.

Before 9/11, there were 20 agencies, all jealously guarding their turf, all with different systems, and all with scraps of knowledge. Even so, they came achingly close to connecting the dots. A memo sat too long on someone's desk.

OK, so we will have them all report to the same person, we will make all their systems the same, and... An outsider would probably say KM at this scale is impossible - there are jurisdictional and physical boundaries constraining the FBI and the CIA, for example, so some facetisation cannot be avoided.

If it can be made to work, then this is classic knowledge management in the large. Some knowledge is hot - it is of value only for days or a few weeks (I don't want to be cruel and say hours). Some knowledge is tepid - it changes every few months or a year. Some knowledge is cold - it is timeless. If hot knowledge is not delivered in a timely fashion, it doesn't go cold, it goes off.

It was interesting that the critical piece of knowledge - "Someone wants to learn to fly a commercial jet, but doesn't want to learn to land" - carried its own use-by date: we have the information, we know it takes two months to learn to fly, so do not sit on it for three months, or it is useless. It is also of the form of Sherlock Holmes's "dog that didn't bark" - it is not data or information, but knowledge - it requires a cognitive transformation into a pre-existing knowledge structure to be useful.
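To make the temperature metaphor concrete, here is a minimal sketch of a knowledge item that carries its own use-by date. The shelf lives, class names and dates are illustrative assumptions, not anything from a real system:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative shelf lives for the three temperatures described above.
SHELF_LIFE = {
    "hot": timedelta(weeks=2),     # of value only for days or a few weeks
    "tepid": timedelta(days=180),  # changes every few months or a year
    "cold": None,                  # timeless
}

@dataclass
class KnowledgeItem:
    summary: str
    temperature: str               # "hot", "tepid" or "cold"
    received: date

    def use_by(self) -> date | None:
        life = SHELF_LIFE[self.temperature]
        return None if life is None else self.received + life

    def has_gone_off(self, today: date) -> bool:
        # Hot knowledge does not cool into cold knowledge; it spoils.
        deadline = self.use_by()
        return deadline is not None and today > deadline

item = KnowledgeItem("Student pilot uninterested in landing",
                     temperature="hot", received=date(2001, 7, 10))
print(item.has_gone_off(date(2001, 9, 11)))  # True - it sat too long
```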

Now we need to build a system that routes hot and cold knowledge in different pipes among the agencies, and that prevents too high a pressure of hot knowledge at any point - the person processing the knowledge would get burned (overloaded). We can split off the operation that processes an unpredictable combination of hot, tepid and cold knowledge as not being KM, but the two are obviously very closely coupled. Some of what is being processed does not fit in databases except as text, and neither does the output, so we are back to the management of knowledge again. Unless someone somewhere can see the big picture, we still have useless fragments.

We will define knowledge as an asset used to predict future behaviour. It does not mean - we have seen this situation before and we should do this. Homeland Security is fending off intelligent adversaries who can detect weaknesses in the infrastructure - the World Trade Towers were as fragile as eggshells compared to a strong lump like the Empire State Building - and who can adapt in a fraction of the time of a lumbering behemoth (so Homeland Security can't be one, obviously). Fortunately the adversaries have a penchant for megalomania.

If you can bring the organisation's response time down to months instead of years, it is still useless. You don't want to learn from a string of costly failures; you want to use the entire panoply of government and research institutions to avoid such failures.
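As a sketch of the "different pipes, bounded pressure" idea: one queue per temperature, with a deliberately small bound on the hot pipe so that excess hot knowledge is escalated rather than silently piled onto one overloaded person. Everything here - the names, the bounds, the `route` function - is hypothetical:

```python
import queue

# Hypothetical pipes: one bounded queue per knowledge temperature.
# The small bound on the hot pipe is the pressure valve - the analyst
# at the end of it can only absorb so much before getting burned.
PIPES = {
    "hot": queue.Queue(maxsize=5),
    "tepid": queue.Queue(maxsize=50),
    "cold": queue.Queue(),            # unbounded - timeless knowledge can wait
}

def route(item: str, temperature: str) -> bool:
    """Put an item on its pipe; refuse rather than overload the reader."""
    try:
        PIPES[temperature].put_nowait(item)
        return True
    except queue.Full:
        # Too high a pressure of hot knowledge at this point:
        # escalate or shed load instead of burning the processor.
        return False

if not route("Flight-school report", "hot"):
    print("Hot pipe full - escalate before the knowledge goes off")
```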

Be suspicious whenever you see a hierarchy - KM, knowledge processing, business processing. In reality, it all goes round and round - a hierarchy is one (static) view of it. Homeland Security has almost no business processing as such - its business is KM. It does have knowledge processing, which generates more knowledge (some hot, some cold, some wrong), which causes other knowledge processing in other centres, which... until the overall process either generates a hit (some actionable knowledge) or dies away.
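A toy rendering of that round-and-round process, under obvious assumptions: processing centres are modelled as functions that turn one scrap of knowledge into zero or more new scraps, and `is_actionable` stands in for the analyst judgement that recognises a hit. All of it is hypothetical:

```python
from typing import Callable, Iterable

Scrap = str
Centre = Callable[[Scrap], Iterable[Scrap]]  # a hypothetical processing centre

def is_actionable(scrap: Scrap) -> bool:
    # Stand-in for the judgement that recognises actionable knowledge.
    return scrap.startswith("ACTION:")

def cascade(seed: Scrap, centres: list[Centre], max_rounds: int = 10) -> Scrap | None:
    """Circulate a scrap among centres until the overall process either
    generates a hit (actionable knowledge) or dies away."""
    frontier, seen = [seed], {seed}
    for _ in range(max_rounds):
        derived = []
        for scrap in frontier:
            for centre in centres:
                for new in centre(scrap):
                    if is_actionable(new):
                        return new           # a hit
                    if new not in seen:      # discard repeats; some output is wrong
                        seen.add(new)
                        derived.append(new)
        if not derived:
            return None                      # the process dies away
        frontier = derived
    return None

# One toy centre that adds context until the scrap stops growing.
centres = [lambda s: [s + " + context"] if len(s) < 40 else []]
print(cascade("flight-school scrap", centres))   # dies away -> None
```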

You might say there is art in the knowledge processing - hunches, flashes of insight, etc. - but if the scrap of knowledge does not come out of the KM pipe, art is no use. We also need to differentiate between science and operationalised science. It is said that 99% of what people do in science is garbage. They often look at some small facet, make heroic assumptions (the magic of exogenous variables), and end up with a patchwork of different theories that hold together only while they remain untested or are tested on a small facet (economics is a prime example). Only when science is operationalised and can be seen to work does it acquire sufficient rigour to be useful - otherwise there is little difference between art and science.

Homeland Security is a very hard example, but it is also a good one on which to notionally test KM nostrums - if they make no difference to the difficulty of this example, then they probably make no measurable difference to any other. They may be nice to have, but then a different rationale for their deployment - staff satisfaction, whatever - is required.
