Quality Control Response

>In my way of thinking, a quality control mechanism would take some action to rectify the problem if the quality of the target item is not up to scratch. This is not necessarily just going to be discarded. To think that quality control means "throwing the bad bits out" is an assumption on your part.
>

What is crucial with QC is that it does not take over the fixing itself; otherwise the process becomes corrupted.
When it finds an error in a document, it could:

1) Discard the object. This works well for fungible products like eggs, which are produced at high quality with high reliability, but if discarding is all QC can do, you end up binning the air tanker spec and keeping the paper towel holder spec, which is not going to cut it.

2) Show the error and "expect" the writer to learn from the feedback.

3) The writer may be able to make the case that the system is wrong, and it needs to change (this happens in manufacturing QC too – the gauge is worn out).

4) The whole approach is wrong: recast the specification another way. The machine can't do this itself, but it could show that the reader would be overloaded by the present structure.

5) Send a note to HR to have the person assigned to more appropriate activities.

Obviously, only 2, 3 and 4 are feasible; someone else can do 5 when they are sick of the cycle in 2.
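
To make that concrete, here is a toy dispatch of those five options in Python. The names and routing rules are my own illustration of the argument, not a description of any actual QC system:

```python
# Hypothetical sketch of the five options above; names are illustrative.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    DISCARD = auto()       # 1: only viable for fungible products
    FEEDBACK = auto()      # 2: show the error, expect the writer to learn
    CHALLENGE = auto()     # 3: the writer argues the check itself is wrong
    RESTRUCTURE = auto()   # 4: recast the specification another way
    ESCALATE = auto()      # 5: not QC's call; someone else decides


@dataclass
class Finding:
    writer_disputes_check: bool = False
    reader_overloaded: bool = False


def dispatch(finding: Finding, fungible: bool) -> Action:
    """Route a finding. Crucially, QC never takes over the fixing."""
    if fungible:
        return Action.DISCARD              # eggs, not air tanker specs
    if finding.writer_disputes_check:
        return Action.CHALLENGE            # maybe the gauge is worn out
    if finding.reader_overloaded:
        return Action.RESTRUCTURE          # the whole approach is wrong
    return Action.FEEDBACK                 # the repeatable default


print(dispatch(Finding(), fungible=False))  # Action.FEEDBACK
```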

>However, something is wrong with the entire process if we continue to create drivel that can be misinterpreted by a human! We should ask ourselves, why should we continue to try and fix it at the receiving end if we are smart enough to fix it at the source?
>

But this is why we have QC as a separate process. It is no good asking the hens "not to speak to strange men", and even the most reliable machines will occasionally make a bad piece. We are talking about systems with 99.8% reliability. Humans with documents manage 95-98%, with the further problem that the larger the document, the lower the reliability and the greater the cost leverage.
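
To put rough numbers on that leverage, here is a back-of-envelope calculation of mine, assuming statement errors are independent (which real documents won't honour, but the trend holds):

```python
# Probability a document is entirely error-free if each statement is
# independently correct with probability p. Independence is my own
# simplifying assumption; real errors surely cluster.
for p in (0.998, 0.98, 0.95):
    for n in (10, 100, 1000):          # statements in the document
        print(f"p={p}, n={n:4d}: P(clean) = {p ** n:.3g}")
# At p=0.98, a 100-statement document is clean only ~13% of the time.
```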

The comment about drivel is too harsh. I gave an example where the intent was perfectly clear, but it could be wilfully misinterpreted another way; it is very difficult to write a document that is antagonist-proof. The person writing it is decent, honest, saying what they want, and doesn't want to be a lawyer protecting themselves at every turn, while the contractor sees that it can be interpreted differently to their advantage. Or the writer is an expert on engines and writes in a way that anybody who knows anything about engines would understand, but the person reading it is a programmer whose only knowledge of engines is that they make a vroom-vroom noise, yet who must understand exactly what it says. OK, we drag in a business analyst, who, 3 months later, is expected to synthesise all the knowledge in the specification, with the contributors scattered to the four winds.

That is another tenet of QC: catch an error trend quickly. A system that catches errors virtually as soon as they occur is much more valuable than one that catches them a few months down the track, because the confusion grows exponentially: for the first few days ten people share it, then a hundred, then thousands.
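
As a toy model of that growth (the tenfold rate per stage is purely illustrative, my own number):

```python
# Each stage an error goes undetected, roughly ten times as many
# readers absorb the confusion and must later be corrected.
def readers_exposed(stages_undetected: int, initial: int = 10) -> int:
    return initial * 10 ** stages_undetected

for stage in range(4):
    print(f"caught after stage {stage}: ~{readers_exposed(stage)} readers to untangle")
# 10, 100, 1000, 10000 -- the cost of delay compounds.
```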

But can we really reduce the error rate at the source? Forget the simple stuff: when you see highly competent and conscientious people making complex errors, it doesn't give confidence that a bit of training will solve the problem. The experienced people are doing all the things you mention and more, but the errors keep coming. You mention long term memory; that is exactly what we can't afford to wait for. The specification is new, and will begin to be implemented rapidly. Given time and familiarity, many errors would surface, but that time means a great deal of money wasted.

We spend our days looking at the inferences a person would need to have made to successfully read a document, and then trying to make those inferences available to the system.

Often a heading won't make sense until you read the following paragraph, and then the heading helps to disambiguate something in the paragraph. Much has to be held in abeyance; the system is creating a string of jobs for itself, a bit like scratching an arrow in the margin, except that you then have to remember what each one meant when there are fifty of them. There are a large number of subtle things happening (dare I say no more than 9 at any one point), and I would say we are operating at our limits to do as well as we do. Call it the generalised Peter Principle: we have gone to our limit of adaptive competence, and now we need something else to get us further.
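
Here is a hypothetical model of that "held in abeyance" bookkeeping: each use of a not-yet-defined term becomes a pending job (the scratched arrow in the margin), its later definition clears it, and we track the high-water mark against the ~9-item limit. The structure and names are mine, for illustration only:

```python
# Model a reader scanning a document, holding unresolved terms in
# abeyance until a definition arrives. Tracks peak pending load.
PENDING_LIMIT = 9


def scan(events):
    """events: ("use" | "define", term) pairs in reading order."""
    defined, pending, high_water = set(), set(), 0
    for kind, term in events:
        if kind == "define":
            defined.add(term)
            pending.discard(term)      # that marginal arrow is done
        elif term not in defined:
            pending.add(term)          # hold it in abeyance
            high_water = max(high_water, len(pending))
    return high_water


doc = [("use", "gauge"), ("use", "spec"), ("define", "spec"),
       ("define", "gauge"), ("use", "spec")]
print(f"peak items in abeyance: {scan(doc)}, limit ~{PENDING_LIMIT}")  # peak: 2
```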

One obvious approach is to have the machine assess how many things are in play at any one point (that would be quite an interesting number) and put a limit on it, by means as simple as rearranging the text so that you have already read something that will be needed: minimise the forward references and their distance without increasing the complexity anywhere else. You would think this order was natural, but these documents grow by accretion; something is thought of later and stuffed in wherever it sort of fits (don't say you wouldn't do that; after you had worked on it for 3 months, you would be sick of it too).
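
A crude sketch of that rearrangement, assuming we already know which terms each section defines and uses (extracting that from real prose is the genuinely hard part; the section names and ordering here are my own illustration):

```python
# Reorder sections so terms are defined before they are used, which
# eliminates forward references wherever the dependencies allow it.
from graphlib import TopologicalSorter  # Python 3.9+ standard library


def reorder(sections):
    """sections: {name: (defines, uses)}. Assumes each term has one definer.
    Circular definitions would raise CycleError and need a tie-break."""
    definers = {t: n for n, (defs, _) in sections.items() for t in defs}
    deps = {name: {definers[t] for t in uses
                   if t in definers and definers[t] != name}
            for name, (_, uses) in sections.items()}
    return list(TopologicalSorter(deps).static_order())


spec = {
    "overview": ({"system"}, set()),
    "engines":  ({"engine"}, {"system"}),
    "late_add": ({"fuel"},   {"engine"}),   # stuffed in wherever it fit
    "controls": (set(),      {"engine", "fuel"}),
}
print(reorder(spec))  # e.g. ['overview', 'engines', 'late_add', 'controls']
```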

I think we need a much better understanding of the limits of the human knowledge-handling apparatus for KM to be useful.

Response to

Message: 1

Date: Thu, 22 Oct 2009 18:55:43 +1100
Subject: Re: [Actkm] Quality control of knowledge
To: "'ActKM Discussion List'" actkm@actkm.org
Message-ID: <005101ca52ed$0e8c1a30$2ba44e90$@net.au>

Thanks for the comprehensive reply. I think there are a number of assumptions at play (for both of us) that need to be teased out if we aspire to get to the next level.

- Dealing with the document does not necessarily mean storing it all in the head.

- The experienced will build concepts in their mind as they read through. These mental models equate to "one item" of information. If a new specification does not match the mental model (or does not build upon that model), this would be flagged as a potential error, which can be checked at an appropriate time.

- The reader can make notes on other pieces of paper (or even electronically mark documents), thus negating the need to store it all in the head and butt up against this "information item" limit.

- The reader will use short term and long term memory to help process the items. As these work differently, this numerical measure (6-9 items) is not as meaningful as you might think in this discussion.

In my way of thinking, a quality control mechanism would take some action to rectify the problem if the quality of the target item is not up to scratch. This is not necessarily just going to be discarded. To think that quality control means "throwing the bad bits out" is an assumption on your part. What action is taken will depend on the QC policies of the organisation or work process, and the nature of the error that has been found.

My comment about the limited number of items the human brain can hold at any one time was not intended as a comment on the veracity of the concept, but rather on its applicability in this case (see the comments above).

I agree that, at the moment, the most efficient method is to have automated systems to flag potential errors, and then have the human as the final arbiter. I agree that humans can make errors (and sometimes not realise it).

However, something is wrong with the entire process if we continue to create drivel that can be misinterpreted by a human! We should ask ourselves, why should we continue to try and fix it at the receiving end if we are smart enough to fix it at the source?

Again, the original premise that was floated was that this could be an automated quality control concept - this discussion has shown that it performs an automated check on an element of the quality of a specific document [set].