Ontologies - Do they represent knowledge?

Introduction

First, what is knowledge? Is it what is in people's heads, or is it what is written down?

If we assume that people have knowledge in their heads, then what is written down is only a message. It is a message between one active structure in someone's head, the one that holds the knowledge, and another active structure, which seeks to gain the knowledge.

Given the limitations of human language, we should expect considerable transformation between the knowledge in a person's head and the message, written in a sequential textual form or drawn in some diagrammatic form. We should also expect a similar transformation between the written form and the structure that is finally built in the recipient.

Why stress activity? An active structure can do all sorts of things that a passive structure cannot. But can't we read a passive structure and do the same things? If we read it, yes, because we convert it into an active structure in our heads. If we use something else to read it, like a program, the same transformation does not occur, and many of the inferences that relied on the transformation occurring are lost.

This is the algorithm/data structure problem. We expect that if someone acquires knowledge, that knowledge will become part of the way they reason about the world - learn Newton's laws of motion and you think about the world differently. However, if you change the data structure, the algorithm that reasons about it doesn't change with it, so your ability to change the data structure is severely restricted - it is limited to constructs the algorithm already knows how to handle. The mechanism driving the algorithm will have its own limitations - the sorts of things it can move around, how it handles logic.
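As a rough illustration (the code is mine, not Orion's), consider an evaluator written over a fixed set of node types. The data structure can be extended at will, but nothing happens until the algorithm is rewritten to recognise the new construct:

# A minimal sketch of the algorithm/data structure split: the evaluator below
# only understands the node types it was written for.  Adding a new kind of
# node to the data structure achieves nothing until the algorithm itself is
# rewritten to handle it.

def evaluate(node):
    kind = node[0]
    if kind == "num":                      # ("num", 5)
        return node[1]
    if kind == "plus":                     # ("plus", left, right)
        return evaluate(node[1]) + evaluate(node[2])
    # Any construct the algorithm does not already know about is simply lost.
    raise ValueError(f"unknown construct: {kind!r}")

print(evaluate(("plus", ("num", 2), ("num", 3))))      # 5
# evaluate(("sum_of_list", [("num", 1), ("num", 2)]))  -> ValueError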

Transformations

Here are some simple, seemingly trivial, examples of transformations that are necessary to extract the inferences available from a message about knowledge.

A = B + C + D

If we combine those pluses into one operator, we can make more inferences about values, because when the operator has the focus, it can see more of the structure around it. If the values being propagated through the structure are ranges, then the humble plus operator can perform all sorts of high-level operations, such as detecting an opportunity for binary cutting (a way of finding solutions faster).
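A hedged sketch of the point, in ordinary Python rather than anything from Orion: when one n-ary plus sees all of its operands at once, a single pass can tighten every range from the others.

def tighten_sum(result, operands):
    """result and operands are (low, high) ranges satisfying result = sum(operands)."""
    lo = sum(o[0] for o in operands)
    hi = sum(o[1] for o in operands)
    result = (max(result[0], lo), min(result[1], hi))
    new_ops = []
    for i, (olo, ohi) in enumerate(operands):
        rest_lo = sum(o[0] for j, o in enumerate(operands) if j != i)
        rest_hi = sum(o[1] for j, o in enumerate(operands) if j != i)
        # each operand must lie inside result minus the sum of the others
        new_ops.append((max(olo, result[0] - rest_hi), min(ohi, result[1] - rest_lo)))
    return result, new_ops

# A = B + C + D with A in [10, 12], B in [0, 10], C in [1, 2], D in [3, 4]
print(tighten_sum((10, 12), [(0, 10), (1, 2), (3, 4)]))
# ((10, 12), [(4, 8), (1, 2), (3, 4)]) - B is cut down to [4, 8] in one pass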

IF A = 5 THEN B = 3
               ELSE B = 9

We can immediately see that, if this statement is true, then B is either 3 or 9. Our human internal representation is nothing like IF...THEN, and neither is Orion's.
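Treated as a constraint rather than a one-way branch, the statement yields inferences in both directions. A small illustrative sketch (the representation here is mine, not Orion's):

def possible_B(possible_A):
    """Forward: the statement alone tells us B is 3 or 9."""
    out = set()
    if 5 in possible_A:
        out.add(3)          # the THEN branch
    if possible_A - {5}:
        out.add(9)          # the ELSE branch
    return out

def possible_A_given_B(possible_A, known_B):
    """Backward: knowing B narrows A."""
    if known_B == 3:
        return possible_A & {5}
    if known_B == 9:
        return possible_A - {5}
    return set()            # the statement cannot be true

A_values = {1, 2, 3, 4, 5, 6}
print(possible_B(A_values))              # {3, 9}
print(possible_A_given_B(A_values, 9))   # {1, 2, 3, 4, 6} - A cannot be 5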

A = SUM(LIST)

A list will arrive, and then we will take the sum of it. But it isn't that simple. We might know the value of A and all the members of the list except one. A human would have no problem telling you the value of the remaining one, because their internal representation allows it. If we instead transform this statement into an active link between a plus operator and a list, so that the members of the list, when it arrives, are coupled to the plus, the structure itself provides the inference.
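A minimal sketch of that coupling, under my own assumptions about the structure: once all but one member of the list is known, the remaining value falls out of the same connection, with no special "solve for x" algorithm.

def propagate_sum(total, members):
    """members is a list of numbers with at most one None (the unknown)."""
    unknown = [i for i, m in enumerate(members) if m is None]
    if not unknown:
        assert total == sum(members), "inconsistent structure"
        return members
    if len(unknown) == 1:
        members = list(members)
        members[unknown[0]] = total - sum(m for m in members if m is not None)
        return members
    return members        # not enough information yet - wait for more values

# A = SUM(LIST), A = 20, LIST = [3, ?, 5, 4]
print(propagate_sum(20, [3, None, 5, 4]))   # [3, 8, 5, 4]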

Without these transformations, and many more besides, much of the meaning of the message about knowledge would be lost, driving us back to the bankrupt paradigm of programming a solution.

Pure First Order Logic

Here is another misunderstanding. Our attempts at formalised logic are a way of describing the local operation of the active networks in our heads. We seem to use concepts like AND, OR, NOT when reasoning. We can write down premises and reason about them using this logic. While we use our internal mechanisms, it all works fine, because we have activity and we transform the structure. When we attempt to do the same in an algorithm/data structure form, it does not work nearly as well. When we reason, something may need to behave as an AND in one direction and as an OR in the other - we have no difficulty with this, because first order logic is only a simplified description of how we reason (validly), not the mechanism itself.
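One way to see the point is with three-valued propagation through a single connective. The sketch below is an illustration, not a description of Orion's connectives: forwards the node behaves as an AND, but when falsity flows backwards it behaves like an OR over the negated inputs.

# Values are True, False, or None (unknown).

def and_forward(inputs):
    if False in inputs:
        return False
    if None in inputs:
        return None
    return True

def and_backward(output, inputs):
    """Given the output and the other inputs, infer any input left unknown."""
    inputs = list(inputs)
    if output is True:
        return [True] * len(inputs)        # every input must hold
    if output is False:
        unknown = [i for i, v in enumerate(inputs) if v is None]
        # NOT(A AND B) == NOT A OR NOT B: falsity flows back like an OR,
        # so only when a single input is still open can it be pinned down.
        if len(unknown) == 1 and all(v is True for i, v in enumerate(inputs) if i not in unknown):
            inputs[unknown[0]] = False
    return inputs

print(and_forward([True, None, True]))          # None - not yet decidable
print(and_backward(False, [True, None, True]))  # [True, False, True]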

When we attempt to represent an active structure in a passive form, we have to expect that a great deal gets left out. The logic in the message was valid only as long as it was to be transformed into an active structure with similar properties to the one which originated the message.

Complex Messages

Humans can send complex messages through their active structures, or at least they are not limited to on and off. Their structures can control their own activity and can form new connections. The external knowledge message must be passive and sequential if it is written down, although it may be followed by other static messages which describe other states and which give a clue to the phasing of the activity.

A message about knowledge assumes that the active structure to be built will be capable of the same activity as the active structure which constructed the message. This is easily seen in the prerequisites for a university course. Without the prerequisites, the appropriate structures will not exist to be built upon. It is also assumed that those structures will moderate the growth of new structure.

Self Phasing

An active structure automatically phases itself, with activity resulting from change. The messages propagating through the structure cause activity to occur, which leads to further messages moving along other paths - even messages moving down new paths formed by other messages, or messages returning along the paths by which they came. This is easy if the messages are inside the structure, extremely difficult if an algorithm external to the structure is attempting to work out what to do next. It could have a model of the structure, but the structure is busy changing itself, so a static model won't do. Better to throw the algorithm away, use the structure itself, and make it active.
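A very small sketch of self-phasing, under my own assumptions: each connection reacts to change, so the order of work falls out of the propagation itself rather than from an external algorithm walking a static model.

class Node:
    def __init__(self, name):
        self.name, self.value, self.listeners = name, None, []

    def set(self, value):
        if value != self.value:
            self.value = value
            for react in self.listeners:    # change is what causes activity
                react(self)

def connect_sum(a, b, total):
    """total = a + b, propagated in whichever direction change arrives from."""
    def react(_):
        if a.value is not None and b.value is not None:
            total.set(a.value + b.value)
        elif total.value is not None and a.value is not None:
            b.set(total.value - a.value)
        elif total.value is not None and b.value is not None:
            a.set(total.value - b.value)
    for node in (a, b, total):
        node.listeners.append(react)

A, B, T = Node("A"), Node("B"), Node("T")
connect_sum(A, B, T)
T.set(10)
A.set(4)            # the structure phases itself: B becomes 6 with no scheduler
print(B.value)      # 6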

Ontologies

Ontologies are said to represent a message about knowledge. The question that remains is, how are they to be used? Are they to be transformed into active structure, or are they a static structure to be reasoned about by an algorithm?

The syntax of the language used to create an ontology is often simplistic, giving no hint of the complex logical structuring needed to represent knowledge. Ontology languages insist on a single parent in the hierarchy and usually stratify rapidly as well, with different languages used to represent different needs - domain, task, etc. This doesn't sound like a useful way to combine different types of knowledge into an integrated whole. If we can't find a single underlying form, then we are doing something wrong.

If ontologies rely on an algorithm to interpret them, then the combination must fail for several reasons - no transformation, no complex messaging, no self phasing - simply put, no fluidity of operation.

Here is a fragment from WordNet, a large ontology of the English language -

Sense 3
diamond -- (a playing card in the minor suit of diamonds)
       => playing card -- (one of a pack of cards used in playing card games)
           => card -- (one of a set of small pieces of stiff paper marked in various...
               => paper -- (a material made of cellulose pulp derived mainly from ...
                   => material, stuff -- (the tangible substance that goes into the makeup...
                       => substance, matter -- (that which has mass and occupies space;...
                           => object, physical object -- (a physical (tangible and visible) ...
                               => entity, something -- (anything having existence (living or ...      

Here, you can see the notion of "the six of diamonds" turning into a piece of stiff paper, with no allowance for its symbolic value. The creators of WordNet are keen to change its organisation from an ontology into something else, as they recognise the severe limitation an ontology imposes. The European Union has an R&D arm - IST - which is trying to fund "a self-organising ontology" because of the perceived limitations of the form.
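A hedged sketch of what the single-parent restriction leaves out (the parent names below are mine, not WordNet's): allowing more than one parent lets the card keep both its physical nature and its symbolic value.

parents = {
    "six of diamonds": ["playing card", "symbol"],   # two parents, not one
    "playing card":    ["card"],
    "card":            ["paper"],
    "symbol":          ["abstraction"],
}

def ancestors(term):
    found = []
    for parent in parents.get(term, []):
        if parent not in found:
            found.append(parent)
            found.extend(a for a in ancestors(parent) if a not in found)
    return found

print(ancestors("six of diamonds"))
# ['playing card', 'card', 'paper', 'symbol', 'abstraction']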

By only handling objects as static entities, ontologies ignore the most potent way to acquire properties - by relation with other objects. Here is a seller, Fred, acquiring properties in an active structure - through a relation. Each new relation provides more information about Fred, either by direct connection, or at higher levels.

[Diagram: findparents1.gif - Fred, as seller, acquiring properties through a relation]

(The diagram doesn't show the logical, existential and temporal connections that control if and when this happened)
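A rough sketch of the idea, under my own assumptions rather than the structure in the diagram: Fred is never declared to be a seller; he becomes one the moment a selling relation connects him to something.

class Thing:
    def __init__(self, name):
        self.name = name
        self.relations = []          # (relation_name, other) pairs

def relate(relation_name, subject, obj):
    subject.relations.append((relation_name, obj))
    obj.relations.append((relation_name + "_by", subject))

def roles(thing):
    """Properties inferred from the relations the object participates in."""
    inferred = {"Sells": "seller", "Owns": "owner", "Sells_by": "merchandise"}
    return {inferred[r] for r, _ in thing.relations if r in inferred}

fred, house = Thing("Fred"), Thing("house")
relate("Owns", fred, house)
relate("Sells", fred, house)
print(roles(fred))     # {'owner', 'seller'} - acquired through relations
print(roles(house))    # {'merchandise'}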

Another way of seeing what ontologies lack is to consider the fabric of language or knowledge. Inheritance provides the warp, and relations provide the weft.

It sounds obvious. If we want a mechanism to cooperate or compete with a human in the use of knowledge, the human can't be allowed to have several overwhelming advantages. Humans use neither an algorithm/data structure approach nor a rigid single-parent hierarchy - neither should our automated assistants.

Extensions to Active Structure - including relations as objects and computable inheritance

Some Aspects of Classing

Searching an Active Ontology

Finding Parents

What Things Are Not

Ontology Reliability

Knowledge Representation