Knowledge Networks and Neural Networks

A knowledge network cheats - it uses different operators to do different things.

Neural networks are somehow considered "pure" because each node is precisely the same as every other node; the only thing that differs is the weighting of the connections. That would be of benefit in developing a theory of cognition if one could show that the nodes in a neural network are similar to the neurons in a brain. As critics of neural networks have pointed out, neurons in a reasoning area of the brain are not all the same, and presumably do not have the same characteristics. We don't need to look inside people's heads to see the falsity of the argument.

People learn things, then use what they have learnt to learn more complex things. If a human struggled as a child to learn the concept of plus, there is presumably no barrier to reusing the plus they already know as part of a more complex mental model. In contrast, neural networks start from scratch each time, and there is no obvious way they can be combined. Humans may take a week to learn their multiplication tables, then a week to learn Ohm's Law and a lifetime to understand

E = mc^2

but the multiply operator did not require another week in each case - they already knew what it meant. What this says is that humans can copy structure - neuronal connection patterns - that have certain properties.

As a model of human cognition, artificial neural networks fail at the first hurdle, unable to learn by aggregation of existing structure.

Knowledge networks bundle up behaviour into operators - a PLUS operator knows how to add. The operator is copied to wherever it is relevant.
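To make the idea concrete, here is a minimal sketch in Python (the class and function names are illustrative assumptions, not the author's implementation): an operator bundles up a behaviour once, and that operator is then copied or reused wherever the behaviour is relevant.

    class PlusOperator:
        # Knows how to add - learnt once, never learnt again.
        def fire(self, a, b):
            return a + b

    class TimesOperator:
        # Likewise for multiplication.
        def fire(self, a, b):
            return a * b

    times = TimesOperator()

    # The same multiply operator reused in quite different models:
    def ohms_law(current, resistance):       # V = I * R
        return times.fire(current, resistance)

    def mass_energy(mass, c=299_792_458.0):  # E = m * c^2
        return times.fire(mass, times.fire(c, c))

    print(ohms_law(2.0, 10.0))   # 20.0 volts
    print(mass_energy(1.0))      # about 8.99e16 joules

Learning Ohm's Law does not require relearning multiplication - the operator carries its meaning with it.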

Some critics have said that backpropagation in neural networks has no basis in human neuronal behaviour, because there is no obvious feedback path. Have they never watched a dog dreaming? Its brain is obviously capable of providing input to its own sensory system, and then responding to that input.

Knowledge networks go much further than "backprop", superseding the diode-resistor structure of the neural network. Not only can errors flow back into the structure, but information can flow anywhere to anywhere, "back" having no meaning. It is no more startling than what happens when two people speak to each other - there are two centres of control, with information flowing to and fro. Information from one can alter the states and connections in the other, which can then alter the states and connections in the first. Is this too difficult to imagine as a way in which networks of neurons may learn?

Artificial neural networks, or ANNs to give them their full name, do not pass control to individual neurons - there isn't anything there to give control to. The popular notion of massive computational parallelism is entirely fallacious. For each node, the sum of the incoming signals multiplied by their weightings is calculated to produce an outgoing value - the control algorithm knows exactly what to do because all the neurons have a stereotypical output behaviour. There can be amusing variations on how the weights are calculated, but never is a neuron individually queried about what it would like to do in the circumstances. If human neurons were all the same, one might congratulate the designers of neural networks on their perspicacity. On the contrary, human neurons switch, transmit complex messages, have refractory periods, feed back their own states, on and on. The notion that this behaviour can be captured by a resistor-diode combination with a static value across it, driven by a control program, with the activation curve kept differentiable to make it easy to model, seems ludicrous.
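The stereotyped control loop described here can be written in a few lines of plain Python - a sketch, not any particular library's code - which makes the point: every node is processed identically, a weighted sum pushed through one fixed differentiable curve, and no node is ever consulted.

    import math

    def sigmoid(x):
        # One differentiable curve shared by every node.
        return 1.0 / (1.0 + math.exp(-x))

    def layer_forward(inputs, weights, biases):
        # The control algorithm does the same thing for every node:
        # sum the weighted inputs, apply the curve, move on.
        outputs = []
        for node_weights, bias in zip(weights, biases):
            total = sum(w * x for w, x in zip(node_weights, inputs)) + bias
            outputs.append(sigmoid(total))
        return outputs

    # Three "neurons", identical except for their connection weights.
    print(layer_forward([1.0, 0.5],
                        [[0.2, -0.4], [0.7, 0.1], [-0.3, 0.9]],
                        [0.0, 0.1, -0.2]))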

A knowledge network has no idea of what to do next except give control to an operator which has a changed input. The operator then decides to change an output (including possibly the connection on which the changed input appeared) or do nothing. The message transmitted is not limited to a single value, and while hardly comparable with a complex neuronal firing pattern, it does allow some complexity - ranges, lists, including lists of alternatives, and structures.
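A hedged sketch of such an operator follows (the Range class and react method are assumptions made for illustration): a PLUS operator with three connections and no built-in direction, which sets whichever connection is unknown, and whose messages are ranges rather than single values.

    class Range:
        # An interval message - richer than a single number.
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    class Plus:
        # A + B = C, with no fixed direction of flow: given whichever
        # connections are known, the operator decides what to set.
        def react(self, a, b, c):
            if a is not None and b is not None:
                return 'c', Range(a.lo + b.lo, a.hi + b.hi)
            if c is not None and a is not None:
                return 'b', Range(c.lo - a.hi, c.hi - a.lo)
            if c is not None and b is not None:
                return 'a', Range(c.lo - b.hi, c.hi - b.lo)
            return None, None   # not enough information - do nothing

    plus = Plus()
    # Information flows "backwards": C and A are known, so B is inferred.
    print(plus.react(a=Range(1, 2), b=None, c=Range(10, 10)))  # ('b', [8, 9])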

The act of ceding control means that the network is "micro-scheduling" its behaviour, based on activation patterns. No algorithm is needed to work out what to do next, beyond grabbing the next job off the activation queue. Usually the hardest part of algorithm design is working out what to do next, and what the right level of granularity should be. The granularity in the knowledge network is atomic.
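The scheduling itself can be sketched just as briefly (an assumed shape, not the author's code): the only control logic is to pop the next changed connection off the activation queue and give control to whichever operators are attached to it.

    from collections import deque

    def run(network, initial_changes):
        # network maps a connection name to the operators listening on it.
        # An operator, given control, may post further changes or do nothing.
        queue = deque(initial_changes)        # (connection, value) pairs
        state = {}
        while queue:
            conn, value = queue.popleft()     # the next job - nothing more
            if state.get(conn) == value:
                continue                      # no real change: do nothing
            state[conn] = value
            for operator in network.get(conn, []):
                queue.extend(operator(conn, value, state))
        return state

    # A toy operator: when both 'a' and 'b' are known, set 'c'.
    def plus_op(conn, value, state):
        if 'a' in state and 'b' in state:
            return [('c', state['a'] + state['b'])]
        return []

    print(run({'a': [plus_op], 'b': [plus_op]}, [('a', 2), ('b', 3)]))
    # {'a': 2, 'b': 3, 'c': 5}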

Neural networks avoid the problem of what to schedule when by having a simple control program that is incapable of responding to change while it is operating. In fact, neural networks avoid all the problems that must be faced by systems wishing to emulate the behaviour of humans, including whether two fully interacting systems can be emulated by one control program.

But isn't the knowledge network just another algorithm, just a little more complicated than the neural network control program?

The knowledge network can respond to change by changing its structure, which changes its notional algorithm.

But neural networks can change their weightings.

A change in connection is much more than a change in weighting. A new connection can change the topology, so the behaviour changes radically - rather as tying a snake's head to its tail changes its behaviour because its topology is different. Less dramatically, a new connection from output to input, made while the system is in operation, can lead to large changes. The greater the range over which this connection can be made, the more drastic the change in behaviour can be - the snake again. The neural network has neatly separated its learning and running states, perhaps necessary for studying its behaviour in a laboratory, but rather useless in operation.
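A tiny illustration of the difference (hypothetical names again): adding a single connection from output back to input, while the structure is running, changes the topology and therefore the behaviour - something no adjustment of weightings can achieve.

    def propagate(connections, value, steps=6):
        # Follow named connections from 'input', halving the signal per hop.
        node = 'input'
        for _ in range(steps):
            if node not in connections:
                break                  # dead end: propagation stops
            node = connections[node]
            value *= 0.5
        return node, value

    feedforward = {'input': 'hidden', 'hidden': 'output'}
    print(propagate(feedforward, 8.0))   # ('output', 2.0) - then it stops

    # One new connection, made at run time, and the topology is a loop:
    feedback = dict(feedforward, output='input')
    print(propagate(feedback, 8.0))      # the signal now cycles through the structure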

One can imagine a neural network with zero-weighted connections to every other possible point, but one would also need some means of identifying which connection to change - an identifiability which the neural network lacks. One could add named variables and... and finish up with a knowledge network or active structure.

We show an active structure reading complex legal text - inconceivable for an ANN, and also inconceivable for a network of real neurons, until we get sufficient layering that its base properties - particularly its directionality - disappear.
