Linear Logic and Active Structure

Linear logic is based on the notion that each logical statement is a "resource" - something that can only be used once in a proof. This changes the meaning of logical connectives. Linear logic is seen by some as a basis for a formalism of natural language.
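To make the resource reading concrete, here is a minimal Python sketch, not taken from the source, in which a proof context holds its assumptions as a multiset and every use of a statement consumes one occurrence of it. The class and names are invented purely for illustration.

# Minimal sketch of the "resource" reading of linear logic:
# each assumption in the context may be consumed exactly once.

class LinearContext:
    def __init__(self, assumptions):
        # multiset of available resources (propositions)
        self.available = list(assumptions)

    def consume(self, proposition):
        # Using a proposition removes it from the context;
        # a second use of the same occurrence fails.
        if proposition not in self.available:
            raise ValueError(f"resource {proposition!r} already consumed or absent")
        self.available.remove(proposition)
        return proposition

ctx = LinearContext(["A", "A -> B"])
ctx.consume("A")          # first use succeeds
ctx.consume("A -> B")     # the implication is itself a one-use resource
# ctx.consume("A")        # would raise: "A" has already been consumed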

How does linear logic compare with Active Structure?

In Active Structure, logical statements are turned into objects linked together through connectives that are embedded in the structure. In that sense, each statement can be used only once in a proof: the variables it connects are not free, and each connection can support only one truth value.
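As a rough illustration of that arrangement, the sketch below models statements as variable and connective objects wired into a shared structure, where each connection can carry only a single truth value. All of the class and method names are invented for this example; this is not the Active Structure implementation.

# Illustrative only: statements as objects linked through embedded connectives.

class Connective:
    def __init__(self, kind):
        self.kind = kind          # e.g. "AND", "IMPLIES"
        self.links = []           # variables/statements wired into this node
        self.truth = None         # the one truth value this connection can carry

    def connect(self, node):
        self.links.append(node)
        node.uses.append(self)    # the variable is bound into the structure, not free

    def assert_truth(self, value):
        if self.truth is not None:
            raise ValueError("connection already carries a truth value")
        self.truth = value

class Variable:
    def __init__(self, name):
        self.name = name
        self.uses = []            # connectives this variable is wired into

x = Variable("X")
impl = Connective("IMPLIES")
impl.connect(x)
impl.assert_truth(True)           # a single use in the proof
# impl.assert_truth(False)        # would raise: only one truth value per connection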

This implies both that Active Structure is a full implementation of linear logic and that linear logic is only one small facet of a complete formalism of language.

Proponents of linear logic write down logical statements without concerning themselves with the logical surface on which they write. Unless this surface is fully integrated into the formalism, it will not cover very much ground.

Natural language is easily capable of setting up logical functions - structures that can be called multiple times, either by recursion or by repeated naming in different parts of the same proof. Here, the same structure has to be able to support multiple truth values; Active Structure does this either by copying the structure or by structural backtracking.
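A minimal sketch of the copying approach, assuming the simplest possible mechanism (a deep copy of the structure per call) rather than Active Structure's own copying or backtracking machinery:

# Hedged sketch: copying a structure so the "same" logical function can carry
# different truth values on different calls.

import copy

class Connective:
    def __init__(self, kind):
        self.kind = kind
        self.truth = None

template = Connective("IMPLIES")   # the named function's structure
call_1 = copy.deepcopy(template)   # one fresh copy per call
call_2 = copy.deepcopy(template)

call_1.truth = True                # each copy carries its own truth value
call_2.truth = False               # without clashing with the first call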

Natural language has a much broader logical structure than is encompassed by linear logic - language also talks about existence, and readily mixes existential and logical truth values at connectives.
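One way to picture mixing the two kinds of truth value at a connective is a value domain that holds existence states alongside True and False. The enum and the conjunction rule below are invented for illustration and are not drawn from the source.

# Sketch under assumption: a truth value domain mixing existential and logical values.

from enum import Enum

class TruthValue(Enum):
    TRUE = "true"
    FALSE = "false"
    EXISTS = "exists"
    NOT_EXISTS = "not_exists"
    UNKNOWN = "unknown"

def conjoin(a: TruthValue, b: TruthValue) -> TruthValue:
    # A connective receiving a mix of existential and logical values:
    # non-existence of either operand dominates; otherwise fall back to
    # ordinary logical conjunction where both values are logical.
    if TruthValue.NOT_EXISTS in (a, b):
        return TruthValue.NOT_EXISTS
    if a == TruthValue.FALSE or b == TruthValue.FALSE:
        return TruthValue.FALSE
    if a == TruthValue.TRUE and b == TruthValue.TRUE:
        return TruthValue.TRUE
    return TruthValue.UNKNOWN

print(conjoin(TruthValue.TRUE, TruthValue.NOT_EXISTS))  # TruthValue.NOT_EXISTS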

Natural language is mostly about objects and the relations among them. Objects have existence, and relations have existence and logical validity, both of which are functions of time. Only when a formalism attempts to capture all these things in an integrated way can we talk about a formalism of natural language.
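The sketch below, with invented names, shows how existence and validity might be carried as functions of time on objects and relations. It is only an illustration of the idea, not a description of how Active Structure represents them.

# Minimal sketch: existence and validity as functions of time.

from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float
    def contains(self, t: float) -> bool:
        return self.start <= t <= self.end

@dataclass
class Obj:
    name: str
    exists_during: Interval          # existence is a function of time

@dataclass
class Relation:
    subject: Obj
    other: Obj
    exists_during: Interval          # the relation itself comes and goes
    valid_during: Interval           # and its logical validity may differ

    def holds_at(self, t: float) -> bool:
        # A relation holds only while all its participants exist,
        # the relation exists, and it is logically valid.
        return (self.subject.exists_during.contains(t)
                and self.other.exists_during.contains(t)
                and self.exists_during.contains(t)
                and self.valid_during.contains(t))

alice = Obj("Alice", Interval(1960, 2050))
acme = Obj("Acme", Interval(1990, 2020))
works_for = Relation(alice, acme, Interval(2000, 2015), Interval(2000, 2015))
print(works_for.holds_at(2010))   # True
print(works_for.holds_at(2018))   # False: the relation no longer exists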

See

Relations

NLP