# Linear logic



Linear logic is a substructural logic proposed by Jean-Yves Girard as a refinement of classical and intuitionistic logic, joining the dualities of the former with many of the constructive properties of the latter.[1] Although the logic has also been studied for its own sake, ideas from linear logic have been influential more broadly in fields such as programming languages, game semantics, and quantum physics,[2] particularly because of its emphasis on resource-boundedness, duality, and interaction.

Linear logic lends itself to many different presentations, explanations and intuitions. Proof-theoretically, it derives from an analysis of classical sequent calculus in the absence of the structural rules of weakening and contraction. (This has the effect that certain propositions which are classically/intuitionistically valid are not directly provable in linear logic, although both classical and intuitionistic logic can be encoded in linear logic by means of additional modal connectives, the so-called exponentials.) Operationally, the rejection of weakening and contraction can be seen as reorienting the subject of logic, from persistent truths to ephemeral resources. Logical deduction then corresponds to local (possibly destructive) transformations on these resources, rather than the usual view of deduction as building up an ever-expanding collection of facts. Denotationally, linear logic can be seen as refining the interpretation of intuitionistic logic by replacing cartesian closed categories by symmetric monoidal categories, or the interpretation of classical logic by replacing boolean algebras by C*-algebras.

## Connectives, duality, and polarity

### Syntax

The language of classical linear logic (CLL) is defined inductively by the BNF notation

 φ ::= p ∣ p⊥ ∣ φ ⊗ φ ∣ φ ⊕ φ ∣ φ & φ ∣ φ ⅋ φ ∣ 1 ∣ 0 ∣ ⊤ ∣ ⊥ ∣ !φ ∣ ?φ

Here p and p⊥ range over logical atoms. For reasons to be explained below, the connectives ⊗, ⅋, 1, and ⊥ are called multiplicatives, the connectives &, ⊕, ⊤, and 0 are called additives, and the connectives ! and ? are called exponentials. We can further employ the following terminology:

• ⊗ is called "multiplicative conjunction" or "times" (or sometimes "tensor")
• ⊕ is called "additive disjunction" or "plus"
• & is called "additive conjunction" or "with"
• ⅋ is called "multiplicative disjunction" or "par"
• ! is pronounced "of course" (or sometimes "bang")
• ? is pronounced "why not"

Every proposition φ in CLL has a dual φ⊥, defined as follows:

    (p)⊥ = p⊥              (p⊥)⊥ = p
    (φ ⊗ ψ)⊥ = φ⊥ ⅋ ψ⊥     (φ ⅋ ψ)⊥ = φ⊥ ⊗ ψ⊥
    (φ ⊕ ψ)⊥ = φ⊥ & ψ⊥     (φ & ψ)⊥ = φ⊥ ⊕ ψ⊥
    (1)⊥ = ⊥               (⊥)⊥ = 1
    (0)⊥ = ⊤               (⊤)⊥ = 0
    (!φ)⊥ = ?φ⊥            (?φ)⊥ = !φ⊥

Observe that (−)⊥ is an involution, i.e., φ⊥⊥ = φ for all propositions. φ⊥ is also called the linear negation of φ.

The columns of the table suggest another way of classifying the connectives of linear logic, termed polarity: the connectives negated in the left column (⊗, ⊕, 1, 0, !) are called positive, while their duals on the right (⅋, &, ⊥, ⊤, ?) are called negative.

Linear implication is not included in the grammar above, but is definable in CLL using linear negation and multiplicative disjunction, by φ ⊸ ψ := φ⊥ ⅋ ψ. The connective ⊸ is sometimes pronounced "lollipop" (owing to its shape).
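As a concrete illustration, linear negation and the derived implication can be computed over formula syntax. The tuple encoding and function names below are our own sketch, not notation from the literature:

```python
# Formulas as nested tuples: ('atom', p) and ('natom', p) stand for the
# atoms p and p⊥; the other tags follow the grammar of CLL.

def dual(f):
    """Return the linear negation f⊥, following the defining equations."""
    tag = f[0]
    if tag == 'atom':   return ('natom', f[1])
    if tag == 'natom':  return ('atom', f[1])
    if tag == 'tensor': return ('par', dual(f[1]), dual(f[2]))
    if tag == 'par':    return ('tensor', dual(f[1]), dual(f[2]))
    if tag == 'plus':   return ('with', dual(f[1]), dual(f[2]))
    if tag == 'with':   return ('plus', dual(f[1]), dual(f[2]))
    if tag == 'ofcourse': return ('whynot', dual(f[1]))
    if tag == 'whynot':   return ('ofcourse', dual(f[1]))
    units = {'one': 'bot', 'bot': 'one', 'zero': 'top', 'top': 'zero'}
    return (units[tag],)

def limp(a, b):
    """Linear implication a ⊸ b, defined as a⊥ ⅋ b."""
    return ('par', dual(a), b)

f = ('tensor', ('atom', 'p'), ('ofcourse', ('atom', 'q')))
assert dual(f) == ('par', ('natom', 'p'), ('whynot', ('natom', 'q')))
assert dual(dual(f)) == f   # linear negation is an involution
```

Each defining equation of (−)⊥ becomes one branch of the function, so the involution property φ⊥⊥ = φ can be checked mechanically on examples.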

## Sequent calculus presentation

One way of fixing the formal meaning of the connectives is by explaining how to prove linear logic propositions. Here we follow Girard's original presentation of CLL as a one-sided sequent calculus (in the style of Schütte and Tait).

A context (Γ, Δ, ...) is a list of propositions A1, ..., An. A (one-sided) sequent is the assertion ⊢ Γ of a context. We give inference rules describing how to build proofs of sequents.

First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange:

    ⊢ Γ
    ─────  exchange  (Γ' a permutation of Γ)
    ⊢ Γ'

(Alternatively we could accomplish the same thing by declaring contexts to be multisets rather than lists.) Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and the number of copies present.

Next we add initial sequents and cuts:

    ─────────  init
    ⊢ A, A⊥

    ⊢ Γ, A    ⊢ A⊥, Δ
    ──────────────────  cut
    ⊢ Γ, Δ

The cut rule can be seen as a way of composing proofs, and initial sequents serve as the units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will maintain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof. Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda-calculus.

Now, we explain the connectives by giving logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification). In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⊗) effectively play the role of left-rules for its dual (⅋). So, we should expect a certain "harmony" between the rule(s) for a connective and the rule(s) for its dual.

### Multiplicatives

The rules for multiplicative conjunction and disjunction:

    ⊢ Γ, A    ⊢ Δ, B
    ──────────────────  ⊗
    ⊢ Γ, Δ, A ⊗ B

    ⊢ Γ, A, B
    ───────────  ⅋
    ⊢ Γ, A ⅋ B

and for their units:

    ─────  1
    ⊢ 1

    ⊢ Γ
    ────────  ⊥
    ⊢ Γ, ⊥

Observe that the rules for multiplicative conjunction and disjunction are admissible for plain conjunction and disjunction under a classical interpretation (i.e., they are admissible rules in LK).
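To make the splitting behaviour of ⊗ concrete, here is a brute-force cut-free proof search for the multiplicative fragment (MLL): the ⊗ rule tries every way of dividing the context between its premises, while the invertible ⅋ and ⊥ rules are applied eagerly. This is an illustrative sketch with an ad-hoc tuple encoding of formulas, not an efficient decision procedure (MLL provability is NP-complete, and this search is exponential):

```python
# Formulas: ('atom', p), ('natom', p) for p and p⊥, plus
# ('tensor', A, B), ('par', A, B), ('one',), ('bot',).

def provable(gamma):
    """Is the one-sided sequent ⊢ gamma derivable in cut-free MLL?"""
    gamma = list(gamma)
    for i, f in enumerate(gamma):            # ⊥ rule (invertible): drop a ⊥
        if f == ('bot',):
            return provable(gamma[:i] + gamma[i+1:])
    for i, f in enumerate(gamma):            # ⅋ rule (invertible): unpack A ⅋ B
        if f[0] == 'par':
            return provable(gamma[:i] + [f[1], f[2]] + gamma[i+1:])
    if gamma == [('one',)]:                  # the 1 rule proves exactly ⊢ 1
        return True
    if len(gamma) == 2:                      # init: ⊢ A, A⊥ on dual atoms
        a, b = gamma
        if {a[0], b[0]} == {'atom', 'natom'} and a[1] == b[1]:
            return True
    for i, f in enumerate(gamma):            # ⊗ rule: try every context split
        if f[0] == 'tensor':
            rest = gamma[:i] + gamma[i+1:]
            for mask in range(2 ** len(rest)):
                left  = [x for j, x in enumerate(rest) if mask >> j & 1]
                right = [x for j, x in enumerate(rest) if not mask >> j & 1]
                if provable(left + [f[1]]) and provable(right + [f[2]]):
                    return True
    return False

# ⊢ a⊥ ⅋ a (i.e. a ⊸ a) is provable; ⊢ a ⊗ a⊥ is not, since the empty
# context cannot be split to supply both premises.
assert provable([('par', ('natom', 'a'), ('atom', 'a'))])
assert not provable([('tensor', ('atom', 'a'), ('natom', 'a'))])
```

Note how the two non-invertible choices (which tensor to decompose, and how to split the remaining context) are searched exhaustively, whereas ⅋ and ⊥ can be applied greedily without losing completeness.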

### Additives

The rules for additive conjunction and disjunction:

    ⊢ Γ, A    ⊢ Γ, B
    ──────────────────  &
    ⊢ Γ, A & B

    ⊢ Γ, A
    ───────────  ⊕ (left)
    ⊢ Γ, A ⊕ B

    ⊢ Γ, B
    ───────────  ⊕ (right)
    ⊢ Γ, A ⊕ B

and for their units:

    ────────  ⊤
    ⊢ Γ, ⊤

    (no rule for 0)

Observe that the rules for additive conjunction and disjunction are again admissible under a classical interpretation. But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.

### Exponentials

The exponentials are used to give controlled access to weakening and contraction. Specifically, we add structural rules of weakening and contraction for ?'d propositions:

    ⊢ Γ
    ─────────  weakening
    ⊢ Γ, ?A

    ⊢ Γ, ?A, ?A
    ─────────────  contraction
    ⊢ Γ, ?A

and use the following logical rules:

    ⊢ ?Γ, A
    ──────────  promotion
    ⊢ ?Γ, !A

    ⊢ Γ, A
    ──────────  dereliction
    ⊢ Γ, ?A

One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, and that there is no longer such a clear symmetry between the duals ! and ?. This situation is remedied in alternative presentations of CLL (e.g., the LU presentation).

## Encoding classical/intuitionistic logic in linear logic

Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, and classical implication as !A ⊸ ?B.[citation needed]

## The resource interpretation

Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, so providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Sir Antony Hoare (1985)'s classical example of the vending machine can be used to illustrate this idea.

Suppose we represent a candy bar by the atomic proposition candy, and a dollar by \$1. To state the fact that a dollar will buy you one candy bar, we might write the implication \$1 ⇒ candy. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B. So, ordinary logic leads us to believe that we can buy the candy bar and keep our dollar! Of course, we can avoid this problem by using more sophisticated encodings, although typically such encodings suffer from the frame problem. However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule. Rather than \$1 ⇒ candy, we express the property of the vending machine as a linear implication \$1 ⊸ candy. From \$1 and this fact, we can conclude candy, but not \$1 ⊗ candy. In general, we can use the linear logic proposition A ⊸ B to express the validity of transforming resource A into resource B.
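A minimal sketch of this resource reading, assuming a multiset of named resources and a hypothetical apply_rule helper (neither is standard notation or library code):

```python
from collections import Counter

def apply_rule(state, consumed, produced):
    """Use a linear implication (⊗ of consumed) ⊸ (⊗ of produced) once:
    the input resources are destroyed, the outputs created."""
    need = Counter(consumed)
    if any(state[r] < n for r, n in need.items()):
        raise ValueError("insufficient resources")
    result = state - need          # consumption: the premises are spent
    result.update(produced)        # production: the conclusion appears
    return result

wallet = Counter({'$1': 2})
after = apply_rule(wallet, ['$1'], ['candy'])   # use $1 ⊸ candy once
assert after == Counter({'$1': 1, 'candy': 1})  # the dollar is gone
```

Unlike the classical reading, the \$1 used by the rule is gone afterwards, and attempting a purchase with an empty wallet fails rather than still "proving" candy.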

Running with the example of the vending machine, let us consider the "resource interpretations" of the other multiplicative and additive connectives. (The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.)

Multiplicative conjunction (A ⊗ B) denotes simultaneous occurrence of resources, to be used as the consumer directs. For example, if you buy a stick of gum and a bottle of soft drink, then you are requesting gum ⊗ drink. The constant 1 denotes the absence of any resource, and so functions as the unit of ⊗.

Additive conjunction (A & B) represents alternative occurrence of resources, the choice of which the consumer controls. If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products. Thus we write \$1 ⊸ (candy & chips & drink). We do not write \$1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together. However, from \$1 ⊸ (candy & chips & drink), we can correctly deduce \$3 ⊸ (candy ⊗ chips ⊗ drink), where \$3 := \$1 ⊗ \$1 ⊗ \$1. The unit ⊤ of additive conjunction can be seen as a wastebasket or garbage collector for irrelevant alternatives. For example, we can write \$3 ⊸ (candy ⊗ ⊤) to express that three dollars will buy you a candy bar and something else (we don't care what).

Additive disjunction (A ⊕ B) represents alternative occurrence of resources, the choice of which the machine controls. For example, suppose the vending machine permits gambling: insert a dollar and the machine may dispense a candy bar, a packet of chips, or a soft drink. We can express this situation as \$1 ⊸ (candy ⊕ chips ⊕ drink). The constant 0 represents a product that cannot be made, and thus serves as the unit of ⊕ (a machine that might produce A or 0 is as good as a machine that always produces A, because it will never succeed in producing a 0).

Multiplicative disjunction (A ⅋ B) is more difficult to gloss in terms of the resource interpretation, although it can be encoded back into linear implication, either as A⊥ ⊸ B or as B⊥ ⊸ A.

## Decidability/complexity of entailment

The entailment relation in full CLL is undecidable[3]. Fragments of CLL are often considered, for which the decision problem is more subtle:

• Multiplicative linear logic (MLL): only the multiplicative connectives. MLL entailment is NP-complete.
• Multiplicative-additive linear logic (MALL): only multiplicatives and additives (i.e., exponential-free). MALL entailment is PSPACE-complete.
• Multiplicative-exponential linear logic (MELL): only multiplicatives and exponentials. The decidability of MELL entailment is currently open.

## Variants of linear logic

Many variations of linear logic arise by further tinkering with the structural rules:

• Affine logic, which forbids contraction but allows global weakening.
• Strict logic or relevant logic, which forbids weakening but allows global contraction.
• Non-commutative logic or ordered logic, which removes the rule of exchange, in addition to barring weakening and contraction. In ordered logic, linear implication divides further into left-implication and right-implication.

Different intuitionistic variants of linear logic have been considered. When based on a single-conclusion sequent calculus presentation, as in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication is treated as a primitive connective. In FILL (Full Intuitionistic Linear Logic) the connectives ⅋, ⊥, and ? are present, linear implication is a primitive connective and, similarly to what happens in intuitionistic logic, all connectives (except linear negation) are independent. There are also first- and higher-order extensions of linear logic, whose formal development is somewhat standard (see first-order logic and higher-order logic).

## Notes

1. ^ Girard, Jean-Yves (1987). "Linear logic". Theoretical Computer Science 50 (1): 1–102. doi:10.1016/0304-3975(87)90045-4.
2. ^ Baez, John; Stay, Mike (2008). Bob Coecke. ed. "Physics, Topology, Logic and Computation: A Rosetta Stone". New Structures of Physics.
3. ^ For this and the below complexity results, see: Lincoln, Patrick; Mitchell, John; Scedrov, Andre; Shankar, Natarajan (1992). "Decision Problems for Propositional Linear Logic". Annals of Pure and Applied Logic 56: 239–311. doi:10.1016/0168-0072(92)90075-B.

## References

• Girard, Jean-Yves. Linear logic, Theoretical Computer Science 50:1, pp. 1–102, 1987.
• Girard, Jean-Yves, Lafont, Yves, and Taylor, Paul. Proofs and Types. Cambridge University Press, 1989.
• Hoare, Sir Antony, 1985. Communicating Sequential Processes. Prentice-Hall International.
• Lafont, Yves, 1993. Introduction to Linear Logic. Lecture notes from TEMPUS Summer School on Algebraic and Categorical Methods in Computer Science, Brno, Czech Republic.
• Troelstra, A.S. Lectures on Linear Logic. CSLI (Center for the Study of Language and Information) Lecture Notes No. 29. Stanford, 1992.
• A. S. Troelstra, H. Schwichtenberg (1996). Basic Proof Theory. In series Cambridge Tracts in Theoretical Computer Science, Cambridge University Press, ISBN 0-521-77911-1.