In mathematics, logic and computer science, type theory is any of several formal systems that can serve as alternatives to naive set theory, or the study of such formalisms in general. In programming language theory, a branch of computer science, type theory can refer to the design, analysis and study of type systems, although some computer scientists limit the term's meaning to the study of abstract formalisms such as typed λ-calculi.
Bertrand Russell invented the first type theory in response to his discovery that Gottlob Frege's version of naive set theory was afflicted with Russell's paradox. This type theory features prominently in Whitehead and Russell's Principia Mathematica. It avoids Russell's paradox by first creating a hierarchy of types, then assigning each mathematical (and possibly other) entity to a type. Objects of a given type are built exclusively from objects of preceding types (those lower in the hierarchy), thus preventing loops.
Alonzo Church, inventor of the lambda calculus, developed a higher-order logic commonly called Church's Theory of Types,^{[1]} in order to avoid the Kleene–Rosser paradox afflicting the original pure lambda calculus. Church's type theory is a variant of the lambda calculus in which expressions (also called formulas or λ-terms) are classified into types, and the types of expressions restrict the ways in which they can be combined. In other words, it is a typed lambda calculus. Today many other such calculi are in use, including Per Martin-Löf's intuitionistic type theory, Jean-Yves Girard's System F and the Calculus of Constructions. In typed lambda calculi, types play a role similar to that of sets in set theory.
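To make the idea concrete, here is a minimal sketch of a type checker for a simply typed lambda calculus with one base type o and function types. It is an illustration, not any particular system from the literature; the term encoding and function names are invented:

```python
# Minimal type checker for a simply typed lambda calculus (illustrative sketch).
# Types are either the base type "o" or pairs (a, b), read as the arrow type a -> b.

def type_of(term, env=None):
    """Return the type of `term` in environment `env`, or raise TypeError."""
    env = env or {}
    kind = term[0]
    if kind == "var":                      # ("var", name)
        return env[term[1]]
    if kind == "lam":                      # ("lam", name, arg_type, body)
        _, name, arg_ty, body = term
        body_ty = type_of(body, {**env, name: arg_ty})
        return (arg_ty, body_ty)           # function type arg_ty -> body_ty
    if kind == "app":                      # ("app", fn, arg)
        fn_ty = type_of(term[1], env)
        arg_ty = type_of(term[2], env)
        if not (isinstance(fn_ty, tuple) and fn_ty[0] == arg_ty):
            raise TypeError("ill-typed application")
        return fn_ty[1]
    raise ValueError(f"unknown term: {kind}")

# The identity function λx:o. x has type o -> o ...
identity = ("lam", "x", "o", ("var", "x"))
assert type_of(identity) == ("o", "o")

# ... while self-application λx:o. x x is rejected: types restrict how
# expressions may be combined, unlike in the untyped calculus.
try:
    type_of(("lam", "x", "o", ("app", ("var", "x"), ("var", "x"))))
except TypeError:
    pass
```

The rejection of self-application in the last lines is exactly the restriction that blocks the paradox-generating terms of the untyped calculus.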
In the 1920s, Leon Chwistek^{[2]} and Frank P. Ramsey^{[3]} noticed that, if one is willing to give up the vicious circle principle, the hierarchy of levels of types in the "ramified theory of types" (see the History section for more on this) can be collapsed. The resulting simplified logic is called the simple theory of types or, more briefly, simple type theory (ST). ST is equivalent to Russell's ramified theory plus the Axiom of reducibility. Detailed formulations of simple type theory were published in the late 1920s and early 1930s by R. Carnap, K. Gödel, W.V.O. Quine, and A. Tarski. In 1940 Alonzo Church (re)formulated it as simply typed lambda calculus.^{[4]}
The following system is Mendelson's (1997, 289–293) ST. The domain of quantification is partitioned into an ascending hierarchy of types, with all individuals assigned a type. Quantified variables range over only one type; hence the underlying logic is first-order logic. ST is "simple" (relative to the type theory of Principia Mathematica) primarily because all members of the domain and codomain of any relation must be of the same type. There is a lowest type, whose individuals have no members and are members of the second lowest type. Individuals of the lowest type correspond to the urelements of certain set theories. Each type has a next higher type, analogous to the notion of successor in Peano arithmetic. While ST is silent as to whether there is a maximal type, a transfinite number of types poses no difficulty. These facts, reminiscent of the Peano axioms, make it convenient and conventional to assign a natural number to each type, starting with 0 for the lowest type. But type theory does not require a prior definition of the naturals.
The symbols peculiar to ST are primed variables and the infix symbol ∈. In any given formula, unprimed variables all have the same type, while primed variables (x′) range over the next higher type. The atomic formulas of ST are of two forms, x = y (identity) and x ∈ y′. The infix symbol ∈ suggests the intended interpretation, set membership.
All variables appearing in the definition of identity and in the axioms of Extensionality and Comprehension range over individuals of one of two consecutive types. Only unprimed (primed) variables, ranging over the "lower" ("higher") type, can appear to the left (right) of '∈'. The first-order formulation of ST rules out quantifying over types. Hence each pair of consecutive types requires its own axiom of Extensionality and of Comprehension, which is possible if Extensionality and Comprehension below are taken as axiom schemata "ranging over" types.
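Following Mendelson's formulation, the definition of identity and the two axiom schemata can be sketched as follows, with x, y ranging over some type n and y′, z′ over type n + 1, and the schemata repeated for every pair of consecutive types (this rendering is a paraphrase, not Mendelson's exact notation):

```latex
% Identity (definition): identical individuals belong to the same collections
x = y \;\leftrightarrow\; \forall z'\,(x \in z' \leftrightarrow y \in z')

% Extensionality: collections with the same members are identical
\forall x\,(x \in y' \leftrightarrow x \in z') \;\rightarrow\; y' = z'

% Comprehension: every formula \Phi(x) determines a collection one type up
% (z' must not occur free in \Phi)
\exists z'\,\forall x\,(x \in z' \leftrightarrow \Phi(x))
```

Because y′ and z′ sit one type above x, Comprehension cannot be instantiated with a collection containing itself, which is how the hierarchy blocks Russell's paradox.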
ST reveals how type theory can be made very similar to axiomatic set theory. Moreover, the more elaborate ontology of ST, grounded in what is now called the "iterative conception of set", makes for axiom schemata that are far simpler than those of conventional set theories, such as ZFC, with simpler ontologies. Set theories whose point of departure is type theory, but whose axioms, ontology, and terminology differ from the above, include New Foundations and Scott–Potter set theory.
Church's type theory has been extensively studied by two of Church's students, Leon Henkin and Peter B. Andrews. Since ST is a higher-order logic, and in higher-order logics one can define propositional connectives in terms of logical equivalence and quantifiers, in 1963 Henkin developed a formulation of ST based on equality, but in which he restricted attention to propositional types. This was simplified later that year by Andrews in his theory Q_{0}.^{[5]} In this respect ST can be seen as a particular kind of higher-order logic, classified by P.T. Johnstone in Sketches of an Elephant as having a lambda-signature, that is, a higher-order signature that contains no relations and uses only products and arrows (function types) as type constructors. Furthermore, as Johnstone put it, ST is "logic-free" in the sense that it contains no logical connectives or quantifiers in its formulae.^{[6]}
Origin of Russell's Theory of Types: In a letter to Gottlob Frege (1902), Russell announced his discovery of the paradox in Frege's Begriffsschrift.^{[7]} Frege promptly responded, acknowledging the problem and proposing a solution in a technical discussion of "levels". To quote Frege: "Incidentally, it seems to me that the expression 'a predicate is predicated of itself' is not exact. A predicate is as a rule a first-level function, and this function requires an object as argument and cannot have itself as argument (subject). Therefore I would prefer to say 'a concept is predicated of its own extension'".^{[8]} He goes on to show how this might work but seems to pull back from it. (In a footnote, van Heijenoort notes that in Frege 1893 Frege had used a symbol (horseshoe) "for reducing second-level functions to first-level functions".^{[9]}) As a consequence of what has become known as Russell's paradox, both Frege and Russell had to quickly emend works that they had at the printers. In an Appendix B that Russell tacked onto his 1903 Principles of Mathematics one finds his "tentative" "theory of types".^{[10]}
The matter plagued Russell for about five years (1903–1908). Willard Quine, in his preface to Russell's (1908a) Mathematical logic as based on the theory of types,^{[11]} presents a historical synopsis of the origin of the theory of types and the "ramified" theory of types: Russell proposed in turn a number of alternatives: (i) abandoning the theory of types (1905), followed by three theories: (ii.1) the zigzag theory, (ii.2) the theory of limitation of size, (ii.3) the no-class theory (1905–1906), and then (iii) readopting the theory of types (1908ff).
Quine observes that Russell's introduction of the notion of "apparent variable" had the following result: "the distinction between 'all' and 'any': 'all' is expressed by the bound ('apparent') variable of universal quantification, which ranges over a type, and 'any' is expressed by the free ('real') variable which refers schematically to any unspecified thing irrespective of type". Quine dismisses this notion of "bound variable" as "pointless apart from a certain aspect of the theory of types"^{[12]}.
Quine explains the ramified theory as follows: "It has been so called because the type of a function depends both on the types of its arguments and on the types of the apparent variables contained in it (or in its expression), in case these exceed the types of the arguments".^{[13]} Stephen Kleene, in his 1952 Introduction to Metamathematics, describes the ramified theory of types in similar terms.
But because the stipulations of the ramified theory would prove (to quote Quine) "onerous", Russell in his 1908 Mathematical logic as based on the theory of types^{[14]} also proposed his axiom of reducibility. By 1910 Whitehead and Russell in their Principia Mathematica would further augment this axiom with the notion of a matrix, a fully extensional specification of a function. From its matrix a function could be derived by the process of "generalization" and vice versa, i.e. the two processes are reversible: (i) generalization from a matrix to a function (by use of apparent variables) and (ii) the reverse process of reduction of type by courses-of-values substitution of arguments for the apparent variable. By this method impredicativity could be avoided.^{[15]}
Eventually Emil Post (1921) would lay waste to Russell's "cumbersome"^{[16]} Theory of Types with his "truth functions" and their truth tables. In the introduction to his 1921 paper, Post places the blame on Russell's notion of the apparent variable: "Whereas the complete theory [of Whitehead and Russell (1910, 1912, 1913)] requires for the enunciation of its propositions real and apparent variables, which represent both individuals and propositional functions of different kinds, and as a result necessitates the cumbersome theory of types, this subtheory uses only real variables, and these real variables represent but one kind of entity, which the authors have chosen to call elementary propositions".
At about the same time Ludwig Wittgenstein made short work of the theory of types in his 1922 work Tractatus Logico-Philosophicus, in which he points out the following in parts 3.331–3.333:
3.331 From this observation we get a further view – into Russell's Theory of Types. Russell's error is shown by the fact that in drawing up his symbolic rules he has to speak of the meanings of his signs.
3.332 No proposition can say anything about itself, because the propositional sign cannot be contained in itself (that is the "whole theory of types").
3.333 A function cannot be its own argument, because the functional sign already contains the prototype of its own argument and it cannot contain itself...
Wittgenstein proposed the truthtable method as well. In his 4.3 through 5.101, Wittgenstein adopts an unbounded Sheffer stroke as his fundamental logical entity and then lists all 16 functions of two variables (5.101).
The notion of matrix-as-truth-table appears as late as the 1940s–1950s in the work of Tarski; e.g., the index of his 1946 book contains the entry "Matrix, see: Truth table".^{[17]}
Russell in his 1920 Introduction to Mathematical Philosophy devotes an entire chapter to "The axiom of Infinity and logical types" wherein he states his concerns: "Now the theory of types emphatically does not belong to the finished and certain part of our subject: much of this theory is still inchoate, confused, and obscure. But the need of some doctrine of types is less doubtful than the precise form the doctrine should take; and in connection with the axiom of infinity it is particularly easy to see the necessity of some such doctrine"^{[18]}.
Russell abandons the axiom of reducibility: In the second edition of Principia Mathematica (1927) he acknowledges Wittgenstein's argument.^{[19]} At the outset of his Introduction he declares "there can be no doubt ... that there is no need of the distinction between real and apparent variables...".^{[20]} Now he fully embraces the matrix notion and declares "A function can only appear in a matrix through its values" (but demurs in a footnote: "It takes the place (not quite adequately) of the axiom of reducibility"^{[21]}). Furthermore, he introduces a new (abbreviated, generalized) notion of "matrix", that of a "logical matrix . . . one that contains no constants. Thus p|q is a logical matrix".^{[22]} Thus Russell has virtually abandoned the axiom of reducibility,^{[23]} but in his last paragraphs he states that from "our present primitive propositions" he cannot derive "Dedekindian relations and well-ordered relations" and observes that if there is a new axiom to replace the axiom of reducibility "it remains to be discovered".^{[24]}
The most obvious application of type theory is in constructing type checking algorithms in the semantic analysis phase of compilers for programming languages.
Type theory is also widely used in theories of the semantics of natural language, especially Montague grammar and its descendants. The most common construction takes the basic types e and t for individuals and truth-values, respectively, and defines the set of types recursively as follows: e and t are types; if a and b are types, then so is ⟨a,b⟩; nothing else is a type.
A complex type ⟨a,b⟩ is the type of functions from entities of type a to entities of type b. Thus one has types like ⟨e,t⟩, which are interpreted as elements of the set of functions from entities to truth-values, i.e. characteristic functions of sets of entities. An expression of type ⟨⟨e,t⟩,t⟩ is a function from sets of entities to truth-values, i.e. a (characteristic function of a) set of sets. This latter type is standardly taken to be the type of natural language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981).
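As an illustrative sketch (the domain and the predicate here are invented), these types can be modeled directly as functions, with ⟨e,t⟩ expressions as characteristic functions of sets of entities and quantifiers at type ⟨⟨e,t⟩,t⟩ mapping such predicates to truth-values:

```python
# Montague-style types modeled as Python functions (illustrative sketch).
# Type e = entities, type t = truth-values, <a,b> = functions from a to b.

entities = {"alice", "bob", "carol"}        # hypothetical domain of type e

# An <e,t> expression: the characteristic function of a set of entities.
def sleeps(x):                              # invented predicate
    return x in {"alice", "bob"}

# <<e,t>,t> expressions: quantifiers take a predicate and return a truth-value.
def everybody(pred):
    return all(pred(x) for x in entities)

def nobody(pred):
    return not any(pred(x) for x in entities)

assert sleeps("alice") is True
assert everybody(sleeps) is False           # carol does not sleep
assert nobody(sleeps) is False              # but somebody does
```

Treating quantifiers as functions over predicates, rather than as entities themselves, is what the type ⟨⟨e,t⟩,t⟩ captures.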
Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types.
Definitions of type system vary, but the following one due to Benjamin C. Pierce roughly corresponds to the current consensus in the programming language theory community:
[A type system is a] tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.^{[25]}
In other words, a type system divides program values into sets called types — this is called a type assignment — and makes certain program behaviors illegal on the basis of the types that are thus assigned. For example, a type system may classify the value "hello" as a string and the value 5 as a number, and prohibit the programmer from adding "hello" to 5 based on that type assignment. In this type system, the program
"hello" + 5
would be illegal. Hence, any program permitted by the type system would be provably free from the erroneous behavior of adding strings and numbers.
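A toy checker along these lines might look as follows; the function names and the rule that addition requires operands of one shared type are illustrative assumptions, not any particular language's semantics:

```python
# Sketch of a type assignment plus a checking rule for "+" (illustrative only).

def type_of(value):
    """Assign a type name to a program value (the type assignment)."""
    if isinstance(value, str):
        return "string"
    if isinstance(value, (int, float)):
        return "number"
    raise TypeError(f"no type for {value!r}")

def check_add(left, right):
    """Permit addition only when both operands have the same type."""
    lt, rt = type_of(left), type_of(right)
    if lt != rt:
        raise TypeError(f"cannot add {lt} to {rt}")
    return lt

assert check_add(2, 3) == "number"

rejected = False
try:
    check_add("hello", 5)   # the ill-typed program from the text
except TypeError:
    rejected = True
assert rejected
```

Note that the check inspects only the types assigned to the operands, never their runtime values, which is what makes the method "syntactic" in Pierce's sense.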
The design and implementation of type systems is a topic nearly as broad as the topic of programming languages itself. In fact, type theory proponents commonly proclaim that the design of type systems is the very essence of programming language design: "Design the type system correctly, and the language will design itself."