Turing completeness

In computability theory, a collection of data-manipulation rules (an instruction set, a programming language, or a cellular automaton) is said to be Turing-complete when the rules, followed in sequence on arbitrary data, can produce the result of any calculation. A device with a Turing-complete instruction set is, by definition, a universal computer. To be Turing-complete, it is enough to have conditional branching (an "if" statement combined with a "goto") and the ability to change memory.
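As a minimal sketch of why these two ingredients suffice, the toy interpreter below (the instruction names and layout are illustrative, not any standard machine) multiplies two numbers using nothing but memory updates and a conditional jump:

    # A toy machine with only memory updates and conditional jumps
    # ("if ... goto"). The instruction names are illustrative, not any
    # standard machine. Given unbounded memory, this style of machine is
    # enough, in principle, for Turing completeness.
    def run(program, mem):
        pc = 0                                 # program counter
        while pc < len(program):
            op = program[pc]
            if op[0] == "set":                 # dst = constant
                _, dst, value = op
                mem[dst] = value
            elif op[0] == "add":               # dst = dst + src
                _, dst, src = op
                mem[dst] += mem[src]
            elif op[0] == "dec":               # dst = dst - 1
                _, dst = op
                mem[dst] -= 1
            elif op[0] == "ifnz_goto":         # if var != 0, jump to target
                _, var, target = op
                if mem[var] != 0:
                    pc = target
                    continue
            pc += 1
        return mem

    # Multiply a by b (for b >= 1) using repeated addition; the only
    # control flow is the conditional jump in the last instruction.
    program = [
        ("set", "result", 0),
        ("add", "result", "a"),    # result += a
        ("dec", "b"),              # b -= 1
        ("ifnz_goto", "b", 1),     # if b != 0, goto instruction 1
    ]
    print(run(program, {"a": 6, "b": 7})["result"])  # 42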

To show that something is Turing-complete, it is enough to show that it can be used to simulate the most primitive computer, since even the simplest computer can be used to simulate the most complicated one. All general-purpose programming languages and modern machine instruction sets are Turing-complete, up to relatively trivial finite-memory issues: Turing-complete machines are defined as having unlimited memory, while real machine instruction sets are usually designed to work with only a limited amount of RAM.

In computability theory, there is a closely related definition:

Turing equivalence: Two computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P.

A Turing-complete system is one that can simulate a Turing machine, but the term is most often used to mean Turing-equivalent to a Turing machine. The reason is that any real-world computer can be simulated by a Turing machine, an observation codified as the Church–Turing thesis.

A computer with access to an infinite tape of data may be more powerful than a Turing machine, since the tape can in principle contain the solution to the halting problem or some other undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot. A machine that is universal except for having access to some Turing oracle is called weakly universal.


Overview

Turing completeness, named after Alan Turing, is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of nature: a universal Turing machine can, in principle, perform any calculation that any other programmable computer is capable of. This says nothing about the effort required to write the program, the time it may take the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.

While truly Turing-complete machines require unlimited amounts of working memory, Turing completeness is often loosely attributed to physical machines or programming languages that would be universal if they had unlimited storage and addressing. All modern computers are Turing-complete in this loose sense; strictly speaking, they are linear-bounded-automaton complete.

Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do any better. From the 1830s to the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but these machines could not perform an "if/goto" and so were not true computers.

In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions that compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction that could be performed by a machine. It soon became clear that a small set of deduction rules is enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem. Gödel's completeness theorem of 1930 implicitly contains a definition of a universal computer, because the logical rules acting on some axioms of arithmetic will eventually prove as a theorem the result of any computation.

The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. This theorem showed that axiom systems were limited when reasoning about the computation which deduces their theorems. Church and then Turing identified the computational core of the incompleteness theorem and were able to produce undecidable problems. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique.

John von Neumann was inspired by these results to design actual working universal computing machines. The first formal programming languages of the 1930s demonstrated that such a computer would be useful. The first actual implementation of a Turing-complete machine appeared in 1941: the program-controlled Z3 of Konrad Zuse. The first machine explicitly designed to be Turing-complete and widely appreciated as being universal was the 1946 ENIAC, which was able to solve a wide range of effective problems in the 1940s, many related to atomic bomb design.

All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident: it is because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be physically built (see the philosophical implications in the Church–Turing thesis).

See the article on computability theory for a long list of systems that are Turing-complete, as well as several systems that are less powerful, and several theoretical systems that are even more powerful than a universal Turing machine.

Computability theory

The first result of computability theory is that it is impossible in general to predict what a Turing-complete program will do over an arbitrarily long time. For example, it is impossible to determine whether such a program will stop, or whether it will continue forever in an infinite loop (see halting problem). It is impossible to determine whether the program will return "true" or whether it will return "false". For any characteristic of the program's eventual output, it is impossible to determine whether this characteristic will hold. This can cause problems in practice when analyzing real-world computer programs. One way to avoid this is to cause programs to stop executing after a fixed period of time, or to limit the power of flow control instructions. Such systems are not Turing-complete by design.
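The classic argument behind the halting problem can be sketched in a few lines. In the hypothetical sketch below, halts is an imagined function that cannot actually exist; the point is that assuming it exists leads to a contradiction:

    # The standard contradiction sketch: `halts` is a hypothetical function
    # that cannot actually be implemented; assume it exists and derive a
    # contradiction.
    def halts(program, argument):
        # Hypothetical: would return True if program(argument) eventually
        # stops, False otherwise. No correct implementation can exist.
        raise NotImplementedError

    def paradox(program):
        # Do the opposite of whatever the supposed tester predicts about
        # running `program` on itself.
        if halts(program, program):
            while True:        # loop forever
                pass
        else:
            return             # halt immediately

    # If halts(paradox, paradox) were True, paradox(paradox) would loop
    # forever; if it were False, paradox(paradox) would halt. Either way
    # `halts` is wrong, so no such function exists.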

Another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping capabilities (that is, a language guaranteeing that every program will halt). Given a guaranteed-halting language, the computable function produced by applying Cantor's diagonal argument to all computable functions in that language is not computable in that language.
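A sketch of that diagonal argument, using clearly hypothetical helpers (an enumeration of every program in some guaranteed-halting language, and an interpreter for that language written in a Turing-complete host language), looks like this:

    # Diagonalization sketch. Both helpers are hypothetical: an enumeration
    # of every program in some guaranteed-halting language, and an
    # interpreter for that language written in a Turing-complete host
    # language (Python stands in for the host here).
    def nth_program(n):
        raise NotImplementedError   # hypothetical enumeration

    def interpret(program, argument):
        raise NotImplementedError   # hypothetical interpreter; always returns,
                                    # because every program in the language halts

    def diagonal(n):
        # Computable in the host language, since interpret() always
        # terminates, yet diagonal differs from the n-th program at input n,
        # so it cannot be expressed in the guaranteed-halting language.
        return interpret(nth_program(n), n) + 1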

Examples

The computational systems (algebras, calculi) that are usually discussed as Turing-complete systems are those intended for studying theoretical computer science. They are intended to be as simple as possible, so that the limits of computation are easier to understand; examples include the lambda calculus, combinatory logic, and the μ-recursive functions.

Most programming languages, conventional and unconventional, are Turing-complete. This includes procedural languages such as FORTRAN and C, functional languages such as Haskell, and logic languages such as Prolog.

The specific language features used to achieve Turing-completeness can be quite different; FORTRAN systems would use loop constructs or possibly even GOTO statements to achieve repetition; Haskell and Prolog, lacking looping almost entirely, would use recursion. Turing-completeness is an abstract statement of capability, rather than a prescription of specific language features used to implement that capability.
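As an informal sketch of the point, the two functions below compute the same sum, one with an explicit loop and one with recursion alone; either mechanism for repetition is enough:

    # Two ways to achieve repetition in Python: an explicit loop (in the
    # spirit of FORTRAN-style code) and recursion with no loop construct at
    # all (in the spirit of Haskell or Prolog). Both compute the same sum.
    def sum_loop(n):
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_recursive(n):
        # Repetition comes only from the function calling itself on a
        # smaller argument.
        if n == 0:
            return 0
        return n + sum_recursive(n - 1)

    print(sum_loop(100), sum_recursive(100))  # 5050 5050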

Turing-completeness in SQL is implemented through proprietary extensions, illustrating one of the reasons why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied and the sooner its lack of completeness becomes perceived as a drawback, encouraging its extension until it is Turing-complete.

The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems lies in their ability to represent most "typical" computer programs while detecting more errors.
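A minimal sketch of why the untyped calculus is Turing-complete, with Python lambdas standing in for untyped terms: unrestricted self-application yields a fixed-point combinator (the Z combinator below, a strict-evaluation variant of the Y combinator), and hence general recursion. A general fixed-point combinator of this kind cannot be expressed in System F, which is strongly normalizing.

    # Python lambdas standing in for untyped lambda-calculus terms. The Z
    # combinator relies on unrestricted self-application x(x), which gives
    # general recursion without any built-in loop or named recursion.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Factorial defined purely as a fixed point.
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))  # 120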

Rule 110 and Conway's Game of Life, both cellular automata, are Turing-complete.
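For illustration, one generation of Rule 110 can be computed as sketched below (a periodic boundary is used here purely for simplicity; the standard construction assumes an infinite row of cells):

    # One generation of the Rule 110 cellular automaton. The rule number
    # 110 = 0b01101110 encodes the next state for each 3-cell neighbourhood
    # (left, centre, right), read as a 3-bit index.
    RULE = 110

    def step(cells):
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            idx = (left << 2) | (centre << 1) | right
            out.append((RULE >> idx) & 1)
        return out

    # Evolve a single live cell for a few generations.
    row = [0] * 20 + [1] + [0] * 20
    for _ in range(10):
        print("".join("#" if c else "." for c in row))
        row = step(row)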

Non-Turing-complete languages

Many computational languages exist which are not Turing complete. One such example is the set of regular languages, most commonly regular expressions, which are generated by finite automata. A more powerful but still not Turing-complete extension of finite automata is the category of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compilation. Additional examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions, or a series of mathematical formulae in a spreadsheet with no cycles. While it is possible to perform many interesting operations in such a system, this fails to be Turing-complete as it is impossible to form loops.
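A regular language can be recognized by a deterministic finite automaton, which has only a fixed, finite set of states and no auxiliary memory. The sketch below recognizes binary strings containing an even number of 1s; the same machinery cannot, for example, match arbitrarily nested parentheses, which is what pushdown automata add:

    # A deterministic finite automaton (DFA) recognizing the regular
    # language of binary strings containing an even number of 1s. It has
    # two states and no auxiliary memory.
    def even_ones(s):
        state = 0                    # 0 = even number of 1s seen, 1 = odd
        for ch in s:
            if ch == "1":
                state = 1 - state
        return state == 0

    print(even_ones("10110"))  # False (three 1s)
    print(even_ones("1001"))   # True  (two 1s)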

There are some languages in which all functions are total and must terminate, such as Charity and Epigram. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types.
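The idea behind such total languages can be illustrated with structural recursion: each recursive call works on a strictly smaller piece of the input, so termination is guaranteed. Python (used below only as a sketch) does not enforce this discipline; languages such as Charity and Epigram do so by construction.

    # Structural recursion: each call receives a strictly smaller list, so
    # the recursion always bottoms out at the empty list.
    def length(xs):
        if not xs:
            return 0
        return 1 + length(xs[1:])

    print(length([3, 1, 4, 1, 5]))  # 5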

Languages based on total functions that can operate on potentially infinite streams of data are currently being investigated by researchers such as David Turner. This would enable Turing-incomplete languages to be used even for tasks such as systems programming.

Data languages

Languages such as XML, JSON, YAML and S-expressions are not Turing complete because they are only used to represent structured data, not describe computation. These are sometimes referred to as markup languages, or more properly as "data description languages".

Non-Turing-complete languages, especially data languages, are desirable in that they can be more easily analyzed and manipulated. This is codified in the web design principle known as the "Rule of Least Power", which recommends avoiding Turing-complete languages and instead using the least powerful language that satisfies a task, so that the resulting data can more easily be reused and transformed.

