Inference is the process of drawing a conclusion by applying rules (of logic, statistics, etc.) to observations or hypotheses, or by interpolating the next logical step in an intuited pattern. The conclusion drawn is also called an inference.
The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations.
Greek philosophers defined a number of syllogisms, correct three-part inferences, that can be used as building blocks for more complex reasoning. We'll begin with the most famous of them all:
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
The reader can check that the premises and conclusion are true, but Logic is concerned with inference: does the truth of the conclusion follow from that of the premises?
The validity of an inference depends on the form of the inference. That is, the word "valid" does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even if the parts are false, and can be invalid even if the parts are true. But a valid form with true premises will always have a true conclusion.
For example, consider the form of the following symbolic argument:
All A are B.
C is A.
Therefore, C is B.
The form remains valid even if all three parts are false:
All apples are blue.
A banana is an apple.
Therefore, a banana is blue.
For the conclusion to be necessarily true, the premises need to be true.
Now we turn to an invalid form.
All A are B.
C is a B.
Therefore, C is an A.
To show that this form is invalid, we demonstrate how it can lead from true premises to a false conclusion.
All apples are fruit. (true)
Bananas are fruit. (true)
Therefore, bananas are apples. (false)
A valid argument with false premises may lead to a false conclusion:
All fat people are Greek.
John Lennon was fat.
Therefore, John Lennon was Greek.
where a valid argument is used to derive a false conclusion from false premises. The inference is valid because it follows the form of a correct inference.
A valid argument can also be used to derive a true conclusion from false premises:
All fat people are musicians.
John Lennon was fat.
Therefore, John Lennon was a musician.
In this case we have two false premises that imply a true conclusion.
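In first-order notation (a standard rendering added here for illustration; it does not appear in the original text), the valid and invalid forms above can be contrasted directly:

\[
\text{Valid:}\quad \forall x\,(A(x) \to B(x)),\ A(c) \;\vdash\; B(c)
\qquad
\text{Invalid:}\quad \forall x\,(A(x) \to B(x)),\ B(c) \;\nvdash\; A(c)
\]

The invalid form fails because \(B(c)\) can hold for reasons other than \(A(c)\); the banana example above is precisely such a counterexample.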
An incorrect inference is known as a fallacy. Philosophers who study informal logic have compiled large lists of them, and cognitive psychologists have documented many biases in human reasoning that favor incorrect conclusions.
Automated logical inference was among the first capabilities provided by AI systems, and it was once an extremely popular research topic, leading to industrial applications in the form of expert systems and, later, business rule engines.
An inference system's job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by such a system to extend the KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are relevant to its task.
A classic example of such a system is Prolog (for "Programming in Logic"), a programming language based on a subset of predicate calculus; its main job is to check whether a given proposition can be inferred from the KB. Returning to the Socrates syllogism, we enter the following into the knowledge base:
mortal(X) :- man(X).
man(socrates).
(Here :- can be read as "if". Generally, if P → Q (if P then Q), then in Prolog we would code Q :- P (Q if P).)
This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates:
?- mortal(socrates).
(where ?- signifies a query: can mortal(socrates) be deduced from the KB using the rules?) gives the answer "Yes".
On the other hand, asking the Prolog system the following:
?- mortal(plato).
gives the answer "No".
This is because Prolog does not know anything about Plato, and hence defaults to any property about Plato being false (the so-called closed-world assumption). Finally, ?- mortal(X). (is anything mortal?) would result in "Yes" (and in some implementations: "Yes": X=socrates).
Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.
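As a small sketch of how the same mechanism scales (the extra facts and the ancestor predicate below are illustrative additions, not from the original text), a KB may also contain recursive rules:

% Facts (illustrative).
man(socrates).
man(plato).

% Rule: all men are mortal.
mortal(X) :- man(X).

% Hypothetical parenthood fact and a recursive ancestry rule.
parent(sophroniscus, socrates).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

With this KB, ?- mortal(plato). now succeeds, since plato is in the KB, and ?- ancestor(sophroniscus, socrates). is derived by backward chaining through the rules.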
Recently, automatic reasoners have found a new field of application in the Semantic Web. Because OWL is grounded in first-order logic, knowledge expressed in one of its variants can be processed logically, i.e., inferences can be drawn from it.
Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find the best explanation. The Bayesian view has a number of desirable features: one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability "probability logic", following E. T. Jaynes).
Bayesians identify probabilities with degrees of belief, with certainly true propositions having probability 1 and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider rain tomorrow very likely.
Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory). A central rule of Bayesian inference is Bayes' theorem, which gave its name to the field.
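As a minimal worked illustration (the numbers are invented for the example), Bayes' theorem updates the probability of a hypothesis \(H\) in light of evidence \(E\):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},
\qquad
P(E) \;=\; P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H).
\]

With a prior \(P(H) = 0.01\), \(P(E \mid H) = 0.9\), and \(P(E \mid \neg H) = 0.05\), the posterior is \(0.009 / (0.009 + 0.0495) \approx 0.15\): the evidence raises the hypothesis from 1% to about 15%, but does not by itself make it probable.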
See Bayesian inference for examples.
Source: André Fuhrmann, "Nonmonotonic Logic".
A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is nonmonotonic. Deductive inference, at least according to the canons of classical logic, is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added.
By contrast, everyday reasoning is mostly nonmonotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worthwhile or even necessary (e.g., in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible: new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction, inference to the best explanation, etc.). More recently, logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
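Prolog's negation as failure provides a compact sketch of nonmonotonic inference (the bird/penguin KB below is a standard textbook illustration, not taken from Fuhrmann's article):

% Declare penguin/1 dynamic so the rule can be queried
% before any penguin fact exists (SWI-Prolog).
:- dynamic penguin/1.

bird(tweety).

% Defeasible rule: birds fly unless known to be penguins.
% \+ Goal succeeds when Goal cannot be proven (negation as failure).
flies(X) :- bird(X), \+ penguin(X).

With this KB, ?- flies(tweety). answers "Yes". Adding the single fact penguin(tweety). makes the same query answer "No": a new premise has undermined a previously reached conclusion, exactly the defeasibility described above.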

