The philosophical treatment of causality extends over millennia. In the Western philosophical tradition, discussion stretches back at least to Aristotle, and the topic remains a staple in contemporary philosophy. Aristotle distinguished between accidental causality (cause preceding effect) and essential causality (one event seen in two ways). Aristotle's example of essential causality is a builder building a house: this single event can be analyzed into the builder building (cause) and the house being built (effect). Aristotle also had a theory that answered the question "why?" in four different ways: by material cause, formal cause, efficient cause, and final cause. These four ways of answering are known as "Aristotle's four causes".
Though cause and effect are typically related to events, candidates include objects, processes, properties, variables, facts, and states of affairs; characterizing the causal relationship can be the subject of much debate.
However, according to Sowa (2000), "relativity and quantum mechanics have forced physicists to abandon these assumptions as exact statements of what happens at the most fundamental levels, but they remain valid at the level of human experience."
In the case of a mis-attribution of a cause to an effect, the error is known as the fallacy of questionable cause.
In his Posterior Analytics and Metaphysics, Aristotle wrote, "All causes are beginnings...", "... we have scientific knowledge when we know the cause...", and "... to know a thing's nature is to know the reason why it is..." This formulation set the guidelines for subsequent causal theories by specifying the number, nature, principles, elements, varieties, and order of causes, as well as the modes of causation. Aristotle's account of the causes of things is a comprehensive model.
Aristotle's theory enumerates the possible causes which fall into several wide groups, amounting to the ways the question "why" may be answered; namely, by reference to the material worked upon (as by an artisan) or what might be called the substratum; to the essence, i.e., the pattern, the form, or the structure by reference to which the "matter" or "substratum" is to be worked; to the primary moving agent of change or the agent and its action; and to the goal, the plan, the end, or the good that the figurative artisan intended to obtain. As a result, the major kinds of causes come under the following divisions:
Additionally, things can be causes of one another, reciprocally causing each other, as hard work causes fitness and vice versa, although not in the same way or by means of the same function: the one is the beginning of change, the other is its goal. (Thus Aristotle first suggested a reciprocal or circular causality, a relation of mutual dependence, action, or influence of cause and effect.) Also, Aristotle indicated that the same thing can be the cause of contrary effects, as its presence and absence may result in different outcomes. In speaking thus he formulated what is currently termed a "causal factor", e.g., atmospheric pressure as it affects chemical or physical reactions.
Aristotle marked two modes of causation: proper (prior) causation and accidental (chance) causation. All causes, proper and incidental, can be spoken of as potential or as actual, particular or generic. The same language applies to the effects of causes, so that generic effects are assigned to generic causes, particular effects to particular causes, and actual effects to operating causes. It is also essential to note that ontological causality does not suggest a temporal relation of before and after between cause and effect; that spontaneity (in nature) and chance (in the sphere of moral actions) are among the causes of effects belonging to efficient causation; and that no incidental, spontaneous, or chance cause can be prior to a proper, real, or underlying cause per se.
All investigations of causality coming later in history would consist in imposing a favorite hierarchy on the order (priority) of causes, such as "final > efficient > material > formal" (Aquinas); in restricting all causality to the material and efficient causes, or to efficient causality alone (deterministic or chance); or in reducing it to regular sequences and correlations of natural phenomena (the natural sciences describing how things happen rather than asking why they happen).
Causality has taken many journeys in the minds of human beings for over 3000 years. Determinism and existentialism are but a few of the manifestations of this journey.
The deterministic world-view is one in which the universe is no more than a chain of events following one after another according to the law of cause and effect. For an incompatibilist who holds this worldview, there is no such thing as "free will". However, compatibilists argue that determinism is compatible with, or even necessary for, free will.
Existentialists have suggested that people believe that while no meaning has been designed in the universe, we each can provide a meaning for ourselves.
Though philosophers have pointed out the difficulties in establishing theories of the validity of causal relations, there remains the plausible example of causation afforded daily: our own ability to be the cause of events. This concept of causation does not prevent seeing ourselves as moral agents.
Theories of causality in Indian philosophy focus mainly on the relationship between cause and effect. The various philosophical schools (darsanas) provide different theories.
The doctrine of satkaryavada affirms that the effect inheres in the cause in some way. The effect is thus either a real or apparent modification of the cause.
The doctrine of asatkaryavada affirms that the effect does not inhere in the cause, but is a new arising.
See Nyaya for some details of the theory of causation in the Nyaya school.
Causes are often distinguished into two types: necessary and sufficient. A third type of cause, which requires neither necessity nor sufficiency in and of itself but which contributes to the effect, is called a "contributory cause".
If x is a necessary cause of y, then the presence of y necessarily implies the presence of x. The presence of x, however, does not imply that y will occur.
If x is a sufficient cause of y, then the presence of x necessarily implies the presence of y. However, another cause z may alternatively cause y. Thus the presence of y does not imply the presence of x.
A cause may be classified as a "contributory cause," if the presumed cause precedes the effect, and altering the cause alters the effect. It does not require that all those subjects which possess the contributory cause experience the effect. It does not require that all those subjects which are free of the contributory cause be free of the effect. In other words, a contributory cause may be neither necessary nor sufficient but it must be contributory.
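As a rough illustration of these definitions, necessity and sufficiency can be checked over boolean observations; the helper names and the oxygen/fire reading below are assumptions for illustration, not part of any standard library.

```python
# Checking necessity and sufficiency over boolean (x, y) observations.
# The record format and the oxygen/fire reading are illustrative only.

def is_necessary(records):
    # x is necessary for y: every occurrence of y is accompanied by x.
    return all(x for x, y in records if y)

def is_sufficient(records):
    # x is sufficient for y: every occurrence of x is accompanied by y.
    return all(y for x, y in records if x)

# (x, y) pairs, read as x = "oxygen present", y = "fire observed".
observations = [(True, True), (True, False), (False, False)]

print(is_necessary(observations))   # True: no fire occurred without oxygen
print(is_sufficient(observations))  # False: oxygen alone did not imply fire
```

A contributory cause, by contrast, would satisfy neither check, yet altering it would still alter the frequency of the effect.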
J. L. Mackie argues that usual talk of "cause" in fact refers to INUS conditions (insufficient but non-redundant parts of unnecessary but sufficient conditions). Consider, for example, a short circuit as a cause of a house burning down, together with the collection of events: the short circuit, the proximity of flammable material, and the absence of firefighters. Together these are unnecessary but sufficient for the house's destruction (since many other collections of events certainly could have destroyed the house). Within this collection, the short circuit is an insufficient but non-redundant part (since the short circuit by itself would not have caused the fire, but the fire would not have happened without it, everything else being equal). So the short circuit is an INUS condition for the house burning down.
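Mackie's analysis can be sketched as a boolean model; the predicate names, and the `arson` alternative cause, are illustrative assumptions:

```python
# Boolean sketch of an INUS condition. The predicates, and the `arson`
# alternative cause, are illustrative assumptions.

def house_burns(short_circuit, flammable_material, no_firefighters, arson=False):
    # Two alternative sufficient condition-sets for the fire.
    return (short_circuit and flammable_material and no_firefighters) or arson

# The full set containing the short circuit is sufficient ...
assert house_burns(True, True, True)
# ... the short circuit alone is insufficient ...
assert not house_burns(True, False, False)
# ... it is non-redundant: removing it defeats its condition-set ...
assert not house_burns(False, True, True)
# ... and that set is unnecessary: arson would also have sufficed.
assert house_burns(False, False, False, arson=True)
```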
Conditional statements are not statements of causality. An important distinction is that statements of causality require the antecedent to precede the consequent in time, whereas conditional statements do not require this temporal order. Confusion commonly arises since many different statements in English may be presented using "If ..., then ..." form (and, arguably, because this form is far more commonly used to make a statement of causality). The two types of statements are distinct, however.
For example, all of the following statements are true when interpreting "If ..., then ..." as the material conditional:
The first is true since both the antecedent and the consequent are true. The second is true because the antecedent is false and the consequent is true. The third is true because both the antecedent and the consequent are false. Although these are trivial examples, and none of them expresses a causal connection between the antecedent and consequent, they are nonetheless all true, because none combines a true antecedent with a false consequent, which is the only combination the material conditional rules out.
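The truth-table behavior just described can be checked mechanically; a minimal sketch:

```python
# The material conditional is false in exactly one row of its truth table.

def material_conditional(a, b):
    return (not a) or b

for a in (True, False):
    for b in (True, False):
        print(f"A={a!s:<5} B={b!s:<5} -> {material_conditional(a, b)}")
# Only the row A=True, B=False evaluates to False.
```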
The ordinary indicative conditional has somewhat more structure than the material conditional. For instance, although the first is the closest, none of the preceding three statements seems true as an ordinary indicative reading. But the sentence
intuitively seems to be true, even though there is no straightforward causal relation in this hypothetical situation between Shakespeare's not writing Macbeth and someone else's actually writing it.
Another sort of conditional, the counterfactual conditional, has a stronger connection with causality, yet even counterfactual statements are not all examples of causality. Consider the following two statements:
In the first case, it would not be correct to say that A's being a triangle caused it to have three sides, since the relationship between triangularity and three-sidedness is that of definition. The property of having three sides actually determines A's state as a triangle. Nonetheless, even when interpreted counterfactually, the first statement is true.
A full grasp of the concept of conditionals is important to understanding the literature on causality. A crucial stumbling block is that conditionals in everyday English are usually used loosely to describe a general situation. For example, "If I drop my coffee, then my shoe gets wet" relates an infinite number of possible events; it is shorthand for "For any fact that would count as 'dropping my coffee', some fact that counts as 'my shoe gets wet' will be true". This general statement will be strictly false if there is any circumstance where I drop my coffee and my shoe doesn't get wet. An "If ..., then ..." statement in logic, however, typically relates two specific events or facts: a specific coffee-dropping did or did not occur, and a specific shoe-wetting did or did not follow. Thus, with explicit events in mind, if I drop my coffee and wet my shoe, then it is true that "If I dropped my coffee, then I wet my shoe", regardless of the fact that yesterday I dropped a coffee in the trash with the opposite effect: the conditional relates specific facts. More counterintuitively, if I didn't drop my coffee at all, then it is also true that "If I drop my coffee then I wet my shoe", regardless of whether I wet my shoe by any means. This usage would not seem counterintuitive were it not for the everyday usage. Briefly, "If A then B" in propositional logic is equivalent to "A implies B", i.e., "not (A and not B)", where A and B are specific propositions; the more familiar general reading of "if A then B" would instead need to be written symbolically using quantifiers ("for all" and "there exists") over events or facts.
A breakdown in causal reasoning is known as the fallacy of questionable cause. This fallacy is committed when a person assumes that one event must cause another just because the events occur together: the conclusion that A is the cause of B is drawn simply because A and B are in regular conjunction, without adequate justification and without ruling out a common cause of both A and B. One factor that makes causal reasoning complicated is that it is not always evident what is the cause and what is the effect. This is particularly true when A and B cause each other by way of system feedback, where cycles tend to reinforce each other. Individual perception of causality can also be clouded by emotions and ideologies. Errors of this kind can often be mitigated by careful study of the temporal sequence of events.
Confusing Cause and Effect is a fallacy that has the following general form:
Other reasons that A and B may regularly occur together, each of which undermines the causal conclusion, are that B causes A, or that some other event, C, causes both A and B. It could also be the case that A and B are not really related by causality but only appear so in a limited study.
In some cases it will be evident that the fallacy is being committed. For example, a person might claim that an illness was caused by a person getting a fever. In this case, it would be quite clear that the fever was caused by illness and not the other way around. In other cases, the fallacy is not always evident. One factor that makes causal reasoning quite difficult is that it is not always evident what is the cause and what is the effect. For example, a problem child might be the cause of the parents being short tempered or the short temper of the parents might be the cause of the child being problematic. The difficulty is increased by the fact that some situations might involve feedback. For example, the parents' temper might cause the child to become problematic and the child's behavior could worsen the parents' temper. In such cases it could be rather difficult to sort out what caused what in the first place.
In order to determine that the fallacy has been committed, it must be shown that the causal conclusion has not been adequately supported and that the person committing the fallacy has confused the actual cause with the effect. Showing that the fallacy has been committed will typically involve determining the actual cause and the actual effect. In some cases, as noted above, this can be quite easy. In other cases it will be difficult. In some cases, it might be almost impossible. Another thing that makes causal reasoning difficult is that people often have very different conceptions of cause and, in some cases, the issues are clouded by emotions and ideologies. For example, people often claim violence on TV and in movies must be censored because it causes people to like violence. Other people claim that there is violence on TV and in movies because people like violence or simply because it exists in the world in the first place. In this case, it is not obvious what the cause really is and the issue is clouded by the fact that emotions often run high on this issue.
The philosopher David Lewis notably suggested that all statements about causality can be understood as counterfactual statements. So, for instance, the statement that John's smoking caused his premature death is equivalent to saying that had John not smoked he would not have prematurely died. (In addition, it need also be true that John did smoke and did prematurely die, although this requirement is not unique to Lewis' theory.)
Translating causal statements into counterfactual ones would only be beneficial if the latter were less problematic than the former. This is indeed the case, as is demonstrated by the structural account of counterfactual conditionals devised by the computer scientist Judea Pearl (2000). This account provides clear semantics and effective algorithms for computing counterfactuals which, in contrast to Lewis's closest-world semantics, do not rely on the ambiguous notion of similarity among worlds. For instance, one can compute unambiguously the probability that John would be alive had he not smoked, given that, in reality, John did smoke and did die. The quest for a counterfactual interpretation of causal statements is therefore justified.
One problem Lewis's theory confronts is causal preemption. Suppose that John did smoke and did in fact die as a result of that smoking. However, there was a murderer who was bent on killing John, and would have killed him a second later had he not first died from smoking. Here we still want to say that smoking caused John's death. This presents a problem for Lewis's theory since, had John not smoked, he still would have died prematurely. Lewis himself discusses this example, and it has received substantial discussion. A structural solution to this problem has been given by Halpern and Pearl (2005).
Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer. As a result, many turn to a notion of probabilistic causation. Informally, A probabilistically causes B if A's occurrence increases the probability of B. This is sometimes interpreted to reflect imperfect knowledge of a deterministic system but other times interpreted to mean that the causal system under study has an inherently chancy nature.
When experiments are infeasible or illegal, the derivation of cause-and-effect relationships from observational studies must rest on some qualitative theoretical assumptions, for example, that symptoms do not cause diseases, usually expressed in the form of missing arrows in causal graphs such as Bayesian networks or path diagrams. The mathematical theory underlying these derivations relies on the distinction between conditional probabilities, as in P(cancer | smoking), and interventional probabilities, as in P(cancer | do(smoking)). The former reads "the probability of finding cancer in a person known to smoke", while the latter reads "the probability of finding cancer in a person forced to smoke". The former is a statistical notion that can be estimated directly in observational studies, while the latter is a causal notion (also called the "causal effect") and is what we estimate in a randomized controlled experiment.
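The gap between the two notions can be demonstrated with a toy simulation (an illustrative assumption, not the smoking example itself) in which a hidden confounder U drives both A and B, so that observing A predicts B even though forcing A has no effect:

```python
# Toy model: hidden confounder U causes both A and B; A has no effect on B.
# All names and probabilities are illustrative assumptions.
import random

random.seed(0)

def sample(do_a=None):
    u = random.random() < 0.5          # hidden confounder
    a = u if do_a is None else do_a    # observationally, A simply tracks U
    b = u                              # B is caused by U alone
    return a, b

# Observational: P(B | A) -- condition on seeing A
obs = [sample() for _ in range(100_000)]
p_b_given_a = sum(b for a, b in obs if a) / sum(a for a, _ in obs)

# Interventional: P(B | do(A)) -- force A regardless of U
do = [sample(do_a=True) for _ in range(100_000)]
p_b_do_a = sum(b for _, b in do) / len(do)

print(p_b_given_a)           # 1.0 in this toy model: seeing A predicts B via U
print(round(p_b_do_a, 2))    # about 0.5: intervening on A does nothing to B
```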
The theory of "causal calculus" permits one to infer interventional probabilities from conditional probabilities in causal Bayesian Networks with unmeasured variables. One very practical result of this theory is the characterization of confounding variables, namely, a sufficient set of variables that, if adjusted for, would yield the correct causal effect between variables of interest. It can be shown that a sufficient set for estimating the causal effect of X on Y is any set of non-descendants of X that d-separate X from Y after removing all arrows emanating from X. This criterion, called "backdoor", provides a mathematical definition of "confounding" and helps researchers identify accessible sets of variables worthy of measurement.
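A minimal numerical sketch of backdoor adjustment, with made-up counts for a single observed confounder Z of X → Y:

```python
# counts[(z, x, y)]: made-up observational data on 100 subjects, where the
# confounder Z raises both the chance of X and the chance of Y.
counts = {
    (0, 0, 0): 30, (0, 0, 1): 10, (0, 1, 0): 5,  (0, 1, 1): 5,
    (1, 0, 0): 5,  (1, 0, 1): 5,  (1, 1, 0): 10, (1, 1, 1): 30,
}
total = sum(counts.values())

def p(pred):
    # Probability of the event described by pred(z, x, y).
    return sum(n for (z, x, y), n in counts.items() if pred(z, x, y)) / total

def p_y1_do_x(x):
    # Backdoor adjustment: P(y=1 | do(X=x)) = sum_z P(y=1 | x, z) * P(z)
    return sum(
        (p(lambda zz, xx, yy: zz == z and xx == x and yy == 1)
         / p(lambda zz, xx, yy: zz == z and xx == x))
        * p(lambda zz, xx, yy: zz == z)
        for z in (0, 1)
    )

naive = p(lambda z, x, y: x == 1 and y == 1) / p(lambda z, x, y: x == 1)
print(round(naive, 3))          # 0.7: the confounded observational estimate
print(round(p_y1_do_x(1), 3))   # 0.625: the adjusted (causal) estimate
```

Adjusting for Z pulls the estimate down because Z inflates the naive conditional probability through the backdoor path X ← Z → Y.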
While derivations in causal calculus rely on the structure of the causal graph, parts of the causal structure can, under certain assumptions, be learned from statistical data. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl (1987) and rests on the distinction between the three possible types of causal substructures allowed in a directed acyclic graph (DAG): a chain (Type 1: X → Y → Z), a fork (Type 2: X ← Y → Z), and a collider (Type 3: X → Y ← Z).
Type 1 and type 2 represent the same statistical dependencies (i.e., X and Z are independent given Y) and are, therefore, indistinguishable. Type 3, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common ancestors, except that one must first condition on those ancestors. Algorithms have been developed to systematically determine the skeleton of the underlying graph and then orient all arrows whose directionality is dictated by the observed conditional independencies.
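The identifiability of the collider pattern can be illustrated with a small simulation (the linear models are an assumption for illustration): X and Z are marginally independent, but conditioning on the collider Y induces dependence between them.

```python
# Linear toy models (an illustrative assumption): X and Z are independent
# causes of the collider Y = X + Z + noise.
import random

random.seed(1)

def corr(u, v):
    # Pearson correlation, computed by hand to stay dependency-free.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

n = 20_000
xs = [random.gauss(0, 1) for _ in range(n)]
zs = [random.gauss(0, 1) for _ in range(n)]
ys = [x + z + random.gauss(0, 0.1) for x, z in zip(xs, zs)]

marginal = corr(xs, zs)                      # near 0: X and Z independent

# Conditioning on the collider (here: selecting a narrow slice of Y)
# induces a strong negative dependence between X and Z.
sel = [(x, z) for x, z, y in zip(xs, zs, ys) if abs(y) < 0.2]
conditional = corr([x for x, _ in sel], [z for _, z in sel])

print(round(marginal, 2), round(conditional, 2))
```

In a chain or fork the pattern is reversed: X and Z are dependent marginally and become independent given Y, which is exactly why the collider is the distinguishable case.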
Alternative methods of structure learning search through the many possible causal structures among the variables, and remove ones which are strongly incompatible with the observed correlations. In general this leaves a set of possible causal relations, which should then be tested by designing appropriate experiments. If experimental data is already available, the algorithms can take advantage of that as well. In contrast with Bayesian Networks, path analysis and its generalization, structural equation modeling, serve better to estimate a known causal effect or test a causal model than to generate causal hypotheses.
For nonexperimental data, causal direction can be hinted at if information about time is available, because (according to many, though not all, theories) causes must precede their effects temporally. This can be exploited in simple linear regression models, for instance, with an analysis of covariance in which baseline and follow-up values are known for a theorized cause and effect. The addition of time as a variable, though not proving causality, is a big help in supporting a pre-existing theory of causal direction. For instance, our degree of confidence in the direction and nature of causality is much greater when supported by data from a longitudinal study than by data from a cross-sectional study.
The Nobel laureate Herbert Simon and the philosopher Nicholas Rescher claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function from one variable (the cause) onto another (the effect). So, given a system of equations and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering. The system of equations must have certain properties; most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate that the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics.
Some theorists have equated causality with manipulability. Under these theories, x causes y just in case one can change x in order to change y. This coincides with commonsense notions of causation, since we often ask causal questions in order to change some feature of the world. For instance, we are interested in knowing the causes of crime so that we might find ways of reducing it.
These theories have been criticized on two primary grounds. First, theorists complain that these accounts are circular: attempting to reduce causal claims to manipulation requires that manipulation be more basic than causal interaction, but describing manipulations in non-causal terms has proved substantially difficult.
The second criticism centers around concerns of anthropocentrism. It seems to many people that causality is some existing relationship in the world that we can harness for our desires. If causality is identified with our manipulation, then this intuition is lost. In this sense, it makes humans overly central to interactions in the world.
Recent accounts attempt to save manipulability theories by no longer claiming to reduce causality to manipulation. These accounts use manipulation as a sign or feature of causation without claiming that manipulation is more fundamental than causation.
Some theorists are interested in distinguishing between causal processes and non-causal processes (Russell 1948; Salmon 1984). These theorists often want to distinguish between a process and a pseudo-process. As an example, a ball moving through the air (a process) is contrasted with the motion of a shadow (a pseudo-process). The former is causal in nature while the latter is not.
Salmon (1984) claims that causal processes can be identified by their ability to transmit an alteration over space and time. An alteration of the ball (a mark by a pen, perhaps) is carried with it as the ball goes through the air. On the other hand an alteration of the shadow (insofar as it is possible) will not be transmitted by the shadow as it moves along.
These theorists claim that the important concept for understanding causality is not causal relationships or causal interactions, but rather identifying causal processes. The former notions can then be defined in terms of causal processes.
In addition, many scientists in a variety of fields disagree that experiments are necessary to determine causality. For example, the link between smoking and lung cancer is considered proven by health agencies of the United States government, but experimental methods (for example, randomized controlled trials) were not used to establish that link. This view has been controversial. In addition, many philosophers are beginning to turn to more relativized notions of causality. Rather than providing a theory of causality in toto, they opt to provide a theory of causality in biology or causality in physics.
The Fountain Theory of Realms of Science, part of the philosophy of science, says that the laws of physics describe cause and effect within physics and can be seen as the underlying causes of biological events, but that the causal connections between the realms of physics and biology are rarely observed directly, and thus are not a major part of either science. For example, laws of gravity and molecular forces are seen as causes of animal motion, but studies of animal behavior such as hunting and sleeping rarely get down to series of events caused by individual molecular forces. Rather, biology mostly references cause-and-effect relationships within biology, using facts such as population, competition, and reproduction. Thus physics and biology exist in their own realms, and the laws of cause and effect within each tend toward internal consistency, and toward whatever consistency is possible given the limited connections to the other realm of science (for example, animals are affected by the force of gravity). Contemporary knowledge in biology is thus expressed in a self-referential way, with limited causal connections from physics to the current state of scientific knowledge. A similarly limited causal connection holds between other scientific realms at succeeding levels, such as physics, chemistry, biology, psychology, and sociology.
Physicists identify four fundamental forces (gravity, the strong and weak nuclear forces, and electromagnetism) as the causes of all other events in the universe. The notion of causality that appears in many different physical theories is hard to interpret in ordinary language. One problem is typified by the earth's interaction with the moon. It is inaccurate to say, "the moon exerts a gravitational pull and then the tides rise." In Newtonian mechanics, gravity is rather a constant observable relationship among masses, and the movement of the tides is an example of that relationship. There are no discrete events or "pulls" that can be said to precede the rising of tides. Interpreting gravity causally is even more complicated in general relativity. Similarly, quantum mechanics is another branch of physics in which the nature of causality is particularly unclear. For statistical generalization, causality has further implications due to its intimate connection with the Second Law of Thermodynamics (see the fluctuation theorem).
A causal system is a system whose output and internal states depend only on current and previous input values. A system that has some dependence on input values from the future (in addition to possible past or current input values) is termed an acausal system, and a system that depends solely on future input values is an anticausal system. Acausal filters, for example, can only exist as digital filters, because such filters must read future values from a memory buffer or a file.
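A minimal sketch of the distinction (the moving-average filters are illustrative, not from the source): a causal filter uses only past and present samples, while an acausal one needs future samples and therefore can only run over a buffered signal.

```python
# Three-tap moving averages over a buffered discrete signal.

def causal_avg(signal, width=3):
    # Causal: each output sample uses only present and past inputs.
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - width + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def acausal_avg(signal):
    # Acausal: the centered window needs signal[i + 1], a future sample,
    # so this filter can only run offline over stored data.
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

impulse = [0, 0, 1, 0, 0]
print(causal_avg(impulse))   # response appears at and after the impulse
print(acausal_avg(impulse))  # response appears one sample *before* it
```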
Austin Bradford Hill built upon the work of Hume and Popper and suggested in his paper "The Environment and Disease: Association or Causation?" that the following aspects of an association be considered in attempting to distinguish causal from noncausal associations in the epidemiological situation:
Consistency refers to phenomena that have been observed in many places at many times by many different observers in different circumstances.
Specificity is where the effect is limited to certain workers in certain specific situations and where there is no other association between the work and other modes of dying.
Temporality is to do with the direction of causality. Which is the cart and which is the horse? This is particularly relevant where slowly progressing disease is concerned. Does the patient's diet cause the disease or does the disease alter the patient's diet?
Biological gradient, otherwise known as a dose-response relationship, is observed when more of the alleged cause is associated with more of the response (or disease). For example, not only do smokers have a higher prevalence of lung cancer than non-smokers, but heavy smokers also have a higher prevalence than light smokers.
Plausibility refers to the scientific credibility of the relationship. In the case of smoking, cigarette smoke is known to contain many established toxins, which makes it a plausible cause of cancer.
Coherence is the idea that the possibility of the causal relationship should not conflict with what is known about the natural history and biology of the disease.
Experimental evidence may be relevant. For example, if it is suspected that dust is causing the disease then an experiment in which dust filters are fitted would be appropriate and, if successful, would bolster the theory that dust was a causal factor in the incidence of the disease.
Analogy is where we reason from similar phenomena, causes and diseases to the disease at hand.
The above theories are attempts to define a reflectively stable notion of causality: the process uses our standard causal intuitions to develop a theory that we would find satisfactory in identifying causes. Another avenue of research is to empirically investigate how people (and non-human animals) learn and reason about causal relations in the world; this approach is taken by work in psychology. It is also possible to tackle causality in surveys with the technique of elaboration: given a relationship between two variables, what can be learned by introducing a third variable into the analysis (Rosenberg, 1968, xiii)? Elaboration is thus an analytic device that reveals different kinds of relationships between variables, e.g., suppression, extraneous, and distorter relations.
In the discussion of history, events are often considered as if in some way being agents that can then bring about other historical events. Thus, the combination of poor harvests, the hardships of the peasants, high taxes, lack of representation of the people, and kingly ineptitude are among the causes of the French Revolution. This is a somewhat Platonic and Hegelian view that reifies causes as ontological entities. In Aristotelian terminology, this use approximates to the case of the efficient cause.
According to law and jurisprudence, legal cause must be demonstrated in order to hold a defendant liable for a crime or a tort (i.e. a civil wrong such as negligence or trespass). It must be proven that causality, or a 'sufficient causal link' relates the defendant's actions to the criminal event or damage in question.
As Fr. John Laux, M.A., puts it,
"In our experience every event (effect) is determined by a cause. That cause is in its turn determined by another cause. But we cannot assume an infinite series of causes, because an infinite series with no beginning involves a contradiction. And even if we did suppose the possibility of an infinite series, that would not explain how causation began. Hence there must be an uncaused Cause, the ultimate Cause of all the events which proceed from it. This ultimate and supreme Cause we call God."
Two questions that can help to focus the argument are:
Critics of this argument point out problems with it. The Big Bang theory holds that the Big Bang is the point at which all dimensions came into existence, the start of both space and time. The question "What was there before the Universe?" therefore makes no sense; the concept of "before" becomes meaningless in a situation without time, and thus the concepts of cause and effect so necessary to the cosmological argument no longer apply. This point has been put forward by Stephen Hawking, who said that asking what occurred before the Big Bang is like asking what is north of the North Pole. However, some cosmologists and physicists do attempt to investigate what could have occurred before the Big Bang, using such scenarios as the collision of branes to provide a cause for the Big Bang.
A question related to this argument is which came first, the chicken or the egg?
For example, if a person always does good deeds, it is believed that he or she will be "rewarded" with fortunate events, such as avoiding a fatal accident or winning the lottery. If he or she consistently behaves antagonistically, it is believed that he or she will be punished with unfortunate events.
According to Buddhism, inequality amongst living beings is due not only to heredity, environment, "nature and nurture", but also to Karma. In other words, it is the result of our own past actions and our own present doings. We ourselves are responsible for our own happiness and misery. We create our own Heaven. We create our own Hell. We are the architects of our own fate.
Perplexed by the seemingly inexplicable, apparent disparity that existed among humanity, a young truth-seeker approached the Buddha and questioned him regarding this intricate problem of inequality:
"What is the cause, what is the reason, O Lord," questioned he, "that we find amongst mankind the short-lived and long-lived, the healthy and the diseased, the ugly and beautiful, those lacking influence and the powerful, the poor and the rich, the low-born and the high-born, and the ignorant and the wise?"
The Buddha’s reply was:
"All living beings have actions (Karma) as their own, their inheritance, their congenital cause, their kinsman, their refuge. It is Karma that differentiates beings into low and high states."
He then explained the cause of such differences in accordance with the law of cause and effect.
Causality is a way to describe how different events relate to one another. Suppose there are two events A and B. If B happens because A happened, then people say that A is the cause of B, or that B is the effect of A.
What looks very simple is in fact a difficult problem. Many people have tried to solve it, and they have come up with different solutions.
Aristotle's account can be used to explain causality. He distinguished different kinds of causes:
Additionally, things can be causes of one another; for example, hard work causes fitness, and vice versa.
Aristotle distinguished two types of causes: proper (prior) causes and accidental (chance) causes. Both types can be spoken of as potential or as actual, particular or generic. The same language applies to the effects of causes, so that generic effects are assigned to generic causes, particular effects to particular causes, and actual effects to operating causes. It is also essential that ontological causality does not imply a temporal relation of before and after between cause and effect; that spontaneity (in nature) and chance (in the sphere of moral actions) are among the causes of effects belonging to efficient causation; and that no incidental, spontaneous, or chance cause can be prior to a proper, real, or underlying cause per se.
Later investigations of causality have consisted in imposing a favorite hierarchy on the order (priority) of causes, such as "final > efficient > material > formal" (Aquinas); in restricting all causality to the material and efficient causes, or to efficient causality alone (deterministic or chance); or in reducing it to regular sequences and correlations of natural phenomena (the natural sciences describing how things happen rather than asking why they happen).
Hume says that if someone is used to always seeing the same things occur in the same order, he will become accustomed to them being in that order. When he sees one event occur, he will expect the other to occur as well:
"I immediately perceive, that they are contiguous in time and place, and that the object we call cause precedes the other we call effect. In no one instance can I go any farther, nor is it possible for me to discover any third relation betwixt these objects. I therefore enlarge my view to comprehend several instances; where I find like objects always existing in like relations of contiguity and succession. At first sight this seems to serve but little to my purpose. The reflection on several instances only repeats the same objects; and therefore can never give rise to a new idea. But upon farther enquiry I find, that the repetition is not in every particular the same, but produces a new impression, and by that means the idea, which I at present examine. For after a frequent repetition, I find, that upon the appearance of one of the objects, the mind is determin’d by custom to consider its usual attendant, and to consider it in a stronger light upon account of its relation to the first object. ‘Tis this impression, then, or determination, which affords me the idea of necessity."
—David Hume, Treatise 1.3.14
Logic is the science of how to build an argument. In logic, two different types of causes are usually distinguished: necessary causes and sufficient causes.
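The distinction can be sketched as a toy check over observed cases (the predicates and data below are invented for illustration): a cause is necessary if the effect never occurs without it, and sufficient if the effect always occurs when it is present.

```python
# Toy illustration: test necessity and sufficiency against observed cases.
# Each case is a pair (cause_present, effect_present).

def is_necessary(cases):
    """Cause is necessary: the effect never occurs without the cause."""
    return all(cause for cause, effect in cases if effect)

def is_sufficient(cases):
    """Cause is sufficient: the effect always occurs with the cause."""
    return all(effect for cause, effect in cases if cause)

# Invented observations of (oxygen present, fire occurs):
cases = [(True, True), (True, False), (False, False)]

print(is_necessary(cases))   # True: fire never occurred without oxygen
print(is_sufficient(cases))  # False: oxygen was once present without fire
```

On this (hypothetical) data, oxygen comes out as a necessary but not a sufficient cause of fire, which matches the standard textbook example of the distinction.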