In statistics, a sample is a subset of a population. Typically, the population is very large, making a census or a complete enumeration of all the values in the population impractical or impossible. The sample represents a subset of manageable size. Samples are collected and statistics are calculated from the samples so that one can make inferences or extrapolations from the sample to the population. This process of collecting information from a sample is referred to as sampling.
The best way to avoid a biased or unrepresentative sample is to select a random sample, also known as a probability sample. A random sample is one in which every individual member of the population has exactly the same probability of being selected as any other. Types of random samples include simple random samples, systematic samples, stratified random samples, and cluster random samples.
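A simple random sample can be drawn directly with Python's standard library; the sketch below uses an invented population of ID numbers purely for illustration.

```python
import random

# Illustrative population: ID numbers of 1,000 individuals.
population = list(range(1000))

# Simple random sample: every individual has the same chance of selection.
# random.sample draws without replacement, so no one is picked twice.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=10)

print(sample)            # ten distinct members of the population
print(len(set(sample)))  # confirms there are no duplicates
```

The fixed seed is only there to make the example repeatable; in a real study the draw would not be seeded.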
A sample that is not random is called a nonrandom sample or a nonprobability sample. Some examples of nonrandom samples are convenience samples, judgment samples, purposive samples, quota samples, snowball samples, and quadrature nodes in quasi-Monte Carlo methods.
In mathematical terms, given a random variable X with distribution F, a random sample of length n = 1, 2, 3, ... is a set of n independent, identically distributed (iid) random variables with distribution F.^{[1]}
A sample concretely represents n experiments in which we measure the same quantity. For example, if X represents the height of an individual and we measure n individuals, X_{i} will be the height of the ith individual. Note that a sample of random variables (i.e. a set of measurable functions) must not be confused with the realizations of these variables (which are the values that these random variables take). In other words, X_{i} is a function representing the measurement at the ith experiment and x_{i} = X_{i}(ω) is the value we actually get when making the measurement.
The concept of a sample thus includes the process of how the data are obtained (that is, the random variables). This is necessary so that mathematical statements can be made about the sample and statistics computed from it, such as the sample mean and covariance.
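The distinction between the random variables X_{i} and their realizations x_{i} can be made concrete in code. In the sketch below, each call to the generator plays the role of one experiment X_{i}, and the number it returns is the realization x_{i} = X_{i}(ω); the distribution parameters are invented for illustration.

```python
import random
import statistics

# Suppose X is the height of an individual, modelled here (illustratively)
# as Normal with mean 170 cm and standard deviation 10 cm.
mu, sigma = 170.0, 10.0
n = 100

random.seed(0)
# Each call to random.gauss is one experiment X_i; the returned float
# is the realization x_i = X_i(omega).
x = [random.gauss(mu, sigma) for _ in range(n)]

# Statistics computed from the sample:
sample_mean = statistics.mean(x)
sample_var = statistics.variance(x)  # unbiased sample variance
print(sample_mean, sample_var)
```

With n = 100 draws, the sample mean and variance will typically land close to the underlying parameters, which is exactly the kind of inference from sample to population described above.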
In statistics, a sample is part of a population. The sample is carefully chosen. It should represent the whole population fairly, without bias. The reason samples are needed is that populations may be so large that counting all the individuals may not be possible or practical.
Therefore, solving a problem in statistics usually starts with sampling.^{[1]} Sampling is about choosing which data to take for later analysis. As an example, suppose the pollution of a lake is to be analysed for a study. Depending on where the samples of water are taken, the study can reach different results. As a general rule, samples need to be random. This means the chance or probability of selecting one individual is the same as the chance of selecting any other individual.
In practice, random samples are always taken by means of a well-defined procedure. A procedure is a set of rules, a sequence of steps written down on paper and followed to the letter. Even so, some bias may remain in the sample. Consider the problem of designing a sample to predict the result of an election poll. All known methods have their problems, and the results of an election are often different from predictions based on a sample. If you collect opinions by using telephones, or by meeting people in the street, the sample always has bias. Therefore, in cases like this a completely neutral sample is never possible.^{[2]} In such cases a statistician will think about how to measure the amount of bias, and there are ways to estimate this.
A similar situation occurs when scientists measure a physical property, say the weight of a piece of metal, or the speed of light.^{[3]} If we weigh an object with sensitive equipment, we will get minutely different results each time. No system of measurement is ever perfect. We get a series of estimates, each one being a measurement. These are samples, with a certain degree of error. Statistics is designed to describe error, and carry out analysis on this kind of data.
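Repeated measurement with error can be sketched as follows; the "true" weight and the noise level are invented for illustration, and the standard error of the mean shows how the estimate improves over a single weighing.

```python
import random
import statistics

true_weight = 50.000  # grams; the unknown quantity being measured (illustrative)
random.seed(1)

# Each weighing returns the true value plus a small random error.
measurements = [true_weight + random.gauss(0, 0.002) for _ in range(20)]

estimate = statistics.mean(measurements)
spread = statistics.stdev(measurements)        # describes the error of one measurement
std_error = spread / len(measurements) ** 0.5  # uncertainty of the mean itself
print(estimate, spread, std_error)
```

Averaging 20 weighings shrinks the uncertainty of the estimate by a factor of roughly √20 relative to a single weighing.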
If a population has obvious subpopulations, then each of the subpopulations needs to be sampled. This is called stratified sampling.
Suppose an experiment set out to sample the incomes of adults. Obviously, the incomes of college graduates might differ from those of non-graduates. Now suppose the number of male graduates was 30% of all adult males (imaginary figures). Then you would arrange for 30% of the total sample to be male graduates picked at random, and 70% of the total to be male non-graduates. Repeat the process for females, because the percentage of female graduates is different from males. That gives a sample of the adult population stratified by sex and college education. The next step would be to divide each of your subpopulations by age groups, because (for example) graduates might gain more income relative to non-graduates in middle age.
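The proportional allocation described above can be sketched in Python; the sub-population sizes and ID strings below are invented for illustration.

```python
import random

random.seed(7)

# Imaginary sub-populations of adult males (IDs are illustrative).
graduates = [f"grad-{i}" for i in range(300)]      # 30% of adult males
non_graduates = [f"non-{i}" for i in range(700)]   # 70% of adult males

total_sample_size = 100

# Proportional allocation: each stratum contributes in proportion to its size.
n_grad = round(total_sample_size * len(graduates) / 1000)  # 30
n_non = total_sample_size - n_grad                         # 70

# Draw a simple random sample within each stratum.
sample = random.sample(graduates, n_grad) + random.sample(non_graduates, n_non)
print(len(sample))  # 100
```

The same splitting step would then be repeated within each stratum for the other variables (sex, age group), as the text describes.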
Another type of stratified sample deals with variation. Here larger samples are taken from the more variable subpopulations, so that summary statistics, such as the means and standard deviations, are more reliable.
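Allocating by variability in this way is often called Neyman allocation: stratum h receives a share of the sample proportional to N_h × σ_h, the product of its size and its standard deviation. A minimal sketch, with stratum sizes and standard deviations invented for illustration:

```python
# Neyman allocation: n_h proportional to N_h * sigma_h, where N_h is the
# stratum size and sigma_h its standard deviation (figures are illustrative).
strata = {
    "stable":   {"N": 800, "sigma": 5.0},   # large but homogeneous stratum
    "variable": {"N": 200, "sigma": 40.0},  # small but highly variable stratum
}
n_total = 100

weights = {h: s["N"] * s["sigma"] for h, s in strata.items()}
total_w = sum(weights.values())
allocation = {h: round(n_total * w / total_w) for h, w in weights.items()}

print(allocation)  # the variable stratum gets a larger share than its size alone would give
```

Note how the smaller but more variable stratum receives the larger share of the sample, which is exactly the idea stated above.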
