In digital signal processing, quantization is the process of approximating ("mapping") a continuous range of values (or a very large set of possible discrete values) by a relatively small, finite set of discrete symbols or integer values; the symbols themselves may still stand for values drawn from a continuous range. For example, rounding a real number in the interval [0,100] to the nearest integer maps an uncountable set of values onto only 101 possible outputs.
In other words, quantization can be described as a mapping that represents a finite continuous interval I = [a,b] of the range of a continuous-valued signal with a single number c, which also lies in that interval. For example, rounding to the nearest integer (rounding ½ up) replaces each interval [c − .5, c + .5) with the integer c. After quantization the signal takes on only a finite set of values, which can then be encoded, for example with binary techniques.
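This rounding rule can be sketched directly in Python (the sample values are arbitrary; the half-up tie rule matches the interval [c − .5, c + .5) described above):

```python
import numpy as np

# Round-to-nearest-integer quantizer with ties rounding up, so the
# interval [c - .5, c + .5) maps to the integer c.
def round_half_up(x):
    return np.floor(np.asarray(x) + 0.5).astype(int)

samples = [3.2, 4.5, 99.49, 0.0]
codes = round_half_up(samples)   # 3, 5, 99, 0
```

Note that the built-in `round` and `np.round` round ties to even, so the explicit floor-based form is used here to get the half-up behavior the text describes.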
In signal processing, quantization refers to approximating the output values by a discrete and finite set, while replacing the input (time) domain by a discrete set is called discretization and is done by sampling: the resulting sampled signal is called a discrete-time signal, and it need not be quantized (its values can still be continuous). To produce a digital signal (discrete time and discrete values), one both samples (discrete time) and quantizes the resulting sample values (discrete values).
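The distinction can be made concrete in a short sketch (the sine wave, the 8-sample rate, and the step size are illustrative assumptions):

```python
import numpy as np

# Sampling (discretization): time becomes discrete, values stay continuous.
t = np.arange(0, 1, 1 / 8)           # 8 sample instants in [0, 1)
x = np.sin(2 * np.pi * t)            # discrete-time, continuous-valued signal

# Quantization: the values become discrete too, giving a digital signal.
delta = 0.25                         # step size, assumed for illustration
xq = delta * np.round(x / delta)     # discrete-time, discrete-valued
```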
In electronics, adaptive quantization is a quantization process that varies the step size based on the changes of the input signal, as a means of efficient compression. Two approaches commonly used are forward adaptive quantization and backward adaptive quantization.
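A backward-adaptive step-size rule can be sketched as follows. The 2-bit mid-rise quantizer and the multipliers 0.8/1.6 are illustrative assumptions, not a particular standard; Jayant-style multiplier adaptation is one common backward-adaptive scheme:

```python
def jayant_quantize(samples, step=1.0, shrink=0.8, expand=1.6):
    # 2-bit mid-rise quantizer: code i in {0, 1, 2, 3} reconstructs to
    # (i - 1.5) * step. Outer codes (0, 3) mean the input exceeded the
    # inner cells, so the step expands; inner codes (1, 2) shrink it.
    # Because adaptation uses only the transmitted codes, the decoder
    # can track the step size without side information ("backward");
    # forward adaptation would instead send the step explicitly.
    codes, recon = [], []
    for x in samples:
        i = min(3, max(0, int(x / step + 2)))
        codes.append(i)
        recon.append((i - 1.5) * step)
        step *= expand if i in (0, 3) else shrink
    return codes, recon
```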
In signal processing, quantization is the necessary and natural follower of the sampling operation. It is necessary because, in practice, digital computers with general-purpose CPUs are used to implement DSP algorithms, and since computers can only process finite-word-length (finite-resolution/precision) quantities, any infinite-precision continuous-valued signal must be quantized to a finite resolution before it can be represented (stored) in CPU registers and memory.
Note that it is not the continuity of an analog signal's values that prevents binary encoding, but rather the existence of infinitely many such values, which follows from the definition of continuity and would therefore require infinitely many bits to represent. For example, we can design a quantizer that represents a signal with a single bit (just two levels), where one level is π = 3.1415... (say, encoded as 1) and the other level is e = 2.7183... (say, encoded as 0). The quantized values are infinite-precision irrational numbers, yet because there are only two levels, the quantizer's output can be represented by a single binary symbol. It is therefore not the discreteness of the quantized values but their finiteness that enables encoding with a finite number of bits.
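The two-level example above can be written out directly (the midpoint threshold is an assumed decision boundary, chosen only for illustration):

```python
import math

# 1-bit quantizer whose two representation levels are the irrational
# numbers e and pi: the levels have "infinite precision", but one bit
# suffices because there are only finitely many (two) of them.
THRESHOLD = (math.e + math.pi) / 2   # assumed decision boundary

def one_bit_quantize(x):
    return (1, math.pi) if x >= THRESHOLD else (0, math.e)

bit, level = one_bit_quantize(3.0)   # bit = 1, level = pi
```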
In theory there is no relation between the quantization values and the binary code words used to encode them (other than a table that records the mapping, as exemplified above). In practice, however, we often choose code words whose numerical values as binary numbers are related to the quantization levels they encode. This connects the two observations above: if we wish to process the output of a quantizer within a DSP/CPU system (which is almost always the case), then we cannot allow the representation levels of the quantizer to take arbitrary values, but only values from a restricted range that fit in computer registers.
A quantizer is identified by its number of levels M, its decision boundaries {di}, and the corresponding representation values {ri}.
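In code, such a quantizer is fully described by those two sets (the function name and the M = 3 example are illustrative):

```python
import bisect

# Scalar quantizer with M levels: M - 1 interior decision boundaries
# {d_i} and M representation values {r_i}. An input falling in the
# i-th cell is replaced by r_i.
def quantize(x, boundaries, reps):
    assert len(reps) == len(boundaries) + 1   # M values, M - 1 boundaries
    return reps[bisect.bisect_right(boundaries, x)]

# M = 3 example with cells (-inf, 0), [0, 1), [1, inf)
level = quantize(0.7, boundaries=[0.0, 1.0], reps=[-1.0, 0.5, 2.0])
```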
The output of a quantizer has two important properties: 1) a distortion resulting from the approximation and 2) a bit rate resulting from the binary encoding of its levels. The quantizer design problem is therefore a rate-distortion optimization problem.
If we are restricted to fixed-length codes for encoding the output levels (the practical case), the problem reduces to distortion minimization alone.
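The reason the rate drops out of the optimization is that, with fixed-length codes, the rate depends on M alone:

```python
import math

# Fixed-length encoding of M levels costs ceil(log2 M) bits per sample
# regardless of how often each level occurs, so for a given M only the
# distortion term remains to be minimized.
def fixed_length_rate(M):
    return math.ceil(math.log2(M))

fixed_length_rate(8)   # 3 bits/sample
fixed_length_rate(5)   # also 3: five levels do not fit in 2 bits
```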
Designing a quantizer usually means finding the sets {di} and {ri} such that a measure of optimality is satisfied, such as MMSQE (minimum mean squared quantization error).
Given the number of levels M, the optimal quantizer that minimizes the MSQE with respect to the given signal statistics is called the Lloyd-Max quantizer, which is in general a non-uniform quantizer.
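A sketch of the Lloyd-Max iteration, run on an empirical sample used as a stand-in for the signal statistics (the quantile initialization and fixed iteration count are assumptions; a production version would check convergence):

```python
import numpy as np

def lloyd_max(samples, M, iters=50):
    # Alternate the two optimality conditions: each representation value
    # moves to the centroid (mean) of its cell, and each decision
    # boundary moves to the midpoint of adjacent representation values.
    samples = np.sort(np.asarray(samples, dtype=float))
    reps = np.quantile(samples, (np.arange(M) + 0.5) / M)  # initial levels
    for _ in range(iters):
        bounds = (reps[:-1] + reps[1:]) / 2       # nearest-neighbor boundaries
        cells = np.searchsorted(bounds, samples)  # cell index for each sample
        for i in range(M):                        # centroid condition
            members = samples[cells == i]
            if members.size:
                reps[i] = members.mean()
    return reps, bounds
```

On data whose distribution is non-uniform, the resulting cells come out with unequal widths, matching the statement above that the optimal quantizer is non-uniform in general.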
The most common quantizer type is the uniform quantizer. It is simple to design and implement, and in most cases it suffices to get satisfactory results. However, by the very nature of the design process, a given quantizer produces optimal results only for the assumed signal statistics. Since it is very difficult to predict these correctly in advance, a static design never achieves truly optimal performance when the input statistics deviate from the design assumptions. The only remedy is an adaptive quantizer.
At the most fundamental level, some physical quantities are quantized. This is a result of quantum mechanics (see Quantization (physics)). Signals may be treated as continuous for mathematical simplicity by treating these tiny quantization steps as negligible.
In any practical application, this inherent quantization is irrelevant for two reasons. First, it is overshadowed by signal noise, the intrusion of extraneous phenomena present in the system upon the signal of interest. The second, which appears only in measurement applications, is the inaccuracy of instruments. Thus, although all physical signals are intrinsically quantized, the error introduced by modeling them as continuous is vanishingly small.

