# Overlap-add method


The overlap-add method (OA, OLA) is an efficient way to evaluate the discrete convolution of a very long signal x[n] with a finite impulse response (FIR) filter h[n]:

\begin{align} y[n] = x[n] * h[n] \ \stackrel{\mathrm{def}}{=} \ \sum_{m=-\infty}^{\infty} h[m] \cdot x[n-m] = \sum_{m=1}^{M} h[m] \cdot x[n-m], \end{align}

where h[m]=0 for m outside the region [1, M].
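As a concrete reference point, the sum above can be evaluated directly. A minimal Python sketch (using 0-based indexing, unlike the 1-based convention in this article):

```python
def direct_convolution(x, h):
    """y[n] = sum_m h[m] * x[n-m]; 0-based indices, FIR filter h of length M."""
    Nx, M = len(x), len(h)
    y = [0.0] * (Nx + M - 1)          # a linear convolution has Nx + M - 1 samples
    for n in range(len(y)):
        for m in range(M):
            if 0 <= n - m < Nx:       # h[m] = 0 and x[n-m] = 0 outside their supports
                y[n] += h[m] * x[n - m]
    return y

print(direct_convolution([1, 2, 3], [1, 1]))  # [1.0, 3.0, 5.0, 3.0]
```

This brute-force evaluation costs on the order of Nx·M multiplications, which is what the overlap-add method improves on.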

The concept is to divide the problem into multiple convolutions of h[n] with short segments of x[n]:

$x_k[n] \ \stackrel{\mathrm{def}}{=} \begin{cases} x[n+kL] & n=1,2,\ldots,L \\ 0 & \textrm{otherwise}, \end{cases}$

where L is an arbitrary segment length. Then:

$x[n] = \sum_{k} x_k[n-kL],\,$

and y[n] can be written as a sum of short convolutions:

\begin{align} y[n] = \left(\sum_{k} x_k[n-kL]\right) * h[n] &= \sum_{k} \left(x_k[n-kL]* h[n]\right) \\ &= \sum_{k} y_k[n-kL], \end{align}

where  $y_k[n] \ \stackrel{\mathrm{def}}{=} \ x_k[n]*h[n]\,$  is zero outside the region [1,L+M-1].  And for any parameter  $N\ge L+M-1,\,$  it is equivalent to the $N\,$-point circular convolution of $x_k[n]\,$ with $h[n]\,$  in the region [1,N].
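The decomposition above can be checked numerically: convolving each segment with h[n] separately and summing the shifted results reproduces the full convolution exactly. A small Python sketch (0-based indexing; `linconv` and `blockwise_convolution` are illustrative helper names):

```python
def linconv(a, b):
    # direct linear convolution, used as the reference result
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def blockwise_convolution(x, h, L):
    # convolve each length-L segment x_k with h, shift by the offset kL, and sum
    y = [0.0] * (len(x) + len(h) - 1)
    for k in range(0, len(x), L):
        yk = linconv(x[k:k + L], h)   # short convolution, length at most L + M - 1
        for n, v in enumerate(yk):
            y[k + n] += v             # y_k shifted by the segment offset k
    return y

x, h = [1, 2, 3, 4, 5, 6, 7], [1, -1, 2]
assert blockwise_convolution(x, h, L=3) == linconv(x, h)
```

The segment outputs overlap by M − 1 samples, which is where the method's name comes from.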

The advantage is that the circular convolution can be computed very efficiently as follows, according to the circular convolution theorem:

$y_k[n] = \textrm{IFFT}\left(\textrm{FFT}\left(x_k[n]\right)\cdot\textrm{FFT}\left(h[n]\right)\right)$

(Eq.1)

where FFT and IFFT refer to the fast Fourier transform and inverse fast Fourier transform, respectively, evaluated over N discrete points.
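Eq.1 can be illustrated in a few lines of Python. Here an O(N²) DFT stands in for the FFT/IFFT pair, since both compute exactly the same transform; the function names are illustrative:

```python
import cmath

def dft(a, inverse=False):
    # O(N^2) DFT; an FFT computes the same N values in O(N log N)
    N = len(a)
    s = 1 if inverse else -1
    out = [sum(a[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def circular_convolution(x, h):
    """N-point circular convolution via the circular convolution theorem (Eq.1)."""
    X, H = dft(x), dft(h)
    y = dft([a * b for a, b in zip(X, H)], inverse=True)
    return [round(v.real, 9) for v in y]   # round away floating-point noise

# h zero-padded to N = 4, so y[n] = x[n] + x[(n-1) mod 4]
print(circular_convolution([1, 2, 3, 4], [1, 1, 0, 0]))  # [5.0, 3.0, 5.0, 7.0]
```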

## The algorithm

Figure 1: Overlap-add Method.

Fig. 1 sketches the idea of the overlap-add method. The signal x[n] is first partitioned into non-overlapping segments xk[n]. The discrete Fourier transform of each output segment yk[n] is evaluated by multiplying the FFT of xk[n] with the FFT of h[n]. After recovering yk[n] by inverse FFT, the output signal is reconstructed by overlapping and adding the yk[n], as shown in the figure. The overlap arises from the fact that a linear convolution is always longer than the original sequences. Note that L should be chosen so that N = L + M − 1 is a power of 2, which makes the FFT computation efficient. Pseudocode for the algorithm follows:

   Algorithm 1 (OA for linear convolution)
Evaluate the best value of N and L
H = FFT(h,N)       (zero-padded FFT)
y = zeros(1,Nx+M-1)
i = 1
while i <= Nx
    il = min(i+L-1,Nx)
    yt = IFFT( FFT(x(i:il),N) .* H, N)
    k  = min(i+N-1,Nx+M-1)
    y(i:k) = y(i:k) + yt(1:k-i+1)    (add the overlapped output blocks)
    i = i+L
end
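Algorithm 1 can be sketched in Python as follows (0-based indexing; a naive DFT stands in for the FFT/IFFT for self-containment, and L is passed in rather than optimized):

```python
import cmath

def dft(a, inverse=False):
    # O(N^2) DFT standing in for FFT/IFFT; the results are identical
    N = len(a)
    s = 1 if inverse else -1
    out = [sum(a[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def overlap_add(x, h, L):
    """Linear convolution of x with FIR filter h via Algorithm 1."""
    M = len(h)
    N = L + M - 1                              # transform length, N >= L + M - 1
    H = dft(h + [0.0] * (N - M))               # zero-padded transform of the filter
    y = [0.0] * (len(x) + M - 1)
    for i in range(0, len(x), L):
        xk = x[i:i + L]
        Xk = dft(xk + [0.0] * (N - len(xk)))   # zero-pad the block to length N
        yk = dft([a * b for a, b in zip(Xk, H)], inverse=True)
        for n in range(min(N, len(y) - i)):    # add the overlapped output block
            y[i + n] += yk[n].real
    return [round(v, 9) for v in y]            # round away floating-point noise

print(overlap_add([1, 2, 3, 4, 5], [1, 1], L=2))  # [1.0, 3.0, 5.0, 7.0, 9.0, 5.0]
```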


## Circular convolution with the overlap-add method

When the sequence x[n] is periodic with period Nx, then y[n] is also periodic, with the same period.  To compute one period of y[n], Algorithm 1 can first be used to convolve h[n] with just one period of x[n].  In the region M ≤ n ≤ Nx, the resulting y[n] sequence is correct.  If the M−1 values that follow (y[Nx+1] through y[Nx+M−1]) are added to the first M−1 values, then the region 1 ≤ n ≤ Nx represents the desired circular convolution. The modified pseudocode is:

   Algorithm 2 (OA for circular convolution)
Evaluate Algorithm 1
y(1:M-1) = y(1:M-1) + y(Nx+1:Nx+M-1)
y = y(1:Nx)
end
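The wrap-around step of Algorithm 2 can be sketched as follows (0-based indexing; for brevity a direct linear convolution stands in for Algorithm 1, which would be used in practice):

```python
def linconv(x, h):
    # direct linear convolution; Algorithm 1 would replace this in practice
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def circular_via_overlap_add(x, h):
    """One period of the circular convolution, per Algorithm 2."""
    Nx, M = len(x), len(h)
    y = linconv(x, h)                 # length Nx + M - 1
    for n in range(M - 1):            # fold the tail back onto the head
        y[n] += y[Nx + n]
    return y[:Nx]

print(circular_via_overlap_add([1, 2, 3, 4], [1, 1]))  # [5.0, 3.0, 5.0, 7.0]
```

The result matches the N-point circular convolution computed directly via Eq.1.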


## Cost of the overlap-add method

The cost of the convolution can be measured by the number of complex multiplications involved in the operation. The major computational effort is due to the FFT, which for a radix-2 algorithm applied to a signal of length N requires roughly $C=\frac{N}{2}\log_2 N$ complex multiplications. The number of complex multiplications of the overlap-add method is:

$C_{OA}=\left\lceil \frac{N_x}{N-M+1}\right\rceil N\left(\log_2 N+1\right)\,$

$C_{OA}$ accounts for the FFT, the filter multiplication, and the IFFT.

The additional cost of the M−1 additions involved in the circular version of the overlap-add method is usually very small and can be neglected for the sake of simplicity. The best value of N can be found by a numerical search for the minimum of $C_{OA}\left(N\right)=C_{OA}\left(2^m \right)$, spanning the integer m over the range $\log_2\left(M\right)\le m\le\log_2 \left(N_x\right)$. Since N is a power of two, the FFTs of the overlap-add method are computed efficiently. Once N has been determined, the optimal partitioning of x[n] has L = N − M + 1. For comparison, the cost of the standard circular convolution of x[n] and h[n] is:

$C_S=N_x\left(\log_2 N_x+1\right)\,$
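The numerical search for the optimal N described above amounts to evaluating the cost formula at each power of two in the stated range; a short sketch (the function names `c_oa`, `c_s`, and `best_N` are illustrative):

```python
import math

def c_oa(Nx, M, N):
    # complex multiplications of the overlap-add method, per the formula above
    return math.ceil(Nx / (N - M + 1)) * N * (math.log2(N) + 1)

def c_s(Nx):
    # complex multiplications of the standard circular convolution
    return Nx * (math.log2(Nx) + 1)

def best_N(Nx, M):
    # scan N = 2^m over log2(M) <= m <= log2(Nx) and keep the cheapest
    ms = range(math.ceil(math.log2(M)), int(math.log2(Nx)) + 1)
    return min((c_oa(Nx, M, 2 ** m), 2 ** m) for m in ms)[1]

# for example, with Nx = 10000 and M = 50 the search selects N = 512
print(best_N(10000, 50))  # 512
```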

Hence the cost of the overlap-add method scales almost as $O\left(N_x\log_2 N\right)$, while the cost of the standard circular convolution method is almost $O\left(N_x\log_2 N_x \right)$. However, these expressions account only for the complex multiplications, ignoring the other operations involved in the algorithm, so a direct measurement of the computational time required by each algorithm is of interest. Fig. 2 shows the ratio of the time required to evaluate a standard circular convolution using Eq.1 to the time required by the same convolution using the overlap-add method in the form of Alg 2, versus the sequence length Nx and the filter length M. Both algorithms were implemented in Matlab. The bold line marks the boundary of the region where the overlap-add method is faster (ratio > 1) than the standard circular convolution. Note that in the tested cases the overlap-add method can be up to three times faster than the standard method.

Figure 2: Ratio between the time required by  Eq.1 and the time required by the overlap-add Alg. 2 to evaluate a complex circular convolution, vs the sequence length Nx and the filter length M.

## References

• Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and application of digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. pp. 63–67. ISBN 0-13-914101-4.
• Oppenheim, Alan V.; Schafer, Ronald W. (1975). Digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0-13-214635-5.
• Hayes, M. Horace (1999). Digital Signal Processing. Schaum's Outline Series. New York: McGraw Hill. ISBN 0-07-027389-8.