To derive the FFT, we assume that the signal's duration is a power of two: $N = 2^L$. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation.
$$S(k) = \sum_{n=0}^{N-1} s(n) e^{-\frac{i2\pi nk}{N}} = \left[\sum_{n=0}^{\frac{N}{2}-1} s(2n) e^{-\frac{i2\pi nk}{N/2}}\right] + \left[\sum_{n=0}^{\frac{N}{2}-1} s(2n+1) e^{-\frac{i2\pi nk}{N/2}}\right] e^{-\frac{i2\pi k}{N}}$$
Each term in square brackets has the form of a length-$\frac{N}{2}$ DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential $e^{-\frac{i2\pi k}{N}}$. The half-length transforms are each evaluated at frequency indices $k = 0, \dots, N-1$. Normally, the frequency indices in a DFT calculation range between zero and the transform length minus one. The computational advantage of the FFT comes from recognizing the periodic nature of the discrete Fourier transform: each half-length transform is periodic in $k$ with period $\frac{N}{2}$. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by $e^{-\frac{i2\pi k}{N}}$, which is not periodic over $\frac{N}{2}$, to rewrite the length-$N$ DFT. Figure 1 illustrates this decomposition. As it stands, we now compute two length-$\frac{N}{2}$ transforms (complexity $2O\left(\frac{N^2}{4}\right)$), multiply one of them by the complex exponential (complexity $O(N)$), and add the results (complexity $O(N)$). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced.
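The even/odd decomposition can be checked numerically. The sketch below (Python with NumPy; the helper name `dft` is ours, not from the text) builds a length-8 DFT from two length-4 DFTs, using the period-$\frac{N}{2}$ repetition of the half-length outputs:

```python
import numpy as np

def dft(s):
    """Direct DFT: S(k) = sum_n s(n) exp(-i 2 pi n k / N)."""
    N = len(s)
    n = np.arange(N)
    return np.array([np.sum(s * np.exp(-2j * np.pi * n * k / N)) for k in range(N)])

rng = np.random.default_rng(0)
s = rng.standard_normal(8)
N = len(s)

S_even = dft(s[0::2])                    # length-N/2 DFT of even-numbered elements
S_odd = dft(s[1::2])                     # length-N/2 DFT of odd-numbered elements
twiddle = np.exp(-2j * np.pi * np.arange(N) / N)

# The half-length DFTs repeat with period N/2, so tiling them to length N
# evaluates them at all frequency indices k = 0, ..., N-1.
S = np.tile(S_even, 2) + twiddle * np.tile(S_odd, 2)
assert np.allclose(S, dft(s))
```

The `np.tile` calls are exactly the "reuse" the text describes: no new half-length computations are made for $k \geq \frac{N}{2}$; only the non-periodic twiddle factor distinguishes the upper half of the output.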
Now for the fun. Because $N = 2^L$, each of the half-length transforms can be reduced to two quarter-length transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with length-2 transforms. This transform is quite simple, involving only additions: $S(0) = s(0) + s(1)$ and $S(1) = s(0) - s(1)$. Thus, the first stage of the FFT has $\frac{N}{2}$ length-2 transforms (see the bottom part of Figure 1). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 4 multiplications, giving a total number of computations equaling $8 \cdot \frac{N}{4} = 2N$. This number of computations does not change from stage to stage. Because the number of stages, the number of times the length can be divided by two, equals $\log_2 N$, the complexity of the FFT is $O(N \log N)$.
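Carried to completion, this repeated halving is the recursive radix-2 decimation-in-time FFT. A minimal sketch (Python with NumPy; the name `fft_radix2` is illustrative, and the input length is assumed to be a power of two no smaller than 2):

```python
import numpy as np

def fft_radix2(s):
    """Recursive radix-2 decimation-in-time FFT; len(s) must be a power of two >= 2."""
    N = len(s)
    if N == 2:
        # Base case: the length-2 transform needs only an addition and a subtraction.
        return np.array([s[0] + s[1], s[0] - s[1]])
    S_even = fft_radix2(s[0::2])         # half-length transform of even-numbered elements
    S_odd = fft_radix2(s[1::2])          # half-length transform of odd-numbered elements
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    t = twiddle * S_odd
    # Each half-length output is reused for both k and k + N/2, where the
    # twiddle factor merely changes sign.
    return np.concatenate([S_even + t, S_even - t])
```

Each level of recursion does $O(N)$ work across all its subproblems, and there are $\log_2 N$ levels, matching the complexity argument above.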
Working through an example will make the computational savings more obvious. Let's look at the details of a length-8 DFT. As shown in Figure 1, we first decompose the DFT into two length-4 DFTs, with the outputs added and subtracted together in pairs. Considering Figure 1 as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 2).
By considering together the computations involving common output frequencies from the two half-length DFTs, we see that the two complex multiplies are related to each other ($e^{-\frac{i2\pi(k+N/2)}{N}} = -e^{-\frac{i2\pi k}{N}}$), and we can reduce our computational work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform (Figure 1). Although most of the complex multiplies are quite simple (multiplying by $e^{-i\pi} = -1$ means negating real and imaginary parts), let's count those for purposes of evaluating the complexity as full complex multiplies. We have $\frac{N}{2} = 4$ complex multiplies and $N = 8$ complex additions for each stage and $\log_2 N = 3$ stages, making the number of basic computations $\frac{3N}{2}\log_2 N$ as predicted.
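The operation count can be tabulated directly. The small sketch below (the function name `fft_op_count` is ours) counts every twiddle multiply as a full complex multiply, as in the accounting above:

```python
import math

def fft_op_count(N):
    """Complex multiplies and additions for a length-N radix-2 FFT,
    counting every twiddle multiply as a full complex multiply."""
    stages = int(math.log2(N))      # number of combining stages
    mults = (N // 2) * stages       # N/2 twiddle multiplies per stage
    adds = N * stages               # N complex additions per stage
    return mults, adds
```

For $N = 8$ this gives 12 multiplies and 24 additions, i.e. $\frac{3N}{2}\log_2 N = 36$ basic computations.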
Note that the ordering of the input sequence in the two parts of Figure 1 isn't quite the same. Why not? How is the ordering determined?
The upper panel does not use the FFT algorithm to compute the length-4 DFTs, while the lower one does. The ordering is determined by the algorithm.
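In the fully decomposed diagram, the inputs appear in bit-reversed order: splitting into even- and odd-numbered elements at every stage sorts the indices by their binary digits read backwards. A sketch of this ordering (the function name `bit_reversed_order` is ours):

```python
def bit_reversed_order(N):
    """Input index ordering for a fully decomposed length-N radix-2 FFT."""
    bits = N.bit_length() - 1       # number of bits in an index, for N a power of two
    # Write each index n with `bits` binary digits, reverse them, and reinterpret.
    return [int(format(n, f"0{bits}b")[::-1], 2) for n in range(N)]
```

For $N = 8$ the inputs appear in the order 0, 4, 2, 6, 1, 5, 3, 7.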
Other "fast" algorithms were discovered, all of which make use of how many common factors the transform length $N$ has. In number theory, the number of prime factors a given integer has measures how composite it is. The numbers 16 and 81 are highly composite (equaling $2^4$ and $3^4$ respectively), the number 18 is less so ($18 = 2 \cdot 3^2$), and 17 not at all (it's prime). In over thirty years of Fourier transform algorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. It is so computationally efficient that power-of-two transform lengths are frequently used regardless of the actual length of the data.