4.9a - FFT: history and algorithms

  • 0:01 - 0:05
    We have seen a lot of formulas so far in this module, and this is good
  • 0:05 - 0:09
    because we needed to set the stage for Fourier Transforms.
  • 0:09 - 0:12
    Now we are going to look at the computational Fourier Transform.
  • 0:12 - 0:16
    So I give you a signal of length n, and you want to compute the DFT.
  • 0:16 - 0:18
    How complex is that going to be?
  • 0:18 - 0:22
    Well, if you're not smart about it, it's going to cost you n squared operations.
  • 0:22 - 0:27
    And when n is a very large number, 100,000 or a million or so, n squared is much larger,
  • 0:27 - 0:31
    and at some point it becomes prohibitive to calculate the Discrete Fourier Transform.
  • 0:31 - 0:33
    But there is a magic trick.
  • 0:33 - 0:35
    This magic trick was first noticed by Gauss,
  • 0:35 - 0:39
    actually even before Fourier invented the Fourier transform,
  • 0:39 - 0:41
    which just shows how smart Gauss actually was.
  • 0:41 - 0:46
    It was reinvented many times in the 19th century and in the early 20th century.
  • 0:46 - 0:47
    But then, there was a quantum leap
  • 0:47 - 0:53
    with the publication of a paper by Cooley and Tukey on a Fast Fourier Transform algorithm.
  • 0:53 - 0:58
    This algorithm is both very famous and relatively simple to understand.
  • 0:58 - 1:00
    So we are going to derive it here, in this module,
  • 1:00 - 1:03
    by looking at divide and conquer methods in general,
  • 1:03 - 1:07
    and then specifically for the Fast Fourier Transform computation.
  • 1:08 - 1:11
    Once we have this algorithm, we'll talk about some generalizations.
  • 1:11 - 1:13
    We are not going to study them in detail.
  • 1:13 - 1:16
    We could do an entire class about Fast Fourier Transforms.
  • 1:16 - 1:19
    That's not the purpose here, but you will understand the basics
  • 1:19 - 1:22
    of one of the most famous algorithms of all time,
  • 1:22 - 1:26
    and also one that runs not only on super computers, but on your mobile phone,
  • 1:26 - 1:28
    on everyday applications.
  • 1:29 - 1:33
    Module 4.9 is about the Fast Fourier Transform,
  • 1:33 - 1:35
    its history and algorithms.
  • 1:35 - 1:38
    So the overview of the module is that we are going to cover this history
  • 1:38 - 1:41
    and see that Gauss actually invented the Fast Fourier Transform,
  • 1:41 - 1:44
    even before Fourier proposed the Fourier Transform.
  • 1:45 - 1:47
    Then, in the second half of the module,
  • 1:47 - 1:52
    we will specifically describe the Cooley-Tukey Fast Fourier Transform algorithm
  • 1:52 - 1:54
    in sufficient detail to understand how it works,
  • 1:54 - 1:57
    and then briefly overview other possible algorithms.
  • 1:59 - 2:03
    The overview of this lecture on Fast Fourier Transform is
  • 2:03 - 2:05
    that we first look at some history,
  • 2:05 - 2:11
    namely from the earliest days to the current practice of FFTs.
  • 2:11 - 2:15
    Then we study in detail one particular algorithm that is very popular,
  • 2:15 - 2:17
    the Cooley-Tukey FFT.
  • 2:17 - 2:22
    Then we look at some factorization examples, and conclude.
  • 2:24 - 2:29
    Joseph Fourier, the inventor of the Fourier transform, of course knew about
  • 2:29 - 2:33
    how to calculate Fourier Series, for example, but he did this long hand
  • 2:33 - 2:36
    with very extensive calculations.
  • 2:38 - 2:42
    It turns out that Carl Friedrich Gauss, the prince of mathematicians,
  • 2:42 - 2:46
    had already a fast way to compute Fourier series,
  • 2:46 - 2:49
    even before Fourier proposed Fourier Series.
  • 2:51 - 2:55
    This points out that there is an interesting history of efficiently computing
  • 2:55 - 2:58
    Fourier series, Discrete Fourier Transforms and the like.
  • 2:58 - 3:05
    Gauss, in 1805, comes up with a neat trick to compute trigonometric series efficiently,
  • 3:05 - 3:10
    which happened to be the same trick used much later in the Fast Fourier Transform.
  • 3:10 - 3:14
    Just a bit later, Joseph Fourier invents Fourier series
  • 3:14 - 3:18
    with no particularly efficient way to compute them.
  • 3:18 - 3:23
    During the 19th century, a number of people have to compute Fourier series
  • 3:23 - 3:25
    for various applications.
  • 3:25 - 3:29
    And many of them develop tricks to speed up the calculation,
  • 3:29 - 3:31
    because the calculations were done by hand,
  • 3:31 - 3:36
    or one had to construct machines to calculate Fourier Series.
  • 3:37 - 3:41
    In the 20th century, a number of people in applied mathematics
  • 3:41 - 3:46
    and numerical linear algebra came up with tricks and algorithms,
  • 3:46 - 3:50
    but none of them really were very influential.
  • 3:50 - 3:56
    Then in 1958, Good, who is a statistician, comes up with an algorithm
  • 3:56 - 3:59
    which turns out to be one of the classic algorithms used today.
  • 3:59 - 4:05
    But the quantum leap was really Cooley and Tukey's rediscovery of the Fast Fourier Transform
  • 4:05 - 4:09
    in 1965, in a very celebrated paper,
  • 4:09 - 4:14
    for lengths of Fourier Transforms which are a power of a prime.
  • 4:15 - 4:20
    The concluding work on Fast Fourier Transform was maybe done by Winograd,
  • 4:20 - 4:24
    who has really studied the complexity of FFT in great detail,
  • 4:24 - 4:27
    and put all the tricks together, added a few of his own,
  • 4:27 - 4:32
    to come up with the most efficient ways to compute the Fast Fourier Transform.
  • 4:34 - 4:37
    Remember the Discrete Fourier Transform matrix.
  • 4:37 - 4:44
    We have the Nth root of unity: W sub N is equal to e to the -j 2π/N.
  • 4:44 - 4:48
    When the context is clear, we drop the subscript N.
  • 4:49 - 4:54
    The powers of W can be taken modulo capital N, because W to the N is equal to 1.
  • 4:54 - 5:00
    So a DFT matrix of size N by N is this very structured matrix shown here,
  • 5:00 - 5:03
    with a first row and column of ones.
  • 5:03 - 5:07
    And then successive powers of the roots of unity on successive rows.
  • 5:07 - 5:13
    The matrix is symmetric and, as can be seen, has a lot of structure in it.
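For concreteness, the DFT matrix just described can be generated in a few lines. This is a sketch of my own, not from the lecture slides (the helper name `dft_matrix` is hypothetical):

```python
import cmath

def dft_matrix(N):
    # W_N = e^{-j 2*pi/N}; entry (n, k) of the DFT matrix is W_N^(n*k),
    # with exponents reduced modulo N since W_N^N = 1.
    W = cmath.exp(-2j * cmath.pi / N)
    return [[W ** ((n * k) % N) for k in range(N)] for n in range(N)]

F = dft_matrix(4)
# First row and first column are all ones, and F[n][k] == F[k][n] (symmetric).
```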
  • 5:15 - 5:18
    Let's look at a few small matrices.
  • 5:18 - 5:20
    N is equal to 2 is the easiest one.
  • 5:20 - 5:27
    The second root of unity is minus 1, and the matrix is a simple symmetric matrix, shown here.
  • 5:29 - 5:34
    For N is equal to 3, the matrix already looks more interesting.
  • 5:34 - 5:40
    In the middle, we simplified the powers of W by taking them modulo 3.
  • 5:40 - 5:44
    So we see that we only have two powers, W and W square.
  • 5:44 - 5:48
    And at the right side, we write out the actual values of the matrix.
  • 5:51 - 5:57
    N is equal to 4, so W4 is given here; we simplify the powers by taking them modulo 4,
  • 5:57 - 6:00
    then we write out explicitly.
  • 6:00 - 6:02
    We know that W in this case is -j.
  • 6:02 - 6:04
    You see, this is a very simple matrix.
  • 6:04 - 6:09
    Actually, it doesn't require any multiplication, only changes of signs
  • 6:09 - 6:15
    and changing real and imaginary parts when we multiply by j or -j.
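To see this, the length-4 DFT can be written out entry by entry. A small sketch of my own (the helper name `dft4` is hypothetical): the only operations are additions, subtractions, and multiplications by j or -j, which just swap real and imaginary parts and flip a sign.

```python
def dft4(x):
    # Length-4 DFT written out explicitly from the W4 matrix (W = -j):
    # no true multiplications, only sign changes and real/imaginary swaps.
    x0, x1, x2, x3 = x
    return [
        x0 + x1 + x2 + x3,
        x0 - 1j * x1 - x2 + 1j * x3,
        x0 - x1 + x2 - x3,
        x0 + 1j * x1 - x2 - 1j * x3,
    ]
```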
  • 6:16 - 6:19
    Let's look at N is equal to 5.
  • 6:19 - 6:24
    So W5 is given here, then we simplify the powers by taking them modulo 5.
  • 6:24 - 6:28
    So we have only powers from 0 to 4 that are left.
  • 6:28 - 6:31
    And you see this matrix is very structured,
  • 6:31 - 6:36
    and we'll see that it has a hidden structure that can be taken advantage of.
  • 6:37 - 6:39
    One more step, N is equal to 6.
  • 6:39 - 6:46
    Now, this matrix, again with powers simplified by taking them modulo 6,
  • 6:46 - 6:49
    and we see again that it has a lot of structure
  • 6:49 - 6:52
    very different from the previous one.
  • 6:52 - 6:56
    So the message so far is that for every N we have seen so far,
  • 6:56 - 6:58
    the matrix looks interesting,
  • 6:58 - 7:01
    but looks very different from one case to the other.
  • 7:02 - 7:04
    We have seen all of these matrices,
  • 7:04 - 7:07
    now we need an algorithm to compute them efficiently.
  • 7:07 - 7:11
    And the idea behind a fast algorithm, like with many fast algorithms,
  • 7:11 - 7:15
    is the idea of Divide et Impera, in Latin,
  • 7:15 - 7:18
    which in English is divide and conquer.
  • 7:19 - 7:21
    So this goes back to the Roman Empire.
  • 7:21 - 7:25
    It's usually attributed to Julius Caesar as a way to conduct war.
  • 7:26 - 7:28
    I'm not sure if that's what we want to pursue here.
  • 7:28 - 7:34
    And it is a standard attack for developing algorithms, and the idea is very simple.
  • 7:34 - 7:38
    You take a problem, you split it into two problems,
  • 7:38 - 7:40
    let's say the upper and the lower here,
  • 7:40 - 7:45
    so you have two subproblems, which you can split again, down to simple subproblems.
  • 7:45 - 7:48
    Here we have four simple subproblems in the middle.
  • 7:48 - 7:52
    Then from the subproblems, you merge them to get intermediate solutions,
  • 7:52 - 7:55
    merge again, and you finally get the solution.
  • 7:55 - 8:01
    So as long as the cost of the splitting and the merging, plus the cost of the simple subproblems,
  • 8:01 - 8:06
    is such that it is cheaper than computing the entire problem,
  • 8:06 - 8:10
    you have a faster algorithm for the problem at hand.
  • 8:12 - 8:17
    Let's look at the idea of divide and conquer for the Discrete Fourier Transform.
  • 8:17 - 8:20
    And we start by doing one step of divide and conquer.
  • 8:20 - 8:23
    Recall that the computation of the DFT has complexity N squared.
  • 8:23 - 8:28
    So the idea is to take the problem of size N, where N is a power of 2.
  • 8:28 - 8:31
    That will allow us to do recursion
  • 8:31 - 8:35
    and to cut this problem into two problems of size N/2.
  • 8:35 - 8:42
    Each one will have complexity N squared over 4, since it's a quadratic complexity,
  • 8:42 - 8:46
    and there might be some complexity to recover the full solution,
  • 8:46 - 8:48
    say complexity of order N.
  • 8:48 - 8:53
    So the divide and conquer solution has a complexity of N squared over 2 plus N.
  • 8:53 - 8:58
    And this, for large enough N, is better than N squared.
  • 8:58 - 9:02
    So doing one step of divide and conquer is already a gain.
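To make the gain concrete, here is a quick check of the operation counts, modeling the merge cost as exactly N operations (an assumption for illustration; the helper name `one_step_cost` is mine):

```python
def one_step_cost(N):
    # Two half-size DFTs at quadratic cost, plus an order-N merge
    # (modeled here as exactly N operations).
    return 2 * (N // 2) ** 2 + N

N = 1024
direct = N * N            # 1,048,576 operations for the direct DFT
split = one_step_cost(N)  # 525,312 operations: roughly half
```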
  • 9:04 - 9:09
    Graphically, split the DFT into two pieces of size N/2.
  • 9:09 - 9:13
    Compute two DFTs of size N/2, merge the two results.
  • 9:14 - 9:17
    And we see this here in this diagram
  • 9:17 - 9:23
    where we start with a problem with an input from x[0] to x[N-1].
  • 9:23 - 9:27
    We do a splitting, we compute two DFTs of size N/2,
  • 9:27 - 9:35
    we merge the results of the two DFTs and we get our final result, X, from 0 to N-1.
  • 9:35 - 9:39
    So that's one step of Divide and Conquer for the DFT.
  • 9:42 - 9:46
    Let's do divide and conquer in multiple steps: it's a very simple idea.
  • 9:46 - 9:50
    If it worked once, we can apply it again, and this will give more gains.
  • 9:50 - 9:55
    So cut the two problems of size N/2 into four problems of size N/4.
  • 9:55 - 10:00
    We can do this, because N is a power of 2, so we can keep cutting into halves.
  • 10:00 - 10:05
    There might be some complexity to recover the full solution after these cuts.
  • 10:05 - 10:09
    Let's say it's linear in N, so there's a cost order N at each step.
  • 10:09 - 10:15
    Now the division into subproblems can be done order log N times,
  • 10:15 - 10:21
    precisely log2(N) - 1 times, until a problem of size 2 is obtained,
  • 10:21 - 10:26
    because that one is trivial, as we have seen when we looked at the matrix W2.
  • 10:26 - 10:30
    Each cutting and merging step requires order N complexity.
  • 10:30 - 10:34
    So the total complexity of this divide and conquer solution
  • 10:34 - 10:37
    is order N log base 2 of N.
  • 10:38 - 10:43
    Now for N large enough, this is a tremendous gain with respect to N squared,
  • 10:43 - 10:46
    as we can easily see by plugging in some numbers.
  • 10:46 - 10:51
    And this is the power of the Fast Fourier Transform algorithm.
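Plugging in a number makes the comparison vivid. For example, with N equal to 2 to the 20 (about a million samples), N log2 N is around 21 million operations, while N squared is over a trillion, a quick sketch:

```python
import math

N = 2 ** 20                       # about one million samples
fft_cost = N * math.log2(N)       # N log2 N: about 21 million
direct_cost = N ** 2              # N^2: about 1.1 trillion
speedup = direct_cost / fft_cost  # more than 50,000 times fewer operations
```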
  • 10:53 - 10:55
    Let's summarize the steps we went through.
  • 10:55 - 11:05
    We split the DFT into two, four and eight pieces of sizes, respectively, N/2, N/4 and N/8.
  • 11:05 - 11:09
    Then we compute 8 DFTs of size N/8.
  • 11:09 - 11:14
    We merge the results successively into DFTs of size N/4, N/2
  • 11:14 - 11:17
    and finally N to obtain the result.
  • 11:18 - 11:25
    This is shown here in this diagram where we see the recursive splits into 2 halves,
  • 11:25 - 11:28
    then 4 pieces, finally 8 pieces.
  • 11:28 - 11:32
    In the center, the DFTs of size N/8,
  • 11:32 - 11:38
    then the mergers into DFTs of size N/4, then N/2,
  • 11:38 - 11:42
    and finally the result, the total DFT of size N.
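The recursive split-and-merge scheme just summarized can be sketched in a few lines. This is the standard radix-2 decimation-in-time form, which splits into even- and odd-indexed samples; it is my own sketch of the idea, not a verbatim transcription of the lecture's derivation:

```python
import cmath

def fft(x):
    # Radix-2 Cooley-Tukey FFT; the length of x must be a power of 2.
    N = len(x)
    if N == 1:
        return list(x)
    # Divide: two DFTs of size N/2 on even- and odd-indexed samples.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Merge: combine with twiddle factors W_N^k, at order-N cost per level.
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        X[k] = even[k] + t
        X[k + N // 2] = even[k] - t
    return X
```

With log2 N levels of splitting and order-N work per level, this realizes the N log base 2 of N complexity discussed above.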
Description:

See "Chapter 4 Fourier Analysis" in "Signal Processing for Communications" by Paolo Prandoni and Martin Vetterli, available from http://www.sp4comm.org/ , and the slides for the entire module 4 of the Digital Signal Processing Coursera course, in https://spark-public.s3.amazonaws.com/dsp/slides/module4-0.pdf .

And if you are registered for this Coursera course, see https://class.coursera.org/dsp-001/wiki/view?page=week3
