
5.1 - Linear Filters

  • 0:01 - 0:04
    Hi, and welcome to Module 5 of Digital Signal Processing.
  • 0:05 - 0:08
    In the previous modules, we concentrated on the concept of signal.
  • 0:08 - 0:12
    We looked at signals, we took them apart, and we put them back together.
  • 0:12 - 0:15
    And now, it's time to start addressing the second part of the story,
  • 0:15 - 0:18
    the processing part in Digital Signal Processing.
  • 0:18 - 0:22
    Now, when we process a signal, we no longer simply analyze it,
  • 0:22 - 0:25
    but we manipulate it, and we transform it into another signal.
  • 0:27 - 0:31
    There are very many ways to skin a cat, and probably even more ways to process a signal.
  • 0:32 - 0:36
    But of all the possible ways, we're interested in the class of processing algorithms
  • 0:36 - 0:39
    that go under the name of linear time-invariant filters.
  • 0:40 - 0:43
    Filters are simple and yet very, very powerful devices
  • 0:43 - 0:47
    that have been around for a very long time, especially in analog electronics.
  • 0:47 - 0:51
    And to think what a filter does, think of the knobs on your stereo
  • 0:51 - 0:56
    with which you boost the bass or dim the treble in the music you listen to.
  • 0:56 - 0:59
    In the digital world, filters have the added advantage
  • 0:59 - 1:03
    that they can be implemented very simply on general-purpose architectures.
  • 1:04 - 1:09
    This is a rather long module since not only do we need to define the filtering paradigm,
  • 1:09 - 1:14
    but then we also need to understand how to implement and design filters
  • 1:14 - 1:16
    that do what we want them to do.
  • 1:16 - 1:17
    And, in spite of the length,
  • 1:17 - 1:23
    this module only scratches the surface of the world of signal processing.
  • 1:23 - 1:27
    We will leave for future classes concepts like adaptive signal processing
  • 1:27 - 1:29
    or non-linear signal processing.
  • 1:29 - 1:32
    But I'm sure that this module will whet your appetite
  • 1:32 - 1:36
    with respect to everything that you can do in the world of digital signal processing.
  • 1:36 - 1:38
    The module is structured like this:
  • 1:38 - 1:41
    The first three sub-modules will concentrate on the concept of filter
  • 1:41 - 1:44
    and will characterize the filter in the time domain.
  • 1:44 - 1:47
    But we'll illustrate the key points using a mock problem
  • 1:47 - 1:51
    where we try to remove noise from an otherwise smooth signal.
  • 1:52 - 1:56
    By now, we know that the time domain is always only half of the picture.
  • 1:56 - 1:59
    And so, we will spend modules 5.4 and 5.5
  • 1:59 - 2:02
    to extend the filtering paradigm to the frequency domain.
  • 2:02 - 2:06
    This will allow us to define a class of filters called ideal filters
  • 2:06 - 2:10
    that represent really the best-behaved filters that we can think of.
  • 2:10 - 2:14
    Unfortunately, ideal filters turn out to be too good to be true,
  • 2:14 - 2:17
    and so we will have to spend modules 5.6 to 5.10
  • 2:17 - 2:21
    to explore the kind of filters that we can implement and design in practice.
  • 2:21 - 2:24
    At first, we will try to mimic ideal filters,
  • 2:24 - 2:26
    but the method will show its limitations very soon.
  • 2:26 - 2:29
    So, we will introduce a new tool called the Z-Transform
  • 2:29 - 2:35
    that will allow us to explore the full range of filters that we can design and implement.
  • 2:35 - 2:39
    Finally, to wrap it all up, we will have a very hands-on module
  • 2:39 - 2:41
    where we will talk about real-time signal processing
  • 2:41 - 2:47
    and we will show you how easy it is to implement real-time guitar effects on your PC.
  • 2:47 - 2:49
    So, let's get started on Module 5
  • 2:49 - 2:53
    and let's make the acquaintance of our new friend, the linear filter.
  • 2:54 - 2:58
    Hi, and welcome to Module 5.1 of Digital Signal Processing
  • 2:58 - 3:00
    in which we will talk about linear filters.
  • 3:00 - 3:04
    We will examine the two key properties of linear filters,
  • 3:04 - 3:07
    which are linearity and time invariance.
  • 3:07 - 3:09
    And then, we will describe the convolution operator,
  • 3:09 - 3:12
    which captures both mathematically and algorithmically
  • 3:12 - 3:14
    the inner workings of a linear filter.
  • 3:14 - 3:18
    In general, when we talk about signal processing, we imagine a situation
  • 3:18 - 3:23
    where we have an input signal, x[n], and an output signal, y[n],
  • 3:23 - 3:26
    produced by some sort of black box here
  • 3:26 - 3:29
    that manipulates the input into the output.
  • 3:29 - 3:33
    We can write the relationship mathematically, like so
  • 3:33 - 3:38
    where y[n] is equal to some operator H that is applied to the input x[n].
  • 3:38 - 3:42
    Already, when we draw this block diagram,
  • 3:42 - 3:46
    we are making assumptions on the structure of the processing device
  • 3:46 - 3:51
    in the sense that we consider a system with a single input and a single output.
  • 3:51 - 3:54
    We could imagine a system with multiple inputs or multiple outputs.
  • 3:54 - 3:56
    But even with these limitations,
  • 3:56 - 4:01
    the possibilities for what goes inside this block here are pretty much limitless.
  • 4:01 - 4:04
    And unless we impose some structure on the kind of processing
  • 4:04 - 4:06
    that happens inside this block,
  • 4:06 - 4:11
    we will not be able to say anything particularly meaningful about the filtering operation.
  • 4:11 - 4:15
    So, the first requirement that we impose on a filter is linearity.
  • 4:15 - 4:19
    Linearity means that if we have two inputs,
  • 4:20 - 4:23
    and we take a linear combination of said inputs,
  • 4:23 - 4:28
    well, the output is a linear combination of outputs
  • 4:28 - 4:33
    that could have been obtained by filtering each sequence independently.
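The linearity property described here can be checked numerically. The following sketch is not part of the lecture: it uses a hypothetical three-point moving-average filter and NumPy to verify that filtering a linear combination of inputs gives the same result as linearly combining the filtered outputs.

```python
import numpy as np

def moving_average(x):
    # Hypothetical LTI filter for illustration: 3-point moving average.
    h = np.ones(3) / 3
    return np.convolve(x, h)

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([0.5, -1.0, 2.0, 0.0])
a, b = 2.0, -0.5

# Linearity: H{a*x1 + b*x2} == a*H{x1} + b*H{x2}
lhs = moving_average(a * x1 + b * x2)
rhs = a * moving_average(x1) + b * moving_average(x2)
assert np.allclose(lhs, rhs)
```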
  • 4:34 - 4:38
    This is actually a very reasonable requirement. For instance, think of a situation
  • 4:38 - 4:40
    where your processing device is an amplifier
  • 4:40 - 4:43
    and you connect a guitar to your amplifier.
  • 4:43 - 4:47
    Now, if you play one note, and then you play the same note louder,
  • 4:47 - 4:57
    you expect the amplifier to produce just a louder note. [music]
  • 4:57 - 5:02
    Similarly, if you play one note and then another note,
  • 5:02 - 5:07
    and then you play two notes together, you'll expect the amplifier to amplify the sum of two notes
  • 5:07 - 5:17
    as the sum of two independent amplifications. [music]
  • 5:17 - 5:20
    Now, note that this is not necessarily the case in all situations,
  • 5:20 - 5:25
    for instance, in some kinds of rock music, you want to introduce some distortion
  • 5:25 - 5:30
    and so you add a fuzz box that will distort the signal non-linearly
  • 5:30 - 5:32
    to create very interesting effects.
  • 5:32 - 5:44
    But these effects belong to a completely different category of processing. [music]
  • 5:45 - 5:48
    The second requirement that we impose on the processing device
  • 5:48 - 5:52
    is time invariance: time invariance, in layman's terms,
  • 5:52 - 5:55
    simply means that the system will behave exactly in the same way,
  • 5:55 - 5:58
    independently of when it's switched on.
  • 5:58 - 6:03
    Mathematically, we can say that if y[n] is the output of the system
  • 6:03 - 6:09
    when the input is x[n], well, if we put a delayed version of the input inside the system,
  • 6:09 - 6:15
    x[n-n0], what we get is the same output delayed by n0.
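Time invariance can be verified the same way. This illustrative sketch (the filter and signal values are our own, not from the lecture) checks that delaying the input and then filtering gives the same result as filtering first and then delaying the output.

```python
import numpy as np

def filt(x):
    # Hypothetical LTI filter for illustration: first difference
    # y[n] = x[n] - x[n-1], with x[-1] taken as 0.
    return x - np.concatenate(([0.0], x[:-1]))

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
n0 = 2  # delay in samples

# Delay the input by n0 (zero-padding at the start), then filter...
y_of_delayed = filt(np.concatenate((np.zeros(n0), x)))
# ...versus filtering first and then delaying the output.
delayed_y = np.concatenate((np.zeros(n0), filt(x)))
assert np.allclose(y_of_delayed, delayed_y)
```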
  • 6:15 - 6:18
    And again, we can use a guitar amplifier as an example.
  • 6:18 - 6:23
    If I turn it on today, well, I expect it to amplify the notes
  • 6:23 - 6:26
    exactly in the same way that it amplified them yesterday.
  • 6:26 - 6:31
    But again, some types of guitar effects exploit time variance
  • 6:31 - 6:34
    to introduce a different flavor of the music that's been played.
  • 6:34 - 6:38
    For instance, the wah pedal is a time-varying effect
  • 6:38 - 6:40
    that will change the envelope of a sound
  • 6:40 - 6:53
    in ways that are not time-invariant. [music]
  • 6:54 - 6:58
    So, what are the ingredients that go into a Linear Time Invariant system?
  • 6:58 - 7:03
    Well, linear ingredients: addition, which is a linear operation,
  • 7:03 - 7:08
    scalar multiplication, another linear operation, and delays.
  • 7:09 - 7:13
    Another requirement that is not mandatory but makes a lot of sense
  • 7:13 - 7:17
    if you want to use a Linear Time-Invariant system in real time,
  • 7:17 - 7:19
    is that the system be causal.
  • 7:19 - 7:20
    By that, we mean that the system
  • 7:20 - 7:25
    can only have access to input and output values from the past.
  • 7:25 - 7:29
    In that case, you can write the input/output relationship as follows.
  • 7:29 - 7:37
    The output is a linear functional of past values of the input and past values of the output.
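As an illustration of this input/output relationship, here is a sketch (not from the lecture) of a causal filter that computes each output sample from current and past inputs and past outputs; the coefficient names a and b are our own convention for the example.

```python
import numpy as np

def causal_filter(x, a, b):
    """Sketch of a causal LTI filter:
    y[n] = sum_k a[k]*y[n-1-k] + sum_k b[k]*x[n-k],
    using only current/past inputs and past outputs."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):           # feed-forward (input) part
            if n - k >= 0:
                acc += bk * x[n - k]
        for k, ak in enumerate(a, start=1):  # feedback (past-output) part
            if n - k >= 0:
                acc += ak * y[n - k]
        y[n] = acc
    return y

# Example recursion: y[n] = 0.5*y[n-1] + x[n] (a "leaky integrator").
y = causal_filter(np.array([1.0, 0.0, 0.0, 0.0]), a=[0.5], b=[1.0])
# Its impulse response is 0.5**n.
assert np.allclose(y, [1.0, 0.5, 0.25, 0.125])
```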
  • 7:37 - 7:40
    The impulse response is the output of a filter
  • 7:40 - 7:42
    when the input is a δ function.
  • 7:42 - 7:45
    A fundamental result states that the impulse response
  • 7:45 - 7:48
    fully characterizes the behavior of an LTI system.
  • 7:48 - 7:49
    Let's see why that is so.
  • 7:50 - 7:54
    Assume that we have a filter and we can measure its impulse response
  • 7:54 - 7:56
    by inputting a δ function,
  • 7:56 - 8:00
    and it turns out that the impulse response is an exponential and decaying sequence,
  • 8:00 - 8:05
    h[n] = α to the power of n, times the unit step u[n].
  • 8:05 - 8:10
    Now, we want to use the same filter to filter an arbitrary sequence, x[n],
  • 8:10 - 8:14
    that in this example is simply a three-point sequence
  • 8:14 - 8:20
    that is equal to 2 for n=0, is equal to 3 for n=1 and is equal to 1 for n=2,
  • 8:20 - 8:22
    and is 0 everywhere else.
  • 8:23 - 8:26
    So, we can always write our sequence
  • 8:26 - 8:29
    as a linear combination of delayed δ functions.
  • 8:29 - 8:32
    So, in particular, for our example,
  • 8:32 - 8:40
    x[n] = 2δ[n] + 3δ[n-1] + δ[n-2].
  • 8:41 - 8:45
    Now, we know the impulse response, so the response to the δ,
  • 8:45 - 8:49
    and by exploiting linearity and time invariance,
  • 8:49 - 8:54
    we can compute the response of the system to the input sequence x[n],
  • 8:54 - 8:57
    just by knowing the impulse response.
  • 8:57 - 9:00
    Indeed, we apply the filter to the linear combination of δs,
  • 9:00 - 9:07
    and by exploiting linearity first, we can split the operation of the filter
  • 9:07 - 9:09
    over the three components of the signal,
  • 9:09 - 9:13
    and by exploiting time invariance, we just sum together
  • 9:13 - 9:17
    properly scaled and delayed versions of the impulse response.
  • 9:17 - 9:20
    We can look at this graphically, and we see that
  • 9:20 - 9:27
    when we filter the first component, 2δ[n], we get 2h[n].
  • 9:28 - 9:33
    The second component of the signal is 3δ[n-1],
  • 9:33 - 9:36
    And this gives rise to 3h[n-1]
  • 9:36 - 9:39
    which we plot on top of the other response.
  • 9:39 - 9:43
    And finally, the last component is simply δ[n-2],
  • 9:43 - 9:46
    which, filtered, gives h[n-2].
  • 9:46 - 9:48
    So now, we have the three components.
  • 9:48 - 9:51
    We sum them together by linearity
  • 9:51 - 9:55
    and we obtain the response of the system to our arbitrary input.
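The worked example above can be reproduced in code. Assuming a concrete value α = 0.5 (the lecture leaves α generic), this sketch builds the output as the sum of scaled, delayed impulse responses and checks it against a direct convolution.

```python
import numpy as np

alpha = 0.5         # assumed value; the lecture leaves alpha generic
N = 8               # number of output samples to compute
n = np.arange(N)
h = alpha ** n      # impulse response alpha^n * u[n], for n >= 0

def shift(s, k):
    # Delay sequence s by k samples (prepend zeros, keep length N).
    return np.concatenate((np.zeros(k), s))[:N]

# x[n] = 2δ[n] + 3δ[n-1] + δ[n-2], so by linearity and time
# invariance the output is 2h[n] + 3h[n-1] + h[n-2].
y_superposition = 2 * shift(h, 0) + 3 * shift(h, 1) + 1 * shift(h, 2)

# Same result via direct convolution with x = [2, 3, 1].
y_convolution = np.convolve([2.0, 3.0, 1.0], h)[:N]
assert np.allclose(y_superposition, y_convolution)
```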
  • 9:55 - 9:59
    In general, remember that we can always write x[n],
  • 9:59 - 10:01
    a generic discrete time sequence,
  • 10:01 - 10:05
    as the sum for k that goes from minus infinity to plus infinity
  • 10:05 - 10:11
    of a sequence of time-delayed δs, scaled by the values of the sequence itself.
  • 10:12 - 10:16
    So, this probably seemed like a futile exercise in Module 3.2,
  • 10:16 - 10:19
    but now we see the usefulness of this representation,
  • 10:19 - 10:23
    because by linearity and time invariance, we can express the output
  • 10:23 - 10:27
    as the sum from k that goes from minus infinity to plus infinity
  • 10:27 - 10:36
    of the values of the sequence times the impulse response, time-reversed and delayed by n.
  • 10:38 - 10:42
    This sum here is so important in signal processing that it gets its own name,
  • 10:42 - 10:47
    and it's called the convolution of sequences x[n] and h[n].
  • 10:47 - 10:51
    The convolution which represents the output of a filter,
  • 10:51 - 10:56
    given its impulse response and an arbitrary input sequence x[n],
  • 10:56 - 11:00
    is actually an algorithmic formula to compute the output of the filter.
  • 11:00 - 11:05
    The ingredients are an input sequence x[m] and a second sequence, h[m].
  • 11:05 - 11:08
    And the recipe involves the following steps.
  • 11:08 - 11:13
    First, we time-reverse the impulse response, so we flip it in time.
  • 11:13 - 11:17
    If it goes like this, then it will look like this.
  • 11:17 - 11:21
    And at each step from minus infinity to plus infinity,
  • 11:21 - 11:26
    we center the time-reversed impulse response on the current sample n.
  • 11:26 - 11:30
    So, we shift the time-reversed impulse response by -n
  • 11:31 - 11:33
    and then, we compute the inner product
  • 11:33 - 11:38
    between this shifted replica of the impulse response and the input sequence.
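The flip, shift, and inner-product recipe can be written out directly. This sketch (not from the lecture) implements it naively and checks the result against NumPy's convolution; the truncation of the impulse response to six samples and the value α = 0.5 are our own choices.

```python
import numpy as np

def convolve_by_recipe(x, h):
    """Convolution via the recipe above: time-reverse h, center it
    on each output index n, and take the inner product with x."""
    N = len(x) + len(h) - 1
    y = np.zeros(N)
    for n in range(N):
        # y[n] = sum_k x[k] * h[n - k]
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = np.array([2.0, 3.0, 1.0])   # the three-point input from the example
h = 0.5 ** np.arange(6)         # truncated alpha^n with alpha = 0.5
assert np.allclose(convolve_by_recipe(x, h), np.convolve(x, h))
```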
  • 11:39 - 11:44
    Let's look at this graphically, using the same examples that we used before.
  • 11:44 - 11:48
    So, we have an impulse response, which is a decaying exponential sequence,
  • 11:48 - 11:51
    and we have a three-point simple input sequence.
  • 11:51 - 11:55
    We plot these three actors on a chart like this:
  • 11:55 - 11:59
    we have the input sequence on top,
  • 11:59 - 12:04
    we have the time-reversed and delayed impulse response on the second panel,
  • 12:04 - 12:08
    and here, we have the inner product between these two sequences.
  • 12:09 - 12:14
    So, at each step, as I said, we center the time-reversed impulse response
  • 12:14 - 12:18
    on the current sample, so we start at -4, and we compute the inner product.
  • 12:18 - 12:24
    Since the input signal in our example is nonzero only between 0 and 2
  • 12:24 - 12:29
    up to n = 0, essentially nothing happens and the inner product is always 0.
  • 12:29 - 12:33
    And we can see that it was 0 for values before -4,
  • 12:33 - 12:38
    and it will continue to be 0 until we hit 0.
  • 12:38 - 12:42
    At which point, we start to have an overlap between these two sequences,
  • 12:42 - 12:45
    and the inner product will not be 0.
  • 12:45 - 12:47
    In particular, on the first step,
  • 12:47 - 12:51
    it will be equal to this sample, which is equal to 1, times this sample, which is equal to 2.
  • 12:51 - 12:53
    So, the sum will be equal to 2.
  • 12:54 - 12:56
    We advance another step,
  • 12:56 - 13:01
    and then we see that the overlap involves two points now, here and here,
  • 13:01 - 13:04
    and we compute their products and their sum,
  • 13:04 - 13:07
    and we get the second point in our output sequence.
  • 13:07 - 13:12
    Third step, we'll finally involve all three points from the input sequence,
  • 13:12 - 13:16
    we compute the product with the impulse response, and the sum,
  • 13:16 - 13:18
    and we get our third output sample.
  • 13:18 - 13:20
    And the process continues like so.
  • 13:20 - 13:24
    Now, since the impulse response is an infinite sequence,
  • 13:24 - 13:26
    one-sided infinite sequence,
  • 13:26 - 13:30
    from now on, that inner product will always be non-zero
  • 13:30 - 13:37
    and we will have an output that will continue to be non-zero forever and ever.
  • 13:40 - 13:42
    Finally, a few words on the convolution.
  • 13:42 - 13:45
    The convolution is, of course, linear and time-invariant
  • 13:45 - 13:48
    because it describes a linear and time-invariant operation.
  • 13:48 - 13:54
    It's also commutative, which means that if you have two filters in cascade,
  • 13:54 - 14:01
    you can safely swap their order, and the result will not change.
  • 14:02 - 14:05
    For absolutely- and square-summable sequences,
  • 14:05 - 14:07
    the convolution is also associative,
  • 14:07 - 14:09
    which means that if you have a cascaded system,
  • 14:09 - 14:13
    you can lump their effect into a single filter
  • 14:13 - 14:18
    whose impulse response is the convolution of the individual impulse responses.
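These properties are easy to confirm numerically for finite-length (hence absolutely summable) sequences. This sketch, with random sequences of our own choosing, checks both commutativity of a cascade and its collapse into a single equivalent filter.

```python
import numpy as np

# Numerical check of commutativity and associativity of convolution.
rng = np.random.default_rng(0)
x  = rng.standard_normal(16)   # input sequence
h1 = rng.standard_normal(5)    # first filter's impulse response
h2 = rng.standard_normal(7)    # second filter's impulse response

# Commutativity: the order of the cascaded filters does not matter.
a = np.convolve(np.convolve(x, h1), h2)
b = np.convolve(np.convolve(x, h2), h1)
assert np.allclose(a, b)

# Associativity: the cascade collapses into one filter h = h1 * h2.
h = np.convolve(h1, h2)
assert np.allclose(np.convolve(x, h), a)
```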