
5.3 - Filter stability

  • 0:00 - 0:02
    Welcome to module 5.3.
  • 0:03 - 0:08
    Filters are processing devices, processing machines and like for all machines,
  • 0:08 - 0:10
    we want to make sure that it doesn't blow up in our face.
  • 0:11 - 0:15
    Well, of course, this is the digital domain, so nothing is going to literally blow up,
  • 0:15 - 0:18
    but we are interested in the concept of stability.
  • 0:18 - 0:24
    Namely, we want to make sure that if the input to a filter is a nice, well-behaved sequence,
  • 0:24 - 0:26
    the output will be nice and well-behaved as well.
  • 0:27 - 0:31
    It turns out that stability is related to the structure of the impulse response,
  • 0:31 - 0:36
    so let's see under which conditions we can sleep with both eyes closed.
  • 0:37 - 0:41
    Welcome to module 5.3 of Digital Signal Processing
  • 0:41 - 0:43
    in which we will talk about filter stability.
  • 0:43 - 0:45
    And before we tackle the stability problem,
  • 0:45 - 0:48
    we will also draw our first classification of filters,
  • 0:48 - 0:51
    based on their properties in the time domain.
  • 0:53 - 0:56
    According to the shape of their impulse response,
  • 0:56 - 1:01
    we can already label a filter as belonging to one of the following categories.
  • 1:01 - 1:05
    You have Finite Impulse Response filters, or FIRs, for short.
  • 1:05 - 1:08
    Infinite Impulse Response, or IIR,
  • 1:08 - 1:10
    Causal filters
  • 1:10 - 1:11
    and non-causal filters.
  • 1:12 - 1:16
    FIR filters have an impulse response with a finite support.
  • 1:17 - 1:20
    Because of that, and because of how the convolution sum is computed,
  • 1:20 - 1:25
    only a finite number of samples are involved in the computation of each output sample.
  • 1:26 - 1:28
    A typical example of an FIR filter
  • 1:28 - 1:31
    is the moving average filter that we've seen before
  • 1:31 - 1:34
    where only M samples are non-zero.
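The finite support can be seen directly in code. Here is a small sketch (not from the lecture; the choice of M and the input signal are arbitrary):

```python
import numpy as np

M = 4
h = np.ones(M) / M                 # M-point moving average: only M nonzero taps

x = np.arange(10, dtype=float)     # some input signal
y = np.convolve(x, h)              # each y[n] touches at most M input samples

# Once the filter is "full", y[n] is the average of x[n-M+1 .. n]:
print(y[M - 1])                    # average of x[0..3] = 1.5
```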
  • 1:35 - 1:40
    IIR filters, on the other hand, have an infinite support for their impulse response.
  • 1:40 - 1:43
    So, potentially, an infinite number of samples are involved
  • 1:43 - 1:45
    in the computation of each output sample.
  • 1:45 - 1:49
    However, perhaps surprisingly, in many cases
  • 1:49 - 1:52
    we can still compute each output sample in a finite amount of time.
  • 1:53 - 1:57
    One notable exception is the class of Ideal Filters, which we will study shortly.
  • 1:58 - 2:01
    For instance, the leaky integrator has an infinite support impulse response,
  • 2:01 - 2:06
    but due to its algorithmic nature, we can compute each output sample
  • 2:06 - 2:08
    with only three operations.
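The leaky integrator's recursion, y[n] = λ·y[n-1] + (1-λ)·x[n], can be sketched as follows (a minimal illustration; the value of λ and the constant input are arbitrary choices):

```python
def leaky_integrator(x, lam=0.9):
    """Leaky integrator: y[n] = lam * y[n-1] + (1 - lam) * x[n]."""
    y = []
    y_prev = 0.0                  # filter starts at rest
    for sample in x:
        # Despite the infinite impulse response, each output sample
        # costs only three operations: two multiplications and one addition.
        y_prev = lam * y_prev + (1 - lam) * sample
        y.append(y_prev)
    return y

# A constant input settles towards its own value:
out = leaky_integrator([1.0] * 200)
```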
  • 2:08 - 2:13
    A filter is said to be causal if its impulse response is 0 for n < 0.
  • 2:13 - 2:17
    Now if you remember how the convolution sum is computed
  • 2:17 - 2:21
    -- the convolution involves a time reversal of the impulse response
  • 2:21 - 2:23
    before multiplication with the input --
  • 2:23 - 2:27
    That means that only past values of the input
  • 2:27 - 2:30
    are involved in the computation of the output.
  • 2:30 - 2:34
    So, for instance, if your impulse response is like this and this is 0,
  • 2:34 - 2:37
    when you compute the convolution sum you time-reverse it,
  • 2:37 - 2:43
    and so you will only affect values in the past.
  • 2:44 - 2:49
    Causal filters, because of their dependence only on the past, can work "on line",
  • 2:49 - 2:51
    they can work in real time.
  • 2:51 - 2:55
    A non-causal filter, on the other hand, is a filter for which the impulse response
  • 2:55 - 2:59
    has some nonzero taps for positive values of the index.
  • 2:59 - 3:03
    In other words, non-causal filters will need samples from the future
  • 3:03 - 3:07
    in order to be able to compute their current output value.
  • 3:07 - 3:12
    As such, they cannot, of course, work in real time, but there are applications in which
  • 3:12 - 3:15
    the future is indeed available to a processing system
  • 3:15 - 3:18
    in particular when you do what is called batch processing
  • 3:21 - 3:26
    namely processing for which all the data is available in advance.
  • 3:26 - 3:29
    A typical case in point is image processing
  • 3:29 - 3:33
    in which the whole picture is available for processing before you start.
  • 3:35 - 3:39
    The Moving Average filter is, again, a causal filter.
  • 3:39 - 3:41
    We have developed it in a causal fashion,
  • 3:41 - 3:44
    and indeed, its impulse response is non-zero,
  • 3:44 - 3:48
    only for positive or zero values of the index.
  • 3:48 - 3:52
    We could develop a non-causal moving average
  • 3:52 - 3:56
    if we center the impulse response around zero.
  • 3:56 - 4:01
    And indeed if we do so, we would eliminate the delay that you've seen
  • 4:01 - 4:04
    when computing longer and longer averages,
  • 4:04 - 4:08
    because we would compensate the delay by looking at future samples.
  • 4:08 - 4:11
    We could apply this to a set of data
  • 4:11 - 4:14
    that we've already stored in computer memory for instance.
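A sketch of the two variants on stored data (the signal and the odd filter length are arbitrary choices for illustration):

```python
import numpy as np

M = 5                                  # odd length, so the taps can be centered on zero
h = np.ones(M) / M

x = np.arange(20, dtype=float)         # a ramp makes the delay easy to see

# Causal version: taps on n = 0..M-1, so y[n] uses x[n-M+1 .. n].
causal = np.convolve(x, h)[:len(x)]

# Non-causal version: taps centered on n = 0, so y[n] uses x[n-2 .. n+2]
# (it "looks into the future" by (M-1)/2 samples).
centered = np.convolve(x, h, mode='same')

print(causal[10])      # 8.0  -- lags the ramp by (M-1)/2 = 2 samples
print(centered[10])    # 10.0 -- tracks the ramp with no delay
```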
  • 4:16 - 4:17
    Let's talk now about stability.
  • 4:17 - 4:20
    Stability is important because it guarantees that a system
  • 4:20 - 4:26
    will not behave unexpectedly if the input to the system is well-behaved.
  • 4:27 - 4:31
    In signal processing, a well-behaved input is a bounded input,
  • 4:31 - 4:36
    namely, a signal for which we know the maximum excursion.
  • 4:38 - 4:43
    We want a system to behave well when the input is bounded.
  • 4:43 - 4:47
    And the concept of bounded input bounded output stability
  • 4:47 - 4:50
    or BIBO stability for short, formalizes this
  • 4:50 - 4:57
    by requiring that a system produces a bounded output, when the input is bounded.
  • 4:57 - 5:01
    So, a well-behaved output for a well-behaved input.
  • 5:01 - 5:06
    The fundamental stability theorem for linear filters states that
  • 5:06 - 5:13
    a filter is BIBO stable if and only if its impulse response is absolutely summable.
  • 5:14 - 5:16
    So we're going to prove this.
  • 5:16 - 5:19
    And we're going to prove both the necessary and sufficient condition.
  • 5:20 - 5:22
    So we start with the sufficient condition,
  • 5:22 - 5:26
    and we say that our hypotheses are that our input is bounded,
  • 5:26 - 5:31
    and that the impulse response of the filter is absolutely summable.
  • 5:31 - 5:36
    And what we want to prove is that, in this case, the output will be bounded.
  • 5:37 - 5:40
    Now the proof is very simple.
  • 5:41 - 5:46
    We start by considering the magnitude of the output of the filter,
  • 5:46 - 5:51
    which is just the magnitude of the convolution sum written here as
  • 5:51 - 5:58
    the sum for k that goes from minus infinity to plus infinity of h[k]x[n-k].
  • 5:59 - 6:02
    Notice here that we use the commutative property of the convolution
  • 6:02 - 6:05
    and we actually time-reverse and delay the input
  • 6:05 - 6:07
    rather than the impulse response.
  • 6:07 - 6:11
    Now the magnitude of the sum is bounded from above
  • 6:11 - 6:14
    by the sum of the magnitudes of its terms,
  • 6:14 - 6:19
    and this in turn is bounded by the following expression,
  • 6:19 - 6:25
    where we have replaced the magnitude of the input by its upper bound M,
  • 6:25 - 6:28
    and we are left therefore with M that multiplies
  • 6:28 - 6:34
    the sum from minus infinity to plus infinity of the magnitude of the impulse response,
  • 6:34 - 6:38
    but we know that the impulse response is absolutely summable,
  • 6:38 - 6:45
    and therefore, the output will be bounded by the product of M and this finite sum.
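The bound in the proof can be checked numerically. A quick sketch (the impulse response and the random bounded input below are made-up examples, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

h = 0.5 ** np.arange(20)            # an absolutely summable impulse response
M = 1.0                             # known bound on the input: |x[n]| <= M
x = rng.uniform(-M, M, size=500)    # an arbitrary bounded input

y = np.convolve(x, h)               # the filter's output
bound = M * np.sum(np.abs(h))       # the proof's bound: M times the absolute sum of h

# Every output sample stays within the bound, as the proof guarantees.
```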
  • 6:45 - 6:48
    As for the necessary part, we'll proceed as follows.
  • 6:49 - 6:57
    The hypotheses now are that both input and output are bounded,
  • 6:57 - 7:02
    and we want to prove that in this case, h[n] must be absolutely summable.
  • 7:03 - 7:09
    The proof will proceed by contradiction, and it is a little bit tricky
  • 7:09 - 7:10
    but not difficult at all.
  • 7:11 - 7:16
    So assume by contradiction that the impulse response
  • 7:16 - 7:18
    is not absolutely summable.
  • 7:18 - 7:22
    Therefore the sum of its magnitudes is plus infinity,
  • 7:23 - 7:28
    at which point we can build an artificial input as follows.
  • 7:28 - 7:35
    x[n] will be equal to +1 if h[-n] is positive,
  • 7:35 - 7:40
    and -1 if h[-n] is negative.
  • 7:40 - 7:44
    Well clearly x[n] built as such is a bounded signal
  • 7:44 - 7:47
    because it's between +1 and -1.
  • 7:47 - 7:54
    However if we try to compute the response of the system defined by h[n] at n = 0,
  • 7:54 - 8:01
    we have that the output y[0] is the convolution between our artificial input x[n]
  • 8:01 - 8:03
    and the impulse response, evaluated at n = 0.
  • 8:03 - 8:07
    So we write out the convolution sum explicitly.
  • 8:07 - 8:15
    So it's a sum for k that goes from minus infinity to plus infinity of h[k]x[-k],
  • 8:15 - 8:19
    because remember n=0 here in this convolution
  • 8:19 - 8:21
    and of course, by definition
  • 8:21 - 8:26
    this will be the sum of the magnitudes of h[k],
  • 8:26 - 8:28
    because we built a signal like so
  • 8:28 - 8:34
    and because of our initial assumption this will be plus infinity.
  • 8:34 - 8:38
    So, we have proven that h[n] must be absolutely summable,
  • 8:38 - 8:39
    because if it were not
  • 8:39 - 8:44
    I could find a bounded input that would make the system explode.
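The construction in this proof can be sketched numerically. The impulse-response values below are arbitrary (and finite, just for illustration); the point is that choosing x[-k] = sign(h[k]) makes y[0] equal the absolute sum of h:

```python
import numpy as np

# Any impulse response will do; a short one keeps the sums easy to check.
h = np.array([0.3, -1.2, 0.7, -0.4])

# Adversarial bounded input: x[n] = sign(h[-n]), so that x[-k] = sign(h[k]).
# Then y[0] = sum_k h[k] * x[-k] = sum_k h[k] * sign(h[k]) = sum_k |h[k]|.
x_at_minus_k = np.sign(h)            # the values x[-k], for k = 0..3
y0 = np.sum(h * x_at_minus_k)        # the output at n = 0
```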
  • 8:44 - 8:47
    So, the good news is that FIR filters are always stable,
  • 8:47 - 8:51
    because their impulse response only contains a finite number of non-zero values,
  • 8:51 - 8:55
    and therefore, the sum of their absolute values will always be finite.
  • 8:56 - 8:59
    When it comes to IIR filters, on the other hand,
  • 8:59 - 9:01
    we have to explicitly check for stability.
  • 9:01 - 9:04
    And for the time being the only way we know how to do so,
  • 9:04 - 9:09
    is by going through the sum of the absolute values of the impulse response.
  • 9:09 - 9:13
    And if we do the exercise with our friend, the Leaky Integrator,
  • 9:13 - 9:17
    we get the sum for n that goes from zero to infinity
  • 9:17 - 9:20
    of magnitude of λ to the power of n,
  • 9:20 - 9:24
    multiplied by a leading factor of 1-λ in magnitude.
  • 9:24 - 9:27
    This is just a geometric sum, and we know that it's equal to
  • 9:27 - 9:30
    the limit, for N that goes to infinity,
  • 9:30 - 9:36
    of 1 minus magnitude of λ to the N plus 1, divided by 1 minus magnitude of λ
  • 9:36 - 9:39
    -- all multiplied by the leading factor, which doesn't really bother us --
  • 9:40 - 9:45
    This limit converges only if λ is less than one in magnitude.
  • 9:45 - 9:47
    And therefore, stability of the Leaky Integrator
  • 9:47 - 9:51
    is guaranteed only for λ less than one in magnitude.
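A quick numerical check of this geometric sum (λ = 0.8 is an arbitrary stable choice, not a value from the lecture):

```python
# Absolute sum of the leaky integrator's impulse response h[n] = (1-λ) λ^n, n >= 0:
# |1-λ| * sum_{n=0}^{N} |λ|^n = |1-λ| * (1 - |λ|^(N+1)) / (1 - |λ|),
# which converges as N -> infinity only when |λ| < 1.
lam = 0.8
N = 10_000

partial = abs(1 - lam) * sum(abs(lam) ** n for n in range(N + 1))
closed = abs(1 - lam) / (1 - abs(lam))   # the limiting value for |λ| < 1
```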
  • 9:52 - 9:56
    Later on in this module, we will study indirect methods for assessing filter stability
  • 9:56 - 10:01
    that will allow us to determine whether a filter is stable or not,
  • 10:01 - 10:04
    without going through the impulse response explicitly.