-
Hi, and welcome to Module 5 of Digital Signal Processing.
-
In the previous modules, we concentrated on the concept of signal.
-
We looked at signals, we took them apart, and we put them back together.
-
And now, it's time to start addressing the second part of the story,
-
the processing part in Digital Signal Processing.
-
Now, when we process a signal, we no longer simply analyze it,
-
but we manipulate it, and we transform it into another signal.
-
There are very many ways to skin a cat, and probably even more ways to process a signal.
-
But of all the possible ways, we're interested in the class of processing algorithms
-
that go under the name of linear time-invariant filters.
-
Filters are simple and yet very, very powerful devices
-
that have been around for a very long time, especially in analog electronics.
-
And to think what a filter does, think of the knobs on your stereo
-
with which you boost the bass or dim the treble in the music you listen to.
-
In the digital world, filters have the added advantage
-
that they can be implemented very simply on general-purpose architectures.
-
This is a rather long module since not only do we need to define the filtering paradigm,
-
but then we also need to understand how to implement and design filters
-
that do what we want them to do.
-
And, in spite of the length,
-
this module only scratches the surface of the world of signal processing.
-
We will leave for future classes concepts like adaptive signal processing
-
or non-linear signal processing.
-
But I'm sure that this module will whet your appetite
-
with respect to everything that you can do in the world of digital signal processing.
-
The module is structured like this:
-
The first three sub-modules will concentrate on the concept of filter
-
and will characterize the filter in the time domain.
-
But we'll illustrate the key points using a mock problem
-
where we try to remove noise from an otherwise smooth signal.
-
By now, we know that the time domain is always only half of the picture.
-
And so, we will spend modules 5.4 and 5.5
-
to extend the filtering paradigm to the frequency domain.
-
This will allow us to define a class of filters called ideal filters
-
that really represent the best-behaved filters that we can think of.
-
Unfortunately, ideal filters turn out to be too good to be true,
-
and so we will have to spend modules 5.6 to 5.10
-
to explore the kind of filters that we can implement and design in practice.
-
At first, we will try to mimic ideal filters,
-
but the method will show its limitations very soon.
-
So, we will introduce a new tool called the Z-Transform
-
that will allow us to explore the full range of filters that we can design and implement.
-
Finally, to wrap it all up, we will have a very hands-on module
-
where we will talk about real-time signal processing
-
and we will show you how easy it is to implement real-time guitar effects on your PC.
-
So, let's get started on Module 5
-
and let's make the acquaintance of our new friend, the linear filter.
-
Hi, and welcome to Module 5.1 of Digital Signal Processing
-
in which we will talk about linear filters.
-
We will examine the two key properties of linear filters,
-
which are linearity and time invariance.
-
And then, we will describe the convolution operator,
-
which captures both mathematically and algorithmically
-
the inner workings of a linear filter.
-
In general, when we talk about signal processing, we imagine a situation
-
where we have an input signal x[n] and an output signal y[n],
-
produced by some sort of black box here
-
that manipulates the input into the output.
-
We can write the relationship mathematically, like so
-
where y[n] is equal to some operator H that is applied to the input x[n].
-
Already, when we draw this block diagram,
-
we are making assumptions on the structure of the processing device
-
in the sense that we consider a system with a single input and a single output.
-
We could imagine a system with multiple inputs or multiple outputs.
-
But even with these limitations,
-
the possibilities for what goes inside this block here are pretty much limitless.
-
And unless we impose some structure on the kind of processing
-
that happens inside this block,
-
we will not be able to say anything particularly meaningful about the filtering operation.
-
So, the first requirement that we impose on a filter is linearity.
-
Linearity means that if we have two inputs,
-
and we take a linear combination of said inputs,
-
well, the output is a linear combination of outputs
-
that could have been obtained by filtering each sequence independently.
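This definition is easy to check numerically. Here is a minimal Python sketch; the two-point moving average is just an illustrative filter chosen for this example, not one from the lecture:

```python
import numpy as np

def moving_average(x):
    """An illustrative LTI filter: y[n] = (x[n] + x[n-1]) / 2."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (x[n] + (x[n - 1] if n > 0 else 0.0)) / 2
    return y

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([0.0, 1.0, 0.0, -1.0])
a, b = 2.0, -3.0

lhs = moving_average(a * x1 + b * x2)                  # filter the combination
rhs = a * moving_average(x1) + b * moving_average(x2)  # combine the outputs
linear = np.allclose(lhs, rhs)
print(linear)  # True
```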
-
This is actually a very reasonable requirement; for instance, think of a situation
-
where your processing device is an amplifier
-
and you connect a guitar to your amplifier.
-
Now, if you play one note, and then you play the same note louder,
-
you expect the amplifier to produce just a louder note. [music]
-
Similarly, if you play one note and then another note,
-
and then you play two notes together, you'll expect the amplifier to amplify the sum of two notes
-
as the sum of two independent amplifications. [music]
-
Now, note that this is not necessarily the case in all situations,
-
for instance, in some kinds of rock music, you want to introduce some distortion
-
and so you add a fuzz box that will distort the signal non-linearly
-
to create very interesting effects.
-
But they belong to a completely different category of processing. [music]
-
The second requirement that we impose on the processing device
-
is time invariance: time invariance, in layman's terms,
-
simply means that the system will behave exactly in the same way,
-
independently of when it's switched on.
-
Mathematically, we can say that if y[n] is the output of the system
-
when the input is x[n], well, if we put a delayed version of the input inside the system,
-
x[n-n0], what we get is the same output delayed by n0.
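This property can also be checked numerically. The sketch below reuses an illustrative two-point moving average (an assumption for this example, not a filter from the lecture) and verifies that delaying the input by n0 samples just delays the output by n0 samples:

```python
import numpy as np

def moving_average(x):
    """An illustrative LTI filter: y[n] = (x[n] + x[n-1]) / 2."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (x[n] + (x[n - 1] if n > 0 else 0.0)) / 2
    return y

x = np.array([1.0, 3.0, 2.0, 0.0, 0.0, 0.0])
n0 = 2
x_delayed = np.concatenate((np.zeros(n0), x))[:len(x)]                  # x[n - n0]
y_delayed = np.concatenate((np.zeros(n0), moving_average(x)))[:len(x)]  # y[n - n0]

invariant = np.allclose(moving_average(x_delayed), y_delayed)
print(invariant)  # True
```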
-
And again, we can use a guitar amplifier as an example.
-
If I turn it on today, well, I expect it to amplify the notes
-
exactly in the same way that it amplified them yesterday.
-
But again, some types of guitar effects exploit time variance
-
to introduce a different flavor of the music that's been played.
-
For instance, the wah pedal is a time-varying effect
-
that will change the envelope of a sound
-
in ways that are not time-invariant. [music]
-
So, what are the ingredients that go into a Linear Time Invariant system?
-
Well, linear ingredients: addition, which is a linear operation,
-
scalar multiplication, another linear operation, and delays.
-
Another requirement that is not mandatory but makes a lot of sense
-
if you want to use a Linear Time-Invariant system in real time,
-
is that the system be causal.
-
By that, we mean that the system
-
can only have access to input and output values from the past.
-
In that case, you can write the input/output relationship as follows.
-
The output is a linear function of past values of the input and past values of the output.
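As a concrete sketch of such a causal relationship, here is a hypothetical first-order example (the "leaky integrator" coefficients below are an assumption for illustration, not values from the lecture):

```python
def leaky_integrator(x, lam=0.5):
    """A causal LTI example: y[n] = lam * y[n-1] + (1 - lam) * x[n].
    Each output sample uses only the previous output and the current input."""
    y, prev = [], 0.0
    for sample in x:
        prev = lam * prev + (1 - lam) * sample
        y.append(prev)
    return y

print(leaky_integrator([1.0, 1.0, 1.0, 1.0]))  # [0.5, 0.75, 0.875, 0.9375]
```

Note how the running value `prev` is the only state the filter keeps: the output at step n never looks ahead in the input.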
-
The impulse response is the output of a filter
-
when the input is a δ function.
-
A fundamental result states that the impulse response
-
fully characterizes the behavior of an LTI system.
-
Let's see why that is so.
-
Assume that we have a filter and we can measure its impulse response
-
by inputting a δ function,
-
and it turns out that the impulse response is a decaying exponential sequence,
-
h[n] = α^n u[n], that is, α to the power of n times the unit step.
-
Now, we want to use the same filter to filter an arbitrary sequence, x[n],
-
that in this example is simply a three-point sequence
-
that is equal to 2 for n=0, is equal to 3 for n=1 and is equal to 1 for n=2,
-
and is 0 everywhere else.
-
So, we can always write our sequence
-
as a linear combination of delayed δ functions.
-
So, in particular, for our example,
-
x[n] = 2δ[n] + 3δ[n-1] + δ[n-2].
-
Now, we know the impulse response, so the response to the δ,
-
and by exploiting linearity and time invariance,
-
we can compute the response of the system to the input sequence x[n],
-
just by knowing the impulse response.
-
Indeed, we apply the real filter to the linear combination of δs,
-
and by exploiting linearity first, we can split the operation of the filter
-
over the three components of the signal,
-
and by exploiting time invariance, we just sum together
-
properly scaled and delayed versions of the impulse response.
-
We can look at this graphically, and we see that
-
when we filter the first component, 2δ[n], we get 2h[n].
-
The second component of the signal is 3δ[n-1],
-
And this gives rise to 3h[n-1]
-
which we plot on top of the other response.
-
And finally, the last component is simply δ[n-2],
-
which, filtered, gives h[n-2].
-
So now, we have the three components.
-
We sum them together by linearity
-
and we obtain the response of the system to our arbitrary input.
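The graphical construction above can be reproduced in a few lines of Python: we build the scaled, delayed copies of the impulse response h[n] = α^n u[n] for the input x = [2, 3, 1], sum them, and check against NumPy's convolution (α = 0.5 is an arbitrary choice for the example):

```python
import numpy as np

alpha, N = 0.5, 8
h = alpha ** np.arange(N)          # impulse response alpha^n for n >= 0

def shift(seq, k):
    """Delay a finite sequence by k samples, zero-padding on the left."""
    return np.concatenate((np.zeros(k), seq))[:len(seq)]

# By linearity and time invariance: y[n] = 2 h[n] + 3 h[n-1] + h[n-2]
y = 2 * shift(h, 0) + 3 * shift(h, 1) + 1 * shift(h, 2)

# The same result comes out of the convolution with x = [2, 3, 1]
y_conv = np.convolve([2, 3, 1], h)[:N]
match = np.allclose(y, y_conv)
print(y[:3], match)  # first samples: 2, 2*alpha + 3, 2*alpha^2 + 3*alpha + 1
```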
-
In general, remember that we can always write x[n],
-
a generic discrete time sequence,
-
as the sum for k that goes from minus infinity to plus infinity
-
of a sequence of time delayed δs, scaled by the values of the sequence itself.
-
So, this probably seemed like a futile exercise in Module 3.2,
-
but now we see the usefulness of this representation,
-
because by linearity and time invariance, we can express the output
-
as the sum from k that goes from minus infinity to plus infinity
-
of the values of the sequence, x[k], times h[n-k]: the impulse response time-reversed and delayed by n.
-
This sum here is so important in signal processing that it gets its own name,
-
and it's called the convolution of sequences x[n] and h[n].
-
The convolution which represents the output of a filter,
-
given its impulse response and an arbitrary input sequence x[n],
-
is actually an algorithmic formula to compute the output of the filter.
-
The ingredients are an input sequence x[m] and a second sequence, h[m].
-
And the recipe involves the following steps.
-
First, we time-reverse the impulse response, so we flip it in time.
-
If it goes like this, then it will look like this.
-
And at each step from minus infinity to plus infinity,
-
we center the time-reversed impulse response in the current sample n.
-
So, we shift the time-reversed impulse response by n
-
and then, we compute the inner product
-
between this shifted replica of the impulse response and the input sequence.
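The recipe above translates almost literally into code. Here is a sketch for finite sequences (assumed zero outside their support, both starting at index 0), checked against NumPy's built-in convolution:

```python
import numpy as np

def convolve_by_recipe(x, h):
    """(x * h)[n] computed as the inner product between x[m] and the
    time-reversed, shifted impulse response h[n - m]."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for m in range(len(x)):
            if 0 <= n - m < len(h):       # h[n - m] is zero elsewhere
                y[n] += x[m] * h[n - m]
    return y

x = np.array([2.0, 3.0, 1.0])
h = 0.5 ** np.arange(5)                    # truncated decaying exponential
agrees = np.allclose(convolve_by_recipe(x, h), np.convolve(x, h))
print(agrees)  # True
```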
-
Let's look at this graphically, using the same examples that we used before.
-
So, we have an impulse response, which is a decaying exponential sequence,
-
and we have a three-point simple input sequence.
-
We plot these three actors on a chart like this:
-
we have the input sequence on top,
-
we have the time-reversed and delayed impulse response on the second panel,
-
and here, we have the inner product between these two sequences.
-
So, at each step, as I said, we center the time-reversed impulse response
-
on the current sample, so we start at -4, and we compute the inner product.
-
Since the input signal in our example is nonzero only between 0 and 2,
-
up to n = 0 fundamentally nothing happens and the inner product is always 0.
-
And we can see that it was 0 for values before -4,
-
and it will continue to be 0 until we hit 0.
-
At which point, we start to have an overlap between these two sequences,
-
and the inner product will not be 0.
-
In particular, on the first step,
-
it will be equal to this sample, which is equal to 1, times this sample, which is equal to 2.
-
So, the sum will be equal to 2.
-
We advance another step,
-
and then we see that the overlap involves two points now, here and here,
-
and we compute their products and their sum,
-
and we get the second point in our output sequence.
-
Third step, we'll finally involve all three points from the input sequence,
-
we compute the product with the impulse response, and the sum,
-
and we get our third output sample.
-
And the process continues like so.
-
Now, since the impulse response is an infinite sequence,
-
a one-sided infinite sequence,
-
from now on, that inner product will always be non-zero
-
and we will have an output that continues to be non-zero forever.
-
Finally, a few words on the convolution.
-
The convolution is, of course, linear and time-invariant
-
because it describes a linear and time-invariant operation.
-
It's also commutative, which means that if you have two filters in cascade,
-
you can safely swap their order, and the result will not change.
-
For absolutely- and square summable sequences,
-
the convolution is also associative,
-
which means that if you have a cascaded system,
-
you can lump their effect into a single filter
-
whose impulse response is the convolution of the individual impulse responses.
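Both properties are easy to verify numerically for finite sequences; the random signals below are just arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)    # an arbitrary input
h1 = rng.standard_normal(5)    # two arbitrary impulse responses
h2 = rng.standard_normal(7)

# Commutativity: x * h1 == h1 * x
commutes = np.allclose(np.convolve(x, h1), np.convolve(h1, x))

# Associativity: (x * h1) * h2 == x * (h1 * h2),
# i.e. a cascade of filters collapses into one filter h1 * h2
associates = np.allclose(np.convolve(np.convolve(x, h1), h2),
                         np.convolve(x, np.convolve(h1, h2)))
print(commutes, associates)  # True True
```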