Hi, and welcome to module 9.5 of digital
signal processing.
In this module, we will touch briefly on
some topics in receiver design.
A lot of things, unfortunately, happen to
the signal while it's traveling through
the channel.
The signal picks up noise, we have seen
that already.
It also gets distorted because the
channel will act as some sort of filter
that is not necessarily all pass and
linear phase.
Interference happens too; parts of the channel that we thought were usable might actually not be. So the receiver really has to deal with a copy of the transmitted signal that is very, very far from the idealized version we have seen so far.
The way receivers, especially digital receivers, can cope with the distortions and noise introduced by the channel is by implementing adaptive filtering techniques.
Now we will not have the time to go into
very many details about adaptive signal
processing, again these are topics that
you will be able to study in more
advanced signal processing classes.
But I think it is important to give you
an overview of things that have to happen
inside a receiver, inside your ADSL
receiver for instance.
So you can enjoy these high data rates that are available today.
And the first technique that we will look
at is adaptive equalization and then we
will look at some very simple timing recovery that is used in practice in receivers.
Let's begin with a blast from the past.
[NOISE] Those of you that are a little
bit older will certainly have recognized
this sound as the obligatory soundtrack
every time you used to connect to the
internet.
And indeed this is the sound made by a V.34 modem, which was the standard dial-up connection device from the 90s until the early 2000s.
Now if you have ever used a modem, you've
heard the sound and you probably wondered
what was going on, so we're going to
analyze what we just heard from the
graphical point of view.
If we look at the block diagram for the
receiver once again, what we're going to
do is we're going to plot the baseband complex samples as points on the complex plane. So we're going to take b r of n as the horizontal coordinate and b i of n as the vertical coordinate.
And before we do so let's just look for a
second at what happens inside the
receiver when the signal at the input is
a simple sinusoid, like cosine of omega c
plus omega zero n.
We are demodulating this very simple
signal with the two carriers, the cosine
of omega c n and sine of omega c n.
And then we're filtering the result with
a lowpass filter.
So if we work out this formula with
standard trigonometric identities, we can
always express for instance the product
of two cosine functions as the sum.
Of the cosine of the sum of the angles
plus the cosine of the difference of the
angles.
And same for the product of cosine and
sine.
So if we do that, we get four terms, two
of which have a frequency that will fall
outside of the pass band of the filter h.
So when we apply the filter to these
terms, we're left only with cosine of
omega 0 n plus j sine of omega 0 n.
Which is of course e to the j omega 0 n.
So when the input to the receiver is a
cosine, the points in the complex
baseband sequence will be points around the circle, and the difference between two successive points is the angle omega 0.
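To see this concretely, here is a minimal Python sketch of the tone demodulation described above. This is not the course's code: the carrier and pilot frequencies, the moving-average lowpass, and the sign convention on the sine branch are all assumptions made for illustration.

```python
import numpy as np

# Demodulate a pure tone at omega_c + omega_0 with the two carriers and a
# crude lowpass filter, and check that successive baseband samples differ
# in angle by omega_0. All numerical values are illustrative assumptions.
wc, w0 = 2.0, 0.05                    # carrier and pilot frequencies (rad/sample)
n = np.arange(400)
s = np.cos((wc + w0) * n)             # input sinusoid

# Demodulate; the minus sign on the sine branch is the convention that
# yields exp(+j * w0 * n) after lowpass filtering
br = s * np.cos(wc * n)
bi = -s * np.sin(wc * n)

# Moving-average lowpass, long enough to suppress the 2*omega_c terms
h = np.ones(50) / 50.0
b = np.convolve(br, h, mode='same') + 1j * np.convolve(bi, h, mode='same')

# Away from the borders, b[n] is approximately 0.5 * exp(j * w0 * n)
mid = b[100:300]
dphi = np.angle(mid[1:] / mid[:-1])   # angle between successive points
print(round(float(dphi.mean()), 3))   # close to omega_0 = 0.05
```

The baseband points travel around a circle, advancing by omega 0 per sample, exactly as in the plot described in the lecture.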
The reason why we might be called to demodulate a simple sinusoid is that the transmitter will send what are called pilot tones, simple sinusoids that are used to probe the line and gauge the response of the channel at particular frequencies.
So with this in mind, let's look at the slow-motion analysis of the baseband signal samples, when the input is the audio file we just heard before.
So let's start with a part that goes like this [SOUND].
This signal contains several sinusoids,
that we can see here in the plot.
And the sinusoids also contain abrupt phase reversals, meaning that, at some given points in time, the phase of the sinusoid is augmented by pi. You can see these as the small explosions in the circular pattern in the plot.
These phase reversals are used by the transmitter and the receiver as time markers, to estimate the propagation delay of the signal from source to destination.
The next part goes like this.
[NOISE].
And this is a training sequence.
The transmitter sends a sequence of known symbols, namely, the receiver knows the symbols that are transmitted. And so the receiver can use this knowledge to train an equalizer to undo the effects of the channel.
The last part is the data transmission
proper, the noisy part if you want, of
the audio file.
And the interesting thing is that the
transmitter and receiver perform a
handshake procedure, using a very low bit
rate QAM transmission using only four
points, therefore two bits per symbol.
To exchange the parameters of the real
data transmission that is going to
follow.
The speed, the constellation size, and so
on.
Using the four point QAM constellation in
the beginning ensures that, even in very
noisy conditions, transmitter and
receiver can exchange their vital
information.
So even from this simple qualitative
description of what happens in a real
communication scenario, we can see that
the task that the receiver is saddled
with is very complicated.
So it's a dirty job, but a receiver has
to do it.
And a receiver has to cope with four potential sources of problems.
Interference.
The propagation delay, so the delay
introduced by the channel.
The linear distortion introduced by the
channel.
And drifts in internal clocks between the
digital system inside the transmitter and
the digital system inside the receiver.
So when it comes to interference the
handshake procedure, and the line probing
pilot tones are used in clever ways to
circumvent the major sources of
interference.
We will see some example later on when we
discuss ADSL.
The propagation delay is tackled by a delay estimation procedure that we will look at in just a second. The distortion introduced by the channel is compensated using adaptive equalization techniques, and we will see some examples of that as well.
And clock drifts are tackled by timing
recovery techniques that in and of
themselves are quite sophisticated and
therefore we leave them to more advanced
classes.
Graphically, if we sum up the chain of
events that occur between the
transmissions of the original digital
signal and the beginning of the
demodulation of the received signal.
We have a digital to analog converter and
a transmitter, this is transmitter part
of the chain that operates with a given
sample period Ts.
This generates an analog signal which is
sent over a channel.
We can represent the channel for the time
being as a linear filter in the
continuous time domain.
With frequency response d of j omega.
At the input of the receiver, we have a
continuous time signal, s hat of t.
Which is a distorted and delayed version
of the original analog signal.
We will neglect noise for the time being.
This signal is sampled by an A-to-D converter that operates at a period T prime s. And we obtain the sequence of samples that will be the input to the demodulator.
So this is the receiver part of the
chain.
We have to take into account the
distortion introduced by the channel, and
we have to take into account the
potentially time varying discrepancies in
the clocks between the transmitter and
the receiver.
These two systems are geographically remote and there is no guarantee that the two internal clocks that are used in the A-to-D and D-to-A converters are synchronized or run exactly at the same frequency.
Let's start with the problem of delay compensation. To simplify the analysis, we'll assume that the clocks at transmitter and receiver are synchronized and run at exactly the same rate.
So T prime of s is equal to T s, and the
channel acts as a simple delay.
So the received signal is simply a
delayed version of the transmitted
signal.
Which implies that the frequency response
of the channel is simply e to the minus j
omega d.
So, the channel introduces a delay of d
seconds.
We can express this in samples in the
following way.
We write d as the product of the sampling
period, times b plus tau.
Where b is an integer.
And tau is strictly less than one-half in
magnitude.
So b is called the bulk delay because it
gives us an integer number of samples of
delay at the receiver and tau is the
fractional delay.
So, the fraction of a sample introduced by the continuous-time delay d.
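As a quick numerical sketch of this decomposition, with made-up values for the sampling period and the delay:

```python
# Decompose the continuous-time delay d into d = Ts * (b + tau), with b an
# integer (the bulk delay) and |tau| <= 1/2 (the fractional delay).
# Ts and d are illustrative, assumed values.
Ts = 1e-3              # sampling period: 1 ms (assumed)
d = 7.35e-3            # propagation delay: 7.35 ms (assumed)

x = d / Ts             # delay expressed in samples: 7.35
b = round(x)           # bulk delay: nearest integer number of samples
tau = x - b            # fractional delay, guaranteed |tau| <= 1/2

print(b, round(tau, 2))   # 7 0.35
```

Rounding to the nearest integer is what guarantees that the leftover fractional part tau is at most half a sample in magnitude.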
So, how do we compensate for this delay?
Well, the bulk delay is rather easy to
tackle.
Imagine the transmitter begins
transmission by sending just an impulse
over the channel.
So, the discrete-time signal is this one, it's just a delta at n equal to 0. It gets sent to the D-to-A converter.
And the converter will output a
continuous time signal that looks like an
interpolation function like the sinc.
And like all interpolation functions, it will have a maximum peak at zero that corresponds to the sample at n equal to 0.
This signal gets transmitted over the
channel.
And it gets to the receiver after a delay
d that we can estimate for instance by
looking at the displacement of the peak
of the interpolation function.
The receiver converts this into a
discrete time sequence.
Now in the figure here it looks as if the sampling instants at the transmitter and receiver are perfectly aligned.
Now this is not necessarily the case
because the starting time for the
interpolator at the transmitter and the
sampler at the receiver are not
necessarily synchronous.
But any difference in starting time can
be integrated into the propagation delay
as long as the sampling periods are the
same.
So with this, all we need to do at the
receiver is to look for the maximum value
in the sequence of samples.
Because of the shape of the interpolating
function, we know that the real maximum
will be at most half a sample in either
direction of the location, of the maximum
sample value.
So, at the receiver, to offset the bulk delay, we will just set the nominal time n equal to 0 to coincide with the location of the maximum value of the sample sequence.
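A small sketch of this idea, with an assumed delay value: sending a sinc-shaped pulse and locating its maximum sample recovers the bulk delay.

```python
import numpy as np

# The received signal is the interpolation function (a sinc) delayed by
# b + tau = 7.3 samples; the delay and the signal length are assumptions.
n = np.arange(64)
r = np.sinc(n - 7.3)               # np.sinc(x) = sin(pi*x) / (pi*x)

b_hat = int(np.argmax(np.abs(r)))  # index of the maximum sample value
print(b_hat)                       # 7
```

The estimate lands within half a sample of the true peak location, which is exactly the property the lecture relies on.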
Now of course we need to compensate for
the fractional delay, so we need to
estimate tau.
And to do that we'll use a different
technique.
Let me add in passing that, in real communication devices, of course, we're not using impulses to estimate the bulk delay.
Because impulses are full-band signals
and so they would be filtered out by the
passband characteristic of the channel.
The trick is to embed discontinuities in pilot tones and to recognize these discontinuities at the receiver.
As we have seen in the animation at the
beginning of this module, we use phase
reversals, which are abrupt
discontinuities in sinusoids, to provide
a recognizable instant in time for the
receiver to latch on.
Okay, so what about the fractional delay?
Well, for the fractional delay, we use a sinusoid instead of a delta, so we build a baseband signal which is simply a complex exponential at a known frequency, omega 0.
This will be converted to a real signal
before being sent to the D-to-A
converter, and so what we transmit
actually is cosine of omega c, the
carrier frequency, plus the pilot
frequency omega 0, times n.
The receiver will receive a delayed version of this, which contains the delay, now in samples and fractions of a sample, b plus tau.
After we demodulate this cosine, you remember, we get a complex exponential, and we can also compensate already for the bulk delay, which we know. So, for an integer number of samples b, we obtain a baseband signal b of n which is e to the j omega 0 times (n minus tau).
Since we know the frequency omega 0, we can just multiply this quantity by e to the minus j omega 0 n and obtain e to the minus j omega 0 tau, which is a constant, and from which we can recover tau, given that we know the frequency omega 0.
And so now we have an estimate for both
the bulk delay and the fractional delay.
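Here is a minimal sketch of this fractional-delay estimate from a pilot tone; the values of omega 0 and tau are assumptions for illustration.

```python
import numpy as np

# After bulk compensation the baseband pilot is b[n] = exp(j*w0*(n - tau));
# removing the known rotation exp(j*w0*n) leaves the constant
# exp(-j*w0*tau), whose angle reveals tau.
w0 = 0.3                               # assumed pilot frequency (rad/sample)
tau_true = 0.2                         # assumed fractional delay
n = np.arange(1000)
b = np.exp(1j * w0 * (n - tau_true))   # demodulated, bulk-compensated pilot

c = (b * np.exp(-1j * w0 * n)).mean()  # the constant exp(-j*w0*tau)
tau_hat = -np.angle(c) / w0            # invert the known rotation
print(round(float(tau_hat), 3))        # 0.2
```

In practice the pilot would also carry noise, which is why averaging over many samples before taking the angle is the natural choice.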
Now we have to bring back the signal to
the original timing.
The bulk delay is really no problem.
It's just an integer number of samples.
What creates a problem is the fractional
delay because that will shift the peaks
with respect to the sampling intervals.
So if we want to compensate for the fractional delay, we need to compute subsample values, and in theory, to do that, we should use a sinc fractional delay, namely a filter with impulse response sinc of n plus tau.
In practice however, we will use a local
interpolation and this is a very
practical application of the Lagrange
interpolation technique that we saw in
module 6.2.
So graphically the situation is like so,
we have a stream of samples coming in.
And for each sample, we want to compute the subsample value at a distance of tau from the nearest sampling instant.
And we want to only use a local
neighborhood of samples to estimate this.
Now, you remember from module 6.2.
The Lagrange approximation works by building a linear combination of Lagrange polynomials weighted by the samples of the function.
So, as per usual, we choose the sampling
interval equal to 1, so that we lighten
the notation.
We have a continuous time function x of
t, and we want to compute x of n plus
tau, with tau less than one half in
magnitude.
So we have samples of this function at
integers, n, and the local Lagrange
approximation around n, is given by this
linear combination of Lagrange
polynomials.
Weighted by the samples of the functions
around the approximation point.
So we use the notation x L of n and t: n is the center point and t is the offset from the center point at which we want to compute the approximation.
And the Lagrange polynomials are given by
this formula here, which is the same as
in module 6.2.
So the delay-compensated input signal will be set equal to the Lagrange approximation at tau.
So let's look at an example.
Assume that we want a second order
approximation.
So we pick N equal to 1 and we will have
three Lagrange polynomials.
And so, we will need to use three samples of the sequence to compute the interpolation. These three polynomials will be centered in n minus 1, in n, and in n plus 1, and scaled by the values of the samples at these locations.
And finally, we will sum the polynomials together and compute their value in n plus tau.
So, we start with the first one, which is
centered in n minus 1.
And, like all interpolation polynomials,
its value is 1 in n minus 1, and 0, at
other integer values of the argument.
The second polynomial will be centered in
n, and the third polynomial will be
centered in n plus 1.
When we sum them together, we obtain a
second order curve that goes through the
points, that interpolates the three
points, and then we can compute the
approximation as the value of this curve
in n plus tau.
Now, the nice thing about this approach is that, if we take the Lagrange approximation around n, we can define a set of coefficients, d tau of k, which are the values of each Lagrange polynomial in tau. So d tau of k are 2N plus 1 values that form the coefficients of an FIR filter.
And we can compute the value of the
Lagrange approximation simply as the
convolution of the incoming sequence with
this interpolation filter.
So for example, if these are the three
Lagrange polynomials for n equal to 1, we
can compute these polynomials for t equal
to tau, where tau is the fractional delay
that we estimated before.
And we will obtain three coefficients,
like here, for instance, is an example
for tau equal to 0.2.
Three coefficients that give us an FIR
filter, and then we can just simply
filter the samples coming into the
receiver with this filter, to compensate
for the fractional delay.
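A minimal sketch of this interpolation filter, implementing the Lagrange coefficient formula directly; the function name and the test signal are my own choices, and the tap values for tau equal to 0.2 match the three coefficients of the example.

```python
import numpy as np

def lagrange_fir(N, tau):
    """FIR taps d_tau(k) = L_k(tau): the Lagrange polynomial centered at
    each integer k in [-N, N], evaluated at the fractional delay tau."""
    k = np.arange(-N, N + 1)
    d = np.ones(2 * N + 1)
    for i, ki in enumerate(k):
        for kj in k:
            if kj != ki:
                d[i] *= (tau - kj) / (ki - kj)
    return d

# For N = 1 and tau = 0.2, the three taps of the example in the lecture:
d = lagrange_fir(1, 0.2)
print(np.round(d, 2))                   # [-0.08  0.96  0.12]

# Compensating the delay is then a plain FIR filtering; here a sinusoid
# delayed by tau = 0.2 is brought back (approximately) to sin(0.2 * n).
n = np.arange(50)
x = np.sin(0.2 * (n - 0.2))             # signal delayed by tau = 0.2
y = np.convolve(x, d[::-1], mode='same')  # y[n] ~ sum_k d(k) * x[n + k]
```

Note the tap reversal in the convolution: the Lagrange approximation sums d tau of k times x of n plus k, which as a convolution means filtering with the time-reversed coefficients.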
So again, the algorithm is: estimate the fractional delay (the bulk delay, again, is no problem), compute the 2N plus 1 Lagrange coefficients, and filter the signal with the resulting FIR.
The added advantage of this strategy is
that if the delay changes over time for
any reason, all we need to do is to keep
the estimation running and update the FIR
coefficients as the estimation changes
over time.
Okay, now that we know how to compensate for the propagation delay introduced by the channel, let's go back to a channel with an arbitrary frequency response D of j omega.
And the transmission chain goes from the passband signal s of n, in discrete time, into a D-to-A converter; the analog signal s of t gets filtered by the channel, which gives us hat s of t, and this is sampled at the receiver to give us a received passband signal, hat s of n.
But now we have seen in the previous
module that this block diagram can be
converted into an all digital scheme
where our band pass signal s of n gets
filtered by the discrete time equivalent
of the channel.
And, gives us a filtered version of the
bandpass signal, as would appear inside
the receiver.
So the problem now, is that we would like
to undo the effects of the channel, on
the transmitted signal.
And the classic way to do that, is to
filter the received signal hat s of n, by
a filter E, that compensates for the
distortion or the filtering introduced by
the channel.
So the target is that the output of the
filtering operation gives us a signal hat
s e of n, which is equal to the
transmitted signal.
How do we do that?
In theory, it would be enough to pick a
transfer function for the filter E, which
is just the reciprocal of the equivalent
transfer function of the channel.
But the problem is that we don't know the
transfer function of the channel in
advance because each time we transmit
data over the channel, this transfer
function may change.
And also, even while we're transmitting data, the transfer function might change, because the channel is a physical system that might be subject to drifts and modifications.
So what do we do?
We need to use adaptive equalization.
So the filter that compensates for the
distortion introduced by the channel is
called an equalizer.
And what we want to do is to change the filter in time, so change the filter coefficients in a DSP realization, as a function of the error that we obtain when we compare the output of the filter with the signal that we would like to obtain.
In our case the signal that we would like
to obtain is the transmitted signal.
And so we take the received signal, we
filter it with the equalizer.
We look at the result.
We take the difference, with respect to
the original signal, and we use the
error, which should be zero in the ideal
case, to drive the adaptation of the
equalizer.
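To make the error-driven adaptation concrete, here is a minimal sketch using the classic LMS update rule. The lecture does not specify the adaptation algorithm, and the channel response, equalizer length, and step size below are all assumed values.

```python
import numpy as np

# LMS adaptive equalizer trained on a known symbol sequence.
# Channel, equalizer length, and step size are illustrative assumptions.
rng = np.random.default_rng(0)
channel = np.array([1.0, 0.4, -0.2])      # assumed channel impulse response
L = 9                                     # equalizer length (taps)
mu = 0.01                                 # LMS step size

s = rng.choice([-1.0, 1.0], size=20000)   # known training symbols
r = np.convolve(s, channel)[:len(s)]      # distorted received signal

w = np.zeros(L)                           # equalizer coefficients
for k in range(L - 1, len(s)):
    x = r[k - L + 1:k + 1][::-1]          # most recent L received samples
    e = s[k] - w @ x                      # error vs. the known symbol
    w += mu * e * x                       # LMS coefficient update

# After training, hard decisions on the equalized signal match the symbols
errors = 0
for k in range(len(s) - 2000, len(s)):
    x = r[k - L + 1:k + 1][::-1]
    errors += (np.sign(w @ x) != s[k])
print(errors)                             # typically 0
```

The update rule nudges each coefficient in the direction that reduces the instantaneous squared error, which is exactly the "use the error to drive the adaptation" idea described above.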
But wait, how do we get the exact
transmitted signal at the receiver?
Well, we use two tricks.
The first one is bootstrapping.
The transmitter will send a prearranged
sequence of symbols to the receiver.
So let's call the sequence of symbols a t
of n.
This gets modulated and generates a pass
band signal s of n.
Now at the receiver the sequence a t of n
is known.
And the receiver has an exact copy of the modulator of the transmitter inside of itself. So the receiver can generate locally an exact copy of the passband signal s of n.
And so, for the bootstrapping part of the
adaptation, we actually have an exact
copy of the transmitted pass band signal
that we can use to drive the adaptation
of the coefficients.
The training sequence is just long enough
to bring the equalizer to a workable
state.
For the handshake procedure that we saw
in the video before, for instance.
This would correspond to the moment where the receiver starts demodulating the four-point QAM.
At that moment, the receiver will switch
strategy and implement a data driven
adaptation.
The thing works like this.
The received signal gets equalized, gets
demodulated and then the slicer will
recover the sequence of transmitted
symbols.
Since the receiver has a copy of the transmitter inside of itself, it can use the sequence of transmitted symbols to build a local copy of the transmitted signal.
Now of course, errors might happen in the
slicing process, and so this local copy
is not completely error-free.
But the assumption is that the equalizer is already doing a good enough job to keep the number of errors in this sequence sufficiently low.
So that the difference, with respect to
the received signal, is enough to refine
the adaptation of the equalizer, and
especially to track the time varying
conditions of the channel.
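As an illustration of the slicer's role in this decision-directed loop, here is a minimal sketch for a four-point constellation; the exact constellation points are an assumption.

```python
import numpy as np

# Nearest-point slicer for a four-point QAM constellation; the points
# (+-1 +-j)/sqrt(2) are an assumed unit-energy normalization.
points = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def slicer(b):
    """Return the constellation point nearest to the received sample b."""
    return points[np.argmin(np.abs(points - b))]

# A moderately noisy sample still slices back to the transmitted symbol,
# which is what lets the receiver rebuild a local copy of the signal.
sent = points[0]
received = sent + (0.15 - 0.1j)      # assumed residual noise
print(slicer(received) == sent)      # True
```

As long as the noise stays well inside the decision regions, the sliced symbols are error-free often enough to keep driving the adaptation, as described above.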
What we have seen, is just a qualitative
overview of what happens inside of a
receiver.
And there are still so many questions that we would have to answer to be thorough.
For instance, how do we carry out the
adaptation of the coefficients in the
equalizer?
How do we compensate for different clock
rates in geographically diverse receivers
and transmitters?
How do we recover from the interference
from other transmission devices, and how
do we improve the resilience to noise?
The answers to all those questions
require a much deeper understanding of
adaptive signal processing, and hopefully
that'll be the topic of your next signal
processing class.