9.3 - Controlling the power

  • 0:01 - 0:04
    Hi and welcome to Module 9.3 of Digital
    Signal Processing.
  • 0:04 - 0:08
    We are still talking about Digital
    Communication Systems.
  • 0:08 - 0:11
    In the previous module we addressed
    bandwidth constraint.
  • 0:11 - 0:14
    In this module we will tackle the power
    constraint. First we will introduce the
  • 0:14 - 0:18
    concept of noise and probability of error
    in a communication system.
  • 0:18 - 0:22
    We will look at signaling alphabets
    and their related power.
  • 0:22 - 0:25
    And finally, we'll introduce QAM
    signaling.
  • 0:25 - 0:29
    So we have seen that a transmitter sends
    a sequence of symbols a of n,
  • 0:29 - 0:32
    created by the mapper.
    Now we take the receiver into account.
  • 0:32 - 0:36
    We don't yet know how, but it's safe to
    assume that the receiver in the end will
  • 0:36 - 0:41
    obtain an estimate, hat a of n,
    of the original transmitted symbol
  • 0:41 - 0:43
    sequence.
    It's an estimate because even if there
  • 0:43 - 0:46
    is no distortion introduced by the
    channel.
  • 0:46 - 0:50
    Even if nothing bad happens,
    there will always be a certain amount of
  • 0:50 - 0:53
    noise, that will corrupt the original
    sequence.
  • 0:53 - 0:56
    When noise is very large, our estimate
    for the transmitted symbol will be off,
  • 0:56 - 1:00
    and we will incur a decoding error.
    Now, this probability of error will
  • 1:00 - 1:04
    depend on the power of the noise, with
    respect to the power of the signal.
  • 1:04 - 1:08
    And will also depend on the decoding
    strategies that we've put in place, how
  • 1:08 - 1:12
    smart we are in circumventing the effects
    of the noise.
  • 1:12 - 1:16
    One way we can maximize the probability of
    correctly guessing the transmitted symbol
  • 1:16 - 1:20
    is by using suitable alphabets.
    And so we will see in more detail what
  • 1:20 - 1:24
    that means.
    Remember the scheme for the transmitter.
  • 1:24 - 1:27
    We have a bitstream coming in.
    And then we have the scrambler.
  • 1:27 - 1:35
    And then the mapper.
    And here we have a sequence of symbols a
  • 1:35 - 1:37
    of n.
    These symbols will have to be sent over
  • 1:37 - 1:42
    the channel.
    And to do so, we upsample.
  • 1:42 - 1:48
    And we interpolate, and then we transmit.
    Now, how do we go from bitstreams to
  • 1:48 - 1:53
    samples in more detail?
    In other words, how does the mapper work?
  • 1:53 - 1:57
    The mapper will split the incoming
    bitstream into chunks and will assign a
  • 1:57 - 2:02
    symbol, a of n, from a finite alphabet to
    each chunk.
  • 2:02 - 2:06
    The alphabet, we will decide later what
    it is composed of.
  • 2:06 - 2:10
    To undo the mapping operation and recover
    the bitstream, the receiver will perform
  • 2:10 - 2:14
    a slicing operation.
    So the receiver will receive a value, hat a of n,
  • 2:14 - 2:22
    where hat indicates the fact that noise
    has leaked into the value of the signal.
  • 2:22 - 2:26
    And the receiver will decide which symbol
    from the alphabet, which is known to the
  • 2:26 - 2:29
    receiver as well, is closest to the
    received symbol.
  • 2:29 - 2:34
    And from there, it will be extremely easy
    to piece back the original bitstream.
  • 2:34 - 2:37
    As an example, let's look at simple
    two-level signaling.
  • 2:37 - 2:40
    This generates signals of the kind we
    have seen in the example so far,
  • 2:40 - 2:45
    alternating between two levels.
    The way the mapper works is by splitting
  • 2:45 - 2:52
    the incoming bitstream into single bits.
    And the output symbol sequence uses an
  • 2:52 - 2:57
    alphabet composed of two symbols, g and
    minus g, and associates g to a bit of
  • 2:57 - 3:05
    value 1 and minus g to a bit of value 0.
    And at the receiver, the slicer
  • 3:05 - 3:10
    looks at the sign of the incoming symbol
    sequence which has been corrupted by
  • 3:10 - 3:13
    noise.
    And decides that the nth bit will be 1 if
  • 3:13 - 3:19
    the sign of the nth symbol is positive,
    and 0 otherwise.
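
    As a quick illustration, here is a minimal sketch of this two-level
    mapper and sign slicer in Python (assuming numpy; the variable names
    and the noise level are placeholders, not from the lecture):

        import numpy as np

        rng = np.random.default_rng(0)
        G, sigma0 = 1.0, 0.5                         # amplitude and noise std dev (arbitrary values)

        bits = rng.integers(0, 2, size=10)           # incoming bitstream
        a = G * (2 * bits - 1)                       # mapper: 1 -> +G, 0 -> -G
        a_hat = a + rng.normal(0, sigma0, a.shape)   # received symbols, corrupted by noise
        bits_hat = (a_hat > 0).astype(int)           # slicer: the sign decides the bit
        print(bits, bits_hat, np.sum(bits != bits_hat))
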
  • 3:19 - 3:22
    Let's look at an example; let's assume G
    equal to 1.
  • 3:22 - 3:26
    So the two-level signal will alternate
    between plus 1 and minus 1.
  • 3:26 - 3:31
    And suppose we have an input bit sequence
    that gives rise to this signal here after
  • 3:31 - 3:36
    transmission and after decoding at the
    receiver.
  • 3:36 - 3:40
    The resulting symbol sequence will look
    like this, where each symbol has been
  • 3:40 - 3:45
    corrupted by a varying amount of noise.
    If we now slice this sequence by
  • 3:45 - 3:50
    thresholding, as shown before,
    we recover a sequence like this,
  • 3:50 - 3:55
    where we have indicated in red the errors
    incurred by the slicer because of the
  • 3:55 - 3:58
    noise.
    So if you want to analyze in more detail
  • 3:58 - 4:02
    what the probability of error is, we have
    to make some hypotheses on the signals
  • 4:02 - 4:07
    involved in this toy experiment.
    Assume that each received symbol can be
  • 4:07 - 4:11
    modeled as the original symbol plus a
    noise sample.
  • 4:11 - 4:15
    Assume also that the bits in the
    bitstream are equiprobable.
  • 4:15 - 4:19
    So zero and one appear with probability
    50% each.
  • 4:19 - 4:22
    Assume that the noise and the signal are
    independent.
  • 4:22 - 4:26
    And assume that the noise is additive
    white Gaussian noise with zero mean and
  • 4:26 - 4:30
    known variance sigma 0 squared.
    With these hypotheses, the probability of
  • 4:30 - 4:34
    error can be written out as follows.
    First of all, we split the probability of
  • 4:34 - 4:39
    error into two conditional probabilities,
    conditioned on whether the nth bit is
  • 4:39 - 4:42
    equal to 1, or the nth bit is equal to
    zero.
  • 4:42 - 4:45
    In the first case, when the nth bit is
    equal to 1.
  • 4:45 - 4:48
    Remember, the produced symbol will be
    equal to G, so the probability of error
  • 4:48 - 4:54
    is equal to the probability for the noise
    sample to be less than minus G.
  • 4:54 - 4:58
    Because only in this case the sum of the
    sample plus the noise will be negative.
  • 4:58 - 5:03
    Similarly, when the nth bit is equal to
    0, we have a negative sample.
  • 5:03 - 5:08
    And the only way for that to change sign
    is if the noise sample is greater than G.
  • 5:08 - 5:14
    Since the probability of each occurrence
    is one half, and because of the symmetry
  • 5:14 - 5:19
    of the Gaussian distribution, this is
    equal to the probability for the
  • 5:19 - 5:23
    noise sample to be larger than G.
    And we can compute this as the integral
  • 5:23 - 5:27
    from G to infinity of the probability
    density function for the Gaussian
  • 5:27 - 5:31
    distribution with the known variance
    here.
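
    Writing this out (a sketch; theta[n] denotes the noise sample, b[n]
    the transmitted bit, and the Q/erfc notation follows the errata note
    at the end):

        \[
        \begin{aligned}
        P_{err} &= \tfrac{1}{2}\,P\bigl(\theta[n] < -G \mid b[n]=1\bigr)
                 + \tfrac{1}{2}\,P\bigl(\theta[n] > G \mid b[n]=0\bigr)
                 = P\bigl(\theta[n] > G\bigr) \\
                &= \int_G^{\infty} \frac{1}{\sqrt{2\pi\sigma_0^2}}\,
                   e^{-\tau^2/(2\sigma_0^2)}\, d\tau
                 = Q\!\left(\frac{G}{\sigma_0}\right)
                 = \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{G}{\sqrt{2}\,\sigma_0}\right)
        \end{aligned}
        \]
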
  • 5:31 - 5:35
    This function has a standard name:
    it's called the complementary error
    function, erfc (see the errata note at the end).
  • 5:35 - 5:38
    And since this integral cannot be
    computed in closed form, this function is
  • 5:38 - 5:42
    available in most numerical packages
    under this name.
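
    For instance, a quick numerical check in Python (a sketch assuming
    scipy and numpy; G and sigma0 are placeholder values, and the
    Q-style normalization from the errata note is used):

        import numpy as np
        from scipy.special import erfc

        G, sigma0 = 1.0, 0.5                              # placeholder amplitude and noise std dev
        p_err = 0.5 * erfc(G / (np.sqrt(2) * sigma0))     # P(noise > G) for a zero-mean Gaussian
        print(p_err)                                      # about 0.023 for these values
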
  • 5:42 - 5:45
    So the important thing to notice here is
    that the probability of error is some
  • 5:45 - 5:49
    function of the ratio between the
    amplitude of the signal and the standard
  • 5:49 - 5:54
    deviation of the noise.
    And we can carry this analysis further by
  • 5:54 - 5:59
    considering the transmitted power.
    We have a bi-level signal and each level
  • 5:59 - 6:03
    occurs with 1 half probability.
    So the variance of the signal, which
  • 6:03 - 6:07
    corresponds to the power, is equal to G
    squared times the probability of the nth
  • 6:07 - 6:12
    bit being equal to 1,
    plus G squared times the probability of
  • 6:12 - 6:15
    the nth bit being equal to 0, which is
    equal to G squared.
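
    In formulas, using the Q notation from the errata note (the lecture's
    slides write the same quantity with erfc, with a slightly different
    normalization):

        \[
        \sigma_a^2 = \tfrac{1}{2}\,G^2 + \tfrac{1}{2}\,G^2 = G^2,
        \qquad
        P_{err} = Q\!\left(\frac{G}{\sigma_0}\right)
                = Q\!\left(\sqrt{\frac{\sigma_a^2}{\sigma_0^2}}\right)
                = Q\!\left(\sqrt{\mathrm{SNR}}\right)
        \]
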
  • 6:15 - 6:18
    And so, if we rewrite the probability
    of error, we can write that it is
  • 6:18 - 6:23
    equal to the error function of the ratio
    between the standard deviation of the
  • 6:23 - 6:27
    transmitted signal divided by the
    standard deviation of the noise, which is
  • 6:27 - 6:30
    equivalent to saying that it is the error
    function of the square root of the signal
  • 6:30 - 6:35
    to noise ratio.
    If we plot this as a function of the
  • 6:35 - 6:40
    signal to noise ratio in dBs.
    And I remind you that dB here means that
  • 6:40 - 6:44
    we compute 10 times the log in base 10 of
    the power of the signal divided by the
  • 6:44 - 6:50
    power of the noise.
    And since we are in a log log scale, we
  • 6:50 - 6:54
    can see that the probability of error
    decays exponentially with the signal to
  • 6:54 - 7:00
    noise ratio.
    This exponential decay is quite the norm
  • 7:00 - 7:03
    in communication systems.
    And while the absolute rate of decay
  • 7:03 - 7:08
    might change in terms of the linear
    constants involved in the curve,
  • 7:08 - 7:12
    the trend will stay the same even for
    more complex signaling schemes.
  • 7:12 - 7:15
    So the lesson that we learn from the
    simple example is that in order to reduce
  • 7:15 - 7:20
    the probability of error, we should
    increase G, the amplitude of the signal.
  • 7:20 - 7:23
    But of course, increasing G also
    increases the power of the transmitted
  • 7:23 - 7:28
    signal, and we know that we cannot go
    above the channel's power constraint.
  • 7:28 - 7:33
    And so that's how the power constraint
    limits the reliability of transmission.
  • 7:33 - 7:38
    The bilevel signalling scheme is very
    instructive, but it's also very limited
  • 7:38 - 7:41
    in the sense that we're sending just one
    bit per output symbol.
  • 7:41 - 7:44
    So to increase the throughput, to
    increase the number of bits per second
  • 7:44 - 7:48
    that we send over a channel, we can use
    multilevel signaling.
  • 7:48 - 7:51
    There are very many ways to do so and we
    will just look at a few, but the
  • 7:51 - 7:56
    fundamental idea is that we now take
    larger chunks of bits, and therefore we
  • 7:56 - 8:00
    have alphabets that have a higher
    cardinality.
  • 8:00 - 8:04
    So more values in the alphabet means more
    bits per symbol and therefore a higher
  • 8:04 - 8:08
    data rate.
    But not to give the ending away, we will
  • 8:08 - 8:10
    see that the power of the signal will
    also be dependent on the size of the
  • 8:10 - 8:14
    alphabet.
    And so, in order not to exceed a certain
  • 8:14 - 8:17
    probability of error, given the channel's
    power constraint, we will not be able
  • 8:17 - 8:21
    to grow the alphabet indefinitely.
    But we can be smart in the way we build
  • 8:21 - 8:24
    this alphabet and so we will look at some
    examples.
  • 8:24 - 8:28
    The first example is PAM, Pulse Amplitude
    Modulation.
  • 8:28 - 8:32
    We split the incoming bitstream into
    chunks of M bits so that each chunk
  • 8:32 - 8:37
    corresponds to an integer between 0 and 2
    to the M minus 1.
  • 8:37 - 8:40
    We can call this sequence of integers k
    of n and this sequence is mapped onto a
  • 8:40 - 8:46
    sequence of symbols a of n like so.
    There's a gain factor G, like always.
  • 8:46 - 8:51
    And then we map each integer onto one of
    the 2 to the M odd integers around 0.
  • 8:51 - 8:58
    So for instance, if M is equal to 2, we
    have 0, 1, 2, and 3 as potential values
  • 8:58 - 9:03
    for k of n.
    And, if we assume G is equal to 1,
  • 9:03 - 9:11
    a of n will be either minus 3, or minus 1, or 1,
  • 9:11 - 9:16
    or 3.
    We will see why we use the odd integers
  • 9:16 - 9:19
    in just a second.
    And at the receiver, the slicer will work by
  • 9:19 - 9:23
    simply associating to the received
    symbol, the closest odd integer, always
  • 9:23 - 9:29
    taking the gain into account.
    So graphically, again, PAM for M equal to
  • 9:29 - 9:32
    2 and G equal to 1, will look like this.
    Here are the odd integers.
  • 9:32 - 9:38
    The distance between two transmitted
    points, or transmitted symbols, is 2G
  • 9:38 - 9:42
    right here.
    Here G is equal to 1, but in general it
  • 9:42 - 9:48
    would be 2 times the gain.
    And using odd integers creates a
  • 9:48 - 9:49
    zero-mean sequence.
    If we assume that each symbol is
  • 9:49 - 9:51
    equiprobable, which is likely given that
    we've used a
  • 9:51 - 9:56
    scrambler in the transmitter,
    the resulting mean is zero.
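
    As an illustration, a minimal sketch of a PAM mapper and slicer
    (assuming numpy; the function names and values are placeholders, not
    the lecture's code):

        import numpy as np

        def pam_map(bits, M, G=1.0):
            """Group bits into chunks of M and map each chunk to G*(2k - 2**M + 1)."""
            k = bits.reshape(-1, M) @ (2 ** np.arange(M - 1, -1, -1))   # chunk -> integer k[n]
            return G * (2 * k - 2**M + 1)                               # odd integers, zero mean

        def pam_slice(a_hat, M, G=1.0):
            """Associate to each received value the closest alphabet symbol."""
            k = np.clip(np.round((a_hat / G + 2**M - 1) / 2), 0, 2**M - 1)
            return G * (2 * k - 2**M + 1)

        bits = np.random.default_rng(0).integers(0, 2, size=12)
        a = pam_map(bits, M=2)                 # symbols in {-3, -1, 1, 3} for G = 1
        print(a)
        print(pam_slice(a + 0.3, M=2))         # slicing a slightly noisy version
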
  • 9:56 - 9:59
    The analysis of the probability of error
    for PAM is very similar to what we
  • 9:59 - 10:04
    carried out for bilevel signaling.
    As a matter of fact, binary signaling is
  • 10:04 - 10:08
    simply PAM with M equal to 1.
    The end result is very similar, and it's
  • 10:08 - 10:11
    an exponentially decaying function of the
    ratio between the power of the signal and
  • 10:11 - 10:15
    the power of the noise.
    The reason why we don't analyze this
  • 10:15 - 10:19
    further is because we have an improvement
    in store.
  • 10:19 - 10:22
    And the improvement is aimed at
    increasing the throughput, increasing the
  • 10:22 - 10:26
    number of bits per symbol that we can
    send without necessarily increasing the
  • 10:26 - 10:30
    probability of error.
    So here's a wild idea.
  • 10:30 - 10:34
    Let's use complex numbers and build a
    complex valued transmission system.
  • 10:34 - 10:37
    This requires a certain suspension of
    disbelief for the time being, but believe
  • 10:37 - 10:41
    me, it will work in the end.
    The name for this complex valued mapping
  • 10:41 - 10:45
    scheme is QAM,
    which is an acronym for Quadrature
  • 10:45 - 10:48
    Amplitude Modulation, and it works like
    so.
  • 10:48 - 10:52
    The mapper takes the incoming bit
    stream, and splits it into chunks of M
  • 10:52 - 10:57
    bits, with M even.
    Then it uses half of the bits, to define
  • 10:57 - 11:01
    a PAM sequence, which we call a r of
    n, and the remaining M over 2 bits to
  • 11:01 - 11:07
    define another independent PAM sequence,
    a i of n.
  • 11:07 - 11:12
    The final symbol sequence is a sequence
    of complex numbers, where the real part
  • 11:12 - 11:14
    is the first PAM sequence, and the
    imaginary part is the second PAM
  • 11:14 - 11:18
    sequence.
    And of course, in front we have a gain
  • 11:18 - 11:22
    factor, G.
    So the transmission alphabet, a, is given
  • 11:22 - 11:29
    by points in the complex plane, with
    odd-valued coordinates around the origin.
  • 11:29 - 11:33
    At the receiver, the slicer works by
    finding the symbol in the alphabet, which
  • 11:33 - 11:37
    is closest in Euclidean distance to the
    received symbol.
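
    A minimal sketch of such a QAM mapper and minimum-distance slicer
    (assuming numpy; the names and values are illustrative, not from the
    lecture):

        import numpy as np

        def qam_map(bits, M, G=1.0):
            """Chunks of M bits (M even): half define the real PAM part, half the imaginary part."""
            b = bits.reshape(-1, M)
            weights = 2 ** np.arange(M // 2 - 1, -1, -1)
            kr = b[:, : M // 2] @ weights                # integer for the real part
            ki = b[:, M // 2 :] @ weights                # integer for the imaginary part
            L = 2 ** (M // 2)
            return G * ((2 * kr - L + 1) + 1j * (2 * ki - L + 1))

        def qam_slice(a_hat, M, G=1.0):
            """Return the constellation point closest in Euclidean distance to each received value."""
            L = 2 ** (M // 2)
            odd = 2 * np.arange(L) - L + 1                               # odd integers on one axis
            alphabet = G * (odd[:, None] + 1j * odd[None, :]).ravel()    # full constellation
            return alphabet[np.argmin(np.abs(np.atleast_1d(a_hat)[:, None] - alphabet), axis=1)]

        bits = np.random.default_rng(1).integers(0, 2, size=8)
        a = qam_map(bits, M=4)                           # two 16-QAM symbols
        print(a, qam_slice(a + (0.2 + 0.1j), M=4))       # slicing a slightly noisy version
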
  • 11:37 - 11:42
    Let's look at this graphically.
    This is a set of points for QAM
  • 11:42 - 11:47
    transmission with M equal to 2, which
    corresponds to two bilevel PAM signals on
  • 11:47 - 11:55
    the real axis and on the imaginary axis.
    So that results in four points.
  • 11:55 - 11:59
    If we increase the number of bits per
    symbol, we set M equal to 4, that
  • 11:59 - 12:03
    corresponds to two PAM signals with 2
    bits each, which makes for a
  • 12:03 - 12:08
    constellation.
    This is how these arrangements of points
  • 12:08 - 12:12
    in the complex plane are called:
    a constellation of four by four points at
  • 12:12 - 12:16
    the odd-valued coordinates in the complex
    plane.
  • 12:16 - 12:23
    If we increase M to 8, then we have a 256
    point constellation, with 16 points per
  • 12:23 - 12:27
    side.
    Let's look at what happens when a symbol
  • 12:27 - 12:31
    is received, and how we derive an
    expression for the probability of error.
  • 12:31 - 12:35
    If this is the nominal constellation, the
    transmitter will choose one of these
  • 12:35 - 12:40
    values for transmission, say this one.
    And this value will be corrupted by noise in
  • 12:40 - 12:44
    the transmission and the receiving
    process.
  • 12:44 - 12:48
    And will appear somewhere in the complex
    plane, not necessarily exactly on the
  • 12:48 - 12:52
    point it originates from.
    The way the slicer operates, is by
  • 12:52 - 12:56
    defining decision regions around each
    point in the constellation.
  • 12:56 - 13:02
    So suppose for this point here, the
    transmitted point, the decision region is
  • 13:02 - 13:07
    square, of side 2G, centered on the
    point itself.
  • 13:07 - 13:11
    So what happens is that when we receive
    symbols,
  • 13:11 - 13:14
    they will not fall exactly on the original point,
    but as long as they fall within the
  • 13:14 - 13:17
    decision region, they will be decoded
    correctly.
  • 13:17 - 13:20
    So for instance here.
    We will decode this correctly.
  • 13:20 - 13:23
    Here we will decode this correctly.
    Same here.
  • 13:23 - 13:26
    But this point for instance falls outside
    of the decision region and therefore it
  • 13:26 - 13:30
    will be associated to a different
    constellation point, thereby causing an
  • 13:30 - 13:34
    error.
    To quantify the probability of error, we
  • 13:34 - 13:38
    assume as per usual that each received
    symbol is the sum of the transmitted
  • 13:38 - 13:42
    symbol
    plus a noise sample, theta of n.
  • 13:42 - 13:48
    And we further assume that this noise is
    a complex valued Gaussian noise of equal
  • 13:48 - 13:53
    variance in the real and imaginary
    components.
  • 13:53 - 13:58
    We're working on a completely digital
    system that operates
  • 13:58 - 14:02
    with complex valued quantities.
    So we're making a new model for the
  • 14:02 - 14:05
    noise, and we will see later, how to
    translate the physical real noise, into a
  • 14:05 - 14:10
    complex variable.
    With these assumptions, the probability
  • 14:10 - 14:14
    of error, is equal to the probability
    that the real part of the noise is larger
  • 14:14 - 14:19
    than G in magnitude.
    Plus the probability that the imaginary
  • 14:19 - 14:22
    part of the noise is larger than G in
    magnitude.
  • 14:22 - 14:25
    We assume that real and imaginary
    components of the noise are independent,
  • 14:25 - 14:28
    and that's why we can split the
    probability like so.
  • 14:28 - 14:32
    Now, if you remember the shape of the
    decision region, this condition is
  • 14:32 - 14:36
    equivalent to saying that the noise is
    pushing the real part of the point,
  • 14:36 - 14:41
    outside of the decision region, in either
    direction, and same for the imaginary
  • 14:41 - 14:45
    part.
    Now if we develop this, this is equal to
  • 14:45 - 14:49
    1 minus the probability that the real
    part of the noise is less than G in
  • 14:49 - 14:52
    magnitude, and the imaginary part of the
    noise is less than G in magnitude.
  • 14:52 - 14:57
    This is the complementary condition to
    what we just wrote above.
  • 14:57 - 15:01
    And so this is equal to 1 minus the
    integral over the decision region d of
  • 15:01 - 15:06
    the complex valued probability density
    function for the noise.
  • 15:06 - 15:10
    In order to compute this integral, we're
    going to approximate the shape of the
  • 15:10 - 15:13
    decision region
    with the inscribed circle.
  • 15:13 - 15:16
    So instead of using the square, we're
    going to use a circle centered around the
  • 15:16 - 15:20
    transmission point.
    When the constellation is very dense,
  • 15:20 - 15:24
    this approximation is quite accurate.
    With this approximation, we can compute
  • 15:24 - 15:28
    the integral exactly for a Gaussian
    distribution.
  • 15:28 - 15:32
    And if we assume that the variance of the
    noise is sigma 0 squared over 2 in each
  • 15:32 - 15:38
    component, real or imaginary,
    it turns out that the probability of
  • 15:38 - 15:42
    error is equal to e to the minus G
    squared over sigma 0 squared.
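
    A sketch of that computation: with the decision region approximated
    by the circle of radius G around the transmitted point, and the noise
    variance split evenly between the two components, the Gaussian
    integral in polar coordinates gives

        \[
        P_{err} \approx 1 - \iint_{|\theta| \le G} \frac{1}{\pi\sigma_0^2}\,
                 e^{-|\theta|^2/\sigma_0^2}\, d\theta
               = 1 - \int_0^{G} \frac{2r}{\sigma_0^2}\, e^{-r^2/\sigma_0^2}\, dr
               = e^{-G^2/\sigma_0^2}
        \]
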
  • 15:42 - 15:45
    Now to obtain a probability of error as a
    function of the signal to noise ratio we
  • 15:45 - 15:49
    have to compute the power of the
    transmitted signal.
  • 15:49 - 15:53
    So if all symbols are equiprobable and
    independent, it turns out that the
  • 15:53 - 15:59
    variance of the signal is G squared times
    1 over 2 to the power of M,
  • 15:59 - 16:03
    which is the probability of each symbol,
    times the sum over all symbols in the
  • 16:03 - 16:07
    alphabet of the magnitude of the symbols
    squared.
  • 16:07 - 16:11
    Now, it's a little bit tedious but we can
    solve it exactly as a function of M.
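
    For reference, a sketch of that computation, where p denotes the
    odd-integer coordinate of a constellation point along one axis:

        \[
        \sigma_a^2 = \frac{G^2}{2^M} \sum_{\alpha \in \mathcal{A}} |\alpha|^2
                   = 2\,G^2\,\mathrm{E}\bigl[p^2\bigr]
                   = 2\,G^2\,\frac{(2^{M/2})^2 - 1}{3}
                   = \frac{2}{3}\,G^2\,\bigl(2^M - 1\bigr)
        \]

    using the fact that the average of p squared over the odd integers
    plus or minus 1, 3, ..., (2 to the M over 2, minus 1) is
    ((2 to the M over 2) squared minus 1) over 3.
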
  • 16:11 - 16:15
    And it turns out that the power of the
    transmitted signal is G squared, times two
  • 16:15 - 16:21
    thirds, times 2 to the M minus 1.
    Now, if you plug this into the formula
  • 16:21 - 16:24
    for the probability of error that we've seen
    before,
  • 16:24 - 16:29
    we get that the result is an exponential
    function where the argument is minus 3,
  • 16:29 - 16:33
    divided by 2 times (2 to the M minus 1),
    that multiplies the signal to noise
  • 16:33 - 16:38
    ratio.
    We can plot this probability of error in
  • 16:38 - 16:42
    a log log scale, like we did before.
    And we can parametrize the curve as a
  • 16:42 - 16:46
    function of the number of points in the
    constellation.
  • 16:46 - 16:50
    So here you have the curve for a four
    point constellation, here's the curve for
  • 16:50 - 16:54
    16 points, and here's the curve for
    64 points.
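
    Curves of this kind can be reproduced with a short script (a sketch,
    assuming numpy and matplotlib, and using the error-probability
    expression derived above):

        import numpy as np
        import matplotlib.pyplot as plt

        snr_db = np.linspace(0, 30, 200)
        snr = 10 ** (snr_db / 10)                        # SNR in linear scale
        for M in (2, 4, 6):                              # 4-, 16- and 64-point constellations
            p_err = np.exp(-1.5 * snr / (2**M - 1))      # QAM error probability from above
            plt.semilogy(snr_db, p_err, label=f"{2**M}-point QAM")
        plt.xlabel("SNR (dB)"); plt.ylabel("probability of error"); plt.legend(); plt.show()
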
  • 16:54 - 16:57
    Now you can see that for a given signal
    to noise ratio the probability of error
  • 16:57 - 17:01
    increases with the number of points.
    Why is that?
  • 17:01 - 17:04
    Well if the signal to noise ratio remains the
    same, and we assume that the noise is
  • 17:04 - 17:07
    always at the same level, then it means
    that the power of the signal remains
  • 17:07 - 17:11
    constant as well.
    In that case, if the number of points
  • 17:11 - 17:16
    increases, G has to become smaller
    in order to accommodate a larger number of
  • 17:16 - 17:19
    points for the same power.
    But if G becomes smaller, then the
  • 17:19 - 17:24
    decision regions become smaller, the
    separation between points becomes smaller,
  • 17:24 - 17:29
    and the decision process becomes more
    vulnerable to noise.
  • 17:29 - 17:32
    So in the end here's the final recipe to
    design a QAM transmitter.
  • 17:32 - 17:35
    First you pick a probability of error
    that you can live with.
  • 17:35 - 17:38
    In general, 10 to the minus 6 is an
    acceptable probability of error at the
  • 17:38 - 17:41
    symbol level.
    Then you find out the signal to noise ratio
  • 17:41 - 17:45
    that is imposed by the channel's power
    constraint.
  • 17:45 - 17:50
    Once you have that, you can find the size
    of your constellation by finding M,
  • 17:50 - 17:54
    which, based on the previous equations,
    is the log in base 2 of 1 minus 3 over 2
  • 17:54 - 17:57
    times the signal to noise ratio, divided
    by the natural logarithm of the
  • 17:57 - 18:02
    probability of error.
    Of course, you will have to round this to
  • 18:02 - 18:05
    a suitable integer value, and potentially
    to an even integer, in order to have a
  • 18:05 - 18:10
    square constellation.
    The final data rate of your system will
  • 18:10 - 18:14
    be M, the number of bits per symbol,
    times W, which, if you remember, is the
  • 18:14 - 18:18
    baud rate of the system, and corresponds
    to the bandwidth allowed for by the
  • 18:18 - 18:23
    channel.
    So we know how to fit the bandwidth
  • 18:23 - 18:28
    constraint via upsampling.
    With QAM, we know how many bits per
  • 18:28 - 18:31
    symbol we can use given the power
    constraint.
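
    Putting the recipe together (a sketch; the target error probability,
    the SNR and the baud rate W below are placeholder values, not from
    the lecture):

        import numpy as np

        p_err_target = 1e-6            # acceptable symbol error probability (placeholder)
        snr_db = 22.0                  # SNR allowed by the channel's power constraint (placeholder)
        W = 3000.0                     # baud rate, i.e. symbols per second (placeholder)

        snr = 10 ** (snr_db / 10)
        M = np.log2(1 - 1.5 * snr / np.log(p_err_target))   # bits per symbol, from the formula above
        M = int(2 * np.floor(M / 2))                        # round down to an even integer
        print("bits per symbol:", M, " data rate:", M * W, "bits per second")
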
  • 18:31 - 18:35
    And so we know the theoretical throughput
    of the transmitter for a given reliability
  • 18:35 - 18:39
    figure.
    However, the question remains, how are we
  • 18:39 - 18:44
    going to send complex valued symbols over
    a physical channel?
  • 18:44 - 18:49
    It's time, therefore, to stop the
    suspension of disbelief, and look at
  • 18:49 - 18:55
    techniques to do complex signaling over a
    real valued channel.
Title:
9.3 - Controlling the power
Description:

Official note on 9.3: "please check the errata for Module 9 wrt slides 52 and 53", which said:

"slides 52 and 53: unfortunately in the video the slides and the lecture I have used erfc(x) instead of Q(x). The function erfc is defined as erfc(x)=(2/π)∫x∞e−t2dt, i.e. erfc(x) is the probability that a random variable, drawn from a Gaussian distribution of variance 1/2, is greater than x in magnitude. For a Gaussian distribution of arbitrary variance σ2, the probability becomes (by reworking the integral) (1/2)erfc(x/(2σ). For convenience, especially in communication textbooks, a function Q(x)=(1/2)erfc(x/2) is often defined, so that the error probability becomes simply Q(x/σ). The results are fundamentally the same, and so is the shape of the error curve, but the numerical values are slightly different because of the normalization factors."

From the official description of the Module 9 videos:

Welcome to Week 8 of Digital Signal Processing.

This week's module is about digital communication systems and this is where it all comes together; from complex-valued signals, to spectral analysis, to stochastic processing, sampling and interpolation: everything plays a role in the design and implementation of a digital modem. Digital communications is an extremely vast and fascinating topic and it is arguably the pinnacle achievement of DSP in the sense that it's the domain where the most extraordinary quantitative progress has been made thanks to the digital paradigm. The fact that MOOCs such as this one are available to such an incredibly vast audience is just one of the tangible results of digital communication systems. It is only fitting, therefore, to devote the last module of our class to this subject.

We will start with the basics of data modulation and demodulation and we will progress to describing how your ADSL box works by way of its direct predecessor, the voiceband modem that spearheaded the Internet revolution by allowing for the first time the delivery of substantial data rates in the home.
