Hi, and welcome to Module 9.3 of Digital Signal Processing. We are still talking about digital communication systems. In the previous module we addressed the bandwidth constraint; in this module we will tackle the power constraint. First we will introduce the concept of noise and probability of error in a communication system, then we will look at signaling alphabets and their associated power, and finally we will introduce QAM signaling.

We have seen that a transmitter sends a sequence of symbols a[n] created by the mapper. Now we take the receiver into account. We don't yet know how, but it is safe to assume that the receiver will in the end obtain an estimate, â[n], of the original transmitted symbol sequence. It is an estimate because, even if there is no distortion introduced by the channel, even if nothing bad happens, there will always be a certain amount of noise that corrupts the original sequence. When the noise is very large, our estimate of the transmitted symbol will be off and we will incur a decoding error. This probability of error will depend on the power of the noise with respect to the power of the signal, and it will also depend on the decoding strategy that we put in place, that is, on how smart we are in circumventing the effects of the noise. One way we can maximize the probability of correctly guessing the transmitted symbol is by using suitable alphabets, and we will see in more detail what that means.
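To make this signal-plus-noise picture concrete, here is a minimal Python sketch of the model; the symbol values, the gain, and the noise level are assumptions for illustration, not values from the lecture.

    import numpy as np

    rng = np.random.default_rng(0)

    G = 1.0        # signal amplitude (assumed)
    sigma = 0.4    # noise standard deviation (assumed)

    # a toy transmitted symbol sequence: two-level symbols +/- G
    a = G * rng.choice([-1, 1], size=10)

    # the receiver observes the symbols corrupted by additive Gaussian noise
    a_hat = a + sigma * rng.normal(size=a.shape)

    # the ratio of signal power to noise power is what will drive the error rate
    snr = np.mean(a ** 2) / sigma ** 2
    print(a_hat.round(2), "SNR ~", round(snr, 2))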
Remember the scheme for the transmitter: we have a bitstream coming in, then the scrambler, and then the mapper, which produces a sequence of symbols a[n]. These symbols have to be sent over the channel, and to do so we upsample, we interpolate, and then we transmit.

Now, how do we go from the bitstream to symbols in more detail? In other words, how does the mapper work? The mapper splits the incoming bitstream into chunks and assigns to each chunk a symbol a[n] from a finite alphabet; what the alphabet is composed of, we will decide later. To undo the mapping operation and recover the bitstream, the receiver performs a slicing operation. The receiver observes a value â[n], where the hat indicates that noise has leaked into the value of the signal, and it decides which symbol from the alphabet, which is known to the receiver as well, is closest to the received value. From there, it is extremely easy to piece back the original bitstream.

As an example, let's look at simple two-level signaling. This generates signals of the kind we have seen in the examples so far, alternating between two levels. The mapper works by splitting the incoming bitstream into single bits, and the output symbol sequence uses an alphabet composed of two symbols, G and minus G: it associates G to a bit of value 1 and minus G to a bit of value 0. At the receiver, the slicer looks at the sign of the incoming symbol sequence, which has been corrupted by noise, and decides that the nth bit is 1 if the sign of the nth received symbol is positive, and 0 otherwise.

Let's look at an example; let's assume G equal to 1, so the two-level signal alternates between plus 1 and minus 1. Suppose we have an input bit sequence that gives rise to this signal here after transmission and decoding at the receiver: the resulting symbol sequence looks like this, where each symbol has been corrupted by a varying amount of noise.
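Both the two-level mapper and the sign-based slicer are simple enough to sketch in a few lines of Python; the bit sequence and the noise level below are made up for illustration.

    import numpy as np

    G = 1.0
    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # example bitstream (assumed)

    # mapper: bit 1 -> +G, bit 0 -> -G
    a = G * (2 * bits - 1)

    # the channel adds noise (white Gaussian in this toy example)
    rng = np.random.default_rng(1)
    a_hat = a + 0.5 * rng.normal(size=a.shape)

    # slicer: decide 1 if the received symbol is positive, 0 otherwise
    bits_hat = (a_hat > 0).astype(int)

    print("errors:", np.sum(bits_hat != bits))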
If we now slice this sequence by thresholding, as shown before, we recover a bit sequence like this, where we have indicated in red the errors incurred by the slicer because of the noise.

If we want to analyze in more detail what the probability of error is, we have to make some hypotheses on the signals involved in this toy experiment. Assume that each received symbol can be modeled as the original symbol plus a noise sample. Assume also that the bits in the bitstream are equiprobable, so zero and one appear with probability 50% each. Assume that the noise and the signal are independent. And assume that the noise is additive white Gaussian noise with zero mean and known variance sigma 0 squared.

With these hypotheses, the probability of error can be written out as follows. First of all, we split the probability of error into two conditional probabilities, conditioned on whether the nth bit is equal to 1 or to 0. In the first case, when the nth bit is equal to 1, the transmitted symbol is equal to G, so the probability of error is the probability that the noise sample is less than minus G, because only in that case will the sum of the symbol plus the noise be negative. Similarly, when the nth bit is equal to 0, we have a negative symbol, and the only way for the sum to change sign is for the noise sample to be greater than G. Since each bit value occurs with probability one half and the Gaussian distribution is symmetric, the overall probability of error is simply the probability that the noise sample is larger than G. We can compute this as the integral from G to infinity of the probability density function of the Gaussian distribution with the known variance. This function of G has a standard name: it is called the error function and, since the integral cannot be computed in closed form, it is available in most numerical packages under this name (up to normalization, it is the complementary error function, erfc).
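As a sketch of how this tail integral is evaluated numerically: with zero-mean Gaussian noise of standard deviation sigma 0, the probability that a noise sample exceeds G is 0.5 * erfc(G / (sigma0 * sqrt(2))). The snippet below uses SciPy's erfc; the numbers are assumed values for illustration.

    import numpy as np
    from scipy.special import erfc

    def p_error_two_level(G, sigma0):
        # P(noise > G) for zero-mean Gaussian noise with std sigma0,
        # i.e. the tail integral from G to infinity of the Gaussian pdf
        return 0.5 * erfc(G / (sigma0 * np.sqrt(2)))

    print(p_error_two_level(G=1.0, sigma0=0.5))   # ~0.0228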
The important thing to notice here is that the probability of error is a function of the ratio between the amplitude of the signal and the standard deviation of the noise. We can carry this analysis further by considering the transmitted power. We have a two-level signal and each level occurs with probability one half, so the variance of the signal, which corresponds to its power, is equal to G squared times the probability of the nth bit being equal to 1, plus G squared times the probability of the nth bit being equal to 0, which adds up to G squared. So if we rewrite the probability of error, it is equal to the error function of the ratio between the standard deviation of the transmitted signal and the standard deviation of the noise, which is equivalent to saying that it is the error function of the square root of the signal-to-noise ratio.

We can plot this as a function of the signal-to-noise ratio in dB, and I remind you that dB here means that we compute 10 times the log in base 10 of the power of the signal divided by the power of the noise. Since we are on a log-log scale, we can see that the probability of error decays exponentially with the signal-to-noise ratio. This exponential decay is quite the norm in communication systems and, while the absolute rate of decay might change in terms of the constants involved in the curve, the trend stays the same even for more complex signaling schemes.

So the lesson we learn from this simple example is that, in order to reduce the probability of error, we should increase G, the amplitude of the signal. But of course increasing G also increases the power of the transmitted signal, and we know that we cannot go above the channel's power constraint. And that's how the power constraint limits the reliability of transmission.
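A short sketch of how that curve could be reproduced: for each SNR value in dB we convert back to a linear power ratio and evaluate the tail probability at the square root of the SNR (the axis range is an assumption).

    import numpy as np
    from scipy.special import erfc

    snr_db = np.arange(0, 21)          # SNR axis in dB (assumed range)
    snr = 10 ** (snr_db / 10)          # back to a linear power ratio

    # two-level signaling: P_err = P(noise > G) with G / sigma0 = sqrt(SNR)
    p_err = 0.5 * erfc(np.sqrt(snr / 2))

    for db, p in zip(snr_db[::5], p_err[::5]):
        print(f"{db:2d} dB  ->  P_err ~ {p:.2e}")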
The two-level signaling scheme is very instructive, but it is also very limited, in the sense that we are sending just one bit per output symbol. To increase the throughput, that is, the number of bits per second that we send over the channel, we can use multilevel signaling. There are very many ways to do so, and we will just look at a few, but the fundamental idea is that we now take larger chunks of bits and therefore use alphabets of higher cardinality. More values in the alphabet means more bits per symbol and therefore a higher data rate. But, not to give the ending away, we will see that the power of the signal also depends on the size of the alphabet, and so, in order not to exceed a certain probability of error given the channel's power constraint, we will not be able to grow the alphabet indefinitely. Still, we can be smart in the way we build the alphabet, and we will look at some examples.

The first example is PAM, pulse amplitude modulation. We split the incoming bitstream into chunks of M bits, so that each chunk corresponds to an integer between 0 and 2^M minus 1. We call this sequence of integers k[n], and it is mapped onto a sequence of symbols a[n] like so: there is a gain factor G, as always, and the symbols are the 2^M odd integers around zero. So for instance, if M is equal to 2, we have 0, 1, 2 and 3 as potential values for k[n] and, assuming G equal to 1, a[n] will be either minus 3, minus 1, 1 or 3. We will see why we use the odd integers in just a second. At the receiver, the slicer works by simply associating to the received symbol the closest odd integer, always taking the gain into account.
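Here is a minimal sketch of a PAM mapper and slicer along these lines; the particular chunk-to-integer convention is one common choice, assumed for illustration.

    import numpy as np

    def pam_map(bits, M, G=1.0):
        # group the bits into chunks of M and read each chunk as an integer k
        k = bits.reshape(-1, M) @ (2 ** np.arange(M - 1, -1, -1))
        # map k in {0, ..., 2^M - 1} onto the 2^M odd integers around zero
        return G * (2 * k - (2 ** M - 1))

    def pam_slice(a_hat, M, G=1.0):
        # pick the closest odd multiple of G, clipped to the alphabet
        k = np.rint((a_hat / G + (2 ** M - 1)) / 2)
        return np.clip(k, 0, 2 ** M - 1).astype(int)

    bits = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # example bitstream (assumed)
    a = pam_map(bits, M=2)                      # -> [-1, 3, -3, 1]
    print(a, pam_slice(a + 0.3, M=2))           # small noise: recovers k = [1, 3, 0, 2]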
Graphically, PAM for M equal to 2 and G equal to 1 looks like this: here are the odd integers. The distance between two transmitted points, or transmitted symbols, is 2G; here G is equal to 1, but in general it is twice the gain. Using odd integers creates a zero-mean sequence: if we assume that each symbol is equiprobable, which is likely given that we have used a scrambler in the transmitter, the resulting mean is zero.

The analysis of the probability of error for PAM is very similar to what we carried out for two-level signaling; as a matter of fact, binary signaling is simply PAM with M equal to 1. The end result is very similar, and it is an exponentially decaying function of the ratio between the power of the signal and the power of the noise. The reason why we don't analyze this further is that we have an improvement in store, and the improvement is aimed at increasing the throughput, the number of bits per symbol that we can send, without necessarily increasing the probability of error.

So here's a wild idea: let's use complex numbers and build a complex-valued transmission system. This requires a certain suspension of disbelief for the time being but, believe me, it will work in the end. The name for this complex-valued mapping scheme is QAM, which is an acronym for Quadrature Amplitude Modulation, and it works like so. The mapper takes the incoming bitstream and splits it into chunks of M bits, with M even. Then it uses half of the bits to define a PAM sequence, which we call a_r[n], and the remaining M over 2 bits to define another, independent PAM sequence, a_i[n]. The final symbol sequence is a sequence of complex numbers where the real part is the first PAM sequence and the imaginary part is the second PAM sequence; and of course in front we have a gain factor G.
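A QAM mapper can be put together by reusing the PAM mapper sketched above (repeated here so the snippet is self-contained); the bit-to-chunk convention is again an assumption.

    import numpy as np

    def pam_map(bits, M, G=1.0):
        # chunks of M bits -> integers k -> the 2^M odd integers around zero
        k = bits.reshape(-1, M) @ (2 ** np.arange(M - 1, -1, -1))
        return G * (2 * k - (2 ** M - 1))

    def qam_map(bits, M, G=1.0):
        # M even: half of each chunk drives the real PAM sequence,
        # the other half drives the imaginary PAM sequence
        assert M % 2 == 0
        chunks = bits.reshape(-1, M)
        a_r = pam_map(chunks[:, : M // 2].ravel(), M // 2)
        a_i = pam_map(chunks[:, M // 2 :].ravel(), M // 2)
        return G * (a_r + 1j * a_i)

    bits = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # example bitstream (assumed)
    print(qam_map(bits, M=4))                   # two 16-QAM symbols: [-1.+3.j, -3.+1.j]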
So the transmission alphabet A is given by points in the complex plane with odd-valued coordinates around the origin. At the receiver, the slicer works by finding the symbol in the alphabet that is closest, in Euclidean distance, to the received symbol.

Let's look at this graphically. This is the set of points for QAM transmission with M equal to 2, which corresponds to two two-level PAM signals, one on the real axis and one on the imaginary axis; that results in four points. If we increase the number of bits per symbol and set M equal to 4, that corresponds to two PAM signals with 2 bits each, which makes for a constellation, as these arrangements of points in the complex plane are called, of four by four points at the odd-valued coordinates. If we increase M to 8, we have a 256-point constellation with 16 points per side.

Let's look at what happens when a symbol is received, and how we derive an expression for the probability of error. If this is the nominal constellation, the transmitter will choose one of these values for transmission, say this one. This value will be corrupted by noise in the transmission and receiving process, and will appear somewhere in the complex plane, not necessarily exactly on the point it originated from. The way the slicer operates is by defining decision regions around each point in the constellation. So suppose that for this point here, the transmitted point, the decision region is a square of side 2G centered around the point itself. What happens is that the received symbols will not fall exactly on the original point but, as long as they fall within the decision region, they will be decoded correctly: this one here, for instance, will be decoded correctly, and so will this one and this one.
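Because the decision regions are axis-aligned squares of side 2G, nearest-neighbor slicing reduces to rounding the real and imaginary parts independently to the nearest odd multiple of G; a minimal sketch, where out-of-range values are clipped to the outermost constellation points (which is the nearest-neighbor rule for a square grid):

    import numpy as np

    def qam_slice(a_hat, M, G=1.0):
        # round each coordinate to the nearest odd multiple of G,
        # then clip to the levels actually present in the constellation
        L = 2 ** (M // 2) - 1                    # outermost level, e.g. 3 for 16-QAM
        def nearest_odd(x):
            return np.clip(2 * np.floor(x / (2 * G)) * G + G, -L * G, L * G)
        return nearest_odd(a_hat.real) + 1j * nearest_odd(a_hat.imag)

    received = np.array([-0.7 + 2.4j, 3.6 - 0.2j])   # noisy symbols (made up)
    print(qam_slice(received, M=4))                  # -> [-1.+3.j,  3.-1.j]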
This point here, however, falls outside of the decision region, and therefore it will be associated with a different constellation point, thereby causing an error.

To quantify the probability of error, we assume, as per usual, that each received symbol is the sum of the transmitted symbol plus a noise sample theta[n], and we further assume that this noise is complex-valued Gaussian noise with equal variance in the real and imaginary components. We are working on a completely digital system that operates with complex-valued quantities, so we are making a new model for the noise; we will see later how to translate the physical, real noise into a complex variable.

With these assumptions, a decoding error occurs when either the real part of the noise is larger than G in magnitude, or the imaginary part of the noise is larger than G in magnitude. If you remember the shape of the decision region, this condition is equivalent to saying that the noise pushes the real part of the point outside of the decision region, in either direction, or does the same to the imaginary part. We assume that the real and imaginary components of the noise are independent, and that is why we can split the probability as follows: the probability of error is 1 minus the probability that the real part of the noise is less than G in magnitude and the imaginary part of the noise is less than G in magnitude, which is the complementary condition to what we just wrote above. And so this is equal to 1 minus the integral, over the decision region D, of the complex-valued probability density function of the noise.
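Before approximating that integral analytically, here is a quick Monte Carlo sketch of the same probability under the stated noise model; the gain, the noise variance, and the sample count are assumed values.

    import numpy as np

    G, sigma0 = 1.0, 0.6          # gain and noise std: total variance sigma0^2 (assumed)
    rng = np.random.default_rng(2)
    N = 1_000_000

    # complex Gaussian noise: variance sigma0^2 / 2 in each component
    theta = rng.normal(scale=sigma0 / np.sqrt(2), size=(N, 2))

    # error <=> either component pushes the point out of the square decision region
    err = np.mean((np.abs(theta[:, 0]) > G) | (np.abs(theta[:, 1]) > G))
    print(err)                    # empirical estimate of the symbol error probability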
In order to compute this integral, we are going to approximate the shape of the decision region with the inscribed circle: instead of using the square, we use a circle centered around the transmitted point. When the constellation is very dense, this approximation is quite accurate. With this approximation we can compute the integral exactly for a Gaussian distribution and, if we assume that the variance of the noise is sigma 0 squared over 2 in each component, real and imaginary, it turns out that the probability of error is equal to e to the minus G squared over sigma 0 squared.

Now, to obtain the probability of error as a function of the signal-to-noise ratio, we have to compute the power of the transmitted signal. If all symbols are equiprobable and independent, the variance of the signal is G squared times 1 over 2^M, which is the probability of each symbol, times the sum over all symbols in the alphabet of the squared magnitude of the symbol. It is a little bit tedious, but we can compute this exactly as a function of M, and it turns out that the power of the transmitted signal is G squared times two thirds of (2^M minus 1). If you plug this into the formula for the probability of error that we have seen before, the result is an exponential function whose argument is minus 3, divided by 2 times (2^M minus 1), times the signal-to-noise ratio.

We can plot this probability of error on a log-log scale, like we did before, and we can parametrize the curve by the number of points in the constellation. Here you have the curve for a four-point constellation, here is the curve for 16 points, and here is the curve for 64 points. You can see that, for a given signal-to-noise ratio, the probability of error increases with the number of points.
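That behavior is easy to check numerically from the formula just derived; here is a sketch that evaluates it for a few constellation sizes at one assumed SNR value.

    import numpy as np

    def qam_p_error(snr_db, M):
        # circle approximation: P_err ~ exp(-3 * SNR / (2 * (2**M - 1)))
        snr = 10 ** (snr_db / 10)
        return np.exp(-3 * snr / (2 * (2 ** M - 1)))

    for M in (2, 4, 6):            # 4-, 16- and 64-point constellations
        print(2 ** M, "points:", qam_p_error(20.0, M))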
Why is that? Well, if the signal-to-noise ratio remains the same, and we assume that the noise is always at the same level, then the power of the signal remains constant as well. In that case, if the number of points increases, G has to become smaller in order to accommodate a larger number of points for the same power. But if G becomes smaller, then the decision regions become smaller, the separation between points becomes smaller, and the decision process becomes more vulnerable to noise.

So, in the end, here is the final recipe to design a QAM transmitter. First you pick a probability of error that you can live with; in general, 10 to the minus 6 is an acceptable probability of error at the symbol level. Then you find the signal-to-noise ratio that is imposed by the channel's power constraint. Once you have that, you can find the size of your constellation by finding M which, based on the previous equations, is the log in base 2 of 1 minus three halves of the signal-to-noise ratio divided by the natural logarithm of the probability of error, that is, M = log2(1 - (3/2) SNR / ln(P_err)). Of course, you will have to round this to a suitable integer value, and typically to an even value, in order to have a square constellation. The final data rate of your system will be M, the number of bits per symbol, times W which, if you remember, is the baud rate of the system and corresponds to the bandwidth allowed for by the channel.

So we know how to fit the bandwidth constraint via upsampling; with QAM, we know how many bits per symbol we can use given the power constraint; and so we know the theoretical throughput of the transmitter for a given reliability figure.
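Put together, the recipe is a few lines of arithmetic; the target error probability, the SNR, and the baud rate below are assumed example values.

    import numpy as np

    def qam_design(p_err, snr_db, baud_rate):
        # invert P_err ~ exp(-3*SNR / (2*(2^M - 1))) to get the largest usable M
        snr = 10 ** (snr_db / 10)
        M = np.log2(1 - 1.5 * snr / np.log(p_err))
        M = int(M // 2 * 2)              # round down to an even number of bits per symbol
        return M, M * baud_rate          # bits per symbol, bits per second

    print(qam_design(p_err=1e-6, snr_db=25, baud_rate=3000))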
However, the question remains: how are we going to send complex-valued symbols over a physical channel? It's time, therefore, to end the suspension of disbelief and look at techniques for doing complex signaling over a real-valued channel.