WEBVTT 00:00:00.880 --> 00:00:04.220 Hi and welcome to Module 9.3 of Digital Signal Processing. 00:00:04.220 --> 00:00:07.520 We are still talking about Digital Communication Systems. 00:00:07.520 --> 00:00:10.790 In the previous module we addressed the bandwidth constraint. 00:00:10.790 --> 00:00:13.964 In this module we will tackle the power constraint. So first we will introduce the 00:00:13.964 --> 00:00:17.830 concept of noise and probability of error in a communication system. 00:00:17.830 --> 00:00:22.264 We will look at signaling alphabets and their related power. 00:00:22.264 --> 00:00:25.400 And finally, we'll introduce QAM signaling. 00:00:25.400 --> 00:00:28.600 So we have seen that a transmitter sends a sequence of symbols a of n, 00:00:28.600 --> 00:00:32.500 created by the mapper. Now we take the receiver into account. 00:00:32.500 --> 00:00:36.340 We don't yet know how, but it's safe to assume that the receiver in the end will 00:00:36.340 --> 00:00:41.200 obtain an estimation, hat a of n, of the original transmitted symbol 00:00:41.200 --> 00:00:43.364 sequence. It's an estimation because even if there 00:00:43.364 --> 00:00:45.700 is no distortion introduced by the channel, 00:00:45.700 --> 00:00:49.591 even if nothing bad happens, there will always be a certain amount of 00:00:49.591 --> 00:00:52.780 noise that will corrupt the original sequence. 00:00:52.780 --> 00:00:56.225 When the noise is very large, our estimate for the transmitted symbol will be off, 00:00:56.225 --> 00:01:00.140 and we will incur a decoding error. Now, this probability of error will 00:01:00.140 --> 00:01:04.470 depend on the power of the noise with respect to the power of the signal. 00:01:04.470 --> 00:01:07.620 And it will also depend on the decoding strategies that we've put in place, how 00:01:07.620 --> 00:01:11.900 smart we are in circumventing the effects of the noise. 
00:01:11.900 --> 00:01:15.512 One way we can maximize the probability of correctly guessing the transmitted symbol 00:01:15.512 --> 00:01:20.412 is by using suitable alphabets. And so we will see in more detail what 00:01:20.412 --> 00:01:24.400 that means. Remember the scheme for the transmitter. 00:01:24.400 --> 00:01:26.570 We have a bitstream coming in. And then we have the scrambler. 00:01:26.570 --> 00:01:34.756 And then the mapper. And here we have a sequence of symbols a 00:01:34.756 --> 00:01:37.204 of n. These symbols will have to be sent over 00:01:37.204 --> 00:01:42.305 the channel. And to do so, we upsample 00:01:42.305 --> 00:01:47.955 and we interpolate, and then we transmit. Now, how do we go from bitstreams to 00:01:47.955 --> 00:01:52.586 samples in more detail? In other words, how does the mapper work? 00:01:52.586 --> 00:01:56.870 The mapper will split the incoming bitstream into chunks and will assign a 00:01:56.870 --> 00:02:02.260 symbol, a of n, from a finite alphabet to each chunk. 00:02:02.260 --> 00:02:05.900 We will decide later what the alphabet is composed of. 00:02:05.900 --> 00:02:09.890 To undo the mapping operation and recover the bitstream, the receiver will perform 00:02:09.890 --> 00:02:14.252 a slicing operation. So the receiver will receive a value, hat a of n, 00:02:14.252 --> 00:02:21.660 where the hat indicates the fact that noise has leaked into the value of the signal. 00:02:21.660 --> 00:02:25.536 And the receiver will decide which symbol from the alphabet, which is known to the 00:02:25.536 --> 00:02:29.355 receiver as well, is closest to the received symbol. 00:02:29.355 --> 00:02:33.880 And from there, it will be extremely easy to piece back the original bitstream. 00:02:33.880 --> 00:02:36.730 As an example, let's look at simple two-level signaling. 00:02:36.730 --> 00:02:40.150 This generates signals of the kind we have seen in the examples so far, 00:02:40.150 --> 00:02:45.157 alternating between two levels. 
The way the mapper works is by splitting 00:02:45.157 --> 00:02:51.657 the incoming bitstream into single bits. And the output symbol sequence uses an 00:02:51.657 --> 00:02:57.175 alphabet composed of two symbols, G and minus G, and associates G to a bit of 00:02:57.175 --> 00:03:05.270 value 1 and minus G to a bit of value 0. And the receiver, the slicer, 00:03:05.270 --> 00:03:09.622 looks at the sign of the incoming symbol sequence, which has been corrupted by 00:03:09.622 --> 00:03:13.410 noise, and decides that the nth bit will be 1 if 00:03:13.410 --> 00:03:18.750 the sign of the nth symbol is positive, and 0 otherwise. 00:03:18.750 --> 00:03:22.110 Let's look at an example; let's assume G equal to 1. 00:03:22.110 --> 00:03:25.980 So the two-level signal will alternate between plus 1 and minus 1. 00:03:25.980 --> 00:03:30.944 And suppose we have an input bit sequence that gives rise to this signal here after 00:03:30.944 --> 00:03:35.715 transmission and after decoding at the receiver. 00:03:35.715 --> 00:03:39.940 The resulting symbol sequence will look like this, where each symbol has been 00:03:39.940 --> 00:03:44.790 corrupted by a varying amount of noise. If we now slice this sequence by 00:03:44.790 --> 00:03:49.724 thresholding, as shown before, we recover a bit sequence like this, 00:03:49.724 --> 00:03:54.790 where we have indicated in red the errors incurred by the slicer because of the 00:03:54.790 --> 00:03:57.812 noise. So if we want to analyze in more detail 00:03:57.812 --> 00:04:01.508 what the probability of error is, we have to make some hypotheses on the signals 00:04:01.508 --> 00:04:07.321 involved in this toy experiment. Assume that each received symbol can be 00:04:07.321 --> 00:04:11.230 modeled as the original symbol plus a noise sample. 00:04:11.230 --> 00:04:14.520 Assume also that the bits in the bitstream are equiprobable, 00:04:14.520 --> 00:04:19.270 so zero and one appear with probability 50% each. 
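As an illustration, the two-level mapper and slicer just described can be sketched in a few lines of Python (a toy sketch, not part of the course materials; the noise level chosen here is arbitrary):

```python
import random

G = 1.0  # amplitude of the two-level alphabet {+G, -G}

def map_bits(bits):
    """Two-level mapper: bit 1 -> +G, bit 0 -> -G."""
    return [G if b == 1 else -G for b in bits]

def slice_symbols(received):
    """Slicer: decide 1 if the received symbol is positive, 0 otherwise."""
    return [1 if s > 0 else 0 for s in received]

bits = [1, 0, 1, 1, 0, 0, 1]
# each received symbol is the transmitted one plus a Gaussian noise sample
received = [a + random.gauss(0.0, 0.2) for a in map_bits(bits)]
decoded = slice_symbols(received)
```

With a small noise standard deviation such as 0.2, the decoded bits almost always match the input; raising it toward G makes slicing errors frequent, which is exactly the effect discussed next.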
00:04:19.270 --> 00:04:21.810 Assume that the noise and the signal are independent. 00:04:21.810 --> 00:04:25.842 And assume that the noise is additive white Gaussian noise with zero mean and 00:04:25.842 --> 00:04:30.370 known variance sigma 0 squared. With these hypotheses, the probability of 00:04:30.370 --> 00:04:34.206 error can be written out as follows. First of all, we split the probability of 00:04:34.206 --> 00:04:38.860 error into two conditional probabilities, conditioned on whether the nth bit is 00:04:38.860 --> 00:04:41.700 equal to 1, or the nth bit is equal to 0. 00:04:41.700 --> 00:04:44.600 In the first case, when the nth bit is equal to 1, 00:04:44.600 --> 00:04:48.376 remember, the produced symbol will be equal to G, so the probability of error 00:04:48.376 --> 00:04:53.616 is equal to the probability for the noise sample to be less than minus G, 00:04:53.616 --> 00:04:58.398 because only in this case will the sum of the symbol plus the noise be negative. 00:04:58.398 --> 00:05:02.991 Similarly, when the nth bit is equal to 0, we have a negative symbol. 00:05:02.991 --> 00:05:08.450 And the only way for that to change sign is if the noise sample is greater than G. 00:05:08.450 --> 00:05:13.730 Since the probability of each occurrence is one half, because of the symmetry of the 00:05:13.730 --> 00:05:18.508 Gaussian distribution this is equal to the probability for the 00:05:18.508 --> 00:05:23.146 noise sample to be larger than G. And we can compute this as the integral 00:05:23.146 --> 00:05:26.916 from G to infinity of the probability density function for the Gaussian 00:05:26.916 --> 00:05:30.582 distribution with the known variance here. 00:05:30.582 --> 00:05:34.960 This function has a standard name: it's called the error function. 00:05:34.960 --> 00:05:38.110 And since this integral cannot be computed in closed form, this function is 00:05:38.110 --> 00:05:41.550 available in most numerical packages under this name. 
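This Gaussian tail integral can be evaluated numerically; here is a small Python sketch using the standard library's complementary error function (the helper name is my choice, not the lecture's):

```python
import math

def tail_probability(G, sigma):
    """P(noise > G) for zero-mean Gaussian noise with standard
    deviation sigma, written via the complementary error function:
    Q(G / sigma) = 0.5 * erfc(G / (sigma * sqrt(2)))."""
    return 0.5 * math.erfc(G / (sigma * math.sqrt(2.0)))

# When the amplitude equals the noise standard deviation,
# the probability of error is about 16%.
p = tail_probability(1.0, 1.0)
```

Increasing G while keeping sigma fixed drives this probability down very quickly, which anticipates the exponential decay discussed below.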
00:05:41.550 --> 00:05:45.326 So the important thing to notice here is that the probability of error is some 00:05:45.326 --> 00:05:48.984 function of the ratio between the amplitude of the signal and the standard 00:05:48.984 --> 00:05:54.212 deviation of the noise. And we can carry this analysis further by 00:05:54.212 --> 00:05:59.187 considering the transmitted power. We have a bilevel signal and each level 00:05:59.187 --> 00:06:03.200 occurs with one half probability. So the variance of the signal, which 00:06:03.200 --> 00:06:06.905 corresponds to the power, is equal to G squared times the probability of the nth 00:06:06.905 --> 00:06:11.700 bit being equal to 1, plus G squared times the probability of 00:06:11.700 --> 00:06:14.710 the nth bit being equal to 0, which is equal to G squared. 00:06:14.710 --> 00:06:18.120 And so, if we rewrite the probability of error, we can write that it is 00:06:18.120 --> 00:06:23.136 equal to the error function of the ratio between the standard deviation of the 00:06:23.136 --> 00:06:26.528 transmitted signal divided by the standard deviation of the noise, which is 00:06:26.528 --> 00:06:30.132 equivalent to saying that it is the error function of the square root of the signal 00:06:30.132 --> 00:06:34.810 to noise ratio. If we plot this as a function of the 00:06:34.810 --> 00:06:39.875 signal to noise ratio in dBs, and I remind you that dBs here mean that 00:06:39.875 --> 00:06:44.348 we compute 10 times the log in base 10 of the power of the signal divided by the 00:06:44.348 --> 00:06:49.773 power of the noise. And since we are in a log-log scale, we 00:06:49.773 --> 00:06:54.388 can see that the probability of error decays exponentially with the signal to 00:06:54.388 --> 00:06:59.800 noise ratio. This exponential decay is quite the norm 00:06:59.800 --> 00:07:02.882 in communication systems. 
And while the absolute rate of decay 00:07:02.882 --> 00:07:07.580 might change in terms of the constants involved in the curve, 00:07:07.580 --> 00:07:12.110 the trend will stay the same even for more complex signaling schemes. 00:07:12.110 --> 00:07:15.197 So the lesson that we learn from this simple example is that in order to reduce 00:07:15.197 --> 00:07:19.905 the probability of error, we should increase G, the amplitude of the signal. 00:07:19.905 --> 00:07:23.265 But of course, increasing G also increases the power of the transmitted 00:07:23.265 --> 00:07:28.140 signal, and we know that we cannot go above the channel's power constraint. 00:07:28.140 --> 00:07:33.380 And so that's how the power constraint limits the reliability of transmission. 00:07:33.380 --> 00:07:37.760 The bilevel signaling scheme is very instructive, but it's also very limited 00:07:37.760 --> 00:07:41.350 in the sense that we're sending just one bit per output symbol. 00:07:41.350 --> 00:07:44.290 So to increase the throughput, to increase the number of bits per second 00:07:44.290 --> 00:07:47.990 that we send over a channel, we can use multilevel signaling. 00:07:47.990 --> 00:07:50.790 There are many ways to do so and we will just look at a few, but the 00:07:50.790 --> 00:07:56.382 fundamental idea is that we now take larger chunks of bits, and therefore we 00:07:56.382 --> 00:08:00.210 have alphabets that have a higher cardinality. 00:08:00.210 --> 00:08:04.434 So more values in the alphabet means more bits per symbol and therefore a higher 00:08:04.434 --> 00:08:07.576 data rate. But not to give the ending away, we will 00:08:07.576 --> 00:08:10.408 see that the power of the signal will also be dependent on the size of the 00:08:10.408 --> 00:08:13.614 alphabet. 
And so, in order not to exceed a certain 00:08:13.614 --> 00:08:16.857 probability of error, given the channel's power constraint, we will not be able 00:08:16.857 --> 00:08:21.200 to grow the alphabet indefinitely. But we can be smart in the way we build 00:08:21.200 --> 00:08:24.480 this alphabet, and so we will look at some examples. 00:08:24.480 --> 00:08:27.580 The first example is PAM, Pulse Amplitude Modulation. 00:08:27.580 --> 00:08:31.651 We split the incoming bitstream into chunks of M bits so that each chunk 00:08:31.651 --> 00:08:36.720 corresponds to an integer between 0 and 2 to the M minus 1. 00:08:36.720 --> 00:08:40.248 We can call this sequence of integers k of n, and this sequence is mapped onto a 00:08:40.248 --> 00:08:46.169 sequence of symbols a of n like so. There's a gain factor G, like always. 00:08:46.169 --> 00:08:50.770 And then we use the first 2 to the M odd integers around 0. 00:08:50.770 --> 00:08:58.300 So for instance, if M is equal to 2, we have 0, 1, 2, and 3 as potential values 00:08:58.300 --> 00:09:02.685 for k of n. And a of n, 00:09:02.685 --> 00:09:10.970 if we assume G is equal to 1, will be either minus 3, or minus 1, or 1, 00:09:10.970 --> 00:09:15.870 or 3. We will see why we use the odd integers 00:09:15.870 --> 00:09:19.180 in just a second. And the receiver, the slicer, will work by 00:09:19.180 --> 00:09:23.200 simply associating to the received symbol the closest odd integer, always 00:09:23.200 --> 00:09:28.600 taking the gain into account. So graphically, again, PAM for M equal to 00:09:28.600 --> 00:09:32.350 2 and G equal to 1 will look like this. Here are the odd integers. 00:09:32.350 --> 00:09:37.790 The distance between two transmitted points, or transmitted symbols, is 2G; 00:09:37.790 --> 00:09:42.288 right here G is equal to 1, but it would in 00:09:42.288 --> 00:09:47.730 general be 2 times the gain. And using odd integers creates a 00:09:47.730 --> 00:09:49.224 zero-mean sequence. 
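The PAM mapper and slicer can be sketched in Python as follows (an illustration, not the course's reference code; the mapping a of n = G times (2 k of n minus 2 to the M plus 1) reproduces the odd-integer levels described above):

```python
def pam_map(bits, M, G=1.0):
    """Split the bitstream into chunks of M bits; chunk value k in
    [0, 2^M - 1] maps to the odd level G * (2k - 2^M + 1)."""
    symbols = []
    for i in range(0, len(bits), M):
        k = int(''.join(str(b) for b in bits[i:i + M]), 2)
        symbols.append(G * (2 * k - 2**M + 1))
    return symbols

def pam_slice(received, M, G=1.0):
    """Slicer: pick the closest level in the alphabet, then invert
    the mapping to recover the bits."""
    bits = []
    for s in received:
        k = min(range(2**M), key=lambda j: abs(s - G * (2 * j - 2**M + 1)))
        bits.extend(int(b) for b in format(k, '0{}b'.format(M)))
    return bits

# M = 2, G = 1: chunks 00, 01, 10, 11 map to -3, -1, 1, 3
symbols = pam_map([0, 0, 0, 1, 1, 0, 1, 1], 2)
```

As long as the noise moves each symbol by less than G, the slicer lands on the correct level and the bitstream is recovered exactly.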
If we assume that each symbol is 00:09:49.224 --> 00:09:51.472 equiprobable, which is likely, given that we've used a 00:09:51.472 --> 00:09:55.500 scrambler in the transmitter, the resulting mean is zero. 00:09:55.500 --> 00:09:58.855 The analysis of the probability of error for PAM is very similar to what we 00:09:58.855 --> 00:10:03.829 carried out for bilevel signaling. As a matter of fact, binary signaling is 00:10:03.829 --> 00:10:08.231 simply PAM with M equal to 1. The end result is very similar, and it's 00:10:08.231 --> 00:10:11.427 an exponentially decaying function of the ratio between the power of the signal and 00:10:11.427 --> 00:10:15.300 the power of the noise. The reason why we don't analyze this 00:10:15.300 --> 00:10:18.530 further is because we have an improvement in store. 00:10:18.530 --> 00:10:21.743 And the improvement is aimed at increasing the throughput, increasing the 00:10:21.743 --> 00:10:25.580 number of bits per symbol that we can send without necessarily increasing the 00:10:25.580 --> 00:10:29.990 probability of error. So here's a wild idea: 00:10:29.990 --> 00:10:34.130 let's use complex numbers and build a complex-valued transmission system. 00:10:34.130 --> 00:10:37.298 This requires a certain suspension of disbelief for the time being, but believe 00:10:37.298 --> 00:10:41.394 me, it will work in the end. The name for this complex-valued mapping 00:10:41.394 --> 00:10:44.790 scheme is QAM, which is an acronym for Quadrature 00:10:44.790 --> 00:10:47.736 Amplitude Modulation, and it works like so. 00:10:47.736 --> 00:10:52.176 The mapper takes the incoming bitstream and splits it into chunks of M 00:10:52.176 --> 00:10:56.842 bits, with M even. Then it uses half of the bits to define 00:10:56.842 --> 00:11:00.934 a PAM sequence, which we call a r of n, and the remaining M over 2 bits to 00:11:00.934 --> 00:11:06.850 define another independent PAM sequence, a i of n. 
00:11:06.850 --> 00:11:11.800 The final symbol sequence is a sequence of complex numbers, where the real part 00:11:11.800 --> 00:11:14.473 is the first PAM sequence, and the imaginary part is the second PAM 00:11:14.473 --> 00:11:18.202 sequence. And of course, in front we have a gain 00:11:18.202 --> 00:11:21.985 factor, G. So the transmission alphabet, a, is given 00:11:21.985 --> 00:11:29.490 by points in the complex plane, with odd-valued coordinates around the origin. 00:11:29.490 --> 00:11:33.195 At the receiver, the slicer works by finding the symbol in the alphabet which 00:11:33.195 --> 00:11:37.293 is closest in Euclidean distance to the received symbol. 00:11:37.293 --> 00:11:42.170 Let's look at this graphically. This is a set of points for QAM 00:11:42.170 --> 00:11:47.120 transmission with M equal to 2, which corresponds to two bilevel PAM signals on 00:11:47.120 --> 00:11:55.000 the real axis and on the imaginary axis. So that results in four points. 00:11:55.000 --> 00:11:58.864 If we increase the number of bits per symbol, we set M equal to 4, that 00:11:58.864 --> 00:12:02.590 corresponds to two PAM signals with 2 bits each, which makes for a 00:12:02.590 --> 00:12:07.547 constellation; this is what these arrangements of points 00:12:07.547 --> 00:12:12.136 in the complex plane are called. A constellation of four by four points at 00:12:12.136 --> 00:12:15.900 the odd-valued coordinates in the complex plane. 00:12:15.900 --> 00:12:22.725 If we increase M to 8, then we have a 256-point constellation, with 16 points per 00:12:22.725 --> 00:12:26.840 side. Let's look at what happens when a symbol 00:12:26.840 --> 00:12:31.420 is received, and how we derive an expression for the probability of error. 00:12:31.420 --> 00:12:35.182 If this is the nominal constellation, the transmitter will choose one of these 00:12:35.182 --> 00:12:40.310 values for transmission, say this one. 
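A sketch of this QAM mapper in Python (illustrative only; the bit-to-level rule matches the odd-integer PAM mapping used above):

```python
def qam_map(bits, M, G=1.0):
    """QAM mapper for even M: each chunk of M bits yields one complex
    symbol; the first M/2 bits give the real PAM level, the remaining
    M/2 bits the imaginary one, both on the odd integers."""
    assert M % 2 == 0
    half = M // 2

    def level(chunk):
        k = int(''.join(str(b) for b in chunk), 2)
        return 2 * k - 2**half + 1

    symbols = []
    for i in range(0, len(bits), M):
        chunk = bits[i:i + M]
        symbols.append(G * complex(level(chunk[:half]), level(chunk[half:])))
    return symbols

# M = 2: the four-point constellation, one bit per axis
four_points = qam_map([0, 0, 0, 1, 1, 0, 1, 1], 2)
```

For M = 2 the four chunks 00, 01, 10, 11 land on the four points with coordinates plus or minus 1 on each axis.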
And this value will be corrupted by noise in 00:12:40.310 --> 00:12:43.629 the transmission and the receiving process, 00:12:43.629 --> 00:12:47.919 and will appear somewhere in the complex plane, not necessarily exactly on the 00:12:47.919 --> 00:12:52.100 point it originates from. The way the slicer operates is by 00:12:52.100 --> 00:12:56.500 defining decision regions around each point in the constellation. 00:12:56.500 --> 00:13:01.505 So suppose for this point here, the transmitted point, the decision region is a 00:13:01.505 --> 00:13:07.150 square, of side 2G, centered around the transmitted point. 00:13:07.150 --> 00:13:10.890 So what happens is that when we receive symbols, 00:13:10.890 --> 00:13:14.302 they will not fall on the original point. But as long as they fall within the 00:13:14.302 --> 00:13:17.110 decision region, they will be decoded correctly. 00:13:17.110 --> 00:13:19.710 So for instance here, we will decode this correctly. 00:13:19.710 --> 00:13:22.520 Here we will decode this correctly. Same here. 00:13:22.520 --> 00:13:26.396 But this point, for instance, falls outside of the decision region and therefore it 00:13:26.396 --> 00:13:29.987 will be associated to a different constellation point, thereby causing an 00:13:29.987 --> 00:13:33.916 error. To quantify the probability of error, we 00:13:33.916 --> 00:13:37.574 assume as per usual that each received symbol is the sum of the transmitted 00:13:37.574 --> 00:13:42.156 symbol plus a noise sample, theta of n. 00:13:42.156 --> 00:13:47.634 And we further assume that this noise is a complex-valued Gaussian noise of equal 00:13:47.634 --> 00:13:52.640 variance in the real and imaginary components. 00:13:52.640 --> 00:13:57.860 We're working on a completely digital system that operates 00:13:57.860 --> 00:14:01.614 with complex-valued quantities. 
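The square decision regions make the slicer particularly simple: deciding each component independently, by rounding it to the nearest odd level, is the same as testing which square the received point fell into. A Python sketch (the helper names are my own):

```python
def qam_slice(received, M, G=1.0):
    """Slicer for square QAM: round each component to the nearest odd
    level (equivalent to square decision regions of side 2G), clipping
    to the edge of the constellation."""
    half = M // 2
    top = 2**half - 1  # largest odd coordinate in the constellation

    def nearest_odd(x):
        n = 2 * round((x / G - 1) / 2) + 1  # nearest odd integer
        return max(-top, min(top, n))

    return [complex(nearest_odd(s.real), nearest_odd(s.imag))
            for s in received]

# A received point near 1 - 3j is decoded back to that symbol
decoded = qam_slice([complex(0.9, -2.8)], 4)
```

Any received point that drifts by less than G in both components stays inside its square and is decoded correctly; a larger excursion in either component crosses into a neighboring region and causes an error.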
So we're making a new model for the 00:14:01.614 --> 00:14:05.326 noise, and we will see later how to translate the physical real noise into a 00:14:05.326 --> 00:14:09.764 complex variable. With these assumptions, the probability 00:14:09.764 --> 00:14:13.604 of error is equal to the probability that the real part of the noise is larger 00:14:13.604 --> 00:14:18.606 than G in magnitude, plus the probability that the imaginary 00:14:18.606 --> 00:14:21.850 part of the noise is larger than G in magnitude. 00:14:21.850 --> 00:14:24.640 We assume that the real and imaginary components of the noise are independent, 00:14:24.640 --> 00:14:27.598 and that's why we can split the probability like so. 00:14:27.598 --> 00:14:31.738 Now, if you remember the shape of the decision region, this condition is 00:14:31.738 --> 00:14:35.947 equivalent to saying that the noise is pushing the real part of the point 00:14:35.947 --> 00:14:40.570 outside of the decision region, in either direction, and same for the imaginary 00:14:40.570 --> 00:14:45.376 part. Now if we develop this, this is equal to 00:14:45.376 --> 00:14:48.715 1 minus the probability that the real part of the noise is less than G in magnitude, and the 00:14:48.715 --> 00:14:52.390 imaginary part of the noise is less than G in magnitude. 00:14:52.390 --> 00:14:57.200 This is the complementary condition to what we just wrote above. 00:14:57.200 --> 00:15:00.737 And so this is equal to 1 minus the integral over the decision region D of 00:15:00.737 --> 00:15:05.680 the complex-valued probability density function for the noise. 00:15:05.680 --> 00:15:09.600 In order to compute this integral, we're going to approximate the shape of the 00:15:09.600 --> 00:15:13.430 decision region with the inscribed circle. 00:15:13.430 --> 00:15:16.454 So instead of using the square, we're going to use a circle centered around the 00:15:16.454 --> 00:15:19.838 transmission point. 
When the constellation is very dense, 00:15:19.838 --> 00:15:24.118 this approximation is quite accurate. With this approximation, we can compute 00:15:24.118 --> 00:15:27.570 the integral exactly for a Gaussian distribution. 00:15:27.570 --> 00:15:32.498 And if we assume that the variance of the noise is sigma 0 squared over 2 in each 00:15:32.498 --> 00:15:37.820 component, real or imaginary, it turns out that the probability of 00:15:37.820 --> 00:15:41.800 error is equal to e to the minus G squared over sigma 0 squared. 00:15:41.800 --> 00:15:45.496 Now to obtain a probability of error as a function of the signal to noise ratio, we 00:15:45.496 --> 00:15:49.450 have to compute the power of the transmitted signal. 00:15:49.450 --> 00:15:53.415 So if all symbols are equiprobable and independent, it turns out that the 00:15:53.415 --> 00:15:58.819 variance of the signal is G squared times 1 over 2 to the power of M, 00:15:58.819 --> 00:16:02.979 which is the probability of each symbol, times the sum over all symbols in the 00:16:02.979 --> 00:16:07.220 alphabet of the magnitude of the symbols squared. 00:16:07.220 --> 00:16:11.432 Now, it's a little bit tedious, but we can compute this exactly for any M. 00:16:11.432 --> 00:16:15.152 And it turns out that the power of the transmitted signal is G squared times two 00:16:15.152 --> 00:16:20.610 thirds times (2 to the M minus 1). Now, if you plug this into the formula 00:16:20.610 --> 00:16:24.165 for the probability of error that we've seen before, 00:16:24.165 --> 00:16:28.851 we get that the result is an exponential function where the argument is minus 3, 00:16:28.851 --> 00:16:33.324 that multiplies 2 to the minus (M plus 1), that multiplies the signal to noise 00:16:33.324 --> 00:16:37.590 ratio. We can plot this probability of error in 00:16:37.590 --> 00:16:41.741 a log-log scale, like we did before. And we can parametrize the curve as a 00:16:41.741 --> 00:16:45.550 function of the number of points in the constellation. 
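Numerically, the two results just derived can be checked with a short Python sketch (an illustration under the lecture's assumptions; here snr is the linear power ratio, and the error formula keeps the exact 2 to the M minus 1 term rather than the power-of-two approximation):

```python
import math

def qam_power(M, G=1.0):
    """Average power of an equiprobable square 2^M-point QAM
    constellation: G^2 * (2/3) * (2^M - 1)."""
    return G**2 * (2.0 / 3.0) * (2**M - 1)

def qam_error_probability(snr, M):
    """Inscribed-circle approximation P(error) = exp(-G^2 / sigma0^2),
    rewritten in terms of the signal to noise ratio:
    exp(-1.5 * snr / (2^M - 1))."""
    return math.exp(-1.5 * snr / (2**M - 1))

# Brute-force check of the power formula for M = 4 (16-QAM, G = 1):
levels = [-3, -1, 1, 3]
brute = sum(a * a + b * b for a in levels for b in levels) / 16.0
```

The brute-force average over the sixteen points equals the closed-form value of 10, and for a fixed snr the error probability grows rapidly with M, which is the trade-off plotted next.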
00:16:45.550 --> 00:16:49.645 So here you have the curve for a four-point constellation, here's the curve for 00:16:49.645 --> 00:16:53.520 16 points, and here's the curve for 64 points. 00:16:53.520 --> 00:16:56.912 Now you can see that for a given signal to noise ratio, the probability of error 00:16:56.912 --> 00:17:01.000 increases with the number of points. Why is that? 00:17:01.000 --> 00:17:03.790 Well, if the signal to noise ratio remains the same, and we assume that the noise is 00:17:03.790 --> 00:17:06.580 always at the same level, then it means that the power of the signal remains 00:17:06.580 --> 00:17:10.900 constant as well. In that case, if the number of points 00:17:10.900 --> 00:17:15.814 increases, G has to become smaller in order to accommodate a larger number of 00:17:15.814 --> 00:17:19.354 points for the same power. But if G becomes smaller, then the 00:17:19.354 --> 00:17:23.764 decision regions become smaller, the separation between points becomes smaller, 00:17:23.764 --> 00:17:28.570 and the decision process becomes more vulnerable to noise. 00:17:28.570 --> 00:17:32.490 So in the end, here's the final recipe to design a QAM transmitter. 00:17:32.490 --> 00:17:34.540 First you pick a probability of error that you can live with. 00:17:34.540 --> 00:17:37.600 In general, 10 to the minus 6 is an acceptable probability of error at the 00:17:37.600 --> 00:17:41.116 symbol level. Then you find out the signal to noise ratio 00:17:41.116 --> 00:17:44.920 that is imposed by the channel's power constraint. 00:17:44.920 --> 00:17:49.724 Once you have that, you can find the size of your constellation by finding M, 00:17:49.724 --> 00:17:53.628 which, based on the previous equations, is the log in base 2 of 1 minus 3 over 2 00:17:53.628 --> 00:17:57.288 times the signal to noise ratio divided by the natural logarithm of the 00:17:57.288 --> 00:18:02.109 probability of error. 
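This recipe translates into a couple of lines of Python (a sketch; the function name and the rounding-down-to-even policy are my own choices, anticipating the remark about square constellations):

```python
import math

def qam_bits_per_symbol(snr_db, p_err=1e-6):
    """Invert the error-probability formula: the largest M meeting the
    target is M = log2(1 - 1.5 * snr / ln(p_err)), here rounded down
    to an even integer so the constellation stays square."""
    snr = 10.0 ** (snr_db / 10.0)
    m_exact = math.log2(1.0 - 1.5 * snr / math.log(p_err))
    return max(2, 2 * (int(m_exact) // 2))

# At 30 dB of SNR and a target error probability of 1e-6,
# this allows 6 bits per symbol, i.e. a 64-point constellation.
M = qam_bits_per_symbol(30.0)
```

Note that ln(p_err) is negative, so the argument of the base-2 logarithm is greater than 1 and M grows with the available signal to noise ratio.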
Of course, you will have to round this to 00:18:02.109 --> 00:18:05.211 a suitable integer value, and potentially to an even value in order to have a 00:18:05.211 --> 00:18:09.632 square constellation. The final data rate of your system will 00:18:09.632 --> 00:18:13.658 be M, the number of bits per symbol, times W, which, if you remember, is the 00:18:13.658 --> 00:18:17.816 baud rate of the system, and corresponds to the bandwidth allowed for by the 00:18:17.816 --> 00:18:23.335 channel. So we know how to fit the bandwidth 00:18:23.335 --> 00:18:27.838 constraint via upsampling. With QAM, we know how many bits per 00:18:27.838 --> 00:18:30.860 symbol we can use given the power constraint. 00:18:30.860 --> 00:18:34.600 And so we know the theoretical throughput of the transmitter for a given reliability 00:18:34.600 --> 00:18:38.607 figure. However, the question remains: how are we 00:18:38.607 --> 00:18:44.400 going to send complex-valued symbols over a physical channel? 00:18:44.400 --> 00:18:48.900 It's time, therefore, to stop the suspension of disbelief, and look at 00:18:48.900 --> 00:18:54.902 techniques to do complex signaling over a real-valued channel.