Hi, and welcome to Module 9.3 of Digital Signal Processing. We are still talking about digital communication systems. In the previous module we addressed the bandwidth constraint; in this module we will tackle the power constraint. First we will introduce the concepts of noise and probability of error in a communication system. We will then look at signaling alphabets and their associated power. And finally, we'll introduce QAM signaling.

We have seen that a transmitter sends a sequence of symbols a[n] created by the mapper. Now we take the receiver into account. We don't yet know how, but it's safe to assume that the receiver will in the end obtain an estimate â[n] of the original transmitted symbol sequence. It's an estimate because even if there is no distortion introduced by the channel, even if nothing bad happens, there will always be a certain amount of noise that corrupts the original sequence. When the noise is very large, our estimate of the transmitted symbol will be off, and we will incur a decoding error. This probability of error will depend on the power of the noise with respect to the power of the signal, and will also depend on the decoding strategies that we put in place: how smart we are in circumventing the effects of the noise.
One way we can maximize the probability of correctly guessing the transmitted symbol is by using suitable alphabets, and we will see in more detail what that means. Remember the scheme for the transmitter: we have a bitstream coming in, then the scrambler, and then the mapper, which produces a sequence of symbols a[n]. These symbols have to be sent over the channel; to do so, we upsample, we interpolate, and then we transmit. Now, how do we go from bitstream to samples in more detail? In other words, how does the mapper work? The mapper splits the incoming bitstream into chunks and assigns to each chunk a symbol a[n] from a finite alphabet; we will decide later what the alphabet is composed of. To undo the mapping operation and recover the bitstream, the receiver performs a slicing operation. The receiver obtains a value â[n], where the hat indicates the fact that noise has leaked into the value of the signal, and decides which symbol from the alphabet, which is known to the receiver as well, is closest to the received value. From there, it is extremely easy to piece back the original bitstream. As an example, let's look at simple two-level signaling. This generates signals of the kind we have seen in the examples so far, alternating between two levels.
The mapper works by splitting the incoming bitstream into single bits. The output symbol sequence uses an alphabet composed of two symbols, G and −G, and associates G to a bit of value 1 and −G to a bit of value 0. The receiver's slicer looks at the sign of the incoming symbol sequence, which has been corrupted by noise, and decides that the nth bit is 1 if the sign of the nth symbol is positive, and 0 otherwise. Let's look at an example, with G equal to 1, so the two-level signal alternates between +1 and −1. Suppose we have an input bit sequence that gives rise to this signal here after transmission; after decoding at the receiver, the resulting symbol sequence will look like this, where each symbol has been corrupted by a varying amount of noise. If we now slice this sequence by thresholding, as shown before, we recover a bit sequence like this, where we have indicated in red the errors incurred by the slicer because of the noise. If we want to analyze in more detail what the probability of error is, we have to make some hypotheses on the signals involved in this toy experiment. Assume that each received symbol can be modeled as the original symbol plus a noise sample. Assume also that the bits in the bitstream are equiprobable, so zero and one appear with probability 50% each.
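As a rough sketch of this two-level scheme (all function names and the noise level here are my own choices, for illustration), the mapper and slicer can be simulated like this:

```python
import numpy as np

def map_bits(bits, G=1.0):
    """Two-level mapper: bit 1 -> +G, bit 0 -> -G."""
    return np.where(np.asarray(bits) == 1, G, -G)

def slice_symbols(received):
    """Slicer: decide bit 1 if the received value is positive, 0 otherwise."""
    return (np.asarray(received) > 0).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)
received = map_bits(bits) + rng.normal(0.0, 0.4, bits.size)  # AWGN, sigma_0 = 0.4
decoded = slice_symbols(received)
error_rate = np.mean(decoded != bits)  # symbols pushed across zero by the noise
```

Without noise the slicer recovers the bits exactly; with noise, a small fraction of symbols change sign and are decoded in error.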
Assume that the noise and the signal are independent, and assume that the noise is additive white Gaussian noise with zero mean and known variance σ₀². With these hypotheses, the probability of error can be written out as follows. First of all, we split the probability of error into two conditional probabilities, conditioned on whether the nth bit is equal to 1 or equal to 0. In the first case, when the nth bit is equal to 1, remember, the produced symbol is equal to G, so the probability of error is equal to the probability of the noise sample being less than −G, because only in this case will the sum of the symbol plus the noise be negative. Similarly, when the nth bit is equal to 0, we have a negative symbol, and the only way for it to change sign is if the noise sample is greater than G. Since each bit value occurs with probability one half, by the symmetry of the Gaussian distribution the overall probability of error is equal to the probability of the noise sample being larger than G. We can compute this as the integral from G to infinity of the probability density function of the Gaussian distribution with the known variance. This tail function has a standard name: up to a scaling of its argument, it is the complementary error function. Since the integral cannot be computed in closed form, the function is available in most numerical packages under names like erfc.
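Concretely, under these hypotheses the error probability is the Gaussian tail P(noise > G), which numerical packages expose through erfc. A small sanity check against a Monte Carlo estimate (the noise level is chosen arbitrarily here):

```python
import math
import numpy as np

def p_error_two_level(G, sigma0):
    """Tail probability P(noise > G) for zero-mean Gaussian noise with
    standard deviation sigma0, written via the complementary error function."""
    return 0.5 * math.erfc(G / (math.sqrt(2.0) * sigma0))

# Monte Carlo estimate of the same tail probability
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.5, 1_000_000)
empirical = np.mean(noise > 1.0)
theoretical = p_error_two_level(1.0, 0.5)  # about 0.0228
```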
The important thing to notice here is that the probability of error is some function of the ratio between the amplitude of the signal and the standard deviation of the noise. We can carry this analysis further by considering the transmitted power. We have a bi-level signal and each level occurs with probability one half, so the variance of the signal, which corresponds to the power, is equal to G² times the probability of the nth bit being equal to 1, plus G² times the probability of the nth bit being equal to 0, which is equal to G². And so, if we rewrite the probability of error, it is equal to the error function of the ratio between the standard deviation of the transmitted signal and the standard deviation of the noise, which is equivalent to saying that it is the error function of the square root of the signal-to-noise ratio. Let's plot this as a function of the signal-to-noise ratio in dB; I remind you that dB here means that we compute 10 times the log in base 10 of the power of the signal divided by the power of the noise. Since we are on a log-log scale, we can see that the probability of error decays exponentially with the signal-to-noise ratio. This exponential decay is quite the norm in communication systems, and while the absolute rate of decay might change in terms of the constants involved in the curve, the trend stays the same even for more complex signaling schemes.

So the lesson we learn from this simple example is that in order to reduce the probability of error, we should increase G, the amplitude of the signal. But of course, increasing G also increases the power of the transmitted signal, and we know that we cannot go above the channel's power constraint. And so that's how the power constraint limits the reliability of transmission. The bilevel signaling scheme is very instructive, but it's also very limited, in the sense that we're sending just one bit per output symbol. To increase the throughput, the number of bits per second that we send over the channel, we can use multilevel signaling. There are very many ways to do so and we will just look at a few, but the fundamental idea is that we now take larger chunks of bits, and therefore we have alphabets of higher cardinality. More values in the alphabet means more bits per symbol, and therefore a higher data rate. But, not to give the ending away, we will see that the power of the signal also depends on the size of the alphabet, and so, in order not to exceed a certain probability of error given the channel's power constraint, we will not be able to grow the alphabet indefinitely. But we can be smart in the way we build the alphabet, and so we will look at some examples.
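In exact form, the curve just described is the Gaussian tail Q(sqrt(SNR)), which can be written with the stdlib erfc; a sketch (function name mine) of the values behind the plot:

```python
import math

def p_error_vs_snr_db(snr_db):
    """Two-level error probability as a function of SNR in dB.
    With SNR = G**2 / sigma0**2, P_err = Q(sqrt(SNR)) = erfc(sqrt(SNR / 2)) / 2."""
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr / 2.0))

curve = [p_error_vs_snr_db(db) for db in (0, 5, 10, 15)]
```

As the plot in the lecture shows, the values fall off very quickly as the SNR grows.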
The first example is PAM, Pulse Amplitude Modulation. We split the incoming bitstream into chunks of M bits, so that each chunk corresponds to an integer between 0 and 2^M − 1. We can call this sequence of integers k[n], and it is mapped onto a sequence of symbols a[n] like so: there is a gain factor G, as always, and then we use the 2^M odd integers closest to 0. So for instance, if M is equal to 2, the potential values for k[n] are 0, 1, 2 and 3, and a[n], assuming G equal to 1, will be either −3, −1, 1 or 3. We will see why we use the odd integers in just a second. At the receiver, the slicer works by simply associating to the received symbol the closest odd integer, always taking the gain into account. Graphically, PAM for M equal to 2 and G equal to 1 looks like this. Here are the odd integers; the distance between two transmitted points, or transmitted symbols, is 2G (here G is equal to 1, but in general the distance is 2 times the gain). Using odd integers creates a zero-mean sequence: if we assume that each symbol is equiprobable, which is likely given that we've used a scrambler in the transmitter, the resulting mean is zero. The analysis of the probability of error for PAM is very similar to what we carried out for bilevel signaling.
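A minimal sketch of this mapper and slicer (function names are mine, and reading each chunk most-significant-bit first is an assumption the lecture does not specify):

```python
import numpy as np

def pam_map(bits, M, G=1.0):
    """PAM mapper: split bits into chunks of M, read each chunk as an integer
    k in [0, 2**M - 1], and map it to the odd integer G * (2k - 2**M + 1)."""
    chunks = np.asarray(bits).reshape(-1, M)
    k = chunks @ (1 << np.arange(M - 1, -1, -1))  # binary chunk -> integer
    return G * (2 * k - 2**M + 1)

def pam_slice(received, M, G=1.0):
    """Slicer: recover k from the closest odd multiple of G, clipped to the alphabet."""
    k = np.round((np.asarray(received) / G + 2**M - 1) / 2).astype(int)
    return np.clip(k, 0, 2**M - 1)
```

For M = 2 and G = 1 this reproduces the example in the text: the chunks 00, 01, 10, 11 map to −3, −1, 1, 3.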
As a matter of fact, binary signaling is simply PAM with M equal to 1. The end result is very similar: an exponentially decaying function of the ratio between the power of the signal and the power of the noise. The reason why we don't analyze this further is that we have an improvement in store, aimed at increasing the throughput, the number of bits per symbol that we can send, without necessarily increasing the probability of error. So here's a wild idea: let's use complex numbers and build a complex-valued transmission system. This requires a certain suspension of disbelief for the time being but, believe me, it will work in the end. The name of this complex-valued mapping scheme is QAM, an acronym for Quadrature Amplitude Modulation, and it works like so. The mapper takes the incoming bitstream and splits it into chunks of M bits, with M even. Then it uses half of the bits to define a PAM sequence, which we call a_r[n], and the remaining M/2 bits to define another, independent PAM sequence, a_i[n]. The final symbol sequence is a sequence of complex numbers, where the real part is the first PAM sequence and the imaginary part is the second PAM sequence; and of course, in front, we have a gain factor G.
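The mapping can be sketched as follows (a hypothetical implementation; the bit ordering within each chunk is my own assumption):

```python
import numpy as np

def qam_map(bits, M, G=1.0):
    """QAM mapper: chunks of M bits (M even); the first M/2 bits drive a PAM
    value for the real part, the remaining M/2 bits one for the imaginary part."""
    assert M % 2 == 0
    half = M // 2
    chunks = np.asarray(bits).reshape(-1, M)
    weights = 1 << np.arange(half - 1, -1, -1)
    pam = lambda k: 2 * k - 2**half + 1  # odd integers around 0
    return G * (pam(chunks[:, :half] @ weights) + 1j * pam(chunks[:, half:] @ weights))
```

For M = 2 each chunk yields one of four complex symbols (±1 ± 1j); for M = 4 each coordinate is a 2-bit PAM value in {−3, −1, 1, 3}.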
So the transmission alphabet A is given by points in the complex plane with odd-valued coordinates around the origin. At the receiver, the slicer works by finding the symbol in the alphabet that is closest in Euclidean distance to the received value. Let's look at this graphically. This is the set of points for QAM transmission with M equal to 2, which corresponds to two bilevel PAM signals, one on the real axis and one on the imaginary axis; that results in four points. If we increase the number of bits per symbol and set M equal to 4, that corresponds to two PAM signals with 2 bits each, which makes for a constellation (this is what such arrangements of points in the complex plane are called) of four by four points at the odd-valued coordinates in the complex plane. If we increase M to 8, we have a 256-point constellation, with 16 points per side. Let's look at what happens when a symbol is received, and derive an expression for the probability of error. If this is the nominal constellation, the transmitter will choose one of these values for transmission, say this one. This value will be corrupted by noise in the transmission and reception process, and will appear somewhere in the complex plane, not necessarily exactly on the point it originated from. The way the slicer operates is by defining decision regions around each point in the constellation.
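Because the constellation is a square grid, the symbol closest in Euclidean distance can be found by rounding the real and imaginary parts independently; a sketch (the function name and the clipping behavior at the edges are my assumptions):

```python
import numpy as np

def qam_slice(received, M, G=1.0):
    """QAM slicer: round each coordinate to the nearest odd multiple of G,
    clipped to the 2**(M/2) levels per side of the constellation."""
    side = 2 ** (M // 2)  # points per side

    def nearest_level(x):
        k = np.clip(np.round((np.asarray(x) / G + side - 1) / 2).astype(int), 0, side - 1)
        return G * (2 * k - side + 1)

    r = np.asarray(received)
    return nearest_level(r.real) + 1j * nearest_level(r.imag)
```

Rounding per coordinate and minimizing Euclidean distance coincide here precisely because the decision regions are axis-aligned squares.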
So suppose that for this point here, the transmitted point, the decision region is a square of side 2G centered on the point. What happens is that received symbols will not fall exactly on the original point, but as long as they fall within the decision region, they will be decoded correctly. So for instance here we decode correctly; here we decode correctly; same here. But this point, for instance, falls outside of the decision region, and therefore it will be associated to a different constellation point, thereby causing an error. To quantify the probability of error, we assume as usual that each received symbol is the sum of the transmitted symbol plus a noise sample θ[n]. We further assume that this noise is complex-valued Gaussian noise with equal variance in the real and imaginary components. We are working on a completely digital system that operates with complex-valued quantities, so we are making a new model for the noise; we will see later how to translate the physical, real noise into a complex variable. With these assumptions, an error occurs when the real part of the noise is larger than G in magnitude, or when the imaginary part of the noise is larger than G in magnitude.
We assume that the real and imaginary components of the noise are independent, and that's why we can split the probability like so. Now, if you remember the shape of the decision region, this condition is equivalent to saying that the noise pushes the real part of the point outside of the decision region, in either direction, and the same for the imaginary part. If we develop this, it is equal to 1 minus the probability that the real part of the noise is less than G in magnitude and the imaginary part of the noise is less than G in magnitude; this is the complementary condition to what we just wrote above. And so this is equal to 1 minus the integral, over the decision region D, of the complex-valued probability density function of the noise. In order to compute this integral, we approximate the shape of the decision region with the inscribed circle: instead of using the square, we use a circle of radius G centered on the transmitted point. When the constellation is very dense, this approximation is quite accurate, and with it we can compute the integral exactly for a Gaussian distribution. If we assume that the variance of the noise is σ₀²/2 in each component, real and imaginary, it turns out that the probability of error is equal to e^(−G²/σ₀²).
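With the inscribed-circle approximation, the integral becomes the probability that a complex Gaussian sample has magnitude greater than G, which is exactly exp(−G²/σ₀²) (a Rayleigh tail). A quick Monte Carlo sanity check, with an arbitrary noise level of my choosing:

```python
import numpy as np

def qam_p_error(G, sigma0):
    """Circle approximation: probability that complex Gaussian noise
    (variance sigma0**2 / 2 per component) leaves the inscribed circle
    of radius G around the transmitted point."""
    return np.exp(-G**2 / sigma0**2)

rng = np.random.default_rng(2)
parts = rng.normal(0.0, 0.5 / np.sqrt(2.0), (2, 1_000_000))  # sigma0 = 0.5
noise = parts[0] + 1j * parts[1]
empirical = np.mean(np.abs(noise) > 1.0)
theoretical = qam_p_error(1.0, 0.5)  # exp(-4), about 0.0183
```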
Now, to obtain the probability of error as a function of the signal-to-noise ratio, we have to compute the power of the transmitted signal. If all symbols are equiprobable and independent, the variance of the signal is G² times 1/2^M, which is the probability of each symbol, times the sum, over all symbols in the alphabet, of the squared magnitudes of the points. This is a little bit tedious, but we can solve it exactly for any M, and it turns out that the power of the transmitted signal is G² times 2/3 times (2^M − 1). If we plug this into the formula for the probability of error that we have seen before, we get that the result is an exponential function whose argument is −3 times the signal-to-noise ratio divided by 2(2^M − 1), which for large constellations behaves like −3 times 2^(−(M+1)) times the signal-to-noise ratio. We can plot this probability of error on a log-log scale, as we did before, and parametrize the curve by the number of points in the constellation. So here you have the curve for a four-point constellation, here's the curve for 16 points, and here's the curve for 64 points. You can see that, for a given signal-to-noise ratio, the probability of error increases with the number of points. Why is that?
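The closed form for the constellation power can be checked by brute force over all the points (function name mine):

```python
import numpy as np

def qam_power(M, G=1.0):
    """Average energy of an equiprobable square QAM constellation,
    averaged by brute force over all 2**M points."""
    side = 2 ** (M // 2)
    levels = G * (2 * np.arange(side) - side + 1)  # odd multiples of G
    re, im = np.meshgrid(levels, levels)
    return np.mean(re**2 + im**2)

# closed form from the lecture: (2/3) * G**2 * (2**M - 1)
```

For M = 2 this gives 2 (the four points ±1 ± 1j each have energy 2), and for M = 4 it gives 10, matching (2/3)(2^M − 1).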
Well, if the signal-to-noise ratio remains the same, and we assume that the noise is always at the same level, then the power of the signal remains constant as well. In that case, if the number of points increases, G has to become smaller in order to accommodate a larger number of points for the same power. But if G becomes smaller, the decision regions become smaller, the separation between points becomes smaller, and the decision process becomes more vulnerable to noise. So, in the end, here is the final recipe to design a QAM transmitter. First, you pick a probability of error that you can live with; in general, 10^−6 is an acceptable probability of error at the symbol level. Then you find the signal-to-noise ratio that is imposed by the channel's power constraint. Once you have that, you can find the size of your constellation by finding M which, based on the previous equations, is the log in base 2 of 1 minus (3/2) times the signal-to-noise ratio divided by the natural logarithm of the probability of error. Of course, you will have to round this to a suitable integer value, and possibly to an even integer in order to have a square constellation. The final data rate of your system will be M, the number of bits per symbol, times W which, if you remember, is the baud rate of the system and corresponds to the bandwidth allowed for by the channel.
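The recipe condenses into a few lines (a sketch; rounding down to an even integer is my interpretation of "a suitable integer value"):

```python
import math

def qam_bits_per_symbol(snr_db, p_err=1e-6):
    """Largest even number of bits per symbol M supported by a square QAM
    constellation at the given SNR and target symbol error probability,
    from M = log2(1 - (3/2) * SNR / ln(p_err))."""
    snr = 10.0 ** (snr_db / 10.0)
    M = math.log2(1.0 - 1.5 * snr / math.log(p_err))
    return max(2, 2 * (int(M) // 2))

# the data rate is then M * W, with W the baud rate of the system
```

For example, at 30 dB of SNR and a target error probability of 10^−6, this yields M = 6, i.e. a 64-point constellation.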
So, we know how to fit the bandwidth constraint via upsampling. With QAM, we know how many bits per symbol we can use given the power constraint. And so we know the theoretical throughput of the transmitter for a given reliability figure. However, the question remains: how are we going to send complex-valued symbols over a physical channel? It is time, therefore, to end the suspension of disbelief and look at techniques to do complex signaling over a real-valued channel.