Optimal RX
[Transmitter block diagram: Channel Encoder → Mapper → Baseband/Bandpass Modulation]
Note:
• We have not included multiplexing of other users, the power amplifier, or the up-converter in the block diagram.
• The digital modulation block includes the GSOP (Gram-Schmidt orthogonalization procedure) for the signal constellation.
• The original message is embedded in the output of the channel encoder (we have added bits in the channel encoder and removed bits in the source encoder along the way).
• Each bit is of equal duration, Tb.
The incoming bit sequence from the output of the channel encoder is converted into symbols by grouping bits (e.g., 1 symbol = 2 bits, i.e., 2^m = M with m = 2, M = 4). Each bit group is mapped into a symbol Si ∈ {S1, S2, S3, S4}, i = 1 to M = 4.

Example: 0100110010111000101010 → 01 00 11 00 10 11 10 00 10 10 10 → S2 S1 S4 S1 S3 S4 S3 S1 S3 S3 S3
Note:
• We have considered m = 2; hence the number of symbols is M = 2^m = 4.
• [S1 S2 S3 S4] ≡ [00 01 10 11]
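To make the grouping concrete, here is a minimal Python sketch of the bit-to-symbol mapping (illustrative only; the string-based representation and variable names are choices made here, not part of the slides):

```python
# Map a bit stream into M-ary symbols by grouping m bits at a time,
# using the natural mapping [S1 S2 S3 S4] = [00 01 10 11] from the note above.
bits = "0100110010111000101010"     # example sequence from the slide
m = 2                               # bits per symbol
M = 2 ** m                          # number of symbols (4)

symbols = []
for k in range(0, len(bits), m):
    group = bits[k:k + m]           # e.g. "01"
    index = int(group, 2)           # 0 .. M-1
    symbols.append(f"S{index + 1}")  # S1 <-> 00, S2 <-> 01, S3 <-> 10, S4 <-> 11

print(" ".join(symbols))            # S2 S1 S4 S1 S3 S4 S3 S1 S3 S3 S3
```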
OBJECTIVE: To design a receiver, based on the received signal r(t), that is optimum in the sense that it minimizes the probability of making an error.
The received signal r(t) = s_i(t) + n(t) is passed through a bank of correlators, one per basis function $\varphi_k(t)$, and each correlator output is sampled at t = T:

$$r_k = \int_0^T r(t)\,\varphi_k(t)\,dt = \int_0^T [s_i(t)+n(t)]\,\varphi_k(t)\,dt = \int_0^T s_i(t)\,\varphi_k(t)\,dt + \int_0^T n(t)\,\varphi_k(t)\,dt = s_{ik} + n_k$$

where $s_{ik} = \int_0^T s_i(t)\,\varphi_k(t)\,dt$ and $n_k = \int_0^T n(t)\,\varphi_k(t)\,dt$.

• Hence this is called a vector receiver.
• Guess why you sample the output of the integrator.
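A minimal numerical sketch of a single correlator branch (the rectangular unit-energy basis function, sampling rate, and noise scaling below are illustrative assumptions; a Riemann sum stands in for the analog integrate-and-sample operation):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0                      # symbol duration (assumed)
N0 = 0.2                     # noise PSD parameter, so each projection has variance N0/2
fs = 1000                    # samples per symbol for the numerical integral
t = np.linspace(0.0, T, fs, endpoint=False)
dt = T / fs

phi1 = np.ones_like(t) / np.sqrt(T)   # unit-energy rectangular basis function
a1 = +1.0                             # signal-space coordinate of s1(t)
s1 = a1 * phi1                        # transmitted waveform s1(t) = a1 * phi1(t)

# White-noise samples scaled so the projection n1 has variance N0/2
n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.shape)
r = s1 + n                            # received waveform r(t)

# Correlator branch: r1 = integral_0^T r(t) phi1(t) dt, sampled at t = T
r1 = np.sum(r * phi1) * dt
print(f"correlator output r1 = {r1:.3f} (noise-free value s11 = {a1})")
```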
The N sampled correlator outputs r_k, k = 1, 2, ..., N (each sampled at t = T_s) form the received vector $\mathbf{r}$. Thus the joint conditional probability of having received the signal r(t) given that s_i(t) was transmitted (also called the likelihood function) is

$$p(\mathbf{r}\mid \mathbf{s}_i) = \left(\frac{1}{\pi N_0}\right)^{N/2} \exp\!\left(-\frac{\lVert \mathbf{r}-\mathbf{s}_i \rVert^2}{N_0}\right)$$
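As an illustrative sketch of how this likelihood is evaluated for a received vector, and how the largest value identifies the most likely signal (the 2-D constellation and N0 value below are assumptions for the example, not taken from the slides):

```python
import numpy as np

N0 = 0.2
# Assumed 2-D constellation: N = 2 basis functions, M = 4 signal vectors
S = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
r = np.array([0.8, -1.1])            # example received vector from the correlator bank
N = r.size

# Likelihood p(r | s_i) = (1/(pi*N0))^(N/2) * exp(-||r - s_i||^2 / N0)
likelihoods = (np.pi * N0) ** (-N / 2) * np.exp(
    -np.sum((r - S) ** 2, axis=1) / N0
)
i_hat = np.argmax(likelihoods)       # pick the signal with the largest likelihood
print(f"decided on s_{i_hat + 1}, likelihoods = {np.round(likelihoods, 4)}")
```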
• At the output of the correlator demodulator we have obtained the scalar voltages r_k, k = 1, 2, ..., N, one from each correlator branch, collected in the vector $\mathbf{r} = [r_1\; r_2\; \dots\; r_N]^T$ (the same output is obtained from a matched-filter demodulator).
• In other words, we have obtained all the basis coefficients s_ik, k = 1, 2, ..., N, of the transmitted signal s_i(t). However, these coefficients are corrupted by the noise samples n_k, so the observed values may deviate from s_ik and lead to incorrect detection of s_i(t) [since s_i(t) = s_i1 φ_1(t) + ... + s_iN φ_N(t)].
Objective:
To design a signal detector that makes a decision on the transmitted signal in each signal interval (0 ≤ t ≤ T), based on the observation of the vector $\mathbf{r} = [r_1\; r_2\; \dots\; r_N]^T$, such that the probability of a correct decision is maximized (equivalently, the probability of error is minimized).
By Bayes' rule, the a posteriori probability of the signal s_i given the observation r is

$$P(\mathbf{s}_i \mid \mathbf{r}) = \frac{p(\mathbf{r}\mid \mathbf{s}_i)\,P(\mathbf{s}_i)}{p(\mathbf{r})}$$

where p(r | s_i) = conditional probability (likelihood) of receiving the vector r given that signal s_i was transmitted,
P(s_i) = a priori probability of the i-th signal being transmitted out of the set of M signals (= 1/M if equiprobable), and

$$p(\mathbf{r}) = \sum_{i=1}^{M} p(\mathbf{r}\mid \mathbf{s}_i)\,P(\mathbf{s}_i)$$

• Note that for equiprobable signals P(s_i) = 1/M is just a constant, and p(r) does not depend on which signal was transmitted.
• Hence maximizing P(s_i | r) boils down to maximizing p(r | s_i), i.e., maximizing the likelihood function. In other words, to compute P(s_i | r) we need to know p(r | s_i) for i = 1, 2, ..., M.
• The decision criterion based on the maximum of p(r | s_i) over the M signals is called the maximum-likelihood (ML) criterion.
• For ease of mathematical understanding we consider a binary signal, which we can generalize later to an M-ary signal.
• Therefore the transmitted signal in the symbol interval 0 ≤ t ≤ T will be (i = 1, 2, i.e., M = 2)

$$s_i(t) = \begin{cases} s_1(t), & 0 \le t \le T, & \text{for a binary 1} \\ s_2(t), & 0 \le t \le T, & \text{for a binary 0} \end{cases}$$
With r denoting the sampled correlator output and a_1, a_0 the signal levels for a binary 1 and 0 respectively, the two likelihood functions are

$$p(r\mid s_1) = \frac{1}{\sqrt{\pi N_0}}\exp\!\left(-\frac{(r-a_1)^2}{N_0}\right) \quad \text{(likelihood function of } s_1\text{)}$$

$$p(r\mid s_0) = \frac{1}{\sqrt{\pi N_0}}\exp\!\left(-\frac{(r-a_0)^2}{N_0}\right) \quad \text{(likelihood function of } s_0\text{)}$$
$$\text{Decision} = \begin{cases} H_1, & r(T_s) > \gamma \\ H_2, & r(T_s) < \gamma \end{cases}$$

[Figure: the two likelihood functions p(r | s_0) and p(r | s_1), centred at a_0 and a_1, with the decision threshold γ between them]

Equivalently, in terms of the likelihood ratio, the maximum a posteriori (MAP) detector is

$$\text{Decision} = \begin{cases} H_1, & \dfrac{p(r\mid s_1)}{p(r\mid s_0)} > \dfrac{P(s_0)}{P(s_1)} \\[6pt] H_2, & \dfrac{p(r\mid s_1)}{p(r\mid s_0)} < \dfrac{P(s_0)}{P(s_1)} \end{cases}$$
Taking the logarithm of the likelihood ratio gives

$$\text{Decision} = \begin{cases} H_1, & \dfrac{2r(a_1-a_0)}{N_0} - \dfrac{a_1^2-a_0^2}{N_0} > \ln\dfrac{P(s_0)}{P(s_1)} \\[6pt] H_2, & \dfrac{2r(a_1-a_0)}{N_0} - \dfrac{a_1^2-a_0^2}{N_0} < \ln\dfrac{P(s_0)}{P(s_1)} \end{cases}$$

Solving for r (assuming a_1 > a_0):

$$\text{Decision} = \begin{cases} H_1, & r > \dfrac{N_0}{2(a_1-a_0)}\ln\dfrac{P(s_0)}{P(s_1)} + \dfrac{a_1^2-a_0^2}{2(a_1-a_0)} \\[6pt] H_2, & r < \dfrac{N_0}{2(a_1-a_0)}\ln\dfrac{P(s_0)}{P(s_1)} + \dfrac{a_1^2-a_0^2}{2(a_1-a_0)} \end{cases}$$

Since $\dfrac{a_1^2-a_0^2}{2(a_1-a_0)} = \dfrac{a_1+a_0}{2}$, this simplifies to

$$\text{Decision} = \begin{cases} H_1, & r > \dfrac{N_0}{2(a_1-a_0)}\ln\dfrac{P(s_0)}{P(s_1)} + \dfrac{a_1+a_0}{2} = \gamma \\[6pt] H_2, & r < \dfrac{N_0}{2(a_1-a_0)}\ln\dfrac{P(s_0)}{P(s_1)} + \dfrac{a_1+a_0}{2} = \gamma \end{cases}$$
𝛾 is known as the MAP threshold
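A small sketch of the resulting threshold detector (the helper names map_threshold and detect are hypothetical; the function simply evaluates the γ expression above, and with equal priors the logarithmic term vanishes, leaving the midpoint threshold):

```python
import numpy as np

def map_threshold(a1, a0, N0, P0, P1):
    """MAP threshold gamma = N0/(2(a1-a0)) * ln(P0/P1) + (a1+a0)/2."""
    return N0 / (2.0 * (a1 - a0)) * np.log(P0 / P1) + (a1 + a0) / 2.0

def detect(r, gamma):
    """Decide H1 (s1 sent) if the sampled correlator output exceeds gamma, else H2."""
    return "H1" if r > gamma else "H2"

# With equal priors the ln-term vanishes and gamma reduces to the midpoint (a1+a0)/2
gamma_map = map_threshold(a1=1.0, a0=-1.0, N0=0.2, P0=0.3, P1=0.7)
gamma_eq = map_threshold(a1=1.0, a0=-1.0, N0=0.2, P0=0.5, P1=0.5)
print(gamma_map, gamma_eq, detect(0.02, gamma_map), detect(0.02, gamma_eq))
```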
If we take equiprobable signals, then P(s_0) = P(s_1) = 1/2 and the above equation becomes

$$\text{Decision} = \begin{cases} H_1, & r > \dfrac{a_1+a_0}{2} = \gamma_0 \\[6pt] H_2, & r < \dfrac{a_1+a_0}{2} = \gamma_0 \end{cases}$$

γ_0 is known as the ML threshold, and the resulting detector is the ML DETECTOR.
Solution: We have a_1 = +1 V and a_0 = −1 V, σ_n² = N_0/2 = 0.1 V², i.e., N_0 = 0.2 V².

The threshold of the MAP detector is

$$\gamma = \frac{N_0}{2(a_1-a_0)}\ln\frac{P(s_0)}{P(s_1)} + \frac{a_1+a_0}{2} = \frac{0.2}{2(1-(-1))}\ln\frac{P(s_0)}{P(s_1)} + 0 = 0.05\ln\frac{P(s_0)}{P(s_1)}$$

For (a) P(s_1) = 0.5; P(s_0) = 1 − 0.5 = 0.5; γ = 0 V
(b) P(s_1) = 0.7; P(s_0) = 0.3; γ = 0.05 ln(P(s_0)/P(s_1)) = 0.05 ln(0.3/0.7) = −0.04 V
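As a quick numerical check of parts (a) and (b), evaluating the γ expression derived above:

```python
import numpy as np

N0, a1, a0 = 0.2, 1.0, -1.0
for P1 in (0.5, 0.7):
    P0 = 1.0 - P1
    gamma = N0 / (2 * (a1 - a0)) * np.log(P0 / P1) + (a1 + a0) / 2
    print(f"P(s1) = {P1}: gamma = {gamma:.3f} V")
# Prints 0.000 V for case (a) and about -0.042 V (roughly -0.04 V) for case (b)
```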
• Based on the previous computations, the MAP detector or the ML detector has decided in favor of a symbol, say '0' or '1'.
• However, due to the presence of AWGN, we cannot say with certainty that the decision taken was correct.
• We introduce a figure of merit called the probability of error, with the help of which we can compute a statistical average of the error performance.
• In this discussion we consider the case of a binary signal, to continue with our binary ML detector.
• We will denote P(0 | s_1) = probability that a '0' has been detected when a '1' was sent, and
  P(1 | s_0) = probability that a '1' has been detected when a '0' was sent.
$$P_{be} = \sum_{i=1}^{2} P(e, s_i) = \sum_{i=1}^{2} P(e\mid s_i)\,P(s_i) = P(0\mid s_1)\,P(s_1) + P(1\mid s_0)\,P(s_0)$$

• For equiprobable signals with the ML threshold γ_0, by symmetry P_{be} = P(1 | s_0) = P(0 | s_1):
$$P_{be} = \int_{\gamma_0}^{\infty} p(r\mid s_0)\,dr = \int_{\frac{a_1+a_0}{2}}^{\infty} \frac{1}{\sqrt{\pi N_0}}\exp\!\left(-\frac{(r-a_0)^2}{N_0}\right) dr$$

• Substituting $u = \dfrac{r-a_0}{\sqrt{N_0/2}}$, so that $\sqrt{N_0/2}\,du = dr$
• When $r = \dfrac{a_1+a_0}{2}$, $u = \dfrac{a_1-a_0}{\sqrt{2N_0}}$; when $r = \infty$, $u = \infty$

$$P_{be} = \int_{\frac{a_1-a_0}{\sqrt{2N_0}}}^{\infty} \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{u^2}{2}\right) du = Q\!\left(\frac{a_1-a_0}{\sqrt{2N_0}}\right)$$
• The energy of the difference signal is $E_d = \int_0^{T_s} [s_1(t)-s_0(t)]^2\,dt$; in terms of the signal-space coordinates $a_1, a_0$ used above, $E_d = (a_1-a_0)^2$. Hence

$$P_{be} = Q\!\left(\frac{a_1-a_0}{\sqrt{2N_0}}\right) = Q\!\left(\sqrt{\frac{E_d}{2N_0}}\right) = Q\!\left(\sqrt{\mathrm{SNR}}\right), \qquad \mathrm{SNR} \triangleq \frac{E_d}{2N_0}$$
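A short Monte Carlo sketch comparing the simulated error rate of the binary ML detector with the Q-function expression (the values a1 = +1, a0 = −1, N0 = 0.2 reuse the earlier example; noise variance N0/2 per dimension as in the likelihood function):

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)

a1, a0, N0 = 1.0, -1.0, 0.2
sigma = np.sqrt(N0 / 2.0)            # noise standard deviation per dimension
gamma0 = (a1 + a0) / 2.0             # ML threshold for equiprobable symbols
n_sym = 200_000

bits = rng.integers(0, 2, n_sym)              # equiprobable 0/1
a = np.where(bits == 1, a1, a0)               # transmitted signal-space coordinate
r = a + sigma * rng.normal(size=n_sym)        # correlator output r = a_i + n
bits_hat = (r > gamma0).astype(int)           # ML decision
ber_sim = np.mean(bits_hat != bits)

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))       # Q(x) expressed through erfc

ber_theory = Q((a1 - a0) / np.sqrt(2.0 * N0))
print(f"simulated BER = {ber_sim:.4f}, theoretical Q-value = {ber_theory:.4f}")
```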
[Figure: one-dimensional constellation on the φ_1 axis, with s_0 at −√(3T) and s_1 at +√(3T), separated by d_12]

Solution: The inter-symbol distance is d_12 = 2√(3T) = 2√(E_b). Hence the probability of error is

$$P_{be} = Q\!\left(\sqrt{\frac{E_d}{2N_0}}\right) = Q\!\left(\sqrt{\frac{d_{12}^2}{2N_0}}\right) = Q\!\left(\sqrt{28}\right) = Q(5.29) \approx 5.79 \times 10^{-8}$$
Thank You!!
Dr D Adhikari, School of Electrical Engineering
Some useful properties of the Q-function:

Q(−∞) = 1 ;  Q(0) = 1/2 ;  Q(∞) = 0 ;  Q(−x) = 1 − Q(x)
Matlab does not have a built-in function for Q(·). Instead, we use its erf function:

$$\operatorname{erf}(\alpha) \triangleq \frac{2}{\sqrt{\pi}} \int_0^{\alpha} e^{-x^2}\,dx, \qquad \operatorname{erf}(0) = 0,\ \operatorname{erf}(\infty) = 1$$

so that $Q(x) = \tfrac{1}{2}\left[1 - \operatorname{erf}\!\left(\tfrac{x}{\sqrt{2}}\right)\right]$.
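The same trick can be reproduced outside Matlab; a minimal Python/SciPy sketch using the identity Q(x) = ½·erfc(x/√2) (equivalently ½[1 − erf(x/√2)]):

```python
import numpy as np
from scipy.special import erf, erfc

def Q(x):
    """Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2)) = 0.5*(1 - erf(x/sqrt(2)))."""
    return 0.5 * erfc(x / np.sqrt(2.0))

# Sanity checks against the properties listed above
print(erf(0.0), erf(np.inf))           # 0.0, 1.0
print(Q(0.0), Q(np.inf))               # 0.5, 0.0
x = 1.3
print(np.isclose(Q(-x), 1.0 - Q(x)))   # True: Q(-x) = 1 - Q(x)
```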
Now, if we want to know the probability of X being away from its expectation µ by at least a (either to the left or to the right), we have

$$\Pr\{X > \mu + a\} = \Pr\{X < \mu - a\} = Q\!\left(\frac{a}{\sigma}\right)$$

The probability of being at least a away from the mean, regardless of direction, is 2·Q(a/σ).
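A quick numerical illustration of the one-sided and two-sided tail probabilities (the values of µ, σ and a are arbitrary choices for the example):

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)

mu, sigma, a = 2.0, 1.5, 2.5
X = rng.normal(mu, sigma, size=500_000)

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

p_right = np.mean(X > mu + a)            # Pr{X > mu + a}
p_both = np.mean(np.abs(X - mu) > a)     # Pr{|X - mu| > a}
print(f"one-sided: {p_right:.4f} vs Q(a/sigma) = {Q(a / sigma):.4f}")
print(f"two-sided: {p_both:.4f} vs 2*Q(a/sigma) = {2 * Q(a / sigma):.4f}")
```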