Digital Communication

EE-330

Gram Schmidt Orthogonalization I

Represent a set of real-valued energy signals (signals with finite energy)
s_1(t), \dots, s_M(t), each of duration T seconds (i.e. each having support
[0, T)), in the form

    s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \quad 0 \le t < T, \quad i = 1, \dots, M

where the functions \phi_1(t), \dots, \phi_N(t), N \le M, form an orthonormal
basis over [0, T):

    \int_0^T \phi_i(t)\,\phi_j(t)\,dt = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}

This permits the representation of each s_i(t) as a linear combination of N
orthonormal basis functions (N \le M), for i = 1, 2, \dots, M.
Gram Schmidt Orthogonalization II
The coefficient s_{ij} can be obtained using the orthonormal property as

    s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad i = 1, 2, \dots, M, \quad j = 1, 2, \dots, N

Denote

    s_i = [s_{i1} \; \cdots \; s_{iN}]^T, \quad i = 1, 2, \dots, M

Thus s_i is an N-dimensional vector representation of s_i(t).

Generate \phi_1(t), \dots, \phi_N(t) from s_1(t), \dots, s_M(t) using
Gram-Schmidt orthogonalization. Let

    E_i = \int_0^T s_i^2(t)\,dt

    \phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} \quad \Rightarrow \quad s_1(t) = \sqrt{E_1}\,\phi_1(t) = s_{11}\,\phi_1(t)
Gram Schmidt Orthogonalization III

Therefore, s_{11} = \sqrt{E_1}.

    s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt

    \phi_2(t) = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{\int_0^T (s_2(t) - s_{21}\,\phi_1(t))^2\,dt}}
              = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}}

In general

    \phi_i(t) = \frac{s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t)}
                     {\sqrt{\int_0^T \left[s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t)\right]^2 dt}}

where s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad j = 1, 2, \dots, i-1.
Gram Schmidt Orthogonalization IV

Now N \le M:

1  If the signals s_1(t), \dots, s_M(t) form a linearly independent set, then
   N = M.
2  If the signals s_1(t), \dots, s_M(t) are not linearly independent, then
   N < M, and s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t) = 0 for i > N.
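The procedure above translates directly to sampled signals by replacing the
integrals over [0, T) with dt-weighted sums. The following is a minimal
numerical sketch (the function name gram_schmidt and the tolerance used to
detect linearly dependent signals are illustrative choices, not part of the
lecture):

```python
import numpy as np

def gram_schmidt(signals, dt, tol=1e-9):
    """Gram-Schmidt orthogonalization of sampled energy signals.

    signals : array of shape (M, L), each row a signal sampled on [0, T)
    dt      : sampling interval, so integrals become dt * sums
    Returns (phi, S): orthonormal basis phi of shape (N, L) and coefficient
    matrix S of shape (M, N) such that signals ~= S @ phi.
    """
    phi = []
    coeffs = []
    for s in signals:
        # Projection coefficients s_ij = int s_i(t) phi_j(t) dt
        c = np.array([dt * np.dot(s, p) for p in phi])
        residual = s - sum(cj * p for cj, p in zip(c, phi))
        energy = dt * np.dot(residual, residual)
        if energy > tol:                 # linearly independent direction found
            phi.append(residual / np.sqrt(energy))
            c = np.append(c, np.sqrt(energy))
        coeffs.append(c)
    S = np.zeros((len(signals), len(phi)))
    for i, c in enumerate(coeffs):
        S[i, :len(c)] = c
    return np.array(phi), S

# Example: the BPSK pair s2(t) = -s1(t) needs only N = 1 basis function
T, fc, L = 1.0, 4.0, 1000
t = np.arange(L) * (T / L)
s1 = np.cos(2 * np.pi * fc * t)
phi, S = gram_schmidt(np.vstack([s1, -s1]), dt=T / L)
print(phi.shape, S)   # (1, 1000), coefficients roughly [[+sqrt(E1)], [-sqrt(E1)]]
```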
Gram Schmidt Orthogonalization V

Example: BPSK (M = 2)

    s_1(t) = A\cos(2\pi f_c t), \quad A > 0, \quad 0 \le t < T
    s_2(t) = A\cos(2\pi f_c t + \pi) = -A\cos(2\pi f_c t) = -s_1(t)

For N = 1

    \phi_1(t) = \frac{A\cos(2\pi f_c t)}{\sqrt{\int_0^T A^2 \cos^2(2\pi f_c t)\,dt}}
              = \frac{\cos(2\pi f_c t)}{\sqrt{\frac{T}{2}\left(1 + \mathrm{sinc}(4 f_c T)\right)}}, \quad 0 \le t < T

If 4 f_c T is a natural number (positive integer),

    \phi_1(t) = \sqrt{\frac{2}{T}}\,\cos(2\pi f_c t), \quad 0 \le t < T
Gram Schmidt Orthogonalization VI

BFSK

    s_1(t) = A\cos(2\pi f_1 t), \quad A > 0, \quad 0 \le t < T
    s_2(t) = A\cos(2\pi f_2 t), \quad f_1 \ne f_2

    E_1 = \frac{A^2 T}{2}\left(1 + \mathrm{sinc}(4 f_1 T)\right), \qquad
    E_2 = \frac{A^2 T}{2}\left(1 + \mathrm{sinc}(4 f_2 T)\right)

    \phi_1(t) = \frac{\cos(2\pi f_1 t)}{\sqrt{\frac{T}{2}\left(1 + \mathrm{sinc}(4 f_1 T)\right)}}, \quad 0 \le t < T

    s_{21} = \frac{A^2}{\sqrt{E_1}} \int_0^T \cos(2\pi f_1 t)\cos(2\pi f_2 t)\,dt
           = \frac{A^2}{2\sqrt{E_1}} \left[ \frac{\sin(2\pi(f_1 - f_2)T)}{2\pi(f_1 - f_2)} + \frac{\sin(2\pi(f_1 + f_2)T)}{2\pi(f_1 + f_2)} \right]
           = \frac{A^2 T}{2\sqrt{E_1}} \left[ \mathrm{sinc}(2T(f_1 - f_2)) + \mathrm{sinc}(2T(f_1 + f_2)) \right]
Gram Schmidt Orthogonalization VII

    s_2(t) - s_{21}\,\phi_1(t)
      = A\cos(2\pi f_2 t)
        - \frac{A^2 T}{2\sqrt{E_1}}\left[\mathrm{sinc}(2T(f_1 - f_2)) + \mathrm{sinc}(2T(f_1 + f_2))\right]
          \frac{A\cos(2\pi f_1 t)}{\sqrt{E_1}}

    E_2 - s_{21}^2 = \frac{A^2 T}{2}\left(1 + \mathrm{sinc}(4 f_2 T)\right)
                     - \frac{A^4 T^2}{4 E_1}\left\{\mathrm{sinc}(2T(f_1 - f_2)) + \mathrm{sinc}(2T(f_1 + f_2))\right\}^2

    \phi_2(t) = \frac{\cos(2\pi f_2 t) - \dfrac{\mathrm{sinc}(2T(f_1 - f_2)) + \mathrm{sinc}(2T(f_1 + f_2))}{1 + \mathrm{sinc}(4 f_1 T)}\,\cos(2\pi f_1 t)}
                     {\sqrt{\dfrac{T}{2}\left[1 + \mathrm{sinc}(4 f_2 T) - \dfrac{\left\{\mathrm{sinc}(2T(f_1 - f_2)) + \mathrm{sinc}(2T(f_1 + f_2))\right\}^2}{1 + \mathrm{sinc}(4 f_1 T)}\right]}}

When 2 f_1 and 2 f_2 are both integral multiples of T^{-1},

    \phi_1(t) = \sqrt{\frac{2}{T}}\cos(2\pi f_1 t), \qquad
    \phi_2(t) = \sqrt{\frac{2}{T}}\cos(2\pi f_2 t), \qquad 0 \le t < T
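A quick numerical check of the last statement, with arbitrarily chosen
f_1 = 2/T and f_2 = 3/T (so that 2 f_1 T and 2 f_2 T are integers):

```python
import numpy as np

# Verify that sqrt(2/T)*cos(2*pi*f*t) with 2*f*T an integer are orthonormal on [0, T)
T, L = 1.0, 100_000
dt = T / L
t = np.arange(L) * dt
f1, f2 = 2.0 / T, 3.0 / T            # 2*f1*T = 4, 2*f2*T = 6: both integers
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * f1 * t)
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * f2 * t)
print(dt * phi1 @ phi1, dt * phi2 @ phi2, dt * phi1 @ phi2)   # ~1, ~1, ~0
```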
Geometric Interpretation of Signals I
\phi_1(t), \dots, \phi_N(t) are N orthonormal functions over [0, T).

    s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \quad 0 \le t < T, \quad i = 1, 2, \dots, M, \quad N \le M

    s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad i = 1, 2, \dots, M, \quad j = 1, 2, \dots, N

Signal vector

    s_i = [s_{i1} \; \cdots \; s_{iN}]^T, \quad i = 1, 2, \dots, M

s_{i1}, \dots, s_{iN} are the coordinates of s_i in an N-dimensional Euclidean
space, called the signal space.

Energy of signal s_i(t) is

    E_i = \int_0^T s_i^2(t)\,dt = \|s_i\|^2 = s_i^T s_i = \sum_{j=1}^{N} s_{ij}^2
Geometric Interpretation of Signals II

The Euclidean distance between s_i and s_k is \|s_i - s_k\|, with

    \|s_i - s_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T [s_i(t) - s_k(t)]^2\,dt
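A brief numerical illustration of these identities for the orthogonal BFSK
pair from the earlier example (the sampling grid and amplitude are arbitrary
choices):

```python
import numpy as np

# Signal-space picture for orthogonal BFSK: project onto phi1, phi2 and check
# that energies and Euclidean distances agree in both domains.
T, L = 1.0, 100_000
dt = T / L
t = np.arange(L) * dt
f1, f2, A = 2.0 / T, 3.0 / T, 1.0
phi = np.vstack([np.sqrt(2 / T) * np.cos(2 * np.pi * f1 * t),
                 np.sqrt(2 / T) * np.cos(2 * np.pi * f2 * t)])   # orthonormal basis
s_t = np.vstack([A * np.cos(2 * np.pi * f1 * t),
                 A * np.cos(2 * np.pi * f2 * t)])                # s1(t), s2(t)

S = dt * s_t @ phi.T                      # coordinates s_ij = int s_i(t) phi_j(t) dt
E_time = dt * np.sum(s_t**2, axis=1)      # int s_i^2(t) dt
E_vec = np.sum(S**2, axis=1)              # ||s_i||^2
d_time = np.sqrt(dt * np.sum((s_t[0] - s_t[1])**2))
d_vec = np.linalg.norm(S[0] - S[1])
print(E_time, E_vec)                      # both ~[0.5, 0.5] = A^2 T / 2
print(d_time, d_vec)                      # both ~1.0 for this orthogonal pair
```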
Coherent Detection for M-ary Signaling I

    H_i : x(t) = s_i(t) + w(t), \quad 0 \le t < T, \quad i = 1, \dots, M

where H_i denotes hypothesis i and w(t) is AWGN with mean zero and power
spectral density N_0/2.

Under H_i

    x_j = \int_0^T x(t)\,\phi_j(t)\,dt = s_{ij} + w_j

where

    w_j = \int_0^T w(t)\,\phi_j(t)\,dt

w_j is a Gaussian random variable with E[w_j] = 0.
Coherent Detection for M-ary Signaling II
    E[w_j w_k] = \int_0^T \int_0^T E[w(t_1)\,w(t_2)]\,\phi_j(t_1)\,\phi_k(t_2)\,dt_1\,dt_2
               = \frac{N_0}{2} \int_0^T \phi_j(t_1)\,\phi_k(t_1)\,dt_1
               = \begin{cases} N_0/2 & \text{if } j = k \\ 0 & \text{if } j \ne k \end{cases}

w_1, \dots, w_N are i.i.d. \mathcal{N}(0, N_0/2) random variables.

Now

    H_i : x(t) = s_i(t) + w(t)
               = \sum_{j=1}^{N} s_{ij}\,\phi_j(t) + \underbrace{\sum_{j=1}^{N} w_j\,\phi_j(t) + w'(t)}_{w(t)}

where w'(t) is the remainder term.

From x(t) we have to make a decision as to which s_i(t) was transmitted.
Coherent Detection for M-ary Signaling III

Since w'(t) does not affect the decision process, we make a decision based on

    x(t) - w'(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) = \sum_{j=1}^{N} (s_{ij} + w_j)\,\phi_j(t)

Let

    x = [x_1 \; \cdots \; x_N]^T

Given H_i,

    x_j \sim \mathcal{N}(s_{ij}, N_0/2)

and x_1, \dots, x_N are independent.
Coherent Detection for M-ary Signaling IV
Given H_i

    x \sim \mathcal{N}\!\left(s_i, \frac{N_0}{2} I_N\right)

where s_i is the mean vector and (N_0/2) I_N is the covariance matrix.

Joint pdf

    f_x(x \mid m_i) = \prod_{j=1}^{N} f_{x_j}(x_j \mid m_i)
                    = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi (N_0/2)}} \exp\left(-\frac{1}{2}\left(\frac{x_j - s_{ij}}{\sqrt{N_0/2}}\right)^2\right)
                    = \frac{1}{(\pi N_0)^{N/2}} \exp\left(-\frac{1}{N_0} \sum_{j=1}^{N} (x_j - s_{ij})^2\right)

    w'(t) = w(t) - \sum_{j=1}^{N} w_j\,\phi_j(t)

is a zero-mean Gaussian random process, and

    E[x_j\,w'(t)] = 0, \quad j = 1, \dots, N, \quad 0 \le t < T
Coherent Detection for M-ary Signaling V

{x_j} and w'(t) are independent, so w'(t) is irrelevant to the
decision-making process.

Thus we make the decision based on the N correlator outputs

    x_j = \int_0^T x(t)\,\phi_j(t)\,dt, \quad j = 1, \dots, N

Thus x = [x_1 \; \cdots \; x_N]^T is a sufficient statistic for the decision.

The hypothesis

    H_i : x(t) = s_i(t) + w(t)

is converted to

    H_i : x = s_i + w, \quad i = 1, \dots, M

where

    x = [x_1 \; \cdots \; x_N]^T, \quad x_j = \int_0^T x(t)\,\phi_j(t)\,dt
Coherent Detection for M-ary Signaling VI
    s_i = [s_{i1} \; \cdots \; s_{iN}]^T, \quad s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt

    w = [w_1 \; \cdots \; w_N]^T, \quad w_j = \int_0^T w(t)\,\phi_j(t)\,dt

Note that the noise vector w is completely characterized by the noise process
w(t); however, the reverse is not true.

Given x, we have to perform a mapping from x to an estimate \hat{m} of the
transmitted symbol in a way that minimizes the probability of error in the
decision-making process.

Suppose that given the observation vector x, we make the decision
\hat{m} = m_i. The probability of error in this decision is

    P_e(m_i, x) = \Pr(m_i \text{ not transmitted} \mid x) = 1 - \Pr(m_i \text{ transmitted} \mid x)

The optimum decision rule, which minimizes P_e(m_i, x) by maximizing
\Pr(m_i \text{ transmitted} \mid x), is
Coherent Detection for M-ary Signaling VII

    set \hat{m} = m_i if \Pr(m_i \text{ transmitted} \mid x) \ge \Pr(m_k \text{ transmitted} \mid x) \quad \forall\, k \ne i

that is,

    \hat{m} = \arg\max_{m_k,\; 1 \le k \le M} \Pr(m_k \text{ transmitted} \mid x)

This decision rule is called the maximum a posteriori probability (MAP)
decision rule.
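A minimal Monte Carlo sketch of the vector-channel MAP receiver described
above. The QPSK-like constellation, the priors, and the value of N0 are
illustrative assumptions, not taken from the lecture; with Gaussian noise the
MAP rule reduces to minimizing ||x - s_k||^2 - N_0 ln p_k:

```python
import numpy as np

rng = np.random.default_rng(0)

def map_detect(x, S, priors, N0):
    # MAP rule: argmax_k p_k f(x | m_k)  <=>  argmin_k ||x - s_k||^2 - N0 ln p_k
    metrics = np.sum((x - S) ** 2, axis=1) - N0 * np.log(priors)
    return np.argmin(metrics)

# Illustrative 4-ary constellation in N = 2 dimensions, equal priors
Es = 1.0
S = np.sqrt(Es) * np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
priors = np.full(4, 0.25)
N0 = 0.5

n_sym, errors = 50_000, 0
for _ in range(n_sym):
    i = rng.integers(4)
    x = S[i] + rng.normal(0.0, np.sqrt(N0 / 2), size=2)   # correlator outputs
    errors += map_detect(x, S, priors, N0) != i
print("simulated symbol error rate:", errors / n_sym)
```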
Probability of symbol error I

Probability of symbol error:

    P_e = 1 - \text{Probability of correct decision}
        = 1 - \sum_{i=1}^{M} p_i P_{ci}
        = 1 - \sum_{i=1}^{M} p_i \int_{Z_i} f_x(x \mid m_i)\,dx

where P_{ci} is the probability of correct decision given that m_i is
transmitted and Z_i is the decision region for m_i.

Given that m_i is transmitted, the correlation of x(t) with s_k(t) is

    \int_0^T x(t)\,s_k(t)\,dt = \int_0^T [s_i(t) + w(t)]\,s_k(t)\,dt
                              = \int_0^T s_i(t)\,s_k(t)\,dt + \int_0^T w(t)\,s_k(t)\,dt

    \int_0^T x(t)\,s_k(t)\,dt \sim \mathcal{N}\!\left(s_i^T s_k,\; \frac{N_0}{2} E_k\right),
    \quad \text{where } E_k = \|s_k\|^2 = \sum_{j=1}^{N} s_{kj}^2
Probability of symbol error II

Let

    D_k(i) = \int_0^T s_i(t)\,s_k(t)\,dt + \frac{N_0}{2}\ln p_k - \frac{E_k}{2} + \int_0^T w(t)\,s_k(t)\,dt

Given that m_i is transmitted,

    D_k(i) \sim \mathcal{N}\!\left(s_i^T s_k + \frac{N_0}{2}\ln p_k - \frac{E_k}{2},\; \frac{N_0}{2} E_k\right)

The probability that m_i is transmitted and is correctly decided is

    P_{ci} = \Pr\left(D_i(i) > \max(D_1(i), \dots, D_{i-1}(i), D_{i+1}(i), \dots, D_M(i))\right)
           = \Pr\left(D_1(i) < D_i(i), \dots, D_{i-1}(i) < D_i(i), D_{i+1}(i) < D_i(i), \dots, D_M(i) < D_i(i)\right)
Probability of symbol error III

    \mathrm{Cov}(D_k(i), D_\ell(i))
      = E\left[\left\{\int_0^T w(t_1)\,s_k(t_1)\,dt_1\right\}\left\{\int_0^T w(t_2)\,s_\ell(t_2)\,dt_2\right\}\right]
      = \frac{N_0}{2}\int_0^T s_k(t)\,s_\ell(t)\,dt
      = \frac{N_0}{2}\,s_k^T s_\ell

Let

    \mu_k(i) = E[D_k(i)] = s_i^T s_k + \frac{N_0}{2}\ln p_k - \frac{E_k}{2}, \quad k = 1, \dots, M

    \mu(i) = [\mu_1(i) \; \cdots \; \mu_M(i)]^T, \qquad
    K = [K_{k\ell}]_{k,\ell=1}^{M} \in \mathbb{R}^{M \times M}
Probability of symbol error IV
    K_{k\ell} = \mathrm{Cov}(D_k(i), D_\ell(i)) = \frac{N_0}{2}\,s_k^T s_\ell

(Note that K_{kk} = \frac{N_0}{2}\,s_k^T s_k = \frac{N_0}{2}\|s_k\|^2 = \frac{N_0}{2} E_k.)

    D(i) = [D_1(i) \; \cdots \; D_M(i)]^T \sim \mathcal{N}(\mu(i), K)

    f_{D(i)}(v) = f_{D_1(i), \dots, D_M(i)}(v_1, \dots, v_M)
                = \frac{1}{(2\pi)^{M/2} (\det K)^{1/2}}
                  \exp\left(-\frac{1}{2}(v - \mu(i))^T K^{-1} (v - \mu(i))\right),
                \quad -\infty < v_1, \dots, v_M < \infty

Note v = [v_1, \dots, v_M]^T.

    P_{ci} = \int_{v_i=-\infty}^{\infty}
             \left( \int_{v_1=-\infty}^{v_i} \cdots \int_{v_{i-1}=-\infty}^{v_i}
                    \int_{v_{i+1}=-\infty}^{v_i} \cdots \int_{v_M=-\infty}^{v_i}
                    f_{D(i)}(v)\, dv_1 \dots dv_{i-1}\, dv_{i+1} \dots dv_M \right) dv_i
Probability of symbol error: Binary signaling I

Given the received vector x, the decision statistic for m_k is

    D_k = \int_0^T x(t)\,s_k(t)\,dt + \frac{N_0}{2}\ln p_k - \frac{E_k}{2}
        = x^T s_k + \frac{N_0}{2}\ln p_k - \frac{E_k}{2}, \quad k = 1, 2

The decision rule D_1 \gtrless_{m_2}^{m_1} D_2 becomes

    x^T (s_1 - s_2) \;\gtrless_{m_2}^{m_1}\; \frac{N_0}{2}\ln\frac{p_2}{p_1} + \frac{E_1 - E_2}{2} \;\triangleq\; \gamma

    H_1 : x = s_1 + w
    H_2 : x = s_2 + w

    w \sim \mathcal{N}\!\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix},\; \frac{N_0}{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right)
Probability of symbol error: Binary signaling II
Suppose m_1 was transmitted:

    x = s_1 + w \sim \mathcal{N}\!\left(s_1, \frac{N_0}{2} I_2\right)

If Y \sim \mathcal{N}(\mu, K), then a^T Y = Y^T a \sim \mathcal{N}(a^T \mu,\, a^T K a), so

    x^T (s_1 - s_2) = (s_1 - s_2)^T x \sim \mathcal{N}\!\left((s_1 - s_2)^T s_1,\; \frac{N_0}{2}\|s_1 - s_2\|^2\right)

    P_{c1} = \Pr\left(x^T(s_1 - s_2) > \gamma \mid m_1\right)
           = \Pr\left(\frac{x^T(s_1 - s_2) - s_1^T(s_1 - s_2)}{\sqrt{\frac{N_0}{2}\|s_1 - s_2\|^2}}
                      > \frac{\gamma - s_1^T(s_1 - s_2)}{\sqrt{\frac{N_0}{2}\|s_1 - s_2\|^2}} \,\middle|\, m_1\right)
           = Q\!\left(\frac{\gamma - s_1^T(s_1 - s_2)}{\sqrt{\frac{N_0}{2}\|s_1 - s_2\|^2}}\right)
Probability of symbol error: Binary signaling III

Suppose m_2 was transmitted:

    x = s_2 + w \sim \mathcal{N}\!\left(s_2, \frac{N_0}{2} I_2\right)

    x^T(s_1 - s_2) \sim \mathcal{N}\!\left((s_1 - s_2)^T s_2,\; \frac{N_0}{2}\|s_1 - s_2\|^2\right)

    P_{c2} = \Pr\left(x^T(s_1 - s_2) < \gamma \mid m_2\right)
           = 1 - Q\!\left(\frac{\gamma - s_2^T(s_1 - s_2)}{\sqrt{\frac{N_0}{2}\|s_1 - s_2\|^2}}\right)

Note that

    P_{e1} = 1 - P_{c1}, \qquad P_{e2} = 1 - P_{c2}
Probability of symbol error: Binary signaling IV
    P_e = 1 - (p_1 P_{c1} + p_2 P_{c2})
        = p_1 + p_2 - (p_1 P_{c1} + p_2 P_{c2})
        = p_1 (1 - P_{c1}) + p_2 (1 - P_{c2})
        = p_1 P_{e1} + p_2 P_{e2}

    \gamma - s_1^T(s_1 - s_2) = \frac{N_0}{2}\ln\frac{p_2}{p_1} + \frac{E_1 - E_2}{2} - E_1 + s_1^T s_2
                              = \frac{N_0}{2}\ln\frac{p_2}{p_1} - \frac{E_1 + E_2 - 2 s_1^T s_2}{2}
                              = \frac{N_0}{2}\ln\frac{p_2}{p_1} - \frac{\|s_1 - s_2\|^2}{2}

    \gamma - s_2^T(s_1 - s_2) = \frac{N_0}{2}\ln\frac{p_2}{p_1} + \frac{E_1 - E_2}{2} - s_1^T s_2 + E_2
                              = \frac{N_0}{2}\ln\frac{p_2}{p_1} + \frac{\|s_1 - s_2\|^2}{2}
Probability of symbol error: Binary signaling V

Using 1 - Q(x) = Q(-x),

    P_{e1} = 1 - P_{c1} = Q\!\left(\frac{\frac{\|s_1 - s_2\|^2}{2} - \frac{N_0}{2}\ln\frac{p_2}{p_1}}{\sqrt{\frac{N_0}{2}\|s_1 - s_2\|^2}}\right)

    P_{e2} = 1 - P_{c2} = Q\!\left(\frac{\frac{\|s_1 - s_2\|^2}{2} + \frac{N_0}{2}\ln\frac{p_2}{p_1}}{\sqrt{\frac{N_0}{2}\|s_1 - s_2\|^2}}\right)

Noting that p_2 = 1 - p_1, we get

    P_e = p_1\, Q\!\left(\frac{\|s_1 - s_2\|}{\sqrt{2 N_0}} - \frac{\sqrt{N_0/2}\,\ln\frac{1 - p_1}{p_1}}{\|s_1 - s_2\|}\right)
        + (1 - p_1)\, Q\!\left(\frac{\|s_1 - s_2\|}{\sqrt{2 N_0}} + \frac{\sqrt{N_0/2}\,\ln\frac{1 - p_1}{p_1}}{\|s_1 - s_2\|}\right)
Probability of symbol error: Binary signaling VI
where

    \|s_1 - s_2\| = \sqrt{\int_0^T (s_1(t) - s_2(t))^2\,dt}

When p_1 = p_2 = 1/2, \ln(p_2/p_1) = \ln((1 - p_1)/p_1) = 0, and therefore

    P_e = Q\!\left(\frac{\|s_1 - s_2\|}{\sqrt{2 N_0}}\right)
        = Q\!\left(\sqrt{\frac{\int_0^T (s_1(t) - s_2(t))^2\,dt}{2 N_0}}\right)

The higher the value of \|s_1 - s_2\|, the lower the P_e and the better the
performance.

Let the signals be of equal energy E_s (energy per symbol), that is,

    \|s_1\|^2 = \|s_2\|^2 = E_s

or E_1 = E_2 = E_s, and let \rho \triangleq s_1^T s_2 / E_s. Then

    \|s_1 - s_2\|^2 = \|s_1\|^2 + \|s_2\|^2 - 2 s_1^T s_2 = 2 E_s - 2 E_s \rho = 2 E_s (1 - \rho)
Probability of symbol error: Binary signaling VII

    P_e = Q\!\left(\sqrt{\frac{E_s (1 - \rho)}{N_0}}\right)

Note that in the case of binary signaling, E_s = E_b (energy per bit), and we
can write

    P_e = Q\!\left(\sqrt{\frac{E_b (1 - \rho)}{N_0}}\right)

For BPSK, \rho = -1, and for orthogonal BFSK, \rho = 0.
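A short numerical comparison of this expression for the two cases, using
scipy's complementary error function to evaluate Q (the Eb/N0 grid is an
arbitrary illustration):

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2))

def pe_binary(ebn0_db, rho):
    """P_e = Q(sqrt(Eb (1 - rho) / N0)) for coherent binary signaling."""
    ebn0 = 10 ** (np.asarray(ebn0_db) / 10)
    return qfunc(np.sqrt(ebn0 * (1 - rho)))

ebn0_db = np.arange(0, 11, 2)
print("Eb/N0 (dB):", ebn0_db)
print("BPSK (rho = -1):", pe_binary(ebn0_db, rho=-1.0))
print("BFSK (rho =  0):", pe_binary(ebn0_db, rho=0.0))
# Antipodal BPSK needs ~3 dB less Eb/N0 than orthogonal BFSK for the same Pe.
```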
Probability of symbol error: Orthogonal signaling I
In the case of orthogonal signaling,

    s_k^T s_\ell = 0 \quad \text{for } k \ne \ell

In addition, if we have equal energies (E_1 = E_2 = \cdots = E_M = E) and
equal a priori probabilities (p_1 = p_2 = \cdots = p_M = 1/M), then the
decision rule is

    \hat{m} = \arg\max_{m_k,\; 1 \le k \le M} \left\{ \int_0^T x(t)\,s_k(t)\,dt \right\}
            = \arg\max_{m_k,\; 1 \le k \le M} \left\{ s_k^T x \right\}

Given m_i (that is, s_i(t) is transmitted), we get

    D = D(i) = [D_1(i) \; \cdots \; D_M(i)]^T

with D_k(i) = s_k^T (s_i + w), where w \sim \mathcal{N}(0, (N_0/2) I_N).

    D(i) \sim \mathcal{N}(\mu(i), K)
Probability of symbol error: Orthogonal signaling II

where \mu(i) = [0 \; \cdots \; 0 \; E \; 0 \; \cdots \; 0]^T (with E in the
i-th position) and K = (N_0/2) E\, I_M. Therefore

    f_{D(i)}(v) = \frac{1}{(\pi N_0 E)^{M/2}}
                  \exp\left( -\frac{1}{N_0 E} \left[ (v_i - E)^2 + \sum_{k=1, k \ne i}^{M} v_k^2 \right] \right),
                \quad -\infty < v_1, \dots, v_M < \infty
Probability of symbol error: Orthogonal signaling III
    P_{ci} = \Pr\left(D_1(i) < D_i(i), \dots, D_{i-1}(i) < D_i(i), D_{i+1}(i) < D_i(i), \dots, D_M(i) < D_i(i)\right)

           = \int_{v_i=-\infty}^{\infty}
             \int_{v_1=-\infty}^{v_i} \cdots \int_{v_{i-1}=-\infty}^{v_i}
             \int_{v_{i+1}=-\infty}^{v_i} \cdots \int_{v_M=-\infty}^{v_i}
             f_{D(i)}(v)\, dv_1 \dots dv_{i-1}\, dv_{i+1} \dots dv_M\, dv_i

           = \int_{v_i=-\infty}^{\infty} \frac{1}{(\pi N_0 E)^{1/2}}
             \exp\left(-\frac{(v_i - E)^2}{N_0 E}\right)
             \prod_{k=1, k \ne i}^{M}
             \left[ \int_{v_k=-\infty}^{v_i} \frac{1}{(\pi N_0 E)^{1/2}}
                    \exp\left(-\frac{v_k^2}{N_0 E}\right) dv_k \right] dv_i

           = \int_{v_i=-\infty}^{\infty} \frac{1}{(\pi N_0 E)^{1/2}}
             \exp\left(-\frac{(v_i - E)^2}{N_0 E}\right)
             \left(1 - Q\!\left(\frac{v_i}{\sqrt{N_0 E / 2}}\right)\right)^{M-1} dv_i

Putting u = v_i / \sqrt{N_0 E / 2}, we get

    P_{ci} = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}
             \exp\left(-\frac{(u - \sqrt{2E/N_0})^2}{2}\right)
             \left(1 - Q(u)\right)^{M-1} du, \quad \text{for } i = 1, \dots, M
Probability of symbol error: Orthogonal signaling IV
Therefore, P_{ei} is independent of i, and

    P_e = 1 - P_c = 1 - \frac{1}{M}\sum_{i=1}^{M} P_{ci}

    P_e = 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}
              \exp\left(-\frac{(u - \sqrt{2E/N_0})^2}{2}\right)
              \left(1 - Q(u)\right)^{M-1} du

When M = 2, we know that P_e = Q(\sqrt{E/N_0}). Putting M = 2 in the above
equation, we obtain the identity

    Q\!\left(\sqrt{\frac{E}{N_0}}\right)
      = 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}
            \exp\left(-\frac{(u - \sqrt{2E/N_0})^2}{2}\right)(1 - Q(u))\,du

That is,

    Q\!\left(\sqrt{\frac{E}{N_0}}\right)
      = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}
        \exp\left(-\frac{(u - \sqrt{2E/N_0})^2}{2}\right) Q(u)\,du
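The single-integral form above is convenient to evaluate numerically. A sketch
using scipy quadrature (the values of M and Es/N0 are arbitrary
illustrations):

```python
import numpy as np
from scipy.special import erfc
from scipy.integrate import quad

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2))

def pe_orthogonal(M, esn0_db):
    """Symbol error probability of coherent M-ary orthogonal signaling:
    P_e = 1 - int phi(u - sqrt(2E/N0)) (1 - Q(u))^(M-1) du."""
    esn0 = 10 ** (esn0_db / 10)
    shift = np.sqrt(2 * esn0)
    integrand = lambda u: (np.exp(-(u - shift) ** 2 / 2) / np.sqrt(2 * np.pi)
                           * (1 - qfunc(u)) ** (M - 1))
    pc, _ = quad(integrand, -np.inf, np.inf)
    return 1 - pc

for M in (2, 4, 16):
    print(M, pe_orthogonal(M, esn0_db=8.0))
# For M = 2 this should match Q(sqrt(E/N0)) from the binary result above:
print("check:", qfunc(np.sqrt(10 ** 0.8)))
```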
Union bound on the probability of symbol error I

Computation of P_e can be a difficult task for certain cases of M-ary
signaling. We sometimes resort to the use of bounds (upper and/or lower
bounds) on P_e, so that we can predict the signal-to-noise ratio (SNR)
required to maintain a prescribed error rate or error probability.

One simple upper bound on P_e is the union bound.

Consider M-ary signaling in an AWGN channel, with messages m_1, \dots, m_M,
message m_i having an a priori probability of p_i, i = 1, \dots, M, that is,

    \Pr(m_i) = p_i, \quad i = 1, \dots, M

Let A_{i,k} be the event that \|x - s_k\|^2 - N_0 \ln p_k < \|x - s_i\|^2 - N_0 \ln p_i
when message m_i is transmitted. Then

    \{\text{symbol error} \mid m_i\} = A_{i,1} \cup A_{i,2} \cup \dots \cup A_{i,i-1} \cup A_{i,i+1} \cup \dots \cup A_{i,M}
                                     = \bigcup_{k=1, k \ne i}^{M} A_{i,k}
Union bound on the probability of symbol error II
    P_{ei} = \Pr(\text{symbol error} \mid m_i) \le \sum_{k=1, k \ne i}^{M} \Pr(A_{i,k})

Now

    \Pr(A_{i,k}) = Q\!\left(\frac{\|s_i - s_k\|}{\sqrt{2 N_0}} - \frac{\sqrt{N_0/2}\,\ln(p_k/p_i)}{\|s_i - s_k\|}\right)
                 = Q\!\left(\frac{d_{i,k}}{\sqrt{2 N_0}} - \frac{\sqrt{N_0/2}\,\ln(p_k/p_i)}{d_{i,k}}\right)

where d_{i,k} = \|s_i - s_k\| is the Euclidean distance between s_i and s_k.
Since

    Q(y) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{y}{\sqrt{2}}\right)

we can write

    \Pr(A_{i,k}) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{d_{i,k}}{2\sqrt{N_0}} - \frac{\sqrt{N_0}\,\ln(p_k/p_i)}{2\, d_{i,k}}\right)
Union bound on the probability of symbol error III
Therefore

    P_{ei} \le \sum_{k=1, k \ne i}^{M} Q\!\left(\frac{d_{i,k}}{\sqrt{2 N_0}} - \frac{\sqrt{N_0/2}\,\ln(p_k/p_i)}{d_{i,k}}\right)

But P_e = \sum_{i=1}^{M} p_i P_{ei}, so

    P_e \le \sum_{i=1}^{M} p_i \sum_{k=1, k \ne i}^{M} Q\!\left(\frac{d_{i,k}}{\sqrt{2 N_0}} - \frac{\sqrt{N_0/2}\,\ln(p_k/p_i)}{d_{i,k}}\right)

This is the union bound on the probability of error.

1  If the signal set has a symmetric geometry, then
   \{d_{i,k} : k = 1, \dots, M, k \ne i\} is the same for all i, and we get,
   for p_1 = p_2 = \cdots = p_M = 1/M,

       P_e \le \sum_{k=1, k \ne i}^{M} Q\!\left(\frac{d_{i,k}}{\sqrt{2 N_0}}\right)

   (a numerical comparison of this bound and the ones in items 2 and 3 is
   sketched after item 3 below).
Union bound on the probability of symbol error IV

2  The bounds

       \frac{e^{-u^2}}{u\sqrt{\pi}}\left(1 - \frac{1}{2u^2}\right) < \mathrm{erfc}(u) < \frac{e^{-u^2}}{u\sqrt{\pi}}, \quad u > 0

   hold. This implies

       Q(v) < \frac{1}{2}\,e^{-v^2/2}

   Therefore, we get from the union bound

       P_e < \frac{1}{2} \sum_{i=1}^{M} \sum_{k=1, k \ne i}^{M} p_i\,
             \exp\!\left( -\frac{d_{i,k}^2}{4 N_0}
                          - \frac{N_0\,(\ln(p_k/p_i))^2}{4\, d_{i,k}^2}
                          + \frac{1}{2}\ln\frac{p_k}{p_i} \right)
Union bound on the probability of symbol error V

3  If d_{\min} denotes the minimum distance of the signal constellation,
   given by

       d_{\min} = \min_{k \ne i,\; k, i \in \{1, \dots, M\}} d_{i,k},
       \quad \text{and} \quad p_1 = p_2 = \cdots = p_M = \frac{1}{M}

   then the fact that the Q-function is a monotonically decreasing function
   of its argument implies

       Q\!\left(\frac{d_{i,k}}{\sqrt{2 N_0}}\right) \le Q\!\left(\frac{d_{\min}}{\sqrt{2 N_0}}\right),
       \quad k \ne i, \quad k = 1, \dots, M, \quad i = 1, \dots, M

   Therefore

       P_e \le (M - 1)\, Q\!\left(\frac{d_{\min}}{\sqrt{2 N_0}}\right) < \frac{M - 1}{2}\, e^{-d_{\min}^2 / (4 N_0)}
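A brief numerical comparison of the union bound, the (M - 1) Q(d_min / sqrt(2 N0))
bound, and the exponential bound from items 1-3, for an equiprobable QPSK
constellation chosen purely as an example (the value of N0 is arbitrary):

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2))

# Example constellation: QPSK with symbol energy Es = 1, equal priors
S = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
M = len(S)
N0 = 0.25

# Pairwise distances d_{i,k} and an off-diagonal mask
D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)
off = ~np.eye(M, dtype=bool)

# Union bound (equal priors, symmetric geometry: sum over k != i for i = 0)
union = np.sum(qfunc(D[0, off[0]] / np.sqrt(2 * N0)))
# d_min-based bounds
dmin = D[off].min()
dmin_q = (M - 1) * qfunc(dmin / np.sqrt(2 * N0))
dmin_exp = (M - 1) / 2 * np.exp(-dmin**2 / (4 * N0))
print("union bound       :", union)
print("(M-1) Q(dmin/...) :", dmin_q)
print("exponential bound :", dmin_exp)
```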
Different approximations of the Q-function (comparison figure not reproduced
in this text version).
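The original slide shows a figure comparing approximations of the Q-function,
which did not survive the conversion to text. As a stand-in, a short sketch
comparing the exact Q-function with the bounds quoted in the union-bound
discussion above:

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Exact Q-function via the complementary error function."""
    return 0.5 * erfc(x / np.sqrt(2))

v = np.array([1.0, 2.0, 3.0, 4.0])
chernoff = 0.5 * np.exp(-v**2 / 2)                     # Q(v) < (1/2) e^{-v^2/2}
upper = np.exp(-v**2 / 2) / (v * np.sqrt(2 * np.pi))   # from erfc(u) < e^{-u^2}/(u sqrt(pi)), u = v/sqrt(2)
lower = upper * (1 - 1 / v**2)                         # from the matching lower bound on erfc
for vi, q, lo, up, c in zip(v, qfunc(v), lower, upper, chernoff):
    print(f"v={vi}:  Q={q:.3e}  lower={lo:.3e}  upper={up:.3e}  chernoff={c:.3e}")
```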

