ECE4313 Worksheet Endsem


School of Electrical Engineering and Computing

Department of Electronics and Communication Engineering


Worksheet on Digital Communications
Target group: 4th Year Undergraduate ECE Students
Part A: Workout Part
1. Consider a digital communication system that transmits information via QAM over a
voice-band telephone channel at a rate of 2400 symbols/s. The additive noise is assumed
to be white and Gaussian.
a. Determine the Eb/N0 required to achieve an error probability of 10⁻⁵ at 4800 bits/s.
b. Repeat part (a) for a rate of 9600 bits/s.
c. Repeat part (a) for a rate of 19,200 bits/s.
d. What conclusions do you reach from these results?
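Note: a rough numerical cross-check of this problem can be coded. The sketch below assumes square M-QAM with M = 4, 16, and 256 for the three bit rates at 2400 symbols/s, treats the 10⁻⁵ target as a symbol error probability, and uses the standard approximation PM ≈ 4(1 − 1/√M) Q(√(3k(Eb/N0)/(M − 1))), k = log₂ M; the helper names (qam_ser, Q) are illustrative, not part of the worksheet.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2))

def qam_ser(ebn0_db, M):
    # Approximate symbol error probability of square M-QAM over AWGN
    k = np.log2(M)
    ebn0 = 10 ** (ebn0_db / 10)
    return 4 * (1 - 1 / np.sqrt(M)) * Q(np.sqrt(3 * k * ebn0 / (M - 1)))

for M in (4, 16, 256):   # 4800, 9600, 19200 bit/s at 2400 symbols/s
    ebn0_req = brentq(lambda x: qam_ser(x, M) - 1e-5, 0, 40)
    print(f"M = {M:3d}: required Eb/N0 ≈ {ebn0_req:.1f} dB")
```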
2. Digital information is to be transmitted by carrier modulation through an additive Gaussian
noise channel with a bandwidth of 100 kHz and N0 = 10⁻¹⁰ W/Hz. Determine the
maximum rate that can be transmitted through the channel for four-phase PSK, binary
FSK, and four-frequency orthogonal FSK, which is detected noncoherently.
3. Find a generator polynomial g(x) for a (7,4) cyclic code, and find the code vectors for the
following message vectors: 1010, 1111, 0001, 1000.
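Note: as an optional exploration (the worksheet expects the algebra by hand), the sketch below assumes the common generator choice g(x) = x³ + x + 1, one of the degree-3 factors of x⁷ + 1 over GF(2), and forms non-systematic code vectors as c(x) = m(x)g(x); the bit-to-coefficient ordering is an illustrative convention.

```python
import numpy as np

# Assumed generator g(x) = x^3 + x + 1, coefficients listed highest degree first.
g = [1, 0, 1, 1]

def encode(msg_bits):
    # Polynomial multiplication over GF(2) = convolution of coefficient vectors
    # followed by reduction mod 2; yields a 7-bit non-systematic codeword.
    return (np.convolve(msg_bits, g) % 2).tolist()

for m in ([1, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 1], [1, 0, 0, 0]):
    print(m, "->", encode(m))
```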
4. The outputs x1, x2, and x3 of a DMS with corresponding probabilities p1 = 0.45, p2 = 0.35,
and p3 = 0.20 are transformed by the linear transformation Y = aX + b, where a and b are
constants. Determine the entropy H(Y) and comment on what effect the transformation has
had on the entropy of X.
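Note: for reference, H(X) is easy to evaluate numerically; the minimal sketch below computes the entropy of the given probability set (how this relates to H(Y) is the point of the problem).

```python
import math

# Entropy of the given DMS output probabilities, in bits per symbol.
p = [0.45, 0.35, 0.20]
H = -sum(pi * math.log2(pi) for pi in p)
print(f"H(X) = {H:.4f} bits/symbol")
```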
5. A discrete memoryless source produces outputs {a1, a2, a3, a4, a5, a6}. The corresponding
output probabilities are 0.7, 0.1, 0.1, 0.05, 0.04, and 0.01.
a. Design a binary Huffman code for the source. Find the average code word length.
Compare it to the minimum possible average code word length.
b. Is it possible to transmit this source reliably at a rate of 1.5 bits per source symbol? Why?
c. Is it possible to transmit the source at a rate of 1.5 bits per source symbol employing the
Huffman code designed in part (a)?
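Note: the hand-built tree in part (a) can be verified with a minimal Huffman construction in Python using heapq, as sketched below; it prints one valid code, its average length, and the source entropy (the lower bound on the average length of any uniquely decodable code). Tie-breaking differs between implementations, so the codewords may differ from a hand solution even though the average length is the same.

```python
import heapq, math

probs = {'a1': 0.7, 'a2': 0.1, 'a3': 0.1, 'a4': 0.05, 'a5': 0.04, 'a6': 0.01}

# Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:
    # Merge the two least probable subtrees, prefixing their codewords with 0/1.
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: '0' + w for s, w in c1.items()}
    merged.update({s: '1' + w for s, w in c2.items()})
    heapq.heappush(heap, (p1 + p2, count, merged))
    count += 1

code = heap[0][2]
avg_len = sum(probs[s] * len(w) for s, w in code.items())
entropy = -sum(p * math.log2(p) for p in probs.values())
print(code)
print(f"average length = {avg_len:.2f} bits/symbol, entropy = {entropy:.2f} bits/symbol")
```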
6. A (6, 3) systematic linear block code encodes the information sequence x = (x1, x2, x3)
into the code word c = (c1, c2, c3, c4, c5, c6), such that c4 is a parity check on c1 and c2, to
make the overall parity even (i.e., c1 ⊕ c2 ⊕ c4 = 0). Similarly, c5 is a parity check on c2
and c3, and c6 is a parity check on c1 and c3.
a) Determine the generator matrix of this code.
b) Find the parity check matrix for this code.
c) Using the parity check matrix, determine the minimum distance of this code.
d) How many errors is this code capable of correcting?
e) If the received sequence (using hard-decision decoding) is y = 100000, what is the
transmitted sequence chosen by a maximum-likelihood decoder? (Assume that the crossover
probability of the channel is less than 1/2.)
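Note: as an optional cross-check of the hand calculation (not a required method), the numpy sketch below builds the systematic generator matrix implied by the stated parity equations, enumerates all eight codewords, and reads off the minimum distance as the smallest nonzero codeword weight.

```python
import itertools
import numpy as np

# c = (x1, x2, x3, x1^x2, x2^x3, x1^x3)  ->  G = [I | P]
P = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
G = np.hstack([np.eye(3, dtype=int), P])

codewords = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=3)]
weights = [int(c.sum()) for c in codewords if c.any()]
d_min = min(weights)
print("d_min =", d_min, "-> corrects up to", (d_min - 1) // 2, "error(s)")
```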
7. The generator matrix for a linear binary code is

a) Express G in systematic [I|P] form.


b) Determine the parity check matrix H for the code.
c) Construct the table of syndromes for the code.
d) Determine the minimum distance of the code.
e) Demonstrate that the code word c corresponding to the information sequence 101
satisfies cHᵀ = 0.
8. Find the bandwidth of a PSK waveform used to transmit an alternating series of zeros and
ones. The carrier frequency is 1 MHz, the phase varies sinusoidally between 0 and 180 degrees,
and the bit rate is 2000 bps.
9. Derive the expression for the probability of error of FSK modulation, where Eb = A²T/2 is
the average signal energy per bit.

10. A bipolar binary signal s1(t) is a +1 V or −1 V pulse during the interval (0, T). Additive white
noise with power spectral density η/2 = 10⁻⁵ W/Hz is added to the signal.
a. Determine the maximum bit rate that can be sent with a bit error probability of Pe ≤ 10⁻⁴.
b. Find the output of the matched filter and determine the maximum value of the SNR if the
input s(t) is a rectangular pulse of amplitude A and duration T.
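Note: a numerical check of part (a) is sketched below. It assumes antipodal ±1 V signalling with matched-filter reception, for which Pe = Q(√(2Eb/η)) with Eb = A²T, and inverts the Q-function with scipy's erfcinv; treat it as a check on the hand calculation, not a prescribed method.

```python
import numpy as np
from scipy.special import erfcinv

eta = 2e-5                  # W/Hz, since the two-sided PSD eta/2 = 1e-5 W/Hz is given
A = 1.0                     # pulse amplitude in volts
Pe = 1e-4                   # target bit error probability

qinv = np.sqrt(2) * erfcinv(2 * Pe)     # Q^{-1}(Pe)
T_min = qinv**2 * eta / (2 * A**2)      # from Q(sqrt(2 A^2 T / eta)) = Pe
print(f"T_min ≈ {T_min:.3e} s  ->  max bit rate ≈ {1 / T_min:.0f} bit/s")
```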

11. Consider a binary memoryless source X with two symbols x1 and x2. Show that the entropy
H(X) is maximum when both x1 and x2 are equiprobable.
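Note: a quick numerical companion (not a proof) evaluates the binary entropy function H(p) = −p log₂ p − (1 − p) log₂(1 − p) over a grid and locates its peak.

```python
import numpy as np

p = np.linspace(0.01, 0.99, 99)                 # avoid the log(0) endpoints
H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # binary entropy in bits
print(f"maximum H = {H.max():.4f} bit at p = {p[np.argmax(H)]:.2f}")
```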
12. Derive the channel capacity of the binary symmetric channel.
13. Let the differential entropy of a random variable X be defined as
H(X) = −∫ f(x) log₂ f(x) dx, with the integral taken over −∞ < x < +∞,
where f(x) is the probability density function (pdf) of the random variable X. Find the pdf of
X for which H(X) is maximum.
14. An analog signal having 4 kHz bandwidth is sampled at 1.25 times the Nyquist rate, and
each sample is quantized into one of 256 equally likely levels. Assume that the successive
samples are statistically independent.
a. What is the information rate of this source?
b. Can the output of this source be transmitted without error over an AWGN channel with
a bandwidth of 10 kHz and an SNR of 20 dB?
c. Find the SNR required for error-free transmission in part (b).
d. Find the bandwidth required for an AWGN channel for error-free transmission of the
output of this source if the SNR is 20 dB.
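Note: parts (a) and (b) can be cross-checked in a few lines by comparing the source information rate R = fs log₂ L with the Shannon capacity C = B log₂(1 + SNR); the variable names below are illustrative.

```python
import numpy as np

fs = 1.25 * 2 * 4e3          # sampling rate: 1.25 x Nyquist rate of a 4 kHz signal
L = 256                      # equally likely quantisation levels
R = fs * np.log2(L)          # source information rate, bit/s

B = 10e3                     # channel bandwidth, Hz
snr = 10 ** (20 / 10)        # 20 dB as a power ratio
C = B * np.log2(1 + snr)     # Shannon capacity, bit/s

print(f"R = {R:.0f} bit/s, C = {C:.0f} bit/s, error-free transmission possible: {R <= C}")
```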

15. A high-resolution black-and-white TV picture consists of about 2 × 10⁶ picture elements
and 16 different brightness levels. Pictures are required at the rate of 32 per second. All
picture elements are assumed to be independent, and all levels have equal likelihood of
occurrence. Calculate the average rate of information conveyed by this TV picture source.
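Note: the arithmetic is a one-liner; a sketch under the stated assumptions of independent, equally likely elements:

```python
import math

pixels, levels, frames = 2e6, 16, 32
R = pixels * math.log2(levels) * frames   # elements/frame x bits/element x frames/s
print(f"average information rate ≈ {R / 1e6:.0f} Mbit/s")
```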
16. It is required to transmit 2.08 × 10⁶ binary digits per second with Pb ≤ 10⁻⁶. Three
possible schemes are considered:
a. Binary b. 16-ary ASK c. 16-ary PSK

The channel noise PSD is Sn(ω) = 10⁻⁸. Determine the transmission bandwidth and
the signal power required at the receiver input in each case.

17. Consider a binary communication system that receives equally likely signals s1(t) and s2(t)
plus AWGN (see the figure below). Assume that the receiving filter is a matched filter and
that the noise power spectral density N0 is equal to 10⁻¹² W/Hz. Use the values of received
signal voltage and time shown in the figure to compute the bit-error probability.

[Figure: baseband antipodal signals s1(t) and s2(t)]

18. Binary data at 9600 bps are transmitted using 8-ary modulation through a system with a
raised-cosine roll-off filter characteristic. The system has a frequency response out to 2.4
kHz.
a) What is the symbol rate?
b) What is the roll-off factor of the filter characteristic?
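Note: both answers follow directly from the definitions; the sketch below assumes the 2.4 kHz figure is the absolute (fully rolled-off) baseband bandwidth of the raised-cosine filter.

```python
import math

Rb = 9600              # bit rate, bit/s
M = 8                  # 8-ary modulation -> log2(M) bits per symbol
B = 2.4e3              # assumed absolute baseband bandwidth of the filter, Hz

Rs = Rb / math.log2(M)       # symbol rate
alpha = 2 * B / Rs - 1       # baseband raised cosine: B = (1 + alpha) * Rs / 2
print(f"Rs = {Rs:.0f} symbols/s, roll-off alpha = {alpha:.2f}")
```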

Part B: Discussion Part


1. With complete diagrams, explain the generation and reception of FSK and BPSK signals.
2. Derive the expression for the error probability of FSK.

3. Give the expression for the optimum demodulation of DPSK signals.
4. Define the BER. Give the expressions for the BER of binary PSK, 4-ary PAM, and QPSK.
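Note: the question asks for the expressions themselves; as an optional numerical companion, the sketch below evaluates the standard AWGN results Pb = Q(√(2Eb/N0)) for coherent BPSK (and per bit for Gray-coded QPSK) and Pb ≈ [2(M − 1)/(M log₂ M)] Q(√(6 log₂ M/(M² − 1) · Eb/N0)) for Gray-coded M-ary PAM with M = 4, at a sample operating point.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

ebn0_db = 10.0                     # sample operating point, 10 dB
g = 10 ** (ebn0_db / 10)

pb_bpsk = Q(np.sqrt(2 * g))        # also the per-bit error rate of Gray-coded QPSK
M = 4                              # 4-ary PAM
pb_pam4 = 2 * (M - 1) / (M * np.log2(M)) * Q(np.sqrt(6 * np.log2(M) / (M**2 - 1) * g))

print(f"Eb/N0 = {ebn0_db} dB: BPSK/QPSK Pb ≈ {pb_bpsk:.2e}, 4-PAM Pb ≈ {pb_pam4:.2e}")
```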
5. With neat diagram and expressions, explain the noncoherent demodulation of FSK.
6. Explain why the matched filter is called an optimum filter. Derive the probability of
error of the matched filter.
7. What is channel equalization? With a neat diagram, explain the concept of equalization.
8. Obtain the expression for average probability of symbol error for BPSK using coherent
detection.
9. Draw the block diagram of PAM modulator.
10. With neat diagram and truth table, explain the QPSK modulator, describing each block.
11. Explain the correlation detector with neat block diagrams.
12. Explain the matched filter receiver. Obtain the expression for the impulse response of
the matched filter.
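Note: when preparing this answer, a short simulation may help fix ideas: for a known pulse s(t) in white noise the matched filter's impulse response is h(t) = s(T − t), a time-reversed and delayed copy of the pulse, and filtering with it is equivalent to correlating with the pulse. The rectangular pulse and the parameter values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 10_000                          # sample rate (illustrative)
T = 0.01                             # pulse duration, s
t = np.arange(int(T * fs)) / fs
s = np.ones_like(t)                  # rectangular pulse of unit amplitude

h = s[::-1]                          # matched filter: h(t) = s(T - t)
x = s + rng.normal(0, 1.0, size=s.shape)   # received pulse in white Gaussian noise

y = np.convolve(x, h) / fs           # filter output; peaks near t = T
print(f"peak output ≈ {y.max():.3f} (noise-free peak would be {T:.3f} = pulse energy)")
```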
13. Derive the expression for the error probability of binary PSK.
14. With neat diagram and expressions, explain the binary FSK generation and coherent
detection scheme.
15. What are the advantages of an M-ary QAM system over an M-ary PSK system?
16. Define intersymbol interference (ISI) and explain the ideal solution for zero ISI.
17. Write a note on (i) adaptive equalization (ii) syndrome decoding.
18. Explain with a neat diagram the working of (i) a coherent detector and (ii) a BPSK transmitter.
19. What is a discrete memoryless channel? Define its channel matrix.
20. What is a Hamming code? Explain the procedure for generating a Hamming code for a
7-bit ASCII code.
21. Explain the concept of entropy and its properties. With a suitable example, explain the
entropy of a binary source.
22. Distinguish between coherent and non-coherent detection in five points.
23. What is the difference between source coding and channel coding? State the Source
Coding Theorem and the Channel Coding Theorem clearly, with formulas.
24. Define mutual information and information rate.
25. Explain with a block diagram the two steps in the demodulation/detection of digital
signals.

