Information Theory and Coding Sample Question 2021


Topic-1: Transmission Error

1. Discuss the reasons for transmission errors in wireless media.


2. Discuss the reasons for transmission errors in wired media.
3. Discuss the main ways of controlling transmission errors.
4. Define single-bit error and burst error.
5. Define ARQ and FEC.
6. State the major advantages of ARQ over FEC.
7. Explain why FEC is desirable in place of, or in addition to, error detection.
8. Draw and discuss the operational diagrams of the different types of ARQ procedures.
Topic-2: Waveform Coding
1. Define channel coding, waveform coding and structured sequences.
2. Define antipodal and orthogonal signal.
3. Determine whether each of the following signal sets is an antipodal signal set or an orthogonal signal set:

   Set 1: s1(t) = p(t) and s2(t) = p(t - T/2), 0 ≤ t ≤ T
   Set 2: s3(t) = sin(ω0 t) and s4(t) = -sin(ω0 t), 0 ≤ t ≤ T

   "Distance property of orthogonal signals is pretty good compared to antipodal signals" - justify the statement considering the above signals.
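A minimal numerical check for question 3, assuming p(t) is a unit-amplitude pulse over [0, T/2] and ω0 = 2π/T (the question fixes neither); the normalized cross-correlation comes out near 0 for an orthogonal pair and -1 for an antipodal pair, and the distances then follow from d^2 = 2E(1 - ρ) for equal-energy signals.

    import numpy as np

    T = 1.0
    w0 = 2 * np.pi / T                     # assumed: one sine cycle per symbol
    t = np.linspace(0, T, 10001)

    p = lambda x: np.where((x >= 0) & (x < T / 2), 1.0, 0.0)
    s1, s2 = p(t), p(t - T / 2)
    s3, s4 = np.sin(w0 * t), -np.sin(w0 * t)

    def rho(a, b):
        """Normalized cross-correlation over one symbol interval."""
        return np.trapz(a * b, t) / np.sqrt(np.trapz(a * a, t) * np.trapz(b * b, t))

    print(rho(s1, s2))   # ~0  -> orthogonal set
    print(rho(s3, s4))   # -1  -> antipodal set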
4. Define the cross-correlation coefficient of waveform coding. Write the cross-correlation
coefficient values of orthogonal code, biorthogonal code and transorthogonal code.
5. Compute the orthogonal, biorthogonal and transorthogonal codeword sets for 2-, 3- and 4-bit data sets.
6. Mention the advantage of biorthogonal code over orthogonal code.
7. Construct the Hadamard matrix for a 3-bit data set.
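A sketch of the recursive (Sylvester-type) Hadamard construction behind questions 5 and 7, in 0/1 codeword notation with one row per data word:

    import numpy as np

    def hadamard(k):
        """Orthogonal codeword set (2^k rows) for a k-bit data set."""
        H = np.array([[0]])
        for _ in range(k):
            H = np.block([[H, H], [H, 1 - H]])   # complement the lower-right block
        return H

    print(hadamard(3))    # 8 codewords of length 8 for the 3-bit data set

    # Biorthogonal set: stack hadamard(k - 1) on its complement (half the length);
    # transorthogonal (simplex) set: drop the first column of hadamard(k).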
Topic-3: Parity-Check Code
1. Define even parity, odd parity, parity code and rectangular code.
2. Find the detectable and undetectable error patterns of the (4, 3) even-parity code.
3. If the received codeword is 111100110111000011011000100001110101, determine whether the received codeword is corrupted or not, and determine the corrected message bits and the code rate. Assume the codeword is a rectangular code, the message dimension is 5 × 5 and the parity policy is even.
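A sketch for question 3, assuming the 36 received bits fill a 6 x 6 array row by row (the 5 x 5 message block plus one parity row and one parity column; the exact bit layout is not stated in the question):

    import numpy as np

    bits = "111100110111000011011000100001110101"
    block = np.array([int(b) for b in bits]).reshape(6, 6)

    bad_rows = np.where(block.sum(axis=1) % 2)[0]   # rows failing even parity
    bad_cols = np.where(block.sum(axis=0) % 2)[0]   # columns failing even parity

    if len(bad_rows) == 1 and len(bad_cols) == 1:   # single error: flip that bit
        block[bad_rows[0], bad_cols[0]] ^= 1
    print(block[:5, :5].flatten())                  # corrected message bits
    print("code rate =", 25 / 36)                   # k/n for a 5x5 message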
Topic-4: Linear Block Code
1. Find the order (dimensions) of the message vector, code vector, error vector, received vector, syndrome vector, coefficient matrix, generator matrix and parity matrix for a linear block code with notation (n, k). Also define the code rate and redundancy of an (n, k) linear block code.
2. Write the fundamental properties of linear block code.


3. Define vector space and vector subspace of linear block code.
4. When and why does a table look-up implementation of the linear block encoder become prohibitive?
5. Draw a decoder circuit for the (7, 3) block code with the following parity array:

   P = [1 1 0 0
        0 1 1 0
        1 1 1 1]

6. Draw a decoder circuit for the (6, 3) block code with the following parity array:

   P = [1 1 0
        0 1 1
        1 0 1]
7. Define standard array and syndrome look-up table of linear block code.
8. Consider a (7, 4) code whose parity array is

   P = [1 1 1
        1 0 1
        0 1 1
        1 1 0]
i. Find all the codewords of the code.
ii. Find parity-check matrix of the code.
iii. Compute the syndrome for the received vector 1 1 0 1 1 0 1. Is this a valid code vector?
9. Compute the syndrome for the received vector 0 0 1 1 1 0. If this is not a valid codeword, calculate the correct vector using the following syndrome look-up table. Assume that the parity array of the encoder is

   P = [1 1 0
        0 1 1
        1 0 1]

   Error pattern   Syndrome
   000000          000
   000001          101
   000010          011
   000100          110
   001000          001
   010000          010
   100000          100
   010001          111
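A sketch of syndrome decoding for question 9, assuming the systematic convention U = [mP | m] with H = [I | P^T] (this reading reproduces the single-error syndromes in the look-up table):

    import numpy as np

    P = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]])
    H = np.hstack([np.eye(3, dtype=int), P.T])      # parity-check matrix
    r = np.array([0, 0, 1, 1, 1, 0])                # received vector

    s = (r @ H.T) % 2
    print("syndrome:", s)                           # non-zero -> not a codeword

    # syndrome -> error pattern, transcribed from the look-up table above
    table = {(0, 0, 0): (0, 0, 0, 0, 0, 0), (1, 0, 1): (0, 0, 0, 0, 0, 1),
             (0, 1, 1): (0, 0, 0, 0, 1, 0), (1, 1, 0): (0, 0, 0, 1, 0, 0),
             (0, 0, 1): (0, 0, 1, 0, 0, 0), (0, 1, 0): (0, 1, 0, 0, 0, 0),
             (1, 0, 0): (1, 0, 0, 0, 0, 0), (1, 1, 1): (0, 1, 0, 0, 0, 1)}
    e = np.array(table[tuple(int(x) for x in s)])
    print("corrected:", (r + e) % 2)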
Topic-5: Cyclic Code
1. Write the fundamental properties of cyclic code.
2. Derive the properties of generator polynomial 𝑔(𝑋) of cyclic code.
3. Derive the codeword polynomial 𝑈(𝑋) of cyclic code.
4. Find the quotient and remainder if (0001101)₂ is divided by (10001)₂ in polynomial form.
5. Design a dividing circuit and find the quotient and remainder if (0001101)₂ is divided by (10001)₂.
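A minimal GF(2) long-division sketch for questions 4 and 5; taking the printed bit strings most-significant (highest-degree) coefficient first is an assumption.

    def gf2_div(dividend, divisor):
        """Quotient and remainder of a GF(2) polynomial division,
        coefficients written highest degree first."""
        d = [int(b) for b in dividend]
        n = len(divisor)
        q = []
        for i in range(len(d) - n + 1):
            q.append(d[i])
            if d[i]:                          # cancel the leading coefficient
                for j, g in enumerate(divisor):
                    d[i + j] ^= int(g)
        return "".join(map(str, q)), "".join(map(str, d[-(n - 1):]))

    print(gf2_div("0001101", "10001"))        # -> (quotient, remainder)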
6. Using a feedback shift register, encode the message vector m = 1011 into a (7, 4)
codeword using the generator polynomial g(X) = 1 + X^2 + X^3.
7. Given the received vector 1 0 0 1 0 1 1, calculate the syndrome with an (n-k)-stage shift
register using the generator polynomial g(X) = 1 + X + X^3. Is there an error in the received
vector?
8. What is a syndrome? Design a syndrome calculation circuit for the (7, 4) cyclic code using the
generator polynomial g(X) = X^3 + X + 1. If the received codeword is 1 0 1 0 0 1 1,
determine whether the received codeword is corrupted or not.
9. Draw an (n, k) cyclic encoder. Describe the encoding procedure of the cyclic encoder.
10. Design a dividing circuit to divide V(X) = X^3 + X^5 + X^6 by g(X) = 1 + X + X^3.
Find the remainder and quotient terms. Compare the remainder and quotient obtained from the
dividing circuit with those obtained by ordinary polynomial division.
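A sketch for question 6 (and, through the same remainder routine, the syndrome questions 7 and 8): systematic cyclic encoding appends the remainder of X^(n-k)·m(X) divided by g(X) as parity bits. Polynomials are held as Python ints with bit i as the coefficient of X^i; reading the printed vector 1011 with its leftmost bit as the X^0 coefficient is an assumption, so flip the string for the opposite convention.

    def poly_mod(a, g):
        """Remainder of a(X) divided by g(X) over GF(2)."""
        dg = g.bit_length() - 1
        while a and a.bit_length() - 1 >= dg:
            # cancel the current leading term of a(X)
            a ^= g << (a.bit_length() - 1 - dg)
        return a

    n, k = 7, 4
    m = sum(int(b) << i for i, b in enumerate("1011"))   # m(X) = 1 + X^2 + X^3
    g = 0b1101                                           # g(X) = 1 + X^2 + X^3
    parity = poly_mod(m << (n - k), g)                   # (n-k) parity bits
    U = (m << (n - k)) | parity                          # U(X) = X^(n-k)m(X) + parity
    print(format(U, f"0{n}b"))                           # printed X^6 ... X^0

    # Questions 7-8 syndrome: poly_mod(r, 0b1011) with g(X) = 1 + X + X^3;
    # a zero remainder means the received polynomial is a valid codeword.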
Topic-6: Convolutional Code
1. Draw the state diagram, tree diagram and trellis diagram for the K = 3, rate 1/3 code
generated by
   g1(X) = X + X^2
   g2(X) = 1 + X
   g3(X) = 1 + X + X^2
2. What is the flushing bit of a convolutional code? Why is it required?
3. Mention the methods used for representing a convolutional encoder.
4. Draw the state diagram, tree diagram and trellis diagram for the convolutional encoder
characterized by the given block diagram (block diagram not reproduced here).

5. Design a convolutional encoder with constraint length K and rate k/n.


6. Define (n, k) convolutional encoder with constraint length K.
7. A (2, 1) convolutional encoder with constraint length 3 and connection vectors g1 =
111 and g2 = 101 encodes the input sequence m = 11011. Determine the codeword
sequence of the encoder using the connection pictorial, connection vectors or polynomials,
the state diagram, the tree diagram and the trellis diagram.
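A sketch for question 7 that brute-forces the codeword sequence from the connection vectors (rate 1/2, K = 3, with K - 1 flushing zeros appended to clear the register):

    def conv_encode(m, g1=(1, 1, 1), g2=(1, 0, 1), K=3):
        reg = [0] * K                         # shift register, newest bit first
        out = []
        for bit in m + [0] * (K - 1):         # message plus flushing bits
            reg = [bit] + reg[:-1]            # shift the new bit in
            out += [sum(r * g for r, g in zip(reg, g1)) % 2,
                    sum(r * g for r, g in zip(reg, g2)) % 2]
        return out

    print(conv_encode([1, 1, 0, 1, 1]))       # v1 v2 pairs, one per input bit

The state, tree and trellis answers should trace through the same register contents step by step.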
8. Write the difference between cyclic code and convolutional code.
Topic-7: Information Theory and Information Measurement
1. Define information theory. Why is information always converted into binary form
(symbols) prior to transmission?
2. What does information theory deal with in the context of communications?
3. Draw the communication channel model. Define the different types of encoders and decoders
that are used in a communication channel.
4. Define bits, bit rate, symbol, uncertainty, information, message, word, baud, alphabet
and data.
5. Define a discrete memoryless source and a source with memory.
6. Determine the amount of information of a symbol.
7. Discuss the following conditions in terms of probability.
   i.   I(S_K) = 0
   ii.  I(S_K) ≥ 0
   iii. I(S_K) > I(S_I)
   iv.  I(S_K S_I) = I(S_K) + I(S_I)
   v.   H(ξ) = log2 K
8. Define entropy. State and prove the property of entropy.
9. Derive the entropy function of binary memoryless source.
10. Define block of discrete memoryless source. Write the relation between extended
source and original source in terms of entropy.
11. Define second-order extended source and third-order extended source.
12. Consider a discrete memoryless source with source alphabet ξ = {s0, s1, s2} with
respective probabilities p0 = 0.25, p1 = 0.25 and p2 = 0.5. Prove the relationship
between the entropy of the original source and the entropy of the extended source for this
source, assuming a second-order extension.
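A quick numeric check for question 12, treating the second-order extension as pairs of independent source symbols:

    from itertools import product
    from math import log2

    p = [0.25, 0.25, 0.5]
    H1 = -sum(q * log2(q) for q in p)                         # source entropy
    H2 = -sum(a * b * log2(a * b) for a, b in product(p, p))  # extension entropy
    print(H1, H2)   # H2 should come out as exactly 2 * H1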
13. Define conditional entropy and redundancy.
14. Find the entropy, redundancy and information rate of a five-symbol source (A, B, C, D, E)
with a baud rate of 1024 symbols/second and symbol selection probabilities of 0.5, 0.2,
0.15, 0.1, 0.05 under the following conditions:
i. The source is memoryless.
ii. The source has a one symbol memory such that no two consecutively selected
symbols can be the same.
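A sketch for part (i) of question 14 (the memoryless case); redundancy is taken here as 1 - H/Hmax (some texts use Hmax - H instead), and part (ii) needs the conditional symbol probabilities implied by the no-repeat memory, which this sketch does not model.

    from math import log2

    p = [0.5, 0.2, 0.15, 0.1, 0.05]
    baud = 1024                                  # symbols/second
    H = -sum(q * log2(q) for q in p)             # entropy, bits/symbol
    H_max = log2(len(p))                         # maximum entropy, log2(5)
    print("H =", H, "bits/symbol")
    print("redundancy =", 1 - H / H_max)
    print("information rate =", baud * H, "bit/s")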
15. Define noiseless and noisy channels.
16. Define unconditional entropy, conditional entropy, effective entropy, equivocation and the
transition matrix.
17. "An alternative interpretation of equivocation is as negative information added by
noise" - justify the statement by defining each part of effective entropy.
18. Consider a three-symbol source (A, B, C) with the following transition matrix:

          A_TX   B_TX   C_TX
   A_RX   0.6    0.5    0
   B_RX   0.2    0.5    0.333
   C_RX   0.2    0      0.667

   For the a priori probabilities P(A) = 0.5, P(B) = 0.2, P(C) = 0.3, find (i) the
   unconditional symbol reception probabilities P(i_RX) from the conditional and a priori
   probabilities, (ii) the equivocation, (iii) the source entropy and (iv) the effective
   entropy.
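A numeric sketch for question 18, assuming the matrix entries are forward conditional probabilities P(i_RX | j_TX) with one column per transmitted symbol (the columns do sum to one under this reading), and taking effective entropy as source entropy minus equivocation, as in question 17:

    import numpy as np

    Pc = np.array([[0.6, 0.5, 0.0],          # rows: A, B, C received
                   [0.2, 0.5, 0.333],        # cols: A, B, C transmitted
                   [0.2, 0.0, 0.667]])
    p_tx = np.array([0.5, 0.2, 0.3])         # a priori P(A), P(B), P(C)

    joint = Pc * p_tx                        # joint P(i_RX, j_TX)
    p_rx = joint.sum(axis=1)                 # (i) unconditional reception probs
    post = joint / p_rx[:, None]             # backward P(j_TX | i_RX)
    nz = joint > 0                           # skip zero-probability cells
    equivocation = -(joint[nz] * np.log2(post[nz])).sum()      # (ii)
    H_src = -(p_tx * np.log2(p_tx)).sum()                      # (iii)
    print(p_rx, equivocation, H_src, H_src - equivocation)     # (iv) last term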
Topic-8: Source Coding
1. Define source coding, maximum entropy, code efficiency, source efficiency.
2. Write the functional requirements of an efficient source encoder.
3. Mention the drawbacks of the Huffman code. Where is the Lempel-Ziv code suitable to use?
4. Write the advantages of Lempel-Ziv code over prefix code and Huffman code.
5. Consider the three codes listed below:


Source symbol Probability Code I Code II Code III
𝑆0 0.5 0 0 0
𝑆1 0.25 1 10 01
𝑆2 0.125 00 110 011
𝑆3 0.125 11 111 0111

i. Identify which one is the prefix code and construct its decision tree.
ii. "Meeting the Kraft-McMillan inequality does not guarantee being a
prefix code" - justify the statement considering all three codes.
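A mechanical Kraft-McMillan check for question 5; the prefix test below simply asks whether any codeword is a proper prefix of another.

    codes = {"I":   ["0", "1", "00", "11"],
             "II":  ["0", "10", "110", "111"],
             "III": ["0", "01", "011", "0111"]}
    for name, cw in codes.items():
        kraft = sum(2 ** -len(c) for c in cw)   # Kraft-McMillan sum
        prefix = not any(a != b and b.startswith(a) for a in cw for b in cw)
        print(f"Code {name}: Kraft sum = {kraft}, prefix code = {prefix}")

Code III is the telling case: its Kraft sum is below one, yet it is not a prefix code.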
6. Show that the minimum-variance Huffman code is obtained by moving the probability
of a combined symbol as high as possible, for a source having five symbols whose
probabilities of occurrence are given below.

Symbol Probability
𝑆0 0.4
𝑆1 0.2
𝑆2 0.2
𝑆3 0.1
𝑆4 0.1
7. Determine the codeword using the Lempel-Ziv coding algorithm for the following input
binary sequence:
   000101110010100101
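A sketch of LZ78-style parsing for question 7; each phrase is emitted as (index of its longest previously seen prefix, innovation bit), and an incomplete final phrase would simply be dropped by this sketch.

    def lz_parse(s):
        book, phrase, output = {"": 0}, "", []
        for bit in s:
            phrase += bit
            if phrase not in book:              # new phrase: record and emit
                book[phrase] = len(book)
                output.append((book[phrase[:-1]], bit))
                phrase = ""
        return output

    print(lz_parse("000101110010100101"))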
8. Determine the prefix code and Huffman code for the following source.
Symbol Probability
𝑆0 0.4
𝑆1 0.2
𝑆2 0.2
𝑆3 0.1
𝑆4 0.06
𝑆5 0.04
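A compact heap-based Huffman construction for question 8; tie-breaking between equal probabilities differs between textbooks, so individual codewords may differ from a hand construction even though the average codeword length is the same.

    import heapq
    from itertools import count

    def huffman(probs):
        tick = count()                 # tie-breaker so the heap never compares dicts
        heap = [(p, next(tick), {s: ""}) for s, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)     # two least probable subtrees
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(tick), merged))
        return heap[0][2]

    print(huffman({"S0": 0.4, "S1": 0.2, "S2": 0.2,
                   "S3": 0.1, "S4": 0.06, "S5": 0.04}))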
Topic-9: Discrete Memoryless Channel
1. Define discrete memoryless channel.
2. Derive the general channel matrix or transition matrix of a discrete memoryless channel.
3. Write a short note on the binary symmetric channel.
4. Determine the mutual information of a discrete memoryless channel.
5. State the properties of mutual information.
6. Prove that the mutual information of a channel is symmetric.
7. Prove that the mutual information is always nonnegative.
8. Prove that the mutual information of a channel is related to the joint entropy of the
channel input and output by 𝐼(𝒳; 𝒴) = 𝐻(𝒳) + 𝐻(𝒴) − 𝐻(𝒳, 𝒴).
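A numeric verification of the identity in question 8 on an arbitrary 2 x 2 joint distribution (the numbers below are illustrative only):

    import numpy as np

    J = np.array([[0.4, 0.1],
                  [0.2, 0.3]])                      # joint P(x, y), all entries > 0
    px, py = J.sum(axis=1), J.sum(axis=0)           # marginals P(x), P(y)
    Hx = -(px * np.log2(px)).sum()
    Hy = -(py * np.log2(py)).sum()
    Hxy = -(J * np.log2(J)).sum()                   # joint entropy
    MI = (J * np.log2(J / np.outer(px, py))).sum()  # mutual information
    print(MI, Hx + Hy - Hxy)                        # the two values agree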
9. Define channel capacity.
