Digital Communication Lectures


DIGITAL COMMUNICATIONS

BASICS OF COMMUNICATION SYSTEMS


Lecture 1

Introduction
Electronic Communication
The transmission, reception, and processing of information with the use of electronic circuits

Information
Knowledge or intelligence that is communicated (i.e., transmitted or received) between two or more points

Introduction
Digital Modulation
The transmittal of digitally modulated analog signals (carriers) between two or more points in a communications system

Sometimes referred to as digital radio because digitally modulated signals can be propagated through Earth's atmosphere and used in wireless communications systems

Introduction
Digital Communications
Include systems where relatively high-frequency analog carriers are modulated by relatively low-frequency digital signals (digital radio) and systems involving the transmission of digital pulses (digital transmission)

Introduction

ASK

FSK

PSK

QAM

Applications
Relatively low-speed voice-band data communications modems, such as those found in most personal computers

High-speed data transmission systems, such as broadband digital subscriber lines (DSL)

Digital microwave and satellite communications systems

Cellular telephone and Personal Communications Systems (PCS)

Basic Telecommunication System

[Block diagram: Source → Transducer → Transmission Medium (attenuation) → Transducer → Sink]

In an electrical communication system, at the transmitting side, a transducer converts the real-life information into an electrical signal. At the receiving side, a transducer converts the electrical signal back into real-life information.

Basic Telecommunication System

[Block diagram: Source → Transducer → Transmission Medium (noise introduced here) → Transducer → Sink]

Note: As the electrical signal passes through the transmission medium, the signal gets attenuated. In addition, the transmission medium introduces noise and, as a result, the signal gets distorted.

Basic Telecommunication System


The objective of designing a communication system is to reproduce the electrical signal at the receiving end with minimal distortion.

Basic Telecommunication System

[Diagram: two computers connected directly, RS-232 port to RS-232 port, over a copper-cable channel]

Note: The serial ports of two computers can be connected directly using a copper cable. However, due to the signal attenuation, the distance cannot be more than 100 meters.

Basic Telecommunication System

Two computers can communicate with each other through the telephone network, using a modem at each end. The modem converts the digital signals generated by the computer into analog form for transmission over the medium at the transmitting end and the reverse at the receiving end.

Basic Telecommunication System


(a) Transmitting side: Source → Baseband Signal Processing → Medium Access Processing → Transmitter → Medium

(b) Receiving side: Medium → Receiver → Decoding of Data → Baseband Signal Processing → Sink

Basic Telecommunication System


Depending on the type of communication, the distance to be covered, etc., a communication system will consist of a number of elements, each element carrying out a specific function. Some important elements are:
1. Multiplexer
2. Multiple access
3. Error detection and correction
4. Source coding
5. Signaling

Basic Telecommunication System


Note: Two voice signals cannot be mixed directly because it will not be possible to separate them at the receiving end. The two voice signals can be transformed into different frequencies to combine them and send over the medium.

Types of Communication
1. Point-to-point communication
2. Point-to-multipoint communication
3. Broadcasting
4. Simplex communication
5. Half-duplex communication
6. Full-duplex communication

Transmission Impairments
1. Attenuation: The amplitude of the signal wave decreases as the signal travels through the medium.

2. Delay distortion: Occurs as a result of different frequency components arriving at different times in guided media such as copper wire or coaxial cable.

3. Noise: Thermal noise, intermodulation, crosstalk, impulse noise.

Transmission Impairments
Thermal Noise: occurs due to the thermal agitation of electrons in a conductor (white noise); N = kTB.

Intermodulation Noise: When two signals of different frequencies are sent through the medium, the nonlinearity of the transmitters produces frequency components such as f1 + f2 and f1 - f2, which are unwanted components and need to be filtered out.

Transmission Impairments
Crosstalk: Unwanted coupling between signal paths.

Impulse Noise: Occurs due to external electromagnetic disturbances such as lightning; it also causes bursts of errors.

Analog Versus Digital Transmission


Analog Communication: The signal, whose amplitude varies continuously, is transmitted over the medium. Reproducing the analog signal at the receiving end is very difficult due to transmission impairments.

Digital Communication: 1s and 0s are transmitted as voltage pulses. So, even if a pulse is distorted due to noise, it is not very difficult to detect it at the receiving end. Digital transmission is much more immune to noise.

Advantages of Digital Transmission


More reliable transmission
Because only discrimination between ones and zeros is required

Less costly implementation
Because of the advances in digital logic chips

Ease of combining various types of signals (voice, video, etc.)

Ease of developing secure communication systems

Information theory
Lecture 2

Claude Shannon
Laid the foundation of information theory in 1948. His paper "A Mathematical Theory of Communication," published in the Bell System Technical Journal, is the basis for the entire telecommunications development that has taken place during the last five decades. A good understanding of the concepts proposed by Shannon is a must for every budding telecommunication professional.

Requirements of a Communication System

The requirement of a communication system is to transmit the information from the source to the sink without errors, in spite of the fact that noise is always introduced in the communication medium.

The Communication System


[Block diagram: Information Source → Transmitter → Channel → Receiver → Information Sink, with a Noise Source feeding into the Channel]

Generic Communication System

Symbols produced:    A  B  B  A  A  A  B  A  B  A
Bit stream produced: 1  0  0  1  1  1  0  1  0  1
Bit stream received: 1  0  0  1  1  1  1  1  0  1

In a digital communication system, due to the effect of noise, errors are introduced. As a result, 1 may become a 0 and 0 may become a 1.

Transmitting side: Information Source → Source Encoder → Channel Encoder → Modulator (modulating signal in, modulated signal out) → Channel

Receiving side: Channel → Demodulator → Channel Decoder → Source Decoder → Information Sink

Generic Communication System as proposed by Shannon

Explanation of Each Block


Information Source: produces the symbols
Source Encoder: converts the signal produced by the information source into a data stream
Channel Encoder: adds redundant bits to the source-encoded data
Modulator: transforms the signal for transmission over the channel
Demodulator: performs the inverse operation of the modulator

Explanation of Each Block


Channel Decoder: analyzes the received bit stream and detects and corrects errors
Source Decoder: converts the bit stream back into the actual information
Information Sink: absorbs the information

Types of Source Encoding


Source encoding is done to reduce the redundancy in the signal.

1. Lossless coding
2. Lossy coding

The compression utilities we use to compress data files use lossless encoding techniques. JPEG image compression is a lossy technique because some information is lost.

Channel Encoding
Redundancy is introduced so that, at the receiving end, the redundant bits can be used for error detection or error correction.

Entropy of an Information Source

What is information? How do we measure information?

Information Measure

I(i) = -log2 P(i) bits

Where: P(i) = probability of the ith message

Entropy of an Information Source

H = log2 N bits per symbol

Where: N = number of symbols

Note: This applies only to sources whose symbols are equally probable.

Entropy of an Information Source

Example: Assume that a source produces the English letters (from A to Z, including space), and all these symbols will be produced with equal probability. Determine the entropy.

Ans. H = 4.75 bits/symbol

Entropy of an Information Source

If a source produces the ith symbol with a probability of P(i), the entropy is

H = -Σ P(i) log2 P(i)

Where: H = entropy in bits per symbol

Entropy of an Information Source


Example:

Consider a source that produces four symbols with probabilities of 1/2, 1/4, 1/8, and 1/8, and all symbols are independent of each other. Determine the entropy.

Ans. 7/4 bits/symbol

Entropy of an Information Source


Example: A telephone touch-tone keypad has the digits 0 to 9, plus the * and # keys. Assume the probability of sending * or # is 0.005 and the probability of sending 0 to 9 is 0.099 each. If the keys are pressed at a rate of 2 keys/s, compute the entropy and data rate for this source.

Ans: H = 3.38 bits/key; R = 6.76 bps
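A quick numerical check of the keypad example (a minimal Python sketch, assuming nothing beyond the entropy formula above):

```python
import math

# Touch-tone keypad source: ten digits at P = 0.099 each,
# '*' and '#' at P = 0.005 each (the probabilities sum to 1).
probs = [0.099] * 10 + [0.005] * 2

H = -sum(p * math.log2(p) for p in probs)  # entropy, bits per keypress
R = 2 * H                                  # data rate at 2 keys/s

print(round(H, 2), round(R, 2))  # 3.38 bits/key, 6.76 bps
```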

Channel Capacity
The limit at which data can be transmitted through a medium:

C = W log2(1 + S/N)

Where: C = channel capacity (bps)
W = bandwidth of the channel (Hz)
S/N = signal-to-noise ratio (SNR) (unitless)

Channel Capacity
Example:

Consider a voice-grade line for which W = 3100 Hz and SNR = 30 dB (i.e., the signal-to-noise ratio is 1000:1). Determine the channel capacity.

Ans: 30.898 kbps
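The same computation as a Python sketch (the 30 dB figure is first converted to a unitless power ratio):

```python
import math

W = 3100                    # channel bandwidth, Hz
snr = 10 ** (30 / 10)       # 30 dB -> 1000 (unitless power ratio)

C = W * math.log2(1 + snr)  # Shannon channel capacity, bps
print(round(C / 1000, 3))   # ~30.898 kbps
```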

Shannon's Theorems
In a digital communication system, the aim of the designer is to convert any information into a digital signal, pass it through the transmission medium and, at the receiving end, reproduce the digital signal exactly.

Shannon's Theorems
Requirements:

To code any type of information into digital format


To ensure that the data sent over the channel is not corrupted.

Source Coding Theorem


States that the number of bits required to uniquely describe an information source can be approximated to the information content as closely as desired.

NOTE: Assigning short code words to high-probability symbols and long code words to low-probability symbols results in efficient coding.

Channel Coding Theorem


States that the error rate of data transmitted over a bandwidth-limited noisy channel can be reduced to an arbitrarily small amount if the information rate is lower than the channel capacity.

Example: Consider the example of a source producing the symbols A and B. A is coded as 1 and B as 0.

Symbols produced: A  B  B  A  B
Bit stream:       1  0  0  1  0

Transmitted: 111 000 000 111 000
Received:    101 000 010 111 000
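The example above behaves like a rate-1/3 repetition code: each coded bit is sent three times, and the receiver takes a majority vote per 3-bit block, so any single error per block is corrected. A minimal decoder sketch in Python:

```python
received = "101000010111000"

decoded = ""
for i in range(0, len(received), 3):
    block = received[i:i + 3]
    decoded += "1" if block.count("1") >= 2 else "0"  # majority vote

symbols = "".join("A" if b == "1" else "B" for b in decoded)
print(decoded, symbols)  # 10010 ABBAB -- both channel errors corrected
```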

NOTE
Source coding is used mainly to reduce the redundancy in the signal, whereas channel coding is used to introduce redundancy to overcome the effect of noise.

Review of Probability
Lesson 3

Probability Theory
Rooted in situations that involve performing an experiment with an outcome that is subject to chance.

Random Experiment: an experiment whose outcome can differ when it is repeated, because of the influence of an underlying random phenomenon or chance mechanism.

Features of Random Experiment


The experiment is repeatable under identical conditions.
On any trial of the experiment, the outcome is unpredictable.
For a large number of trials of the experiment, the outcomes exhibit statistical regularity. That is, a definite average pattern of outcomes is observed if the experiment is repeated a large number of times.

Axioms of Probability
Sample point, sk: a single possible outcome
Sample space, S: totality of sample points corresponding to the aggregate of all possible outcomes of the experiment (the sure event)
Null set: the null or impossible event
Elementary event: a single sample point

Consider an experiment, such as rolling a die, with a number of possible outcomes. The sample space S of the experiment consists of the set of all possible outcomes:

S = { 1, 2, 3, 4, 5, 6 }

An event is a subset of S and may consist of any number of sample points, e.g.,

A = { 2, 4 }

A Probability System consists of the triple:
1. A sample space S of elementary events (outcomes)
2. A class E of events that are subsets of S
3. A probability measure P(·) assigned to each event A in the class E, which has the following properties:
   P(S) = 1
   0 ≤ P(A) ≤ 1
   If A + B is the union of two mutually exclusive events in the class E, then P(A + B) = P(A) + P(B)

Elementary Properties of Probabilities


Property 1: P(A') = 1 - P(A), where A' denotes the complement (nonoccurrence) of A. The use of this property helps us investigate the nonoccurrence of an event.

Property 2: If M mutually exclusive events A1, A2, ..., AM have the exhaustive property A1 + A2 + ... + AM = S, then P(A1) + P(A2) + ... + P(AM) = 1.

Property 3: When events A and B are not mutually exclusive, the probability of the union event "A or B" equals P(A + B) = P(A) + P(B) - P(AB), where P(AB) is the joint probability.

Example:
1. Consider an experiment in which two coins are thrown. What is the probability of getting one head and one tail?

Ans: 1/2 (the outcomes HT and TH out of four equally likely outcomes)

Principles of Probability
Probability of an event: Suppose that we now consider two different events, A and B, with probabilities P(A) and P(B).

Disjoint events: A and B are disjoint if they cannot possibly occur at the same time. For disjoint events, P(A + B) = P(A) + P(B). This expresses the additivity concept: if two events are disjoint, the probability of their sum is the sum of the probabilities.

Principles of Probability
Example:

Consider the experiment of flipping a coin twice. List the outcomes, events, and their respective probabilities.

Answers:

Outcomes: HH, HT, TH, and TT

Events: {HH}, {HT}, {TH}, {TT}, {HH, HT}, {HH, TH}, {HH, TT}, {HT, TH}, {HT, TT}, {TH, TT}, {HH, HT, TH}, {HH, HT, TT}, {HH, TH, TT}, {HT, TH, TT}, {HH, HT, TH, TT}, and the null event {0}

Probabilities:
Pr{HH} = Pr{HT} = Pr{TH} = Pr{TT} = 1/4
Pr{HH, HT} = Pr{HH, TH} = Pr{HH, TT} = Pr{HT, TH} = Pr{HT, TT} = Pr{TH, TT} = 1/2
Pr{HH, HT, TH} = Pr{HH, HT, TT} = Pr{HH, TH, TT} = Pr{HT, TH, TT} = 3/4
Pr{HH, HT, TH, TT} = 1
Pr{0} = 0

Note: the comma within the curly brackets is read as or.


Principles of Probability
Random Variables: the mapping (function) that assigns a number to each outcome.

Conditional Probabilities: the probability of event A given that event B has occurred:

Pr{A|B} = Pr{AB} / Pr{B}

where the joint event AB is known in set theory as the intersection of A and B.

Two events, A and B, are said to be independent if

Pr{AB} = Pr{A} Pr{B}, i.e., Pr{A|B} = Pr{A}


Principles of Probability
Example:
A coin is flipped twice. Four different events are defined. A is the event of getting a head on the first flip. B is the event of getting a tail on the second flip. C is the event of a match between the two flips. D is the elementary event of a head on both flips. Find Pr{A}, Pr{B}, Pr{C}, Pr{D}, Pr{A|B}, and Pr{C|D}. Are A and B independent? Are C and D independent?


Principles of Probability
Answers:

The events are defined by the following combinations of outcomes:
A = HH, HT
B = HT, TT
C = HH, TT
D = HH
Therefore, Pr{A} = Pr{B} = Pr{C} = 1/2 and Pr{D} = 1/4
Pr{A|B} = 0.5 and Pr{C|D} = 1
Since Pr{A|B} = Pr{A}, the event of a head on the first flip is independent of that of a tail on the second flip.

Since Pr{C|D} ≠ Pr{C}, the event of a match and that of two heads are not independent.


Coding
Lecture 4

Example: M1 = 1, M2 = 10, M3 = 01, M4 = 101. The received sequence Rx = 101 could be decoded as M4, as M2 M1, or as M1 M3, so this code is not uniquely decipherable.

Unique Decipherability
A code is instantaneously decipherable if no code word forms the starting sequence (known as a prefix) of any other code word.

M1 = 1, M2 = 01, M3 = 001, M4 = 0001

Note: The prefix restriction property is sufficient but not necessary for unique decipherability.

M1 = 1, M2 = 10, M3 = 100, M4 = 1000 is uniquely decipherable but not instantaneous.
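A small helper (a Python sketch) to test the prefix restriction. Note that the second code below fails the prefix test, and so is not instantaneous, yet is still uniquely decipherable, illustrating that the property is sufficient but not necessary:

```python
def is_prefix_free(words):
    """True if no code word is a prefix of another (instantaneous code)."""
    return not any(w1 != w2 and w2.startswith(w1)
                   for w1 in words for w2 in words)

print(is_prefix_free(["1", "01", "001", "0001"]))  # True: instantaneous
print(is_prefix_free(["1", "10", "100", "1000"]))  # False: not instantaneous,
                                                   # though uniquely decipherable
```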

Example 3.1
Which of the following codes are uniquely decipherable? For those that are uniquely decipherable, determine whether they are instantaneous.
(a) 0, 01, 001, 0011, 101
(b) 110, 111, 101, 01
(c) 0, 01, 011, 0110111

Entropy Coding
A fundamental theorem exists in noiseless coding theory. The theorem states that: For binary-coding alphabets, the average code word length is greater than, or equal to, the entropy.

Example 3.2
Find the minimum average length of a code with four messages with probabilities 1/8, 1/8, 1/4, and 1/2, respectively.

Variable-length Codes
One way to derive variable-length codes is to start with constant-length codes and expand subgroups.

Ex. Start with 0, 1. Expanding the word 1 repeatedly yields five code words:
0, 100, 101, 110, 111

Ex. Start with 00, 01, 10, 11. Expanding any one of these four words into two words (say we choose 01) yields:
00, 010, 011, 10, 11

Two Techniques for Finding Efficient Variable-Length Codes

1. Huffman codes: provide an organized technique for finding the best possible variable-length code for a given set of messages.
2. Shannon-Fano codes: similar to Huffman codes, a major difference being that the operations are performed in a forward direction.

Huffman Codes
Suppose that we wish to code five words, s1, s2, s3, s4, and s5, with probabilities 1/16, 1/8, 1/4, 1/16, and 1/2, respectively.

Procedure:
1. Arrange the messages in order of decreasing probability.
2. Combine the bottom two entries to form a new entry with probability that is the sum of the original probabilities.
3. Continue combining in pairs until only two entries remain.
4. Assign code words by starting at the right with the most significant bit. Move to the left and assign a bit wherever a split occurred.
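A compact Huffman sketch in Python using a heap. The particular 0/1 assignments may differ from a hand construction, but the code word lengths, and hence the average length, are optimal:

```python
import heapq

def huffman(probabilities):
    """Return a code dictionary {symbol: bit string} for {symbol: probability}."""
    # Heap entries: (probability, tie-breaker, partial code dictionary).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable entries
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = {"s1": 1/16, "s2": 1/8, "s3": 1/4, "s4": 1/16, "s5": 1/2}
code = huffman(probs)
avg = sum(probs[s] * len(code[s]) for s in probs)
print(code)
print(avg)  # 1.875 bits/word, equal to the entropy here (dyadic probabilities)
```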

Example 3.3
Find the Huffman code for the following seven messages with probabilities as indicated:

Message:     S1    S2    S3    S4    S5    S6    S7
Probability: 0.05  0.15  0.2   0.05  0.15  0.3   0.1

Shannon-Fano Codes
1. Suppose that we wish to code five words, s1, s2, s3, s4, and s5 with probabilities 1/16, 1/8, 1/4, 1/16, and 1/2, respectively.

2. Find the Shannon-Fano code for the following seven messages with probabilities as indicated:
Message:     S1    S2    S3    S4    S5    S6    S7
Probability: 0.05  0.15  0.2   0.05  0.15  0.3   0.1

Digital Transmission
Lecture 6

Information Capacity
It is a measure of how much information can be propagated through a communications system and is a function of bandwidth and transmission time.

Information Theory: a highly theoretical study of the efficient use of bandwidth to propagate information through electronic communications systems.


Information Capacity
In 1928, R. Hartley of Bell Telephone Laboratories developed a useful relationship among bandwidth, transmission time, and information capacity.

Hartley's law: I ∝ B × t

Where: I = information capacity (bits per second)
B = bandwidth (hertz)
t = transmission time (seconds)


Information Capacity
Shannon limit for information capacity:

I = B log2(1 + S/N)  or  I = 3.32 B log10(1 + S/N)

Where: I = information capacity (bps)
B = bandwidth (hertz)
S/N = signal-to-noise power ratio (unitless)


M-ary Encoding
M-ary is a term derived from the word "binary." M represents a digit that corresponds to the number of conditions, levels, or combinations possible for a given number of binary variables.

It is often advantageous to encode at a level higher than binary (beyond-binary or higher-than-binary encoding) where there are more than two conditions possible.


M-ary Encoding

N = log2 M, or, rearranging the expression, M = 2^N

Where: N = number of bits necessary
M = number of conditions, levels, or combinations possible with N bits


Example
Calculate the number of levels if the number of bits per sample is:
(a) 8 (as in telephony)
(b) 16 (as in compact disc audio systems)

Ans. (a) 256 levels (b) 65,536 levels

Information Capacity
Shannon-Hartley Theorem:

C = 2B log2 M

Where: C = information capacity in bits per second
B = the channel bandwidth in hertz
M = number of levels transmitted

Example
A telephone line has a bandwidth of 3.2 kHz and a signal-to-noise ratio of 35 dB. A signal is transmitted down this line using a four-level code. What is the maximum theoretical data rate?

Ans. 12.8 kbps

Advantages of Digital Transmission


Noise Immunity
Digital systems are more resistant than analog systems to additive noise because they use signal regeneration rather than signal amplification.

It is easier to compare the error performance of one digital system to another digital system.

Transmission errors can be detected and corrected more easily and more accurately.

Disadvantages of Digital Transmission


Requires significantly more bandwidth than simply transmitting the original analog signal

Analog signals must be converted to digital pulses prior to transmission and converted back to their original analog form at the receiver

Requires precise time synchronization between the clocks in the transmitters and receivers

Incompatible with older analog transmission systems

Pulse Modulation
Consists of sampling analog information signals, converting the samples into discrete pulses, and transporting the pulses from a source to a destination over a physical medium.

Sampling
In 1928, Harry Nyquist showed mathematically that it is possible to reconstruct a band-limited analog signal from periodic samples, as long as the sampling rate is at least twice the frequency of the highest frequency component of the signal.

Sampling
Natural Sampling
Flat-topped Sampling

Aliasing (foldover distortion): distortion created by using too low a sampling rate when coding an analog signal for digital transmission.

fa = fs - fm

Where: fa = the frequency of the aliasing distortion
fs = the sampling rate
fm = the modulating (baseband) frequency

Example
An attempt is made to transmit a baseband frequency of 30 kHz using a digital audio system with a sampling rate of 44.1 kHz. What audible frequency would result?

Ans. 14.1 kHz

Pulse Modulation
Methods of Pulse Modulation:
1. Pulse Width Modulation (PWM)
2. Pulse Position Modulation (PPM)
3. Pulse Amplitude Modulation (PAM)
4. Pulse Code Modulation (PCM)

Pulse Code Modulation (PCM)


Dynamic Range (DR) of a system is the ratio of the strongest possible signal that can be transmitted to the weakest discernible signal.

DR = 1.76 + 6.02M dB
D = fs × M

Where: DR = dynamic range in dB
M = number of bits per sample
D = data rate in bits per second
fs = sample rate in samples per second

Example
Find the maximum dynamic range for a linear PCM system using 16-bit quantizing. Calculate the minimum data rate needed to transmit audio with a sampling rate of 40 kHz and 14 bits per sample.

Ans. 98.08 dB, 560 kbps
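The same arithmetic as a quick check (Python sketch):

```python
M = 16                     # bits per sample
DR_dB = 1.76 + 6.02 * M    # maximum dynamic range
print(round(DR_dB, 2))     # 98.08 dB

fs, bits = 40_000, 14      # sample rate and bits per sample
D = fs * bits              # minimum data rate
print(D)                   # 560000 bps = 560 kbps
```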

Alternative Formula for DR

DR = Vmax / Vmin

Where: DR = dynamic range (unitless ratio)
Vmin = the quantum value (resolution)
Vmax = the maximum voltage magnitude that can be discerned by the DAC in the receiver

Resolution: the magnitude of a quantum (the voltage of the least significant bit)

Quantization error: one-half the resolution

Relationship between DR and N in a PCM code: DR = 2^N - 1

For the minimum number of bits: N ≥ log2(DR + 1)

Where: N = number of bits in a PCM code, excluding the sign bit
DR = absolute value of dynamic range

Example
For a PCM system with the following parameters, determine (a) minimum sample rate, (b) minimum number of bits used in the PCM code, (c) resolution, and (d) quantization error.

Maximum analog input frequency = 4 kHz
Maximum decoded voltage at the receiver = 2.55 V
Minimum dynamic range = 46 dB

Companding
Combination of compression at the transmitter and expansion at the receiver of a communications system.

The transmission bandwidth varies directly with the bit rate. In order to keep the bit rate, and thus the required bandwidth, low, companding is used.

Companding involves using a compressor amplifier at the input, with greater gain for low-level than for high-level signals. The compressor reduces the quantizing error for small signals.

μ-Law (mu-law)

Compression characteristic applied to the system used by the North American telephone system:

vo = Vo ln(1 + μ vi / Vi) / ln(1 + μ)

Where: vo = output voltage from the compressor
Vo = maximum output voltage
Vi = maximum input voltage
vi = actual input voltage
μ = a parameter that defines the amount of compression (contemporary systems use μ = 255)

A-Law

Compression characteristic applied to the system used by the European telephone system:

vo = Vo [A(vi / Vi)] / (1 + ln A)           for 0 ≤ vi/Vi ≤ 1/A
vo = Vo [1 + ln(A vi / Vi)] / (1 + ln A)    for 1/A ≤ vi/Vi ≤ 1

with A = 87.6.

Example
A signal at the input to a mu-law compressor is positive, with its voltage one-half the maximum value. What proportion of the maximum output voltage is produced?

Ans. 0.876 Vo
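A direct translation of the compression characteristic (a sketch handling positive inputs only, with voltages normalized to their maxima):

```python
import math

def mu_law_compress(vi, Vi=1.0, Vo=1.0, mu=255):
    """mu-law compressor output for input voltage vi (0 <= vi <= Vi)."""
    return Vo * math.log(1 + mu * vi / Vi) / math.log(1 + mu)

print(round(mu_law_compress(0.5), 3))  # 0.876 -> 0.876 Vo, as above
```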

Coding and Decoding


The process of converting an analog signal into a PCM signal is called coding, and the inverse operation, converting back from digital to analog, is known as decoding. Both procedures are often accomplished in a single IC device called a codec.

Mu-Law Compressed PCM Coding


Segment   Voltage Range (mV)   Step Size (mV)
0         0 - 7.8              0.488
1         7.8 - 15.6           0.488
2         15.6 - 31.25         0.977
3         31.25 - 62.5         1.953
4         62.5 - 125           3.906
5         125 - 250            7.813
6         250 - 500            15.625
7         500 - 1000           31.25

Example
Code a positive-going signal with amplitude 30% of the maximum allowed as a PCM sample into an 8-bit compressed code.
Ans: 11100011

Convert the 12-bit sample 100110100100 into an 8-bit compressed code.
Ans: 11011010

Example
1. Suppose an input signal to a μ-law compressor has a positive voltage and amplitude 25% of the maximum possible. Calculate the output voltage as a percentage of the maximum output.
2. How would a signal with 50% of the maximum input voltage be coded in 8-bit PCM, using digital compression?
3. Convert a sample coded (using mu-law compression) as 11001100 to a voltage with the maximum sample voltage normalized at 1 V.
4. Convert the 12-bit PCM sample 110011001100 to an 8-bit compressed sample.
5. Suppose a composite video signal with a baseband frequency range from dc to 4 MHz is transmitted by linear PCM, using eight bits per sample and a sampling rate of 10 MHz.
   (a) How many quantization levels are there?
   (b) Calculate the bit rate, ignoring overhead.
   (c) What would be the maximum signal-to-noise ratio, in decibels?
   (d) What type of noise determines the answer to part (c)?

Example
The compact disc system of digital audio uses two channels with TDM. Each channel is sampled at 44.1 kHz and coded using linear PCM with sixteen bits per sample. Find:
(a) the maximum audio frequency that can be recorded (assuming ideal filters)
(b) the maximum dynamic range in decibels
(c) the bit rate, ignoring error correction and framing bits
(d) the number of quantizing levels

Digital Modulation/demodulation


Digital Modulation
The transmittal of digitally modulated analog signals (carriers) between two or more points in a communications system

Sometimes referred to as digital radio because digitally modulated signals can be propagated through Earth's atmosphere and used in wireless communications systems


Introduction

ASK

FSK

PSK

QAM

Information Capacity, Bits, Bit Rate, Baud, and M-ary Encoding


Baud and Minimum Bandwidth


Baud: the rate of change of a signal on the transmission medium after encoding and modulation have occurred
- Unit of transmission rate, modulation rate, or symbol rate (symbols per second)
- Reciprocal of the time of one output signaling element:

baud = 1 / ts

Where: baud = symbol rate (symbols per second)
ts = time of one signaling element (seconds)


Baud and Minimum Bandwidth


Signaling element: a symbol that could be encoded as a change in amplitude, frequency, or phase

Note: Bit rate and baud rate will be equal only if timing is uniform throughout and all pulses are used to send information (i.e. no extra pulses are used for other purposes such as forward error correction.)


Baud and Minimum Bandwidth


According to H. Nyquist, binary digital signals can be propagated through an ideal noiseless transmission medium at a rate equal to two times the bandwidth of the medium.

The minimum theoretical bandwidth necessary to propagate a signal is called the minimum Nyquist bandwidth or minimum Nyquist frequency.


Baud and Minimum Bandwidth


Nyquist formulation of channel capacity:

fb = 2B log2 M

Where: fb = channel capacity (bps)
B = minimum Nyquist bandwidth (hertz)
M = number of discrete signal or voltage levels


Baud and Minimum Bandwidth


With digital modulation, the baud and the ideal minimum Nyquist bandwidth have the same value and are equal to:

B = baud = fb / N

This is true for all forms of digital modulation except FSK.


Example 1:
A modulator transmits symbols, each of which has sixty-four different possible states, 10,000 times per second. Calculate the baud rate and bit rate.

Given: M = 64; 10,000 symbols per second
Required: baud rate and bit rate
Solution:
Baud rate = 10,000 baud, or 10 kbaud
fb = baud × N = 10,000 × log2 64 = 60 kbps

Amplitude-Shift Keying

- Simplest digital modulation technique
- A binary information signal directly modulates the amplitude of an analog carrier
- Sometimes called digital amplitude modulation (DAM)

vask(t) = [1 + vm(t)] [(A/2) cos(ωc t)]

Where: vask(t) = amplitude-shift keying wave
vm(t) = digital information (modulating) signal (volts)
A/2 = unmodulated carrier amplitude (volts)
ωc = analog carrier radian frequency (radians per second, 2πfc)

Amplitude-Shift Keying

For logic 1, vm(t) = +1 V, and the modulated wave is A cos(ωc t).
For logic 0, vm(t) = -1 V, and the modulated wave is 0.

The carrier is either on or off, which is why ASK is sometimes referred to as on-off keying (OOK).


Amplitude-Shift Keying

[Waveform figure: binary input and the corresponding DAM (ASK) output]


Amplitude-Shift Keying
The rate of change of the ASK waveform (baud) is the same as the rate of change of the binary input (bps).


Example 2
Determine the baud and minimum bandwidth necessary to pass a 10 kbps binary signal using amplitude-shift keying.

Given: fb = 10,000 bps; N = 1 (for ASK)
Required: baud and B
Solution:
B = fb / N = 10,000 / 1 = 10,000 Hz
Baud = fb / N = 10,000 / 1 = 10,000 baud

Frequency-Shift Keying
- Low-performance type of digital modulation
- A form of constant-amplitude angle modulation similar to standard frequency modulation (FM), except the modulating signal is a binary signal that varies between two discrete voltage levels rather than a continuously changing analog waveform
- Sometimes called binary FSK (BFSK)


Frequency-Shift Keying

vfsk(t) = Vc cos{2π[fc + vm(t) Δf] t}

Where: vfsk(t) = binary FSK waveform
Vc = peak analog carrier amplitude (volts)
fc = analog carrier center frequency (hertz)
Δf = peak change (shift) in the analog carrier frequency (hertz)
vm(t) = binary input (modulating) signal (volts)

Frequency-Shift Keying
Frequency-shift keying (FSK) is the oldest and simplest form of modulation used in modems. In FSK, two sine-wave frequencies are used to represent binary 0s and 1s.
- binary 0: usually called a space
- binary 1: referred to as a mark


Frequency-Shift Keying
For vm(t) = +1 V (mark): output frequency = fc + Δf
For vm(t) = -1 V (space): output frequency = fc - Δf


Frequency-Shift Keying

[Waveform figure: FSK output for logic 1 (mark frequency) and logic 0 (space frequency)]


Frequency-Shift Keying
Frequency deviation is defined as the difference between either the mark or space frequency and the center frequency, or half the difference between the mark and space frequencies:

Δf = |fm - fs| / 2

Where: Δf = frequency deviation (hertz)
|fm - fs| = absolute difference between the mark and space frequencies (hertz)

Frequency-Shift Keying

[Figure: Frequency-shift keying. (a) Binary signal. (b) FSK signal.]

Frequency-Shift Keying
FSK Bit Rate, Baud, and Bandwidth

B = 2(Δf + fb)
baud = fb / N; if N = 1, then baud = fb

Where: B = minimum Nyquist bandwidth (hertz)
Δf = frequency deviation (hertz)
fb = input bit rate (bps)


Frequency-Shift Keying
Example: Determine (a) the peak frequency deviation, (b) minimum bandwidth, and (c) baud for a binary FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an input bit rate of 2 kbps.

Solution:
(a) Δf = |49 kHz - 51 kHz| / 2 = 1 kHz
(b) B = 2(1000 + 2000) = 6000 Hz = 6 kHz
(c) baud = fb / N = 2000 / 1 = 2000
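The same numbers in a few lines of Python (a sketch):

```python
f_mark, f_space, fb = 49_000, 51_000, 2_000   # Hz, Hz, bps

delta_f = abs(f_mark - f_space) / 2           # (a) peak frequency deviation
B = 2 * (delta_f + fb)                        # (b) minimum bandwidth
baud = fb / 1                                 # (c) N = 1 for binary FSK

print(delta_f, B, baud)  # 1000.0 6000.0 2000.0
```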


Frequency-Shift Keying
Gaussian Minimum-Shift Keying
- Special case of FSK used in the GSM cellular radio and PCS systems
- In a minimum-shift system, the mark and space frequencies are separated by half the bit rate:

|fm - fs| = fb / 2

Where: fm = frequency transmitted for mark (binary 1)
fs = frequency transmitted for space (binary 0)
fb = bit rate


Frequency-Shift Keying
If we use the conventional FM terminology, we see that GMSK has a deviation each way from the center (carrier) frequency of

Δf = fb / 4

which corresponds to a modulation index of

m = (fb / 4) / (fb / 2) = 0.5


Frequency-Shift Keying
Example 4: The GSM cellular radio system uses GMSK in a 200-kHz channel, with a channel data rate of 270.833 kb/s. Calculate:
(a) the frequency shift between mark and space
(b) the transmitted frequencies if the carrier (center) frequency is exactly 880 MHz
(c) the bandwidth efficiency of the scheme in b/s/Hz

Frequency-Shift Keying
Solution:
(a) fm - fs = 0.5 fb = 0.5 × 270.833 kb/s = 135.4165 kHz
(b) fmax = fc + 0.25 fb = 880 MHz + 0.25 × 270.833 kHz = 880.0677 MHz
    fmin = fc - 0.25 fb = 880 MHz - 0.25 × 270.833 kHz = 879.9323 MHz
(c) The GSM system has a bandwidth efficiency of 270.833 / 200 = 1.35 b/s/Hz, comfortably under the theoretical maximum of 2 b/s/Hz for a two-level code.

Phase-Shift Keying
- Used when somewhat higher data rates are required in a band-limited channel than can be achieved with FSK
- Another form of angle-modulated, constant-amplitude digital modulation
- An M-ary digital modulation scheme similar to conventional phase modulation, except with PSK the input is a binary digital signal and there are a limited number of output phases possible

Phase-Shift Keying
The input binary information is encoded into groups of bits before modulating the carrier. The number of bits in a group ranges from 1 to 12 or more. The number of output phases is defined by M (as described previously) and determined by the number of bits in the group (n).


Phase-Shift Keying
Types of PSK:
- Binary PSK
- Quaternary PSK (and Offset QPSK)
- 8-PSK
- 16-PSK


Binary Phase-Shift Keying


- Simplest form of PSK, where N = 1 and M = 2: two phases are possible (2^1 = 2) for the carrier
- As the input digital signal changes state, the phase of the output carrier shifts between two angles that are separated by 180°
- Other terms: phase reversal keying (PRK) and biphase modulation
- A form of square-wave modulation of a continuous wave (CW) signal


Delta Phase-Shift Keying


- Most modems use a four-phase system (QPSK or DQPSK)
- Each symbol represents two bits, so the BIT rate is TWICE the BAUD rate (a dibit system)
- Such a system can carry twice as much data in the same bandwidth as a single-bit system like FSK, provided the SNR is high enough


Delta Phase-Shift Keying


DQPSK Coding
Symbol   Phase Shift (deg)
00         0
01       +90
10       -90
11       180

Pi/4 DQPSK Coding
Symbol   Phase Shift (deg)
00        +45
01       +135
10        -45
11       -135

[Constellation diagrams omitted: DQPSK phase shifts fall on the quadrature axes; pi/4 DQPSK phase shifts fall at odd multiples of 45 degrees.]

Error Control
Lecture 8

Background: Simple Codes


Code: a set of rules that assigns a code word to every message drawn from a dictionary of acceptable messages. The code words must consist of symbols from an acceptable alphabet.

1. Baudot Code
2. ASCII Code
3. Selectric Code


Baudot Code
- One of the earliest, and now essentially obsolete, paper-tape codes used in Teletype machines
- Assigns a 5-bit binary number to each letter of the alphabet
- A shift instruction is provided to circumvent the shortcomings of such a primitive code (26 capital letters plus space, line feed, and carriage return, plus the digits)


ASCII Code
- American Standard Code for Information Interchange
- Has become the standard for digital communication of individual alphabet symbols
- Also used for very short range communications, such as from the keyboard to the processor of a computer
- Consists of code words of 7-bit length, thus providing 128 dictionary words
- An eighth bit is often added as a parity-check bit for error detection


Selectric Code
- One of many specialized codes that have been widely used in the past
- The Selectric typewriter was the standard of the industry before the days of electronic typewriters
- Uses a 7-bit code to control the position of the typing ball
- Although this permits 128 distinct code symbols, only 88 of these are used


Example
Write the ASCII codes for the characters below.
B
b

Answer: 1000010 1100010

Asynchronous Transmission
Asynchronous transmission synchronizes the transmitter and receiver clocks at the start of each character. It is simpler but less efficient than synchronous communication, in which the transmitter and receiver clocks are continuously locked together.

TRANSMISSION MODES
The transmission of binary data across a link can be accomplished in either parallel or serial mode. In parallel mode, multiple bits are sent with each clock tick. In serial mode, 1 bit is sent with each clock tick. While there is only one way to send parallel data, there are three subclasses of serial transmission: asynchronous, synchronous, and isochronous.


[Figure: Data transmission and modes]

[Figure: Parallel transmission]

[Figure: Serial transmission]

Note

In asynchronous transmission, we send 1 start bit (0) at the beginning and 1 or more stop bits (1s) at the end of each byte. There may be a gap between each byte.


Example
For the following sequence of bits, identify the ASCII-encoded character, the start and stop bits, and the parity bits (assume even parity and two stop bits).

11111101000001011110001000

Ans: AD

Note

Asynchronous here means asynchronous at the byte level, but the bits are still synchronized; their durations are the same.


[Figure: Asynchronous transmission]

Note

In synchronous transmission, we send bits one after another without start or stop bits or gaps. It is the responsibility of the receiver to group the bits.


Example
For the following string of ASCII-encoded characters, identify each character (assume odd parity):

01001111010101000001011011

Ans: OT

[Figure: Synchronous transmission]

Parallel and Serial Transmission


There are two ways to move binary bits from one place to another:
1. Transmit all bits of a word simultaneously (parallel transfer).
2. Send only 1 bit at a time (serial transfer).

Parallel and Serial Transmission


Parallel Transfer
- Parallel data transmission is extremely fast because all the bits of the data word are transferred simultaneously.
- Parallel data transmission is impractical for long-distance communication because of cost and signal attenuation.

Parallel and Serial Transmission


Serial Transfer
- Data transfers in communication systems are made serially; each bit of a word is transmitted one after another.
- The least significant bit (LSB) is transmitted first, and the most significant bit (MSB) last.
- Each bit is transmitted for a fixed interval of time t.

Parallel and Serial Transmission

Serial data transmission.

Parallel and Serial Transmission


Serial-Parallel Conversion
- Serial data can typically be transmitted faster over longer distances than parallel data.
- Serial buses are now replacing parallel buses in computers, storage systems, and telecommunication equipment where very high speeds are required.
- Serial-to-parallel and parallel-to-serial data conversion circuits are also referred to as serializer-deserializers (serdes).

Parallel and Serial Transmission

Parallel-to-serial and serial-to-parallel data transfers with shift registers.

The Channel


Introduction
Channel: what separates the transmitter from the receiver in a communication system. The channel affects communication in two ways:
- It can alter the form of a signal during its movement from transmitter to receiver.
- It can add noise waveforms to the original transmitted signal.


The Memoryless Channel


A channel is memoryless if each element of the output sequence depends only upon the corresponding input sequence element and upon the channel characteristics.


The Memoryless Channel


A memoryless channel can be characterized by a transition matrix composed of conditional probabilities.

Example: Consider the binary channel, where the input s_in can take on either of two values, 0 or 1. For a particular input, the output s_out can equal either 0 or 1.


The Memoryless Channel


The transition probability matrix [T] is then composed of the conditional probabilities P(s_out | s_in).

Note: In the absence of noise and distortion, one would expect [T] to be the identity matrix. The sum of the entries of any column of the transition matrix must be unity since, given the value of the input, the output must take one of the possible values.

The Memoryless Channel


Example:
A digital communication system has a symbol alphabet composed of four entries, and a transition matrix given by the following: [transition matrix figure omitted]

a. Find the probability of a single transmitted symbol being in error, assuming that all four input symbols are equally probable at any time.
b. Find the probability of a correct symbol transmission.
c. If the symbols are denoted as A, B, C, and D, find the probability that the transmitted sequence BADCAB will be received as DADDAB.


The Memoryless Channel


Solution:
a. Pe | 0 sent = P10 + P20 + P30
   Pe | 1 sent = P01 + P21 + P31 = 1/2 + 1/6 + 1/6 = 5/6
   Pe | 2 sent = P02 + P12 + P32 = 1/6 + 1/2 + 1/6 = 5/6
   Pe | 3 sent = P03 + P13 + P23 = 1/6 + 1/6 + 1/3 = 2/3


The Memoryless Channel


b. With equally probable inputs, P(correct) = 1 - (1/4)(Pe|0 + Pe|1 + Pe|2 + Pe|3)

c. P(DADDAB) = P31 P00 P33 P32 P00 P11


The Memoryless Channel


An alternative way of displaying transition

probabilities is by use of the transition diagram. The summation of probabilities leaving any node must be unity.
0 P00 P10 P01 1 P11 1 1 p 1p 0 0 1p p 0

Binary Symmetric Channel (BSC)

A special case of the binary memoryless 1 channel, one in which the two conditional error probabilities are equal.

The Memoryless Channel


For a single hop through a BSC, the probability of error, or bit error rate (BER), is the crossover probability p, with the shorthand notation q = 1 - p for the probability of correct reception.


The Memoryless Channel


Overall probability of correct transmission over two hops: A transmitted 1 will be received as 1 provided that no errors occur in either hop. If an error occurs in each of the two hops, the 1 will also be correctly received.


The Memoryless Channel


Probability of correct transmission: (1 - p)^2 + p^2

Probability of error: 2p(1 - p) ≈ 2p for small p


The Memoryless Channel


In general, the probability of error goes up approximately linearly with the number of hops. Thus, for n binary symmetric channels in tandem, the overall probability of error is approximately n times the bit error rate of a single BSC.


The Memoryless Channel


Example:
Suppose you were to design a transmission system to cover a distance of 500 km. You decide to install a repeater station every 10 km, so you require 50 such segments in your overall transmission path. You find that the bit error rate for each segment is p = 10^-6. Therefore, the overall bit error rate is approximately 50 × 10^-6 = 5 × 10^-5.


The Memoryless Channel


Example: Consider a binary symmetric channel for which the conditional probability of error p = 10^-4, and symbols 0 and 1 occur with equal probability. Calculate the following probabilities:
a. The probability of receiving symbol 0
b. The probability of receiving symbol 1
c. The probability that symbol 0 was sent, given that symbol 0 is received
d. The probability that symbol 1 was sent, given that symbol 0 is received


Answers:
a. P(B0) = 1/2
b. P(B1) = 1/2
c. P(A0|B0) = 1 - 10^-4
d. P(A1|B0) = 10^-4

Distance between code words


The distance between two equal-length binary code words is defined as the number of bit positions in which the two words differ.

Example: The distance between 000 and 111 is 3. The distance between 010 and 011 is 1.
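A two-line helper (a Python sketch) for the distance computations used in the rest of this lecture; minimum_distance can also be applied to the four-word code in the example later in this section:

```python
def distance(w1, w2):
    """Hamming distance: number of bit positions in which the words differ."""
    return sum(b1 != b2 for b1, b2 in zip(w1, w2))

def minimum_distance(code):
    """Smallest pairwise distance over a dictionary of code words (Dmin)."""
    return min(distance(a, b) for i, a in enumerate(code) for b in code[i + 1:])

print(distance("000", "111"), distance("010", "011"))  # 3 1
```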

Distance between code words


Suppose that the dictionary of code words is such that the distance between any two words is at least 2:

0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111


Distance Relationships for a 3-bit code

[Figure: the eight 3-bit words 000 through 111 placed at the vertices of a cube; adjacent vertices differ in one bit position]


Minimum Distance Between Code Words, Dmin

- Up to Dmin - 1 bit errors can be detected.
- For Dmin even, up to (Dmin / 2) - 1 bit errors can be corrected.
- For Dmin odd, up to (Dmin - 1) / 2 bit errors can be corrected.


Example:
Find the minimum distance for the following code consisting of four code words:

0111001, 1100101, 0010111, 1011100

How many bit errors can be detected? How many bit errors can be corrected?


Code Length

2^n ≥ m + n + 1

Where: n = number of parity-check (Hamming) bits
m = number of message bits


Algebraic Codes
One simple form of this is known as the single-parity-bit check code.

Message   Code Word
000       0000
001       0011
010       0101
011       0110
100       1001
101       1010
110       1100
111       1111
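A sketch that reproduces the table (the check bit makes every code word's weight even):

```python
def add_parity(message):
    """Append an even-parity check bit to a binary message string."""
    return message + str(message.count("1") % 2)

for m in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(m, "->", add_parity(m))  # matches the code words above
```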


Error Detection
Redundancy Checking:
- VRC (character parity)
- LRC (message parity)
- Checksum
- CRC

Consider code words that add n parity bits to the m message bits to end up with code words of length m + n bits. With ai = original message bits and ci = parity check bits:

Code word = a1 a2 a3 . . . am c1 c2 c3 . . . cn

Note: out of 2^(m+n) possible words, only 2^m are used as code words.

A received word is a valid code word when it satisfies the parity-check equations [H][code word]^T = 0, where [H] is the parity-check matrix.


Linear Block Codes


Generator matrix [G]: [code word] = [message][G]

Syndrome: [s] = [received word][H]^T

Where [H] is the parity-check matrix; a zero syndrome indicates a valid code word, and a nonzero syndrome locates the error pattern.

CRC
Determine the BCS for the following data and CRC generating sequence:
Data: G = 10110111
CRC generator: P = 110011

Answer: BCS = 1011011101001
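The BCS can be verified by mod-2 (XOR) long division; a sketch:

```python
def crc_bcs(data, generator):
    """Append len(generator)-1 zeros, divide mod-2, and attach the remainder."""
    n = len(generator) - 1
    work = list(data + "0" * n)
    for i in range(len(data)):
        if work[i] == "1":  # align the generator under each leading 1
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return data + "".join(work[-n:])  # data followed by the CRC remainder

print(crc_bcs("10110111", "110011"))  # 1011011101001
```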

Cyclic Codes

LRC and VRC


Determine the VRCs and LRC for the following ASCII-encoded message: THE CAT. Use odd parity for the VRCs and even parity for the LRC.

Solution
Char  HEX  B6 B5 B4 B3 B2 B1 B0  VRC
T     54   1  0  1  0  1  0  0   0
H     48   1  0  0  1  0  0  0   1
E     45   1  0  0  0  1  0  1   0
SP    20   0  1  0  0  0  0  0   0
C     43   1  0  0  0  0  1  1   0
A     41   1  0  0  0  0  0  1   1
T     54   1  0  1  0  1  0  0   0
LRC   2F   0  1  0  1  1  1  1   0

Checksum

Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words, including the checksum, are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.

Receiver site:
1. The message (including the checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
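A sketch of the 16-bit one's-complement checksum (the data words here are hypothetical placeholders):

```python
def checksum16(words):
    """One's-complement sum of 16-bit words, then complemented."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold in end-around carry
    return ~total & 0xFFFF

data = [0x4500, 0x0073, 0x0000]  # hypothetical 16-bit message words
cks = checksum16(data)
# Receiver check: including the checksum word, the complemented sum is 0.
assert checksum16(data + [cks]) == 0
```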

Error Correction
- Retransmission (ARQ)
- Forward error correction (FEC): Hamming code

Example
For a 12-bit data string of 101100010010, determine the number of Hamming bits required, arbitrarily place the Hamming bits into the data string, determine the logic condition of each Hamming bit, assume an arbitrary single-bit transmission error, and prove that the Hamming code will successfully detect the error.

(Hamming bit positions used in the worked solution: 4, 8, 9, 13, 17)
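The number of Hamming bits follows from the code-length condition 2^n ≥ m + n + 1 given earlier; a quick check:

```python
from itertools import count

def hamming_bits(m):
    """Smallest n with 2**n >= m + n + 1, for m message bits."""
    return next(n for n in count(1) if 2 ** n >= m + n + 1)

print(hamming_bits(12))  # 5 Hamming bits for the 12-bit data string
```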

Example
Determine the Hamming bits for the ASCII character B. Insert the Hamming bits into every other bit location starting from the left.

Determine the Hamming bits for the ASCII character C (use odd parity and two stop bits). Insert the Hamming bits into every other bit location starting from the right.
