Digital Communication Lectures
Introduction
Electronic Communication: the transmission, reception, and processing of information.
Information: knowledge or intelligence that is communicated between two or more points.
Introduction
Digital Modulation
The transmittal of digitally modulated analog signals (carriers) between two or more points in a communications system. Sometimes referred to as digital radio because digitally modulated signals can be propagated through Earth's atmosphere and used in wireless communications systems.
Introduction
Digital Communications
Include systems in which relatively high-frequency analog carriers are modulated by relatively low-frequency digital signals (digital radio) and systems involving the transmission of digital pulses (digital transmission).
Introduction
ASK (amplitude-shift keying)
FSK (frequency-shift keying)
PSK (phase-shift keying)
QAM (quadrature amplitude modulation)
Applications
Relatively low-speed voice-band data communications modems, such as those found in most personal computers
High-speed data transmission systems, such as broadband digital subscriber lines (DSL)
[Block diagram: Source → Transducer → Transmission Medium (attenuation) → Transducer → Sink]
In an electrical communication system, at the transmitting side, a transducer converts the real-life information into an electrical signal. At the receiving side, a transducer converts the electrical signal back into real-life information.
[Block diagram: Source → Transducer → Transmission Medium (noise added) → Transducer → Sink]
Note: As the electrical signal passes through the transmission medium, the signal gets attenuated. In addition, the transmission medium introduces noise and, as a result, the signal gets distorted.
[Diagram: two computers connected directly through their RS-232 ports over a channel]
Note: The serial ports of two computers can be connected directly using a copper cable. However, due to signal attenuation, the distance cannot be more than 100 meters.
Two computers can communicate with each other through the telephone network, using a modem at each end. The modem converts the digital signals generated by the computer into analog form for transmission over the medium at the transmitting end and the reverse at the receiving end.
[Block diagram: Receiver → Decoding of Data → Sink. Related functions: multiplexing, multiple access, error detection and correction, source coding, signaling]
Types of Communication
Transmission Impairments
Attenuation: the amplitude of the signal wave decreases as the signal travels through the medium.
Delay distortion: occurs because different frequency components arrive at different times in guided media such as copper wire or coaxial cable.
Transmission Impairments
Thermal noise: occurs due to the thermal agitation of electrons in a conductor (white noise); N = kTB.
Intermodulation noise: when two signals of different frequencies are sent through the medium, the nonlinearity of the transmitters produces frequency components such as f1 + f2 and f1 - f2, which are unwanted components and need to be filtered out.
Transmission Impairments
Crosstalk: unwanted coupling between signal paths.
Impulse noise: occurs due to external electromagnetic disturbances such as lightning; it also causes bursts of errors.
In analog communication, the signal, whose amplitude varies continuously, is transmitted over the medium. Reproducing the analog signal at the receiving end is very difficult due to transmission impairments.
Digital communication offers ease of combining various types of signals (voice, video, etc.) and ease of developing secure communication systems.
Information theory
Lecture 2
Claude Shannon
Laid the foundation of information theory in 1948. His paper "A Mathematical Theory of Communication," published in the Bell System Technical Journal, is the basis for the entire telecommunications development that has taken place during the last five decades. A good understanding of the concepts proposed by Shannon is a must for every budding telecommunication professional.
The requirement of a communication system is to transmit the information from the source to the sink without errors, in spite of the fact that noise is always introduced in the communication medium.
[Block diagram: Information Source → Transmitter → Channel (Noise Source) → Receiver → Information Sink]

Symbol  Sent  Received
A       1     1
B       0     0
B       0     0
A       1     1
A       1     1
A       1     1
B       0     1   (error)
A       1     1
B       0     0
A       1     1
In a digital communication system, due to the effect of noise, errors are introduced. As a result, 1 may become a 0 and 0 may become a 1.
[Block diagram: Information Source → Source Encoder → Channel Encoder → Modulator → channel → Demodulator → Channel Decoder → Source Decoder → Information Sink]
Source encoder: converts the signal produced by the information source into a data stream.
Channel encoder: adds bits to the source-encoded data.
Modulation: the process of transforming the signal for transmission over the channel.
Demodulator: performs the inverse operation of the modulator.
The compression utilities we use to compress data files use lossless encoding techniques. JPEG image compression is a lossy technique because some information is lost.
Channel Encoding
Redundancy is introduced so that at the
receiving end, the redundant bits can be used for error detection or error correction
What is information?
Information Measure
H = log2 N bits per symbol
Where: N = number of symbols
Note: This applies to symbols with equal probability.
Example: Assume that a source produces the English letters (from A to Z, including space), and all these symbols will be produced with equal probability. Determine the entropy.
If a source produces the ith symbol with a probability of P(i), the entropy is
H = -Σ P(i) log2 P(i)
Where: H = entropy in bits per symbol
Example: Consider a source that produces four symbols with probabilities of 1/2, 1/4, 1/8, and 1/8, and all symbols are independent of each other. Determine the entropy.
Channel Capacity
The limit at which data can be transmitted through a medium:
C = W log2(1 + S/N)
Where: C = channel capacity (bps); W = bandwidth of the channel (Hz); S/N = signal-to-noise ratio (SNR) (unitless)
Channel Capacity
Example:
Consider a voice-grade line for which W = 3100 Hz and SNR = 30 dB (i.e., the signal-to-noise ratio is 1000:1). Determine the channel capacity.
Shannon's Theorems
In a digital communication system, the aim of the designer is to convert any information into a digital signal, pass it through the transmission medium and, at the receiving end, reproduce the digital signal exactly.
Shannon's Theorems
Requirements:
Source coding theorem: a code exists that can uniquely describe an information source, with an average code-word length that can be made as close to the information content (entropy) as desired.
NOTE: Assigning short code words to high-probability symbols and long code words to low-probability symbols results in efficient coding.
Channel coding theorem: the error rate of data transmitted over a bandwidth-limited noisy channel can be reduced to an arbitrarily small amount if the information rate is lower than the channel capacity.
Example: Consider the example of a source producing the symbols A and B. A is coded as 1 and B as 0.
A 1
B 0
B 0
A 1
B 0
NOTE
Source coding is used mainly to reduce
the redundancy in the signal, whereas channel coding is used to introduce redundancy to overcome the effect of noise.
Review of Probability
Lesson 3
Probability Theory
Rooted in situations that involve performing an experiment with an outcome that is subject to chance.
Random experiment: the outcome can differ because of the influence of an underlying random phenomenon or chance mechanism if the experiment is repeated under identical conditions. On any trial of the experiment, the outcome is unpredictable. For a large number of trials, the outcomes exhibit statistical regularity; that is, a definite average pattern of outcomes is observed if the experiment is repeated a large number of times.
Axioms of Probability
Sample point, sk
Sample space, S: totality of sample points corresponding to the aggregate of all possible outcomes of the experiment (sure event)
Null set: null or impossible event
Elementary event: single sample point
Consider an experiment, such as rolling a die, with a number of possible outcomes. The sample space S of the experiment
S = { 1, 2, 3, 4, 5, 6 }
Event: a subset of S; may consist of any number of sample points.
A={2,4}
Property 1: P(A') = 1 - P(A), where A' denotes the nonoccurrence of A. The use of this property helps us investigate the nonoccurrence of an event.
Property 2: If M mutually exclusive events A1, A2, ..., AM have the exhaustive property A1 + A2 + ... + AM = S, then P(A1) + P(A2) + ... + P(AM) = 1.
Property 3: When events A and B are not mutually exclusive, the probability of the union event "A or B" equals P(A + B) = P(A) + P(B) - P(AB), where P(AB) is the joint probability.
Example:
1. Consider an experiment in which two
coins are thrown. What is the probability of getting one head and one tail?
Principles of Probability
Probability of an event: Pr{A}
Suppose that we now consider two different events, A and B, with probabilities Pr{A} and Pr{B}.
Disjoint events: A and B are disjoint if they cannot possibly occur together. For disjoint events, Pr{A + B} = Pr{A} + Pr{B}.
This expresses the additivity concept: if two events are disjoint, the probability of their union is the sum of the individual probabilities.
Principles of Probability
Example:
Consider the experiment of flipping a coin twice. List the outcomes, events, and their respective probabilities.
Answers:
Outcomes: HH, HT, TH, and TT Events: {HH}, {HT}, {TH}, {TT} {HH, HT}, {HH, TH}, {HH, TT}, {HT, TH}, {HT, TT}, {TH, TT} {HH, HT, TH}, {HH, HT, TT}, {HH, TH, TT}, {HT, TH, TT} {HH, HT, TH, TT}, and {0}
Probabilities: Pr{HH} = Pr{HT} = Pr{TH} = Pr{TT} = 1/4; Pr{HH, HT} = Pr{HH, TH} = Pr{HH, TT} = Pr{HT, TH} = Pr{HT, TT} = Pr{TH, TT} = 1/2; Pr{HH, HT, TH} = Pr{HH, HT, TT} = Pr{HH, TH, TT} = Pr{HT, TH, TT} = 3/4; Pr{HH, HT, TH, TT} = 1; Pr{0} = 0
Principles of Probability
Random variables: the mapping (function) that assigns a real number to each outcome in the sample space.
Principles of Probability
Example:
A coin is flipped twice. Four different events are defined. A is the event of getting a head on the first flip. B is the event of getting a tail on the second flip. C is the event of a match between the two flips. D is the elementary event of a head on both flips. Find Pr{A}, Pr{B}, Pr{C}, Pr{D}, Pr{A|B}, and Pr{C|D}. Are A and B independent? Are C and D independent?
Principles of Probability
Answers:
The events are defined by the following combination of outcomes. A = HH, HT B = HT, TT C = HH, TT D = HH Therefore, Pr{A} = Pr{B} = Pr{C} = 1/2 and Pr{D} = 1/4 Pr{A|B} = 0.5 and Pr{C|D} = 1 Since Pr{A|B} = Pr{A} , the event of a head on the first flip is independent of that of a tail on the second flip.
Since Pr{C|D} ≠ Pr{C}, the event of a match and that of two heads are not independent.
Coding
Lecture 4
Example: M1 = 1, M2 = 10, M3 = 01, M4 = 101. The received sequence Rx = 101 could be decoded as M4, as M2 M1, or as M1 M3, so this code is not uniquely decipherable.
Unique Decipherability
No code word forms the starting sequence (known as prefix) of any other code word.
M1 = 1, M2 = 01,
M3 = 001, M4 = 0001
Note: The prefix restriction property is sufficient but not necessary for unique decipherability.
Example 3.1
Which of the following codes are uniquely decipherable? For those that are uniquely decipherable, determine whether they are instantaneous. (a) 0, 01, 001, 0011, 101 (b) 110, 111, 101, 01 (c) 0, 01, 011, 0110111
Entropy Coding
A fundamental theorem exists in noiseless coding theory. The theorem states that: For binary-coding alphabets, the average code word length is greater than, or equal to, the entropy.
Example 3.2
Find the minimum average length of a code with four messages with probabilities 1/8, 1/8, 1/4, and 1/2, respectively.
Variable-length Codes
One way to derive variable-length codes is to start with constant-length codes and expand subgroups.
Ex. 0, 1 (expanding the word 1 repeatedly to get five code words): 0, 100, 101, 110, 111
Ex. 00, 01, 10, 11 (expanding any one of these four words into two words, say 01): 00, 010, 011, 10, 11
1. Huffman codes: a technique for finding the best possible variable-length code for a given set of messages. 2. Shannon-Fano codes: similar to the Huffman codes, a major difference being that the operations are performed in a forward direction.
Huffman Codes
Suppose that we wish to code five words, s1,
s2, s3, s4, and s5 with probabilities 1/16, 1/8, 1/4, 1/16, and 1/2, respectively.
Procedure: 1. Arrange the messages in order of decreasing probability. 2. Combine the bottom two entries to form a new entry with a probability that is the sum of the original probabilities. 3. Continue combining in pairs until only two entries remain. 4. Assign code words by starting at the right with the most significant bit; move to the left and assign a bit each time a split occurred.
Example 3.3
Find the Huffman code for the following seven messages with probabilities as indicated:
S1 0.05, S2 0.15, S3 0.2, S4 0.05, S5 0.15, S6 0.3, S7 0.1
Shannon-Fano Codes
1. Suppose that we wish to code five words, s1, s2, s3, s4, and s5 with probabilities 1/16, 1/8, 1/4, 1/16, and 1/2, respectively.
2. Find the Shannon-Fano code for the following seven messages with probabilities as indicated:
S1 0.05, S2 0.15, S3 0.2, S4 0.05, S5 0.15, S6 0.3, S7 0.1
Digital Transmission
Lecture 6
Information Capacity
It is a measure of how much information can be propagated through a communications system and is a function of bandwidth and transmission time.
Information theory: the study of the efficient use of bandwidth to propagate information through electronic communications systems.
Information Capacity
In 1928, R. Hartley of Bell Telephone Laboratories developed a useful relationship: the information capacity is proportional to the product of bandwidth and transmission time (Hartley's law).
Information Capacity
Shannon limit for information capacity:
I = B log2(1 + S/N)  or  I = 3.32 B log10(1 + S/N)
Where: I = information capacity (bps); B = bandwidth (hertz); S/N = signal-to-noise power ratio (unitless)
M-ary Encoding
M-ary is a term derived from the word binary. M represents a digit that corresponds to the number of conditions, levels, or combinations possible for a given number of binary variables. It is advantageous to encode at a level higher than binary (beyond binary, or higher-than-binary encoding) where more than two conditions are possible.
M-ary Encoding
N = log2 M
Where: N = number of bits necessary; M = number of conditions, levels, or combinations possible with N bits.
Rearranging the above expression: M = 2^N
Example
Calculate the number of levels if the number of bits per sample is: (a) 8 (as in telephony); (b) 16 (as in compact disc audio systems).
Information Capacity
Shannon-Hartley Theorem:
C = 2B log2 M
Where: C = information capacity in bits per second; B = the channel bandwidth in hertz; M = number of levels transmitted
Example
A telephone line has a bandwidth of 3.2 kHz and a signal-to-noise ratio of 35 dB. A signal is transmitted down this line using a four-level code. What is the maximum theoretical data rate?
Advantages of digital transmission: digital signals lend themselves to regeneration rather than signal amplification; it is easier to compare the error performance of one digital system to another; and transmission errors can be detected and corrected more easily and more accurately.
Pulse Modulation
Consists of sampling analog information signals, converting those samples into discrete pulses, and transporting the pulses from a source to a destination over a physical medium.
Sampling
In 1928, Harry Nyquist showed
mathematically that it is possible to reconstruct a band-limited analog signal from periodic samples, as long as the sampling rate is at least twice the frequency of the highest frequency component of the signal.
Sampling
Natural Sampling
Flat-topped Sampling
Aliasing (foldover distortion): distortion created by using too low a sampling rate when coding an analog signal. The alias appears at fa = fs - fm.
Where: fa = the frequency of the aliasing distortion; fs = the sampling rate; fm = the modulating (baseband) frequency
Example
An attempt is made to transmit a baseband
frequency of 30 kHz using a digital audio system with a sampling rate of 44.1 kHz. What audible frequency would result?
14.1 kHz
Pulse Modulation
Methods of Pulse Modulation Pulse Width Modulation (PWM) Pulse Position Modulation (PPM) Pulse Amplitude Modulation (PAM) Pulse Code Modulation (PCM)
Dynamic range (DR): the ratio of the strongest possible signal that can be transmitted to the weakest discernible signal.
DR = 1.76 + 6.02M dB
D = fs × M
Where: DR = dynamic range in dB; M = number of bits per sample; D = data rate in bits per second; fs = sample rate in samples per second
Example
Find the maximum dynamic range for a
linear PCM system using 16-bit quantizing. Calculate the minimum data rate needed to transmit audio with a sampling rate of 40 kHz and 14 bits per sample.
Alternative Formula DR
DR = Vmax / Vmin
Where: DR = dynamic range (unitless ratio); Vmin = the quantum value (resolution); Vmax = the maximum voltage magnitude that can be discerned by the DAC in the receiver
Resolution: the magnitude of a quantum (the voltage of the least significant bit).
Quantization error: equal to one-half a quantum (resolution / 2).
The number of bits must satisfy 2^N - 1 ≥ DR.
Where: N = number of bits in a PCM code, excluding the sign bit; DR = absolute value of dynamic range
Example
For a PCM system with the following
parameters, determine (a) minimum sample rate, (b) minimum number of bits used in the PCM code, (c) resolution, and (d) quantization error. Maximum analog input frequency = 4 kHz Maximum decoded voltage at the receiver = 2.55 V Minimum dynamic range = 46 dB
Companding
Combination of compression at the transmitter and expansion at the receiver of a communications system The transmission bandwidth varies directly with the bit rate. In order to keep the bit rate
and thus required bandwidth low, companding is used. Involves using a compressor amplifier at the input, with greater gain for low-level than for high-level signals. The compressor reduces the quantizing error for small signals.
vo = Vo ln(1 + μvi/Vi) / ln(1 + μ)
Where: vo = output voltage from the compressor; Vo = maximum output voltage; Vi = maximum input voltage; vi = actual input voltage; μ = a parameter that defines the amount of compression (contemporary systems use μ = 255)
A-law
Companding characteristic applied to the systems used in Europe.
Example
A signal at the input to a μ-law compressor is positive, with its voltage one-half the maximum input voltage. Find the output voltage as a fraction of the maximum output voltage.
Ans. 0.876 Vo
The process of converting an analog signal into a PCM signal is called coding, and the inverse operation, converting back from digital to analog, is known as decoding. Both procedures are often accomplished in a single IC device called a codec.
Example
Code a positive-going signal with
Example
1. Suppose an input signal to a μ-law compressor has a positive voltage and an amplitude 25% of the maximum possible. Calculate the output voltage as a percentage of the maximum output. 2. How would a signal with 50% of the maximum input voltage be coded in 8-bit PCM, using digital compression? 3. Convert a sample coded (using μ-law compression) as 11001100 to a voltage with the maximum sample voltage normalized at 1 V.
4. Convert the 12-bit PCM sample 110011001100 to an 8-bit compressed sample. 5. Suppose a composite video signal with a baseband frequency range from dc to 4 MHz is transmitted by linear PCM, using eight bits per sample and a sampling rate of 10 MHz.
How many quantization levels are there? Calculate the bit rate, ignoring overhead. What would be the maximum signal-to-noise
ratio, in decibels? What type of noise determines the answer to part (c)?
Example
The compact disc system of digital audio uses two channels with TDM. Each channel is sampled at 44.1 kHz and coded using linear PCM with sixteen bits per sample. Find:
the maximum audio frequency that can be recorded (assuming ideal filters)
the maximum dynamic range in decibels
the bit rate, ignoring error correction and framing bits
the number of quantizing levels
Digital Modulation/demodulation
Introduction
ASK
FSK
PSK
QAM
Baud: the rate of change of a signal on the transmission medium after encoding and modulation have occurred. It is the unit of transmission rate, modulation rate, or symbol rate (symbols per second), and is the reciprocal of the time of one output signaling element:
baud = 1 / ts
Where: baud = symbol rate (symbols per second); ts = time of one signaling element (seconds)
Note: Bit rate and baud rate will be equal only if timing is uniform throughout and all pulses are used to send information (i.e. no extra pulses are used for other purposes such as forward error correction.)
Nyquist showed that binary digital signals can be propagated through an ideal noiseless transmission medium at a rate equal to two times the bandwidth of the medium. The minimum theoretical bandwidth necessary to propagate a signal is called the minimum Nyquist bandwidth or minimum Nyquist frequency.
fb = 2B log2 M
Where: fb = channel capacity (bps); B = minimum Nyquist bandwidth (hertz); M = number of discrete signal or voltage levels
For a binary system (N = 1), the baud and the ideal minimum Nyquist bandwidth have the same value and are equal to the bit rate fb.
Example 1:
A modulator transmits symbols, each of which has sixty-four different possible states, 10,000 times per second. Calculate the baud rate and bit rate.
Given: M = 64; 10,000 symbols per second
Required: baud rate and bit rate
Solution: Baud = 10,000 symbols per second = 10 kbaud
fb = baud × N = 10,000 × log2 64 = 60 kbps
Simplest digital modulation technique: a binary information signal directly modulates the amplitude of an analog carrier. Sometimes called digital amplitude modulation (DAM).
Amplitude-Shift Keying
vask(t) = [1 + vm(t)] (A/2) cos(ωc t)
Where: vask(t) = amplitude-shift keying wave; vm(t) = digital information (modulating) signal (volts); A/2 = unmodulated carrier amplitude (volts); ωc = analog carrier radian frequency (radians per second, 2πfc)
Amplitude-Shift Keying
The modulated wave is either A cos(ωc t) or 0; the carrier is either on or off, which is why ASK is sometimes referred to as on-off keying (OOK).
Amplitude-Shift Keying
[Waveform: binary input and the corresponding DAM (ASK) output]
Amplitude-Shift Keying
The rate of change of the ASK waveform (baud) is the same as the rate of change of the binary input (bps); for ASK, N = 1 and the bit rate equals the baud.
Example 2
Determine the baud and minimum bandwidth necessary to pass a 10 kbps binary signal using amplitude-shift keying.
Given: fb = 10,000 bps; N = 1 (for ASK)
Required: baud and B
Solution: B = fb / N = 10,000 / 1 = 10,000 Hz
Baud = fb / N = 10,000 / 1 = 10,000 baud
Frequency-Shift Keying
Low-performance type of digital
modulation A form of constant-amplitude angle modulation similar to standard frequency modulation (FM) except the modulating signal is a binary signal that varies between two discrete voltage levels rather than a continuously changing analog waveform Sometimes called binary FSK (BFSK)
Frequency-Shift Keying
vfsk(t) = Vc cos{2π[fc + vm(t) Δf]t}
Where: vfsk(t) = binary FSK waveform; Vc = peak analog carrier amplitude (volts); fc = analog carrier center frequency (hertz); Δf = peak change (shift) in the analog carrier frequency (hertz); vm(t) = binary input (modulating) signal (volts)
Frequency-Shift Keying
Frequency-shift keying (FSK) is the oldest and
simplest form of modulation used in modems. In FSK, two sine-wave frequencies are used to represent binary 0s and 1s. binary 0, usually called a space binary 1, referred to as a mark
Frequency-Shift Keying
For vm(t) = +1 V: vfsk(t) = Vc cos[2π(fc + Δf)t]
For vm(t) = -1 V: vfsk(t) = Vc cos[2π(fc - Δf)t]
Frequency-Shift Keying
[Waveform: FSK output for logic 1 (mark frequency) and logic 0 (space frequency)]
Frequency-Shift Keying
Frequency deviation is defined as the
difference between either the mark or space frequency and the center frequency, or half the difference between the mark and space frequencies.
Δf = |fm - fs| / 2
Where: Δf = frequency deviation (hertz); |fm - fs| = absolute difference between the mark and space frequencies (hertz)
Frequency-Shift Keying
FSK Bit Rate, Baud, and Bandwidth
B = 2(Δf + fb)
If N = 1, then baud = fb.
Where: B = minimum Nyquist bandwidth (hertz); Δf = frequency deviation (hertz); fb = input bit rate (bps)
Frequency-Shift Keying
Example: Determine (a) the peak frequency deviation, (b) minimum bandwidth, and (c)
baud for a binary FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an input bit rate of 2 kbps.
Solution:
(a) Δf = |49 kHz - 51 kHz| / 2 = 1 kHz
(b) B = 2(Δf + fb) = 2(1000 + 2000) = 6 kHz
(c) Baud = fb / N = 2000 / 1 = 2000 baud
Frequency-Shift Keying
Gaussian Minimum-Shift Keying
Special case of FSK used in the GSM cellular radio
and PCS systems. In a minimum-shift system, the mark and space frequencies are separated by half the bit rate:
fm - fs = fb / 2
Where: fm = frequency transmitted for mark (binary 1); fs = frequency transmitted for space (binary 0); fb = bit rate
Frequency-Shift Keying
If we use the conventional FM terminology, we see that GMSK has a deviation, each way from the center (carrier) frequency, of fb/4.
Frequency-Shift Keying
Example 4: The GSM cellular radio system
uses GMSK in a 200-kHz channel, with a channel data rate of 270.833 kb/s. Calculate: (a) the frequency shift between mark and space (b) the transmitted frequencies if the carrier (center) frequency is exactly 880 MHz (c) the bandwidth efficiency of the scheme in b/s/Hz
Frequency-Shift Keying
Solution:
(a) fm - fs = 0.5 fb = 0.5 × 270.833 kb/s = 135.4165 kHz
(b) fmax = fc + 0.25 fb = 880 MHz + 0.25 × 270.833 kHz = 880.0677 MHz
fmin = fc - 0.25 fb = 880 MHz - 0.25 × 270.833 kHz = 879.93229 MHz
(c) The GSM system has a bandwidth efficiency of 270.833 / 200 = 1.35 b/s/Hz, comfortably under the theoretical maximum of 2 b/s/Hz for a two-level code.
Phase-Shift Keying
Used when somewhat higher data rates are required in a band-limited channel than can be achieved with FSK. Another form of angle-modulated, constant-amplitude digital modulation. An M-ary digital modulation scheme similar to conventional phase modulation, except that with PSK the input is a binary digital signal and there are a limited number of output phases possible.
Phase-Shift Keying
The input binary information is encoded into
groups of bits before modulating the carrier. The number of bits in a group ranges from 1 to 12 or more. The number of output phases is defined by M (as described previously) and determined by the number of bits in the group (n).
Phase-Shift Keying
Types: binary PSK (BPSK), quaternary PSK (QPSK), offset QPSK, 8-PSK, 16-PSK
Quaternary PSK (QPSK or DQPSK): each symbol represents two bits, and the bit rate is twice the baud rate (a dibit system). Such a system can carry twice as much data in the same bandwidth as a single-bit system like FSK, provided the SNR is high enough.
Pi/4 DQPSK
[Constellation diagram for π/4 DQPSK: each dibit symbol (00, 01, 10, 11) produces a distinct phase shift]
Error Control
Lecture 8
Baudot Code
One of the earliest, and now essentially obsolete, paper-tape codes used in Teletype machines. Assigns a 5-bit binary number to each letter of the alphabet. A shift instruction is provided to circumvent the shortcomings of such a primitive code (only 26 capital letters plus space, line feed, carriage return, and the digits).
ASCII Code
American Standard Code for Information Interchange. Has become the standard for digital communication of individual alphabet symbols. Also used for very short-range communications, such as from the keyboard to the processor of a computer. Consists of code words of 7-bit length, thus providing 128 dictionary words. An eighth bit is often added as a parity-check bit for error detection.
Selectric Code
One of many specialized codes that have
been widely used in the past The Selectric typewriter was the standard of the industry before the days of electronic typewriters.
Uses a 7-bit code to control the position of the typing ball. Although this permits 128 distinct code symbols, only 88 of these are used.
Example
Write the ASCII codes for the characters
below.
B b
Asynchronous Transmission
Synchronizing the transmitter and receiver
clocks at the start of each character Simpler but less efficient than synchronous communication, in which the transmitter and receiver clocks are continuously locked together
TRANSMISSION MODES
The transmission of binary data across a link can be accomplished in either parallel or serial mode. In parallel mode, multiple bits are sent with each clock tick. In serial mode, 1 bit is sent with each clock tick. While there is only one way to send parallel data, there are three subclasses of serial transmission: asynchronous, synchronous, and isochronous.
Parallel transmission
Serial transmission
Note
In asynchronous transmission, we send 1 start bit (0) at the beginning and 1 or more stop bits (1s) at the end of each byte. There may be a gap between each byte.
Example
For the following sequence of bits, identify
the ASCII-encoded character, the start and stop bits, and the parity bits (assume even parity and two stop bits).
11111101000001011110001000
AD
Note
Asynchronous here means asynchronous at the byte level, but the bits are still synchronized; their durations are the same.
Asynchronous transmission
Note
In synchronous transmission, we send bits one after another without start or stop bits or gaps. It is the responsibility of the receiver to group the bits.
Example
For the following string of ASCII-encoded characters, identify the characters:
01001111010101000001011011
OT
Synchronous transmission
Parallel transmission is fast because all the bits of the data word are transferred simultaneously. However, parallel data transmission is impractical for long-distance communication because of cost and signal attenuation.
In serial transmission, data transfers are made serially; each bit of a word is transmitted one after another. The least significant bit (LSB) is transmitted first, and the most significant bit (MSB) last. Each bit is transmitted for a fixed interval of time t.
Serial data can be transmitted reliably over longer distances than parallel data. Serial buses are now replacing parallel buses in computers, storage systems, and telecommunication equipment where very high speeds are required. Serial-to-parallel and parallel-to-serial data conversion circuits are also referred to as serializer-deserializers (serdes).
The Channel
Introduction
Channel: that which separates the transmitter from the receiver; the signal moves through it from transmitter to receiver. The channel can add noise waveforms to the original transmitted signal.
Memoryless channel: the output sequence element depends only upon the corresponding input sequence element and upon the channel characteristics.
Binary channel: the input and output can each take on either of two values, 0 or 1. For a particular input, sout can equal either 0 or 1.
For an error-free channel, one would expect [T] to be the identity matrix. The sum of the entries of any column of the transition matrix must be unity since, given the value of the input, the output must take one of the two values.
(a) Find the probability of a single transmitted symbol being in error, assuming that all four input symbols are equally probable at any time. (b) Find the probability of a correct symbol transmission. (c) If the symbols are denoted as A, B, C, and D, find the probability that the transmitted sequence BADCAB will be received as DADDAB.
Pe|1 sent = P01 + P21 + P31 = 1/2 + 1/6 + 1/6 = 5/6
Pe|2 sent = P02 + P12 + P32 = 1/6 + 1/2 + 1/6 = 5/6
Pe|3 sent = P03 + P13 + P23 = 1/6 + 1/6 + 1/3 = 2/3
Another way of representing the transition probabilities is by use of the transition diagram. The summation of probabilities leaving any node must be unity.
[Transition diagram of the binary symmetric channel: P00 = P11 = 1 - p and P01 = P10 = p]
Binary symmetric channel (BSC): a special case of the binary memoryless channel, one in which the two conditional error probabilities are equal.
Shorthand notation
A transmitted 1 is received correctly provided that no errors occur in either hop. If an error occurs in each of the two hops, the 1 will also be correctly received, since the two errors cancel.
Probability of error (two hops): Pe = 2p(1 - p)
For small p, the overall probability of error increases approximately linearly with the number of hops. Thus, for n binary symmetric channels in tandem, the overall probability of error is approximately n times the bit error rate for a single BSC.
The (Hamming) distance between two binary code words is defined as the number of bit positions in which the two words differ.
Example: The distance between 000 and 111 is 3.
Example: The following code is such that the distance between any two words is at least 2:
0000, 0011, 0101, 0110, 1010, 1100, 1111
[Diagram illustrating distances among the code words 000, 001, 100, 101]
Example:
Find the minimum distance for the code that includes the word 1011100. How many bit errors can be detected? How many bit errors can be corrected?
Code Length
Algebraic Codes
One simple form of this is known as the single-parity-bit check code: a single parity bit is appended to each message word.
Messages: 000, 001, 010, 011, 100, 101, 110, 111
Error Detection
Redundancy Checking
VRC (character parity) LRC (message parity) Checksum CRC
Consider code words that add n parity bits to the m message bits to end up with code words of length m + n bits. With ai = original message bits and ci = parity check bits:
Code word = a1 a2 a3 . . . am c1 c2 c3 . . . cn
Note: out of 2^(m+n) possible words, only 2^m are used as code words.
[H] · (code word) = 0
Syndrome
s = [H] · (received word); a nonzero syndrome s indicates the presence of errors.
Cyclic Codes (CRC)
Determine the BCS for the following data
Example: Determine the VRCs and LRC for the following ASCII-encoded message: THE CAT. Use odd parity for the VRCs and even parity for the LRC.
Solution
Char  HEX  B6 B5 B4 B3 B2 B1 B0  VRC
T     54   1  0  1  0  1  0  0   0
H     48   1  0  0  1  0  0  0   1
E     45   1  0  0  0  1  0  1   0
SP    20   0  1  0  0  0  0  0   0
C     43   1  0  0  0  0  1  1   0
A     41   1  0  0  0  0  0  1   1
T     54   1  0  1  0  1  0  0   0
LRC   2F   0  1  0  1  1  1  1   0
Checksum
Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words, including the checksum, are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.
Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
Error Correction
Retransmission
ARQ
FEC
Hamming Code
Example
For a 12-bit data string of 101100010010,
determine the number of Hamming bits required, arbitrarily place the Hamming bits into the data string, determine the logic condition of each Hamming bit, assume an arbitrary single-bit transmission error, and prove that the Hamming code will successfully detect the error.
Ans. Five Hamming bits are required, placed at bit positions 4, 8, 9, 13, and 17.
Example
Determine the Hamming bits for the ASCII
character B. Insert the Hamming bits into every other bit location starting from the left. Determine the Hamming bits for the ASCII character C (use odd parity and two stop bits). Insert the Hamming bits into every other location starting from the right.