Notes
R-20 III-I IT
Computer Networks
Unit-2 Data Link Layer
1. DATA LINK LAYER DESIGN ISSUES: The data link layer uses the physical layer to send and
receive bits over communication channels. Its functions are:
Providing a well-defined interface to the network layer.
Dealing with transmission errors.
Regulating data flow so that slow receivers are not flooded by fast senders.
2. Here, the DLL takes the packets it receives from the network layer and encapsulates them into
frames for transmission. Each frame contains a frame header, a payload field for holding the
packet, and a frame trailer.
3. Framing: The DLL sits between the physical and network layers. The physical layer receives raw,
noisy data and may add some redundancy to its signals to reduce the error rate. But the data
handed up to the DLL is still not noise-free, so the DLL has to detect and correct the errors.
The DLL breaks up the bit stream into discrete frames, computes a short token called a
checksum for each frame, and includes the checksum in the frame when it is transmitted.
When a frame arrives at the destination, the checksum is recomputed. If it is different from the
existing checksum in the frame, the DLL knows that an error has occurred and takes steps to deal
with it.
4. The methods that exist for breaking up the bit stream into frames are:
Byte Count
Flag bytes with byte stuffing
Flag bits with Bit stuffing
Physical layer coding violations
5. Byte Count: This uses a field in the header to specify the number of bytes in the frame. When the
DLL at the destination sees the byte count, it knows how many bytes follow and hence where the
frame ends.
This technique is shown in figure below for four small example frames of sizes 5, 5, 8, and 8 bytes,
respectively.
The problem with this method is that if the byte count is corrupted by even a single bit flip, the
receiver cannot tell where the error occurred, and the sender and receiver fall out of synchronization.
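The scheme and its fragility can be sketched in a few lines of Python (the function names here are illustrative, not any standard API):

```python
def frame_with_count(packets):
    """Byte-count framing: each frame is [count byte][payload],
    where the count includes the count byte itself."""
    stream = bytearray()
    for p in packets:
        stream.append(len(p) + 1)   # count covers itself plus the payload
        stream += p
    return bytes(stream)

def deframe(stream):
    """Recover frames by reading each count and skipping ahead."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count                  # a corrupted count shifts ALL later frames
    return frames

stream = frame_with_count([b"1234", b"5678"])
print(deframe(stream))              # [b'1234', b'5678'] — recovered intact

# Flip the first count byte: every subsequent frame boundary is now wrong.
bad = bytes([7]) + stream[1:]
print(deframe(bad))                 # garbage frames from this point on
```

A single flipped count byte desynchronizes everything after it, which is exactly why this method is rarely used on its own.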
6. Flag Bytes with Byte Stuffing: This framing method solves the problem of resynchronization after
an error by having each frame start and end with special bytes.
Often the same byte, called a flag byte, is used as both the starting and ending delimiter. This
byte is shown in as FLAG in the figure (a) below.
Two consecutive flag bytes indicate the end of one frame and the start of the next. Thus, if the
receiver ever loses synchronization, it can search for two flag bytes to find the end of the current
frame and the start of the next frame.
There is still a problem to be solved. A flag byte can occur in the middle of the data, for example
when photographs or songs are being transmitted. This disturbs the framing procedure.
One way to solve this problem is to have the sender’s DLL insert a special escape byte (ESC) just
before each ‘‘accidental/unexpected’’ flag byte in the data.
Thus, a flag byte that indicates the end of a frame can be distinguished from a flag byte in the data
by the absence or presence of an escape byte (ESC) before it.
The DLL on the receiving end removes the escape bytes before forwarding the data to the network
layer. This technique is called byte stuffing.
If an escape byte (ESC) occurs in the middle of the data, it too, is stuffed with an escape byte. At
the receiver, the first escape byte is removed, leaving the data byte that follows it intact.
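Byte stuffing can be sketched as follows (the FLAG and ESC values chosen here follow the PPP convention of 0x7E and 0x7D; any reserved byte values work the same way):

```python
FLAG, ESC = 0x7E, 0x7D   # example delimiter values, as used by PPP

def byte_stuff(payload: bytes) -> bytes:
    """Sender side: escape every accidental FLAG or ESC in the payload,
    then wrap the result in flag bytes."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)       # mark the next byte as literal data
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Receiver side: strip the flags and remove the escape bytes."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                # skip the ESC, keep the byte that follows
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, ESC, 0x42])   # payload containing both reserved bytes
assert byte_unstuff(byte_stuff(data)) == data
```

Note that an ESC in the data is itself escaped, so the round trip is lossless no matter what bytes the payload contains.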
7. Bit Stuffing: This method overcomes a disadvantage of byte stuffing, namely that it is tied to
8-bit bytes. With bit stuffing, frames can contain an arbitrary number of bits.
Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in hexadecimal. This
pattern is a flag byte.
Whenever the sender’s data link layer encounters five consecutive 1s in the data, it automatically
stuffs a 0 bit into the outgoing bit stream.
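The stuffing rule can be sketched like this (bits represented as a list of 0/1 integers for clarity):

```python
def bit_stuff(bits):
    """After five consecutive 1s, insert a 0 so the data can never
    mimic the 01111110 flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)         # the stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Receiver side: drop the 0 that follows any run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == 1 else 0
        if run == 5:
            i += 1                # skip the stuffed 0
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]    # contains a flag-like run of six 1s
stuffed = bit_stuff(data)
print(stuffed)                     # [0, 1, 1, 1, 1, 1, 0, 1, 0]
assert bit_unstuff(stuffed) == data
```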
8. With both bit and byte stuffing, a side effect is that the length of a frame depends on the data
content.
For instance, if there are no flag bytes in the data, 100 bytes might be carried in a frame of roughly
100 bytes.
If, however, the data consists only of flag bytes, each flag byte will be escaped and the frame will
become roughly 200 bytes long. With bit stuffing, the increase would be roughly 12.5%, as at most
1 bit is added per byte.
9. The last method of framing is to use a shortcut from the physical layer. The encoding of bits as
signals often includes redundancy to help the receiver; some of the reserved signal patterns
(coding violations) can then be used to mark the start and end of frames.
10. Error Control: The usual way to ensure reliable delivery is to provide the sender with some
feedback about what is happening at the other end of the line. Typically, the protocol calls for the
receiver to send back special control frames bearing positive or negative acknowledgements
about the incoming frames.
If the sender receives a positive acknowledgement about a frame, it knows the frame has arrived
safely. On the other hand, a negative acknowledgement means that something has gone wrong
and the frame must be transmitted again.
11. In a worse case, a frame might vanish completely (e.g., in a noise burst). The receiver will not
react at all, since it has not received any data. And since the sender gets no acknowledgement, it
does not know how to proceed.
12. This possibility is dealt with by introducing timers into the data link layer. When the sender
transmits a frame, it generally also starts a timer. The timer is set to expire after an interval long
enough for the frame to reach the destination, process it and send back the acknowledgement.
If the frame or the acknowledgement is lost, the sender will time out and send the frame multiple
times; without sequence numbers in the frame headers, the receiver cannot tell a retransmission
from new data and may forward duplicates to the network layer.
13. Error Detection and Correction: Errors arise mostly on wireless links and aging copper wires.
Two strategies exist to deal with errors:
Error Correcting Codes: Include enough redundant information to enable the receiver to
deduce what the transmitted data must have been. Also known as (Forward Error
Correction). These are used in wireless connections.
Error Detecting Codes: Include only enough redundancy to allow the receiver to deduce
that an error has occurred (but not which error) and requests a retransmission. These are
used in reliable connections like fiber.
14. Note that both of these methods can’t deal with all possible errors since the redundant bits that
offer protection are as likely to be received in error as the data bits (which can compromise their
protection). They are all just bits to the channel.
This means that to avoid undetected errors the code must be strong enough to handle the
expected errors.
15. Error-correcting codes are also seen in the physical layer, particularly for noisy channels, and in
higher layers, particularly for real-time media and content distribution.
Error-detecting codes are commonly used in link, network, and transport layers.
NOTE: Another point to bear in mind is that error codes are applied mathematics.
17. All these ECC codes add redundancy to the information that is sent. A frame consists of m data
bits and r redundant/check bits.
In a block code, the r check bits are computed as a function of the m data bits with which they are
associated, [as though the m bits were looked up in a large table to find their corresponding r
check bits].
In a systematic code, the m data bits are sent directly, along with the check bits.
In a linear code, the r check bits are computed as a linear function of the m data bits. Exclusive
OR (XOR) or modulo 2 addition is a popular choice.
Encoding can be done with operations such as matrix multiplications or simple logic circuits.
18. Let the total length of a block be n, where n = m + r. This is also described as an (n, m) code. An
n-bit unit containing data and check bits is referred to as an n-bit codeword. The code rate, or
simply rate, is the fraction of the codeword that carries non-redundant information, i.e., m/n.
The rates used in practice differ widely. They might be 1/2 for a noisy channel, in which case half
of the received information is redundant, or close to 1 for a high-quality channel, with only a small
number of check bits added to a large message.
To understand how errors can be handled, it is necessary to understand what an error is. Given
any two codewords that may be transmitted or received—say, 10001001 and 10110001—it is
possible to determine how many corresponding bits differ.
To determine how many bits differ, just XOR the two codewords and count the number of 1 bits in
the result.
For example:
10001001
10110001
-------------
00111000
The number of bit positions in which two codewords differ is called the Hamming distance. In the
above example, Hamming distance = 3.
Its significance is that if two codewords are a Hamming distance d apart, it will require d single-bit
errors to convert one codeword into the other.
Given the algorithm for computing the check bits, it is possible to construct a complete list of the
legal codewords, and from this list to find the two codewords with the smallest Hamming
distance. This distance is the Hamming distance of the complete code.
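Both computations can be sketched directly. The four-codeword set below is only a small illustrative code, not one derived in these notes:

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """XOR the two codewords and count the 1 bits in the result."""
    return bin(int(a, 2) ^ int(b, 2)).count("1")

print(hamming_distance("10001001", "10110001"))   # 3, as in the worked example

# The distance of a whole code is the minimum over all pairs of legal codewords.
code = ["0000000000", "0000011111", "1111100000", "1111111111"]
print(min(hamming_distance(a, b) for a, b in combinations(code, 2)))  # 5
```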
19. In most data transmission applications, all 2^m possible data messages are legal. Note that though
2^n codewords are possible, not all of them are used. When there are r check bits, only the small
fraction 2^m/2^n = 1/2^r of the possible codewords is legal. It is this sparseness among the
codewords that lets the receiver detect, and sometimes correct, errors.
20. The error-detecting and error-correcting properties of a block code depend on its Hamming
distance.
To reliably detect d errors, you need a distance d + 1 code. When the receiver sees an illegal
codeword, it can tell that a transmission error has occurred.
Similarly, to correct d errors, we need a distance 2d + 1 code.
The assumption here is that a large number of simultaneous errors is less likely than a small number.
Consider, as an example, a code with only four valid codewords: 0000000000, 0000011111,
1111100000, and 1111111111. This code has a Hamming distance of 5, which means it can correct
double errors or detect quadruple errors.
If 0000000111 arrives and we expect only single- or double-bit errors, the receiver will know that
the original must have been 0000011111. At the same time, if a triple error changes 0000000000
into 0000000111, the error will not be corrected properly.
If we expect all of these errors, we can detect them. But from these examples, we can conclude
that all errors can’t be detected/corrected.
22. To decrease the time consumed for detecting/correcting the errors, we can utilize some shortcuts
as given below:
Imagine that we want to design a code with m message bits and r check bits that will allow all
single errors to be corrected. Each of the 2^m legal messages has n illegal codewords at a
distance of 1 from it.
These are formed by inverting each of the n bits in the n-bit codeword formed from it. Thus, each
of the 2^m legal messages requires n + 1 bit patterns dedicated to it. Since the total number of bit
patterns is 2^n, we must have (n + 1)2^m ≤ 2^n. Using n = m + r, this becomes:
(m + r + 1) 2^m ≤ 2^(m+r)
(m + r + 1) 2^m ≤ 2^m · 2^r
(m + r + 1) ≤ 2^r ----- (1)
Given m, this puts a lower limit on the number of check bits needed to correct single errors.
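Bound (1) translates directly into a search for the smallest r, which a quick sketch can confirm:

```python
def min_check_bits(m: int) -> int:
    """Smallest r with (m + r + 1) <= 2**r, the lower bound on check bits
    needed to correct all single errors in an m-bit message."""
    r = 0
    while m + r + 1 > 2 ** r:
        r += 1
    return r

print(min_check_bits(7))      # 4  -> matches the (11, 7) Hamming code
print(min_check_bits(1000))   # 10 check bits for a 1000-bit message
```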
23. This lower limit can be achieved using a method proposed by Hamming. In Hamming codes the
bits of the codeword are numbered consecutively, starting with bit 1 at the left end, bit 2 to its
immediate right, and so on.
The bits that are powers of 2 (1, 2, 4, 8, 16, etc.) are check bits. The rest (3, 5, 6, 7, 9, etc.) are
filled up with the m data bits. This pattern is shown for a (11, 7) Hamming code with 7 data bits
and 4 check bits in the figure below.
Each check bit forces the modulo 2 sum, or parity, of some collection of bits, including itself, to be
even (or odd).
A bit may be included in several check bit computations. To see which check bits the data bit in
position k contributes to, rewrite k as a sum of powers of 2.
In the example, the check bits are computed for even parity sums for a message that is the ASCII
letter ‘‘A.’’ (65)
This construction gives a code with a Hamming distance of 3, which means that it can correct
single errors (or detect double errors).
00100001001
10000000001
------------------
10100001000
Therefore, Hamming Distance = 3
24. When a codeword arrives, the receiver redoes the check bit computations, including the values of
the received check bits. We call these the check results.
If the check bits are correct, for even parity sums, each check result should be zero. In this
case the codeword is accepted as valid.
If the check results are not all zero, an error has been detected. The set of check results forms
the error syndrome, which is used to pinpoint and correct the error. In the previous figure, a
single-bit error occurred on the channel, so the check results are 0, 1, 0, and 1 for k = 8, 4, 2, and
1, respectively. This gives a syndrome of 0101, i.e., 4 + 1 = 5.
By the design of the scheme, this means that the fifth bit is in error. Flipping the incorrect bit (which
might be a check bit or a data bit) and discarding the check bits gives the correct message of an
ASCII ‘‘A.’’
Hamming distances are valuable for understanding block codes, and Hamming codes are used in
error-correcting memory.
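The whole construction can be sketched end to end. Positions are 1-indexed, check bits sit at the power-of-2 positions, and even parity is assumed, as in the notes; the function names are illustrative:

```python
def hamming_encode(data_bits):
    """Place 7 data bits at the non-power-of-2 positions of an 11-bit codeword
    and set the 4 check bits (positions 1, 2, 4, 8) for even parity."""
    n = 11
    code = [0] * (n + 1)                      # 1-indexed; index 0 unused
    data = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                   # not a power of 2 -> data bit
            code[pos] = next(data)
    for p in (1, 2, 4, 8):
        code[p] = sum(code[k] for k in range(1, n + 1)
                      if k & p and k != p) % 2
    return code[1:]

def hamming_correct(code_bits):
    """Recompute the parities; the syndrome is the position of a single error."""
    code = [0] + list(code_bits)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[k] for k in range(1, 12) if k & p) % 2:
            syndrome += p
    if syndrome:
        code[syndrome] ^= 1                   # flip the incorrect bit
    return code[1:], syndrome

cw = hamming_encode([1, 0, 0, 0, 0, 0, 1])    # ASCII 'A' = 1000001
print("".join(map(str, cw)))                  # 00100001001, as in the notes
garbled = cw[:]
garbled[4] ^= 1                               # single-bit error at position 5
fixed, pos = hamming_correct(garbled)
assert fixed == cw and pos == 5               # syndrome 0101 = 5
```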
25. Convolutional Code: Here, an encoder processes a sequence of input bits and generates a
sequence of output bits. There is no fixed message size or encoding boundary. The output
depends on the current and previous input bits; that is, the encoder has memory.
26. The number of previous bits on which the output depends is called the constraint length of
the code. Convolutional codes are specified in terms of their rate and constraint length.
27. Convolutional codes are used in deployed networks, like mobile phone systems, in satellite
communications, and in 802.11.
28. In the figure, each input bit on the LHS produces two output bits on the RHS that are XOR
sums of the input and internal state. Since it deals with bits and performs linear operations, it is
a binary, linear convolutional code.
29. Since 1 input bit produces 2 output bits, the code rate is 1/2. It is not systematic since none of the
output bits is simply the input bit.
30. The internal state is kept in six memory registers. Each time another bit is input, the values
in the registers are shifted to the right. For example, if 111 is input and the initial state is all
zeros, the internal state, written left to right, will become 100000, 110000, and 111000 after the
first, second, and third bits have been input.
31. The output bits will be 11, followed by 10, and then 01. It takes seven shifts to flush an input
completely so that it does not affect the output. The constraint length of this code is thus k = 7.
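A sketch of such an encoder is below. The tap patterns 0o133 and 0o171 are an assumption, since the figure is not reproduced here; they are the generators of the widely deployed NASA rate-1/2, k = 7 code, and they reproduce the sample outputs 11, 10, 01 and states 100000, 110000, 111000 given above:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with six memory registers
    (constraint length k = 7). Each input bit yields two output bits,
    each an XOR sum over taps into the current bit and the state."""
    state = 0                              # the six previous bits
    out = []
    for b in bits:
        window = (b << 6) | state          # current bit + internal state
        o1 = bin(window & 0o133).count("1") % 2   # first output bit
        o2 = bin(window & 0o171).count("1") % 2   # second output bit
        out.append((o1, o2))
        state = (window >> 1) & 0o77       # shift the registers right
    return out

print(conv_encode([1, 1, 1]))              # [(1, 1), (1, 0), (0, 1)]
```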
32. Reed-Solomon code: Like Hamming codes, Reed-Solomon codes are linear block codes, and
they are often systematic too. Unlike Hamming codes, which operate on individual bits,
Reed-Solomon codes operate on m-bit symbols.
Reed-Solomon codes are based on the fact that every polynomial of degree n is uniquely
determined by n + 1 points. For example, a line, having the form ax + b, is determined by two
points. Extra points on the same line are redundant, which is helpful for error correction.
Imagine that we have two data points that represent a line and we send those two data points plus
two check points chosen to lie on the same line. If one of the points is received in error, we can still
recover the data points by fitting a line to the received points. Three of the points will lie on the
line, and one point, the one in error, will not.
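The line-fitting idea can be illustrated with a toy sketch. Real Reed-Solomon codes work over finite fields, not floating point, so this captures only the geometric intuition:

```python
from itertools import combinations

def recover_line(points):
    """Four (x, y) points were sent on one line; at most one is corrupted.
    Fit a line through each pair and keep the one that 3+ points satisfy."""
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a = (y2 - y1) / (x2 - x1)          # slope of the candidate line
        b = y1 - a * x1                    # intercept
        on_line = [(x, y) for x, y in points if abs(a * x + b - y) < 1e-9]
        if len(on_line) >= 3:              # majority agrees: this is the line
            return a, b
    return None

# Line y = 2x + 1 sampled at x = 0..3; the point at x = 2 arrives corrupted.
received = [(0, 1), (1, 3), (2, 9), (3, 7)]
print(recover_line(received))    # (2.0, 1.0): the data survives the error
```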
33. Reed-Solomon codes are actually defined as polynomials that operate over finite fields. For m-bit
symbols, the codewords are 2^m − 1 symbols long. A popular choice is to make m = 8 so that
symbols are bytes. A codeword is then 255 bytes long. The (255, 223) code is widely used; it adds
32 redundant symbols to 223 data symbols.
Reed-Solomon codes are often used in combination with other codes such as a convolutional
code. Convolutional codes are effective at handling isolated bit errors, but they will fail if there are
too many errors in the received bit stream.
By adding a Reed-Solomon code within the convolutional code, the Reed-Solomon decoding
can clean up the error bursts. The overall code provides good protection against both single and
burst errors.
34. The final error-correcting code we will cover is the LDPC (Low-Density Parity Check) code. In an
LDPC code, each output bit is formed from only a fraction of the input bits. This leads to a matrix
representation of the code that has a low density of 1s, hence it is called LDPC. The received
codewords are decoded with an approximation algorithm that iteratively improves on a best fit of
the received data to a legal codeword. This corrects errors.
35. Error Detecting Codes: These are widely used on highly reliable links, such as fiber, where
errors are rare and retransmission is cheaper than the overhead of error correction. Three common
error-detecting codes, all of them linear, systematic block codes, are:
Parity
Checksum
Cyclic Redundancy Check
36. These can be more efficient than error-correcting codes. Consider the first error-detecting code, in
which a single parity bit is appended to the data. The parity bit is chosen so that the number of 1
bits in the codeword is even (or odd). Doing this is equivalent to computing the (even) parity bit as
XOR of the data bits.
For example, when 1011010 is sent in even parity, a bit is added to the end to make it 10110100.
With odd parity 1011010 becomes 10110101. A code with a single parity bit has a distance of 2,
since any single-bit error produces a codeword with the wrong parity. This means that it can detect
single-bit errors.
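A sketch of the single parity bit:

```python
def add_parity(bits, even=True):
    """Append one parity bit so the codeword has an even (or odd) count of 1s.
    The even-parity bit is just the XOR of all the data bits."""
    p = sum(bits) % 2
    return bits + [p if even else p ^ 1]

def parity_ok(codeword, even=True):
    """Receiver's check: recount the 1s in the whole codeword."""
    return sum(codeword) % 2 == (0 if even else 1)

word = [1, 0, 1, 1, 0, 1, 0]
print(add_parity(word))              # [1, 0, 1, 1, 0, 1, 0, 0]  (even parity)
print(add_parity(word, even=False))  # [1, 0, 1, 1, 0, 1, 0, 1]  (odd parity)

bad = add_parity(word)
bad[2] ^= 1                          # any single-bit error flips the parity
assert not parity_ok(bad)
```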
Consider a channel on which errors are isolated and the error rate is 10^-6 per bit. Let the block
size be 1000 bits. To provide error correction for 1000-bit blocks, we need 10 check bits, by the
bound (m + r + 1) ≤ 2^r.
39. This can be improved if each block to be sent is regarded as a rectangular matrix, n bits wide and
k bits high. Now, if we send one parity bit for each row, up to k bit errors can be detected, as long
as there is at most one error per row.
40. We can compute the parity bits over the data in a different order than the order in which the data
bits are transmitted. This is known as interleaving. In this case, we compute a parity bit for each
of the n columns and send all the data bits as k rows, sending the rows from top to bottom and the
bits in each row from left to right. After the last row, we send the n parity bits. This transmission
order is shown in the figure below for n = 7 and k = 7.
41. Interleaving is a technique to convert a code that detects isolated errors into a code that detects
burst errors. When a burst error of length n = 7 occurs as above, the bits that are in error are
spread across different columns.
NOTE: A burst error does not imply that all the bits are wrong; it just implies that at least the first
and last are wrong. In the previous figure, 4 bits were flipped over a range of 7 bits.
42. At most 1 bit in each of the n columns will be affected, so the parity bits on those columns will
detect the error. This method uses n parity bits on blocks of kn data bits to detect a single burst
error of length n or less.
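The column-parity scheme can be sketched like this (the data block below is arbitrary illustrative content):

```python
def column_parity(block):
    """Compute one even-parity bit per column of a k x n bit matrix.
    Transmitted row by row, a burst of length <= n hits each column
    at most once, so some column parity catches it."""
    n = len(block[0])
    return [sum(row[c] for row in block) % 2 for c in range(n)]

k, n = 4, 7
block = [[(r * c) % 2 for c in range(n)] for r in range(k)]  # arbitrary data
parity = column_parity(block)

# Flatten into transmission order, flip a burst of n consecutive bits,
# and re-check the column parities at the receiver.
flat = [b for row in block for b in row]
for i in range(5, 5 + n):
    flat[i] ^= 1
received = [flat[r * n:(r + 1) * n] for r in range(k)]
assert column_parity(received) != parity   # the burst is detected
```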
43. Note that a burst of length n + 1 will pass undetected, however, if the first and last bits are
inverted, and all the other bits are correct.
44. Checksum: The word checksum is often used to mean a group of check bits associated with a
message, regardless of how they are calculated, e.g., a group of parity bits. Stronger checksums
are based on a running sum of the data bits of the message.
45. The checksum is usually placed at the end of the message, as the complement of the sum
function. This way, errors may be detected by summing the entire received codeword, both data
bits and checksum. If the result comes out to be zero, no error has been detected.
Ex: 16-bit Internet checksum used on all Internet packets as part of the IP protocol.
This checksum is a sum of the message bits divided into 16-bit words. Because this method
operates on words rather than on bits, errors that leave the parity unchanged can still alter the
sum and thus, can be detected.
46. Checksum is the error detection method used by upper layer protocols and is considered to be
more reliable than LRC and VRC. This method makes use of a Checksum Generator on the
sender side and a Checksum Checker on the receiver side.
At the sender side, the data is divided into equal subunits of n bits each by the checksum
generator; n is generally 16. These subunits are then added together using one's complement
arithmetic, giving an n-bit sum. This sum is then complemented. The complemented sum, called
the checksum, is appended to the end of the original data unit, which is then transmitted to the
receiver.
Example:
If the data unit to be transmitted is 10101001 00111001, the following procedure is used at Sender
site and Receiver site.
Sender Site
10101001 subunit 1
00111001 subunit 2
11100010 sum (using 1s complement)
00011101 checksum (complement of sum)
Receiver Site
10101001 subunit 1
00111001 subunit 2
00011101 checksum
11111111 sum
00000000 sum's complement (zero, so no error detected)
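The same procedure in code: checksum8 mirrors the 8-bit example above, and internet_checksum applies the identical wrap-around rule to 16-bit words, in the style of RFC 1071:

```python
def internet_checksum(words):
    """One's-complement sum of 16-bit words, then complemented."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
    return ~total & 0xFFFF

def checksum8(units):
    """The notes' 8-bit example, with the same wrap-around rule."""
    total = 0
    for u in units:
        total += u
        total = (total & 0xFF) + (total >> 8)
    return ~total & 0xFF

data = [0b10101001, 0b00111001]
csum = checksum8(data)
print(f"{csum:08b}")                    # 00011101, as in the example
assert checksum8(data + [csum]) == 0    # receiver: sum of everything is all 1s
```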
47. Cyclic Redundancy Check (CRC) or Polynomial Codes: These treat bit strings as
representations of polynomials with coefficients of 0 and 1 only. A k-bit frame is regarded as the
coefficient list for a polynomial with k terms, ranging from x^(k-1) to x^0. Such a polynomial is said
to be of degree k − 1.
The high-order (leftmost) bit is the coefficient of x^(k-1); the next bit is the coefficient of x^(k-2),
and so on.
For example, 110001 has 6 bits and thus represents a six-term polynomial with coefficients 1, 1, 0,
0, 0, and 1: 1x^5 + 1x^4 + 0x^3 + 0x^2 + 0x^1 + 1x^0.
48. Polynomial arithmetic is done modulo 2, which is identical to exclusive OR: both addition and
subtraction reduce to XOR, with no carries or borrows. For example, 10011011 + 11001010 =
01010001.
49. When the polynomial code method is being used, the sender and receiver must agree upon a
generator polynomial, G(x), in advance. Both the high- and low-order bits of the generator
must be 1.
50. To compute the CRC for a frame with m bits corresponding to the polynomial M(x), the frame
must be longer than the generator polynomial.
Append a CRC to the end of the frame in such a way that the polynomial represented by the frame
with checksum is divisible by G(x). When the receiver gets the above said frame, it tries dividing it
by G(x). If there is a remainder, there has been a transmission error.
The figure below illustrates the calculation for a frame 1101011111 using the generator
G(x) = x^4 + x + 1.
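The division itself is straightforward modulo-2 long division, which can be sketched as:

```python
def crc_remainder(frame_bits: str, generator: str) -> str:
    """Append len(G)-1 zeros to the frame, then XOR the generator in
    wherever the current leading bit is 1. The final len(G)-1 bits
    are the remainder, i.e., the CRC."""
    g = [int(b) for b in generator]
    bits = [int(b) for b in frame_bits] + [0] * (len(g) - 1)
    for i in range(len(frame_bits)):
        if bits[i]:                        # leading bit is 1: subtract (XOR) G
            for j, gb in enumerate(g):
                bits[i + j] ^= gb
    return "".join(str(b) for b in bits[-(len(g) - 1):])

# Frame 1101011111 with G(x) = x^4 + x + 1, i.e., bit pattern 10011:
crc = crc_remainder("1101011111", "10011")
print(crc)                          # 0010
print("1101011111" + crc)           # transmitted frame: 11010111110010
```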
52. Elementary Data Link Protocols: Before we look at the protocols, it is useful to make some
assumptions.
Assume that the physical layer, data link layer, and network layer are independent processes that
communicate by passing messages back and forth.
NIC (Network Interface Card): dedicated hardware that implements the physical layer and some
of the data link layer processing.
Device Driver: the rest of the link layer process and the network layer process run on the main
CPU as part of the operating system. A device driver is a computer program that controls a
particular type of device attached to the computer.
Treating the three layers as separate processes makes the discussion easier and provides
independence to those layers.
53. Another key assumption is that machine A wants to send a long stream of data to machine B.
Later, we will consider the case where B also wants to send data to A simultaneously.
A is assumed to have an infinite supply of data. We also assume that machines do not crash.
For the data link layer, the packet passed across the interface is pure data, whose every bit is to
be delivered to the destination’s network layer. The fact that the destination’s network layer may
interpret part of the packet as a header is of no concern to the DLL.
54. Simplex Protocol: Data are transmitted in one direction only. Both the transmitting and receiving
network layers are always ready. Processing time can be ignored. Infinite buffer space is available.
And best of all, the communication channel between the data link layers never damages or loses
frames.
This protocol 1 (Utopia) provides for data transmission in one direction only, from sender to
receiver. The communication channel is assumed to be error free and the receiver is assumed to
be able to process all the input infinitely quickly. Consequently, the sender just pumps data onto
the line as fast as it can.
The utopia protocol is unrealistic because it does not handle either flow control or error correction.
Its processing is close to that of an unacknowledged connectionless
service that relies on higher layers to solve these problems.
55. Simplex Stop-and-Wait Protocol for an Error-Free Channel: Here, we will tackle the problem of
preventing the sender from flooding the receiver with frames faster than its ability to process them.
The communication channel is still assumed to be error free and the data traffic is still simplex.
After having sent a frame, the sender is required to wait until the acknowledgement frame
arrives. This delay is a simple example of a flow control protocol; hence, it is known as the
stop-and-wait protocol.
56. Simplex Stop-and-Wait Protocol for a Noisy Channel: In the previous protocol, it was assumed
that the channel and data are error free. In reality, frames may be damaged or lost completely. But
it can be assumed that if a frame is damaged in transit, the receiver will detect this when it
computes the checksum.
The process here is like this: The sender could send a frame, but the receiver would only send an
acknowledgement frame if the data were correctly received. If a damaged frame arrived at the
receiver, it would be discarded. After a while the sender would time out and send the frame again.
This process would be repeated until the frame finally arrived intact.
1. The network layer on A gives packet 1 to its DLL. The packet is correctly received at B and
passed to the network layer on B. B sends an acknowledgement frame back to A.
2. The acknowledgement frame gets lost completely. It just never arrives at all.
3. Since it didn’t receive an acknowledgement, DLL on A times out and assumes that its data
frame was lost or damaged and sends the frame containing packet 1 again.
4. The duplicate frame also arrives intact at the DLL on B and is passed to the network layer there.
If A is sending a file to B, part of the file will be duplicated.
57. Sliding Window Protocol: In the previous protocols, data frames were transmitted in one
direction only. It is better to use the same link for data in both directions. In this model the data
frames from A to B are intermixed with the acknowledgement frames from B to A. By looking at the
kind field in the header of an incoming frame, the receiver can tell whether the frame is data or an
acknowledgement.
58. NOTE: Expansion of a DLL frame: The fields of a Data Link Layer Frame are explained below:
Frame Header: It contains the source and the destination addresses of the frame and the
control bytes.
Trailer: It contains the error detection and error correction bits. It is also called a Frame Check
Sequence (FCS).
Flag: Two flags at the two ends mark the beginning and the end of the frame.
Frame Header: A frame header contains the destination address, the source address and three
control fields kind, seq, and ack serving the following purposes:
kind: This field states whether the frame is a data frame or it is used for control functions like
error and flow control or link management etc.
seq: This contains the sequence number of the frame for rearrangement of the sequence and
sending acknowledgment by the receiver.
When a data frame arrives, instead of immediately sending a separate control frame, the receiver
waits until the network layer passes the next packet to it. The acknowledgement is attached to the
outgoing data frame (using the ack field in the frame header). That is, the acknowledgement is
sent with the next outgoing data frame. The technique of temporarily delaying outgoing
acknowledgements so that they can be hooked onto the next outgoing data frame is known as
piggybacking.
61. The next three protocols are bidirectional protocols that belong to the class of sliding window
protocols. The three differ among themselves in terms of efficiency, complexity, and buffer
requirements. Each outbound frame contains a sequence number, ranging from 0 up to some
maximum.
62. The maximum is 2^n − 1 in an n-bit field. The stop-and-wait sliding window protocol uses n = 1,
restricting the sequence numbers to 0 and 1, but more sophisticated versions can use an arbitrary
n.
63. In all sliding window protocols, the sender maintains a set of sequence numbers corresponding to
the frames that it is permitted to send. These frames are said to fall within the sending window.
The receiver also maintains a receiving window corresponding to the set of frames it is permitted
to accept.
64. The sender’s window and the receiver’s window need not have the same lower and upper limits or
even have the same size. The sequence numbers within the sender’s window represent frames
that have been sent or can be sent but are as yet not acknowledged.
Whenever a new packet arrives from the network layer, it is given the next highest sequence
number, and the upper edge of the window is advanced by one. When an acknowledgement
comes in, the lower edge is advanced by one. In this way the window continuously maintains
a list of unacknowledged frames.
The previous figure shows an example with a maximum window size of 1. Initially, no frames are
outstanding, so the lower and upper edges of the sender’s window are equal, but as time goes on,
the situation progresses as shown. Unlike the sender’s window, the receiver’s window always
remains at its initial size, rotating as the next frame is accepted and delivered to the network layer.
NOTE: Since frames currently within the sender’s window may be lost in transit, the sender must
keep all frames in its memory for retransmission. If the maximum window size is n, the sender
needs n buffers to hold the unacknowledged frames.
65. If the window grows to its maximum size, the sending DLL must shut off the network layer forcibly,
until another buffer becomes free.
66. One-Bit Sliding Window Protocol: Consider a sliding window protocol with a window size of 1.
Such a protocol uses stop-and-wait, since the sender transmits a frame and waits for its
acknowledgement before sending the next one.
Normally, the starting machine fetches the first packet from its network layer, builds a frame from
it, and sends it. When this frame arrives, the receiving machine's DLL checks to see if it is a
duplicate.
If the frame is the one that had been expected, it is passed to the network layer and the receiver’s
window is slid up.
The acknowledgement field contains the number of the last frame received without error. If
this number is the same as the sequence number of the frame the sender is trying to send, the
sender knows it is done with the frame stored in its buffer and can fetch the next packet from its
network layer.
If the sequence number disagrees, it must continue trying to send the same frame. Whenever a
frame is received, a frame is also sent back.
67. If A and B simultaneously initiate communication, their first frames cross, and the data link layers
then get into situation (b). In (a) each frame arrival brings a new packet for the network layer; there
are no duplicates.
In (b) half of the frames contain duplicates, even though there are no transmission errors. Similar
situations can occur as a result of timeouts. If multiple timeouts occur, frames may be sent three or
more times, wasting bandwidth.
68. Sliding Window Protocol using Go-Back-N: The previous protocols made the assumption that
the transmission time of a frame plus the time for its acknowledgement to come back is
negligible.
Apparently, this assumption is false in some cases. In these situations the long round-trip time can
reduce the efficiency of the bandwidth utilization. This is a consequence of the rule requiring a
sender to wait for an acknowledgement before sending another frame.
To improve efficiency, we can relax that rule and allow the sender to transmit up to w frames
before blocking (window full), not just 1 frame. If w is large enough, the sender can transmit
frames continuously, since the acknowledgements for earlier frames will arrive before the window
fills and blocks it.
69. To decide the value of ‘w’, we need to know how many frames can fit inside the channel as they
propagate from sender to receiver. This capacity is determined by the bandwidth in bits/sec
multiplied by one-way transit time, or bandwidth-delay product of the link.
70. The above quantity can be divided by the number of bits in a frame to express it as a number of
frames, called BD.
Then w = 2BD + 1, since up to 2BD frames can be outstanding during the round trip (frames in
flight plus acknowledgements on the way back); the extra 1 is because an acknowledgement is
not sent until a complete frame has been received.
Ex: For the example link with a bandwidth of 50 kbps and a one-way transit time of 250 msec, the
bandwidth-delay product is 12.5 kbit or 12.5 frames of 1000 bits each. 2BD + 1 is then 26 frames.
Assume the sender begins sending frame 0 as before and sends a new frame every 20 msec. By
the time it has finished sending 26 frames, at t = 520 msec, the acknowledgement for frame 0 will
have just arrived.
Thereafter, acknowledgements will arrive every 20 msec, so the sender will always get permission
to continue just when it needs it. From then on, 25 or 26 unacknowledged frames will always be
outstanding. Put another way, the sender's maximum window size is 26.
71. For smaller window sizes, the utilization of the link will be less than 100% since the sender will be
blocked sometimes. We can write the utilization as the fraction of time that the sender is not
blocked:
link utilization <= w / (1 + 2BD)
This value is an upper bound because it does not allow for any frame processing time and treats
the acknowledgement frame as having zero length, since it is usually short. The equation shows
the need for having a large window w whenever the bandwidth-delay product is large.
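This bound (utilization at most w / (1 + 2BD), capped at 100%) can be evaluated directly. The name `utilization` is illustrative; BD is the bandwidth-delay product in frames, as above.

```python
# Sketch of the utilization upper bound: w / (1 + 2BD), capped at 1.
# Assumes zero processing time and zero-length acknowledgements.
def utilization(w, bd_frames):
    return min(1.0, w / (1 + 2 * bd_frames))

# For the 50 kbps / 250 msec / 1000-bit example, BD = 12.5 frames:
print(utilization(1, 12.5))     # stop-and-wait, w = 1: 1/26, about 3.8%
print(utilization(26, 12.5))    # full window, w = 26: 1.0
```

This makes the point in the text concrete: with w = 1 the link sits idle most of the round trip, while w = 2BD + 1 = 26 keeps it fully busy.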
72. When a damaged frame arrives at the receiver, it should be discarded, but what should the
receiver do with all the correct frames following it? Remember that the receiving data link layer
is obligated to hand packets to the network layer in sequence.
73. Two basic approaches are available for dealing with errors in the presence of pipelining, both of
which are shown in the figure below.
74. In the go-back-n protocol, fig (a), the receiver simply discards all subsequent frames after an
error. This strategy corresponds to a receive window of size 1: the data link layer refuses to
accept any frame except the next one it must give to the network layer.
If the sender's window fills up before the timer runs out, the pipeline will begin to empty.
Eventually, the sender will time out and retransmit all unacknowledged frames in order, starting
with the damaged or lost one.
Note that this approach can waste a lot of bandwidth if the error rate is high.
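The go-back-n receive rule can be sketched as follows. This is a hypothetical simulation of the receiver side only (name `gbn_receive` is illustrative); frames are represented by their sequence numbers, and anything other than the next expected frame is discarded.

```python
# Sketch of the go-back-n receive rule: a receive window of size 1 means the
# receiver accepts only the next in-sequence frame and discards everything else,
# forcing the sender to eventually retransmit from the lost frame onward.
def gbn_receive(frames, expected=0):
    delivered = []
    for seq in frames:
        if seq == expected:
            delivered.append(seq)
            expected += 1
        # else: out-of-sequence frame is silently discarded
    return delivered

# Frame 2 is lost: frames 3, 4, 5 arrive but are all discarded...
print(gbn_receive([0, 1, 3, 4, 5]))               # [0, 1]
# ...until the sender times out and retransmits 2, 3, 4, 5 in order.
print(gbn_receive([0, 1, 3, 4, 5, 2, 3, 4, 5]))   # [0, 1, 2, 3, 4, 5]
```

The second call shows the bandwidth cost the text mentions: frames 3, 4, and 5 are transmitted twice even though their first copies arrived intact.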
75. Selective Repeat: In fig (a) above, we saw the go-back-n behaviour: frames 0 and 1 are correctly
received and acknowledged, but frame 2 is damaged or lost. The sender is unaware of this
problem and keeps on sending frames until the timer for frame 2's acknowledgement expires.
Then it backs up to frame 2 and starts over with it, sending 2, 3, 4, etc. all over again.
In selective repeat, a bad frame that is received is discarded, but any good frames received after it
are accepted and buffered. When the sender times out, only the oldest unacknowledged frame is
retransmitted.
Selective repeat is often combined with having the receiver send a negative acknowledgement
(NAK) when it detects an error, for example, when it receives a frame with a checksum error or a
frame out of sequence.
In fig (b), frames 0 and 1 are again correctly received and acknowledged, and frame 2 is lost.
When frame 3 arrives at the receiver, the data link layer notices that it has missed a frame, so it
sends back a NAK for frame 2 but buffers frame 3.
When frames 4 and 5 arrive, they, too, are buffered by the data link layer instead of being passed
to the network layer. Eventually, the NAK 2 gets back to the sender, which immediately resends
frame 2. When that arrives, the data link layer now has 2, 3, 4, and 5 and can pass all of them to
the network layer in the correct order.
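The selective-repeat receive side described above can be sketched as a small simulation. The name `sr_receive` is illustrative; frames are sequence numbers, out-of-order arrivals are buffered, a NAK records each detected gap, and buffered frames are released in order once the missing one arrives.

```python
# Sketch of the selective-repeat receive rule: buffer out-of-order frames,
# NAK the first missing one, and deliver in order once the gap is filled.
def sr_receive(frames):
    delivered, buffered, naks = [], {}, []
    expected = 0
    for seq in frames:
        if seq == expected:
            delivered.append(seq)
            expected += 1
            while expected in buffered:          # drain buffered frames in order
                delivered.append(buffered.pop(expected))
                expected += 1
        elif seq > expected:
            buffered[seq] = seq                  # accept and buffer the good frame
            if expected not in naks:             # NAK the missing frame once
                naks.append(expected)
        # seq < expected: duplicate, discard
    return delivered, naks

# Frame 2 lost; 3, 4, 5 are buffered; 2 is resent after the NAK:
print(sr_receive([0, 1, 3, 4, 5, 2]))   # ([0, 1, 2, 3, 4, 5], [2])
```

Compared with the go-back-n behaviour in fig (a), only frame 2 crosses the link twice; frames 3, 4, and 5 are kept and handed to the network layer in the correct order once 2 arrives.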