CN Unit4
The services provided by the transport layer protocols can be divided into five
categories:
• End-to-end delivery
• Addressing
• Reliable delivery
• Flow control
• Multiplexing
End-to-end delivery:
• The transport layer transmits the entire message to the destination. Therefore, it ensures
the end-to-end delivery of an entire message from a source to the destination.
Reliable delivery:
• The transport layer provides reliability services by retransmitting lost and damaged
packets.
Loss Control
Loss Control is a third aspect of reliability. The transport layer ensures that all the fragments of a transmission arrive at the
destination, not some of them. On the sending end, all the fragments of transmission are given sequence numbers by a
transport layer. These sequence numbers allow the receiver’s transport layer to identify the missing segment.
Duplication Control
Duplication Control is the fourth aspect of reliability. The transport layer guarantees that no duplicate data arrive at the
destination. Sequence numbers are used to identify the lost packets; similarly, it allows the receiver to identify and discard
duplicate segments.
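The loss-control and duplication-control roles of sequence numbers can be sketched in a few lines of Python. The function and variable names here are illustrative, not from any networking library:

```python
# Sketch: how a receiver can use sequence numbers to detect lost and
# duplicate segments.

def check_segments(received, first_seq, last_seq):
    """Return (missing, duplicates) among the expected sequence numbers."""
    expected = set(range(first_seq, last_seq + 1))
    seen = set()
    duplicates = []
    for seq in received:
        if seq in seen:
            duplicates.append(seq)     # already delivered: discard as duplicate
        seen.add(seq)
    missing = sorted(expected - seen)  # gaps mean lost segments to re-request
    return missing, duplicates

# Segments 3 and 6 never arrived; segment 2 arrived twice.
missing, dups = check_segments([1, 2, 2, 4, 5, 7], 1, 7)
```
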
Flow Control
• Flow control is used to prevent the sender from overwhelming the receiver.
• If the receiver is overloaded with too much data, then the receiver discards the packets and asks for the retransmission of
packets.
• This increases network congestion and thus, reduces the system performance. The transport layer is responsible for flow
control. It uses the sliding window protocol that makes the data transmission more efficient as well as controls the flow of
data so that the receiver does not become overwhelmed. Sliding window protocol is byte-oriented rather than frame
oriented.
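As a rough sketch of the sliding-window idea described above (an illustrative toy class, not a real TCP implementation, which tracks far more state):

```python
# Minimal sketch of a byte-oriented sliding window on the sender side.

class SlidingWindowSender:
    def __init__(self, window_size):
        self.window_size = window_size  # bytes the receiver advertised
        self.base = 0                   # oldest unacknowledged byte
        self.next_byte = 0              # next byte to send

    def can_send(self, nbytes):
        # Send only while unacknowledged data fits inside the window.
        return self.next_byte + nbytes - self.base <= self.window_size

    def send(self, nbytes):
        assert self.can_send(nbytes)
        self.next_byte += nbytes

    def ack(self, ack_byte):
        # Cumulative ACK: slides the left edge of the window forward.
        self.base = max(self.base, ack_byte)

s = SlidingWindowSender(window_size=1000)
s.send(600)
blocked = not s.can_send(600)   # 600 more bytes would exceed the window
s.ack(600)                      # window slides; room opens up again
```

This shows why the receiver never gets overwhelmed: the sender simply cannot put more unacknowledged bytes in flight than the advertised window allows.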
Multiplexing
The transport layer uses multiplexing to improve transmission efficiency.
Multiplexing can occur in two ways:
• Upward multiplexing: Upward multiplexing means multiple transport layer connections use the same network connection.
To make it more cost-effective, the transport layer sends several transmissions bound for the same destination along the same
path; this is achieved through upward multiplexing.
Downward multiplexing:
• It means one transport layer connection uses multiple network connections.
• It allows the transport layer to split a connection among several paths to improve the throughput.
• This type of multiplexing is used when networks have a low or slow capacity.
Addressing
• According to the layered model, the transport layer interacts with the functions of the session layer. Many protocols
combine session, presentation, and application layer protocols into a single layer known as the application layer.
• In these cases, delivery to the session layer means delivery to the application layer. Data generated by an application on
one machine must be transmitted to the correct application on another machine. In this case, addressing is provided by the
transport layer.
• The transport layer provides the user address which is specified as a station or port. The port variable represents a
particular TS-user of a specified station known as a Transport Service access point (TSAP). Each station has only one
transport entity.
• The transport layer protocols need to know which upper-layer protocols are communicating.
Transport Layer protocols
• The transport layer is represented by two protocols: TCP and UDP.
• The IP protocol in the network layer delivers a datagram from a source host to the destination host.
• Nowadays, operating systems support multiuser and multiprocessing environments; an executing program is called a
process. When a host sends a message to another host, the source process is actually sending a message to a destination
process. The transport layer protocols define connections to individual ports known as protocol ports.
• An IP protocol is a host-to-host protocol used to deliver a packet from the source host to the destination host while
transport layer protocols are port-to-port protocols that work on top of the IP protocols to deliver the packet from the
originating port to the IP services, and from IP services to the destination port.
• Each port is defined by a positive integer address, and it is 16 bits.
UDP
• UDP stands for User Datagram Protocol.
• UDP is a simple protocol and it provides non-sequenced transport functionality.
• UDP is a connectionless protocol.
• This type of protocol is used when reliability and security are less important than speed and size.
• UDP is an end-to-end transport-level protocol that adds transport-level addresses, checksum error control, and length
information to the data from the upper layer.
• The packet produced by the UDP protocol is known as a user datagram.
User Datagram Format
The UDP header contains the following four fields:
• Source port address: It defines the address of the application process that has delivered a message. The source port
address is 16 bits address.
• Destination port address: It defines the address of the application process that will receive the message. The destination
port address is a 16-bit address.
• Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
• Checksum: The checksum is a 16-bit field that is used in error detection.
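Since all four UDP header fields are 16 bits, the 8-byte header can be built and parsed with Python's standard `struct` module as a small illustration (`build_udp_header` is a name chosen here, not a library function; `"!"` selects network byte order):

```python
# Sketch: packing and unpacking the four 16-bit UDP header fields.
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    total_length = 8 + payload_len          # UDP header is always 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, total_length, checksum)

header = build_udp_header(src_port=5000, dst_port=53, payload_len=32)
src, dst, length, csum = struct.unpack("!HHHH", header)
```
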
Disadvantages of UDP protocol
• UDP provides only the basic functions needed for the end-to-end delivery of a transmission.
• It does not provide any sequencing or reordering functions and does not specify the damaged packet when reporting an
error.
• UDP can discover that an error has occurred, but it does not specify which packet has been lost as it does not contain an ID
or sequencing number of a particular data segment.
PROTOCOLS SUPPORTED BY TCP AND UDP:
Protocols supported by UDP are:
• Dynamic Host Configuration Protocol (DHCP)
• Domain Name System (DNS)
• Trivial File Transfer Protocol (TFTP)
• Voice over Internet Protocol (VoIP)
TCP Segment Format
The TCP header contains the following fields:
• Source port address: It is used to define the address of the application program in a source computer. It is a 16-bit field.
• Destination port address: It is used to define the address of the application program in a destination computer. It is a 16-bit
field.
• Sequence number: A stream of data is divided into two or more TCP segments. The 32-bit sequence number field
represents the position of the data in an original data stream.
• Acknowledgement number: The 32-bit acknowledgement number field acknowledges the data from the other communicating
device. If the ACK flag is set to 1, then this field specifies the sequence number that the receiver is expecting to receive.
• Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The minimum size of the header is 5
words, and the maximum size of the header is 15 words. Therefore, the maximum size of the TCP header is 60 bytes, and
the minimum size of the TCP header is 20 bytes.
• Reserved: It is a six-bit field that is reserved for future use.
• Control bits: Each bit of a control field functions individually and independently. A control bit defines the use of a segment
or serves as a validity check for other fields.
As defined in Request For Comment (RFC) 793, TCP has the following features:
• Connection establishment and termination.
• Multiplexing using ports.
• Flow control using windowing.
• Error recovery.
• Ordered data transfer and data segmentation.
Differences b/w TCP & UDP

Basis for Comparison | TCP | UDP
Definition | TCP establishes a virtual circuit before transmitting the data. | UDP transmits the data directly to the destination computer without verifying whether the receiver is ready to receive or not.
Connection Type | It is a connection-oriented protocol. | It is a connectionless protocol.
Speed | Slow | High
Reliability | It is a reliable protocol. | It is an unreliable protocol.
Header size | 20 bytes (more overhead) | 8 bytes (less overhead)
Transmission | Slow transmission | Fast as compared to TCP
Acknowledgement | It waits for the acknowledgement of data and has the ability to resend lost packets. | It neither takes acknowledgement nor retransmits damaged frames.
A TCP Connection
• TCP is connection-oriented. A connection-oriented transport protocol establishes a virtual path between the
source and destination.
• All the segments belonging to a message are then sent over this virtual path.
• Using a single virtual pathway for the entire message facilitates the acknowledgment process as well as the
retransmission of damaged or lost frames.
• The point is that a TCP connection is virtual, not physical.
• TCP operates at a higher level.
• TCP uses the services of IP to deliver individual segments to the receiver, but it controls the connection
itself.
• If a segment is lost or corrupted, it is retransmitted. Unlike TCP, IP is unaware of this retransmission.
• If a segment arrives out of order, TCP holds it until the missing segments arrive; IP is unaware of this
reordering.
• In TCP, connection-oriented transmission requires three phases: connection establishment, data transfer, and
connection termination.
1. Connection Establishment
• TCP transmits data in full-duplex mode.
• This implies that each party must initialize communication and get approval from the other party before any data are
transferred.
Three-Way Handshaking
• The connection establishment in TCP is called three-way handshaking.
• In our example, an application program, called the client, wants to make a connection with another application program,
called the server, using TCP as the transport layer protocol.
• The process starts with the server.
• The server program tells its TCP that it is ready to accept a connection. This is called a request for a passive open.
• Although the server TCP is ready to accept any connection from any machine in the world, it cannot make the connection
itself.
• The client program issues a request for an active open.
• A client that wishes to connect to an open server tells its TCP that it needs to be connected to that particular server.
• TCP can now start the three-way handshaking process.
• To show the process, we use two timelines: one at each site.
• Each segment has values for all its header fields and perhaps for some of its option fields, too.
Figure Connection establishment using three-way handshaking
The three steps in this phase are as follows.
1. The client sends the first segment, a SYN segment, in which only the SYN flag is set. This segment is for the
synchronization of sequence numbers. The SYN segment carries no real data, but it consumes one sequence number;
when the data transfer starts, the sequence number is incremented by 1.
2. The server sends the second segment, a SYN+ACK segment, with two flag bits set: SYN and ACK. This segment has a
dual purpose. It is a SYN segment for communication in the other direction and serves as the acknowledgment for the
SYN segment. It also consumes one sequence number.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the receipt of the second segment
with the ACK flag and acknowledgment number field.
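The sequence/acknowledgment arithmetic of these three steps can be traced with a toy walk-through. The initial sequence numbers (ISNs) are fixed here for readability; real TCP picks them unpredictably:

```python
# Toy trace of the numbers exchanged in the three-way handshake.
client_isn = 8000
server_isn = 15000

# 1. Client -> Server: SYN, seq = client ISN (consumes one sequence number).
syn = {"flags": {"SYN"}, "seq": client_isn}

# 2. Server -> Client: SYN+ACK, acknowledging client_isn + 1 and carrying
#    the server's own ISN.
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn, "ack": client_isn + 1}

# 3. Client -> Server: ACK, acknowledging server_isn + 1.
ack = {"flags": {"ACK"}, "seq": client_isn + 1, "ack": server_isn + 1}
```
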
Simultaneous Open
• A rare situation, called a simultaneous open, may occur when both processes issue an active open.
• In this case, both TCPs transmit a SYN+ACK segment to each other, and one single connection is established between
them.
SYN Flooding Attack
• The connection establishment procedure in TCP is susceptible to a serious security problem called the SYN flooding
attack. This happens when a malicious attacker sends a large number of SYN segments to a server, pretending that each
of them is coming from a different client by faking the source IP addresses in the datagrams.
• The server, assuming that the clients are issuing an active open, allocates the necessary resources, such as creating
communication tables and setting timers.
• The TCP server then sends the SYN+ACK segments to the fake clients, which are lost.
2. Data Transfer
• After a connection is established, bidirectional data transfer can take place. The client and server can both send data and
acknowledgments.
• In this example, after the connection is established (not shown in the figure), the client sends 2000 bytes of data in two
segments. The server then sends 2000 bytes in one segment.
• The client sends one more segment.
• The first three segments carry both data and acknowledgment, but the last segment carries only an acknowledgment
because there are no more data to be sent.
• Note the values of the sequence and acknowledgment numbers.
• The data segments sent by the client have the PSH (push) flag set so that the server TCP knows to deliver data to the
server process as soon as they are received.
• The segment from the server, on the other hand, does not set the push flag.
• Most TCP implementations have the option to set or not set this flag.
Figure Data transfer
3. Connection Termination
• Any of the two parties involved in exchanging data (client or server) can close the connection, although it is usually
initiated by the client.
• Most implementations today allow two options for connection termination: three-way handshaking and four-way
handshaking with a half-close option. Connection termination using three-way handshaking is shown in the Figure.
1. In a normal situation, the client TCP, after receiving a close command from the client process, sends the first segment, a
FIN segment in which the FIN flag is set.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends the second segment, a
FIN +ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the
closing of the connection in the other direction.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment from the TCP
server. This segment contains the acknowledgment number, which is 1 plus the sequence number received in the FIN
segment from the server.
Figure Connection termination using three-way handshaking
Flow control using windowing
• Because the network host has limited resources such as limited space and
processing power, TCP implements a mechanism called flow control using a
window concept. This is applied to the amount of data that can be awaiting
acknowledgment at any one point in time.
• The receiving device uses the windowing concept to inform the sender how much
data it can receive at any given time. This allows the sender to either speed up or
slow down the sending of segments through a window-sliding process.
There are six flags in the control field:
• URG: The URG field indicates that the data in a segment is urgent.
• ACK: When the ACK field is set, then it validates the acknowledgment number.
• PSH: The PSH field is used to inform the sender that higher throughput is needed so if possible, data must be pushed with
higher throughput.
• RST: The reset bit is used to reset the TCP connection when confusion occurs in the sequence numbers.
• SYN: The SYN field is used to synchronize the sequence numbers in three types of segments: connection request,
connection confirmation ( with the ACK bit set ), and confirmation acknowledgment.
• FIN: The FIN field is used to inform the receiving TCP module that the sender has finished sending data. It is used in
connection with termination in three types of segments: termination request, termination confirmation, and
acknowledgment of termination confirmation.
• Window Size: The window is a 16-bit field that defines the size of the window.
• Checksum: The checksum is a 16-bit field used in error detection.
• Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset from the sequence number indicating that
it is the last urgent data byte.
• Options and padding: It defines the optional fields that convey the additional information to the receiver.
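The header layout described above (ports, sequence and acknowledgment numbers, HLEN, control bits, window, checksum, urgent pointer) can be unpacked with Python's standard `struct` module. The parser below is a sketch for the fixed 20-byte header only, ignoring options:

```python
# Sketch: unpacking the fixed 20-byte TCP header and reading the six
# control bits (FIN, SYN, RST, PSH, ACK, URG) out of the flags byte.
import struct

def parse_tcp_header(data):
    (src, dst, seq, ack, off_res, flags,
     window, checksum, urg_ptr) = struct.unpack("!HHIIBBHHH", data[:20])
    header_len = (off_res >> 4) * 4        # HLEN is counted in 32-bit words
    names = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]
    set_flags = {n for i, n in enumerate(names) if flags & (1 << i)}
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "header_len": header_len, "flags": set_flags, "window": window}

# A SYN segment: HLEN = 5 words (20 bytes), only the SYN bit (0x02) set.
raw = struct.pack("!HHIIBBHHH", 4444, 80, 100, 0, 5 << 4, 0x02, 65535, 0, 0)
info = parse_tcp_header(raw)
```
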
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion,
before it happens or remove it after it happens. In general, we can divide congestion control
mechanisms into two broad categories: open-loop congestion control (prevention) and closed-
loop congestion control (removal).
• In open-loop congestion control, policies are applied to prevent congestion before it
happens. In these mechanisms, congestion control is handled by either the source or the
destination.
• Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
Congestion Policy
• TCP's general policy for handling congestion is based on three phases: slow start, congestion avoidance, and
congestion detection.
1. Slow Start
• This algorithm is based on the idea that the size of the congestion window (cwnd) starts with one maximum
segment size (MSS).
• The MSS is determined during connection establishment by using an option of the same name.
• The size of the window increases by one MSS each time an acknowledgment is received.
2. Congestion Avoidance
• When the slow-start threshold is reached, the rate of window growth is reduced to avoid congestion.
• If we start with the slow-start algorithm, the size of the congestion window increases exponentially.
• To avoid congestion before it happens, one must slow down this exponential growth.
• TCP defines another algorithm called congestion avoidance, which undergoes an additive increase instead of an
exponential one.
• When the size of the congestion window reaches the slow-start threshold, the slow-start phase stops and the
additive phase begins.
3. Congestion Detection
• If congestion is detected, the sender goes back to the slow-start or congestion-avoidance phase,
based on how the congestion is detected.
• If congestion occurs, the congestion window size must be decreased.
• The only way the sender can guess that congestion has occurred is by the need to retransmit a
segment.
• However, retransmission can occur in one of two cases: when a timer times out or when three
duplicate ACKs are received.
• In both cases, the threshold is dropped to one-half of the current window size, a multiplicative decrease.
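The three phases can be traced with a toy simulation of the congestion window (in MSS units). The numbers and the simple doubling-per-round model are illustrative only; real TCP grows the window per ACK and distinguishes timeouts from duplicate ACKs:

```python
# Toy trace of cwnd through slow start, congestion avoidance, and a timeout.

def tcp_congestion_trace(ssthresh, rounds, loss_at):
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt == loss_at:                 # congestion detected (timeout)
            ssthresh = max(cwnd // 2, 1)   # multiplicative decrease
            cwnd = 1                       # back to slow start
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth
        else:
            cwnd += 1                      # congestion avoidance: additive
    return trace

trace = tcp_congestion_trace(ssthresh=8, rounds=8, loss_at=5)
```

The trace shows exponential growth up to the threshold, additive growth beyond it, then collapse back to one MSS when loss is detected.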
QUALITY OF SERVICE
• Quality of service (QoS) is an internetworking issue that has been discussed more than
defined. We can informally define quality of service as something a flow seeks to attain.
Flow Characteristics
• Traditionally, four types of characteristics are attributed to a flow: reliability, delay,
jitter, and bandwidth, as shown in Figure
TECHNIQUES TO IMPROVE QoS
We briefly discuss four common methods: scheduling, traffic shaping,
admission control, and resource reservation.
1. Scheduling
• Packets from different flows arrive at a switch or router for processing.
• A good scheduling technique treats the different flows in a fair and
appropriate manner.
• Several scheduling techniques are designed to improve the quality of service.
• We discuss three of them here: FIFO queuing, priority queuing, and weighted
fair queuing.
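Of the three, priority queuing is easy to sketch with Python's standard `heapq` module. This is an illustrative toy, not a router implementation; a lower number means higher priority, and the counter preserves FIFO order within one priority class:

```python
# Sketch of priority queuing: packets from higher-priority flows are
# always dequeued first.
import heapq
from itertools import count

queue, order = [], count()

def enqueue(priority, packet):
    heapq.heappush(queue, (priority, next(order), packet))

def dequeue():
    return heapq.heappop(queue)[2]

enqueue(2, "bulk-1")
enqueue(0, "voip-1")   # highest priority, jumps ahead of the bulk traffic
enqueue(2, "bulk-2")
enqueue(0, "voip-2")

sent = [dequeue() for _ in range(4)]
```
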
2. Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of traffic sent
to the network. Two techniques can shape traffic: leaky bucket and token
bucket.
1. Leaky Bucket
• If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket.
• The rate at which the water leaks does not depend on the rate at which the water is input to
the bucket unless the bucket is empty.
• The input rate can vary, but the output rate remains constant.
• Similarly, in networking, a technique called leaky bucket can smooth out burst traffic.
• Bursty chunks are stored in the bucket and sent out at an average rate.
• Figure 24.19 shows a leaky bucket and its effects.
Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm:
• When the host wants to send a packet, a packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
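The steps above can be simulated in a few lines; the tick-based model and parameter values are illustrative:

```python
# Simulation of the leaky bucket: bursty arrivals go into a finite bucket,
# and the interface drains it at a constant rate each tick.

def leaky_bucket(arrivals, capacity, leak_rate):
    level, output, dropped = 0, [], 0
    for burst in arrivals:                 # packets arriving this tick
        room = capacity - level
        accepted = min(burst, room)
        dropped += burst - accepted        # bucket overflow is discarded
        level += accepted
        sent = min(level, leak_rate)       # constant-rate output
        level -= sent
        output.append(sent)
    return output, dropped

# A 10-packet burst, then silence: the output stays smooth.
out, dropped = leaky_bucket([10, 0, 0, 0], capacity=8, leak_rate=3)
```

Note how a single large burst is converted into several ticks of steady output, with the excess beyond the bucket capacity dropped.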
2. Token Bucket
• The token bucket algorithm allows bursty traffic at a regulated rate: tokens arrive in the bucket at a constant rate,
and a packet must capture and destroy one token before it can be transmitted.
• In Figure (A) we see a bucket holding three tokens, with five packets waiting to be transmitted. In Figure (B) we
see that three of the five packets have gotten through, but the other two are stuck waiting for more tokens to be
generated.
• Formula: M × s = C + ρ × s
where s – time taken (maximum burst duration)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket in bytes
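Rearranging the formula gives s = C / (M − ρ), the longest time a burst at the maximum output rate can last. The parameter values below are made-up examples:

```python
# Applying the token bucket formula M * s = C + rho * s, rearranged to
# s = C / (M - rho): how long a burst at the maximum rate can last.

def max_burst_time(capacity, max_rate, token_rate):
    # capacity in bytes, rates in bytes/second
    return capacity / (max_rate - token_rate)

# e.g. C = 1 MB bucket, M = 25 MB/s link, rho = 5 MB/s token arrival rate
s = max_burst_time(capacity=1_000_000,
                   max_rate=25_000_000,
                   token_rate=5_000_000)
```
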
3. Resource reservation:
The Resource Reservation Protocol (RSVP) is a transport layer protocol that reserves
resources across a network and can be used to deliver specific levels of QoS for application
data streams. Resource reservation enables businesses to divide network resources by
traffic of different types and origins, define limits, and guarantee bandwidth.
4. Admission Control:
It refers to mechanism used by a router, or a switch, to accept or reject a flow based on
predefined parameters called flow specifications. Before a router accepts a flow for
processing, it checks the flow specifications to see if its capacity and its previous
commitments to other flows can handle the new flow.
Questions
Ten Questions related to the topic:
1.What is the transport layer? Explain in brief. (KCS 603.4, K2)
2. Draw the diagram of the TCP header and explain the use of the following: (KCS 603.4, K3)
   1. Source and destination port address
   2. Sequence and acknowledgement numbers
   3. Code bits
   4. Window bits
   5. Urgent pointer
3. Explain three-way handshaking(KCS 603.4, K3)
4. What is the difference between TCP and UDP? (KCS 603.4, K2)
5. Define UDP. What is the maximum and minimum size of a UDP datagram? Also discuss the use of
UDP. (KCS 603.4, K2)
6. Explain the header format of TCP. (KCS 603.4, K3)
7. Explain the header format of UDP. (KCS 603.4, K3)
8. Compare the TCP header with the UDP header. (KCS 603.4, K2)
9. What is meant by quality of service? (KCS 603.4, K2)
10. What are the two categories of QoS attributes? (KCS 603.4, K2)
8. A _____ is a TCP name for a transport service access point.
a) port
b) pipe
c) node
d) protocol
Answer: a
Explanation: Just as the IP address identifies the computer, the port number identifies the application or service running on
the computer. A port number is 16 bits. The combination of an IP address and a port number is called the socket address.
9. Transport layer protocols deal with ____________
a) application to application communication
b) process to process communication
c) node to node communication
d) man to man communication
Answer: b
Explanation: The transport layer is the 4th layer in the TCP/IP model and the OSI reference model. It deals with logical
communication between processes. It is responsible for delivering messages between network hosts.
10. Which of the following is a transport layer protocol?
a) stream control transmission protocol
b) internet control message protocol
c) neighbor discovery protocol
d) dynamic host configuration protocol
Answer: a
Explanation: The Stream Control Transmission Protocol (SCTP) is a transport layer protocol used in networking system
where streams of data are to be continuously transmitted between two connected network nodes. Some of the other transport
layer protocols are RDP, RUDP, TCP, DCCP, UDP etc.
If WAN link is 2 Mbps and RTT between source and destination is 300 msec, what
would be the optimal TCP window size needed to fully utilize the line?
1.60,000 bits
2.75,000 bytes
3.75,000 bits
4.60,000 bytes
Solution-
Given-
• Bandwidth = 2 Mbps
• RTT = 300 msec
Optimal window size = Bandwidth × RTT (the bandwidth-delay product)
= 2 × 10^6 bits/sec × 300 × 10^-3 sec
= 600,000 bits
= 600,000 / 8 bytes
= 75,000 bytes
Hence, option 2 (75,000 bytes) is correct.
• RTT = 300 msec
Thank You!