Module 4: Transport Layer

The OSI model has 7 layers, each providing a different set of services. This module gives detailed information on one of those layers, the transport layer, which provides process-to-process communication in the OSI suite.

The transport layer is an end-to-end layer used to deliver messages to a host. In the OSI model, the transport layer is located between the session layer and the network layer.

Scope

This article discusses the transport layer and how it works in data communication. We will also see the services provided by the transport layer and the transport layer protocols.

What is the Transport Layer?

• The transport layer is the fourth layer from the top in the OSI model. It provides communication services to application processes running on different hosts.
• The transport layer provides services to the session layer and receives services from the network layer.
• The services provided by the transport layer include error control, as well as segmenting data before it is sent on the network and reassembling it afterward.
• The transport layer also provides flow control and ensures that segmented data is delivered across the network in the right sequence.

Note: The main duty of the transport layer is to provide process-to-process communication.

Services provided by Transport Layer

1. Process to Process Communication

The transport layer is responsible for delivering a message to the appropriate process.

The transport layer uses a port number to deliver segmented data to the correct process among the multiple processes running on a particular host. A port number is a 16-bit address used by the transport layer to identify any client-server program.

2. Multiplexing and Demultiplexing

The transport layer provides a multiplexing service to improve transmission efficiency in data communication. At the receiver side, demultiplexing is required to deliver the data coming from different processes to the correct one. The transport layer provides upward and downward multiplexing:

Upward multiplexing means multiple transport layer connections utilize the same network connection. The transport layer transmits several transmissions bound for the same destination along the same path in the network.
Downward multiplexing means one transport layer connection utilizes multiple network connections. This multiplexing allows the transport layer to split a connection among several paths to improve throughput in the network.

3. Flow Control

• Flow control makes sure that data is transmitted at a rate that is acceptable for both sender and receiver by managing data flow.
• The transport layer provides a flow control service between the adjacent layers of the TCP/IP model. The transport layer uses the sliding window protocol to provide flow control.

4. Data integrity

Transport Layer provides data integrity by:

• Detecting and discarding corrupted packets.
• Tracking lost and discarded packets and re-transmitting them.
• Recognizing duplicate packets and discarding them.
• Buffering out-of-order packets until the missing packets arrive.

5. Congestion avoidance

• In a network, if the load on the network is greater than the network's load capacity, congestion may occur.
• Congestion control refers to the mechanisms and techniques that control congestion and keep the load below capacity.
• The transport layer recognizes overloaded nodes and reduced flow rates and takes proper steps to overcome them.

Example of Transport Layer

Let us understand the transport layer with the help of an example: sending an email.

• When we send an email, each layer in the OSI model communicates with the corresponding layer of the receiver.
• When the mail reaches the transport layer on the sender's side, the email is broken down into small segments. Those segments are then sent to the network layer, and the transport layer also specifies the source and destination ports.
• At the receiver's side, the transport layer reassembles all the segments to recover the data and uses the port number to identify the application to deliver the data to.

Working of Transport Layer

The transport layer receives services from the network layer and then gives services to the session layer.

At the sender's side: the transport layer collects the data (the message) from the application layer, performs segmentation to divide the message into segments, adds the source and destination port numbers to the header, and passes the segments to the network layer.

At the receiver's side: the transport layer collects data from the network layer, reassembles the segmented data, identifies the port number by reading the header, and delivers the message to the appropriate port in the session layer.

Refer to the image below to see the working of Transport Layer.

Transport Layer Protocols

• UDP (User Datagram Protocol)
• TCP (Transmission Control Protocol)
• SCTP (Stream Control Transmission Protocol)

UDP

• Connectionless protocol
• Unreliable protocol
• UDP stands for User Datagram Protocol.
• UDP is one of the simplest transport layer protocols; it provides non-sequenced data transmission functionality.
• UDP is considered a connectionless transport layer protocol.
• This type of protocol is preferred when speed and size are more important than reliability and security.
• It is an end-to-end transport-level protocol that adds transport-level addresses, checksum error control, and length information to the data received from the upper layer.
• A user datagram is the packet constructed by the UDP protocol.

Format of User Datagram

Refer to the image below to see the header of UDP packet consisting of four fields.

A user datagram has a fixed-size header of 8 bytes, which is divided into four parts:

Source port address: a 16-bit field that defines the source port number.

Destination port address: a 16-bit field that defines the destination port number.

Total length: a 16-bit field that defines the total length of the user datagram, i.e. the sum of the header and data lengths in bytes.

Checksum: a 16-bit field that carries the optional error-detection data.

UDP Services

• Process to Process Communication
• Connectionless Service
• Fast delivery of messages
• Checksum

Disadvantages

• UDP delivers only the basic functions required for end-to-end transmission of data.
• It does not use any sequencing and does not identify the damaged packet when reporting an error.
• UDP can identify that an error has happened, but it does not identify which packet has been lost.
TCP
• Connection-oriented protocol
• Reliable protocol
• Provides error and flow control

• TCP stands for Transmission Control Protocol.
• TCP is a connection-oriented transport layer protocol.
• TCP explicitly defines connection establishment, data transfer, and connection teardown phases to provide a connection-oriented service for data transmission.
• TCP is the most commonly used transport layer protocol.
Features Of TCP protocol

• Stream data transfer
• Reliability
• Flow Control
• Error Control
• Multiplexing
• Logical Connections
• Full Duplex

TCP Segment Format

Refer to the image below to see the header of TCP Segment.


• Source port address is a 16-bit field that defines the port number of the application program that is sending the segment.
• Destination port address is a 16-bit field that defines the port number of the application program that is receiving the segment.
• Sequence number is a 32-bit field that defines the number assigned to the first byte of data contained in the segment.
• Acknowledgement number is a 32-bit field that defines the number of the next byte the receiver expects to receive from the sender.
• Header Length (HLEN) is a 4-bit field that specifies the number of 4-byte words in the TCP header. The TCP header length can be between 20 and 60 bytes.
• Reserved is a 6-bit field reserved for future use.
• Control bits are 6 different independent control bits, or flags, in this field.
• The six flags in the control field are:
1. URG: Urgent pointer
2. ACK: Acknowledgement number
3. PSH: Push request
4. RST: Reset connection
5. SYN: Sequence number synchronization
6. FIN: Connection termination
• Window Size is a 16-bit field that defines the size of the sending TCP's window in bytes.
• Checksum is a 16-bit field that contains the checksum, used for error detection.
• Urgent pointer is a 16-bit field that is valid only when the URG flag is set, i.e. when there is urgent data in the data segment.
• Options and padding can be up to 40 bytes of optional information in the TCP header.

TCP 3-Way Handshake Process:

This can also be seen as the way a TCP connection is established. Before getting into the details, let us look at some basics. TCP stands for Transmission Control Protocol, which indicates that it does something to control the transmission of data in a reliable way.

Communication between devices over the internet happens according to the TCP/IP suite (a stripped-down version of the OSI reference model). The application layer sits at the top of the TCP/IP stack; from there, network applications like web browsers on the client side establish a connection with the server. From the application layer, the information is transferred to the transport layer, where our topic comes into the picture. The two important protocols of this layer are TCP and UDP (User Datagram Protocol), of which TCP is prevalent, since it provides reliability for the established connection. However, you can find an application of UDP in querying a DNS server to get the IP address for the domain name used for a website.

TCP provides reliable communication with something called Positive Acknowledgement with Re-transmission (PAR). The Protocol Data Unit (PDU) of the transport layer is called a segment. A device using PAR resends the data unit until it receives an acknowledgement. If the data unit received at the receiver's end is damaged (the receiver checks the data with the checksum functionality of the transport layer, which is used for error detection), the receiver discards the segment, so the sender has to resend any data unit for which a positive acknowledgement is not received. From this mechanism, three segments are exchanged between the sender (client) and receiver (server) for a reliable TCP connection to be established. Let us delve into how this mechanism works:

• Step 1 (SYN): The client wants to establish a connection with the server, so it sends a segment with the SYN (Synchronize Sequence Number) flag set, which informs the server that the client intends to start communication, and with what sequence number its segments will start.
• Step 2 (SYN + ACK): The server responds to the client's request with the SYN and ACK bits set. The ACK (Acknowledgement) acknowledges the segment the server received, and the SYN indicates with what sequence number the server will start its own segments.
• Step 3 (ACK): Finally, the client acknowledges the server's response, and both sides establish a reliable connection over which the actual data transfer can begin.
SCTP
• SCTP stands for Stream Control Transmission Protocol.
• SCTP is one of the connection-oriented transport layer protocols.
• It allows transmitting data between sender and receiver in full-duplex mode.
• This protocol makes it simpler to build connections over wireless networks and to control multimedia data transmission.

Features of SCTP

• Unicast with Multiple properties
• Reliable Transmission
• Message oriented
• Multi-homing

Conclusion:

• The transport layer is the fourth layer of the TCP/IP suite; it provides process-to-process communication.
• The transport layer provides process-to-process communication, data integrity, flow control, congestion avoidance, and multiplexing and demultiplexing services.
• UDP is a transport layer protocol that provides a connectionless service.
• TCP and SCTP are transport layer protocols that provide a connection-oriented service.

Differences between TCP and UDP

Type of Service: TCP is a connection-oriented protocol. Connection-orientation means that the communicating devices should establish a connection before transmitting data and should close the connection after transmitting the data. UDP is a datagram-oriented protocol: there is no overhead for opening a connection, maintaining a connection, or terminating a connection. UDP is efficient for broadcast and multicast types of network transmission.

Reliability: TCP is reliable, as it guarantees the delivery of data to the destination. The delivery of data to the destination cannot be guaranteed in UDP.

Error checking: TCP provides extensive error-checking mechanisms, because it provides flow control and acknowledgment of data. UDP has only a basic error-checking mechanism using checksums.

Acknowledgment: An acknowledgment segment is present in TCP. UDP has no acknowledgment segment.

Sequence: Sequencing of data is a feature of TCP, meaning that packets arrive in order at the receiver. There is no sequencing of data in UDP; if ordering is required, it has to be managed by the application layer.

Speed: TCP is comparatively slower than UDP. UDP is faster, simpler, and more efficient than TCP.

Retransmission: Retransmission of lost packets is possible in TCP, but there is no retransmission of lost packets in the User Datagram Protocol (UDP).

Header Length: TCP has a variable-length header of 20-60 bytes. UDP has a fixed-length header of 8 bytes.

Weight: TCP is heavy-weight. UDP is lightweight.

Handshaking Techniques: TCP uses handshakes such as SYN, ACK, and SYN-ACK. UDP is a connectionless protocol, so there is no handshake.

Broadcasting: TCP does not support broadcasting. UDP supports broadcasting.

Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet. UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.

Stream Type: A TCP connection is a byte stream. A UDP connection is a message stream.

Overhead: TCP's overhead is low, but higher than UDP's. UDP's overhead is very low.


A short example to understand the differences clearly:
Suppose there are two houses, H1 and H2, and a letter has to be sent from H1 to H2, but there is a river between the two houses. How can we send the letter?
Solution 1: Build a bridge over the river and then deliver it.
Solution 2: Get it delivered by a pigeon.
Consider the first solution as TCP: a connection has to be made (the bridge) to get the data (the letter) delivered. The data is reliable because it will directly reach the other end without loss or error.
The second solution is UDP: no connection is required for sending the data, and the process is fast compared to TCP, where we need to set up a connection (the bridge). But the data is not reliable: we don't know whether the pigeon will go in the right direction, drop the letter on the way, or run into some issue mid-travel.
Quality of Service:
Quality of Service (QoS) determines a network's capability to support predictable service over various technologies, including Frame Relay, Asynchronous Transfer Mode (ATM), Ethernet, SONET, and IP-routed networks. A network can use any or all of these frameworks.
QoS also ensures that supporting priority for one or more flows does not make other flows fail. A flow can be a combination of source and destination addresses, source and destination socket numbers, a session identifier, or packets from a specific application or an incoming interface.
QoS is primarily used to control resources like bandwidth, equipment, wide-area facilities, etc. It can make more efficient use of network resources, provide tailored services, allow coexistence of mission-critical applications, etc.

QoS Concepts

The QoS concepts are explained below:

Congestion Management
The bursty nature of data traffic sometimes pushes traffic above a connection's speed. QoS allows a router to put packets into different queues. Service-specific queues are serviced by priority, rather than buffering all traffic in an individual queue and letting the first packet in be the first packet out.
Queue Management
The queues in a buffer can fill and overflow. If a queue is full, a packet would be dropped, and the router cannot prevent it from being dropped even if it is a high-priority packet. This is referred to as tail drop.
Link Efficiency
Low-speed links are bottlenecks for small packets. The serialization delay caused by large packets forces small packets to wait longer. Serialization delay is the time needed to put a packet on the link.
Elimination of overhead bits
Efficiency can also be increased by removing excess overhead bits.
Traffic shaping and policing
Shaping can prevent buffer overflow by limiting the full bandwidth potential of an application's packets. In many network topologies, a high-bandwidth link connected to a low-bandwidth link at a remote site can overflow the low-bandwidth link. Therefore, shaping is used to bring the traffic flow from the high-bandwidth link closer to the rate of the low-bandwidth link, to avoid overflowing it. Policing discards traffic that exceeds the configured rate, whereas shaping buffers it.

Quality-of-Service (QoS) refers to traffic control mechanisms that seek either to differentiate performance based on application or network-operator requirements, or to provide predictable or guaranteed performance to applications, sessions, or traffic aggregates. QoS is basically expressed in terms of packet delay and losses of various kinds.
Need for QoS –
• Video and audio conferencing require bounded delay and loss rate.
• Video and audio streaming require a bounded packet loss rate; they may not be so sensitive to delay.
• Time-critical applications (real-time control) consider bounded delay to be an important factor.
• Valuable applications should be provided better services than less valuable applications.
QoS Specification –
QoS requirements can be specified as:
1. Delay
2. Delay Variation (Jitter)
3. Throughput
4. Error Rate
There are two types of QoS solutions:
1. Stateless Solutions –
Routers maintain no fine-grained state about traffic. One positive factor of this is that it is scalable and robust, but the services are weak, as there is no guarantee about the kind of delay or performance a particular application will encounter.
2. Stateful Solutions –
Routers maintain per-flow state, as flows are very important in providing Quality of Service. This enables powerful services such as guaranteed services, high resource utilization, and protection, but it is much less scalable and robust.

Integrated Services (IntServ) –
1. An architecture for providing QoS guarantees in IP networks for individual application sessions.
2. Relies on resource reservation; routers need to maintain state information about allocated resources and respond to new call setup requests.
3. The network decides whether to admit or deny a new call setup request.
IntServ QoS Components –
• Resource reservation: call setup signaling, traffic and QoS declaration, per-element admission control.
• QoS-sensitive scheduling, e.g. the WFQ queue discipline.
• QoS-sensitive routing algorithm (QSPF).
• QoS-sensitive packet discard strategy.
RSVP – Internet Signaling –
RSVP creates and maintains distributed reservation state. It is initiated by the receiver and scales for multicast. The reservation state needs to be refreshed, otherwise it times out, as it is soft state. Paths are discovered through "PATH" messages (forward direction) and used by RESV messages (reverse direction).
Call Admission –
• A session must first declare its QoS requirement and characterize the traffic it will send through the network.
• R-specification: defines the QoS being requested, i.e. what kind of bound we want on the delay, what kind of packet loss is acceptable, etc.
• T-specification: defines the traffic characteristics, like burstiness in the traffic.
• A signaling protocol is needed to carry the R-spec and T-spec to the routers where reservation is required.
• Routers will admit calls based on their R-spec and T-spec and on the resources currently allocated at the routers to other calls.
Diff-Serv –

Differentiated Services is a reduced-state solution: it does not maintain a separate state for each flow, only for larger-granularity aggregates rather than end-to-end flows, and thereby tries to achieve the best of both worlds.
It is intended to address the following difficulties with IntServ and RSVP:
1. Flexible service models:
IntServ has only two classes; we want to provide more qualitative service classes and 'relative' service distinctions.
2. Simpler signaling:
Many applications and users may only want to specify a more qualitative notion of service.

Streaming Live Multimedia –

• Examples: Internet radio talk shows, live sporting events.
• Streaming: with a playback buffer, playback can lag tens of seconds behind the live source and still meet its timing constraints.
• Interactivity: fast forward is impossible, but rewind and pause are possible.
QoS improving techniques:
At the network layer, before the network can make quality of service guarantees, it must know what traffic is being guaranteed. One of the main causes of congestion is that traffic is often bursty.
To understand this, we first have to know a little about traffic shaping. Traffic shaping is a mechanism to control the amount and the rate of traffic sent to the network; this approach to congestion management is called traffic shaping. Traffic shaping helps to regulate the rate of data transmission and reduces congestion.
There are 2 types of traffic shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Suppose we have a bucket into which we are pouring water at random points in time, but we have to get water out at a fixed rate. To achieve this, we make a hole at the bottom of the bucket. This ensures that the water coming out flows at some fixed rate, and if the bucket gets full, we stop pouring water into it.
The input rate can vary, but the output rate remains constant. Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic: bursty chunks are stored in the bucket and sent out at an average rate.

In the above figure, we assume that the network has committed a bandwidth of 3 Mbps for a
host. The use of the leaky bucket shapes the input traffic to make it conform to this
commitment. In the above figure, the host sends a burst of data at a rate of 12 Mbps for 2s,
for a total of 24 Mbits of data. The host is silent for 5 s and then sends data at a rate of 2
Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s.
The leaky bucket smooths out the traffic by sending out data at a rate of 3 Mbps during the
same 10 s.
Without the leaky bucket, the beginning burst may have hurt the network by consuming more
bandwidth than is set aside for this host. We can also see that the leaky bucket may prevent
congestion.
A simple leaky bucket algorithm can be implemented using a FIFO queue that holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat as long as n is not smaller than the packet size at the head of the queue:
   a. Pop a packet, say P, from the head of the queue.
   b. Send the packet P into the network.
   c. Decrement the counter by the size of packet P.
3. Reset the counter and go to step 1.

Example: Let n = 1000 and suppose the queue contains packets of sizes 200, 400, and 450, with the 200-byte packet at the head.

Since n > size of the packet at the head of the queue, i.e. 1000 > 200:
n = 1000 - 200 = 800, and the packet of size 200 is sent into the network.

Again n > size of the packet at the head of the queue, i.e. 800 > 400:
n = 800 - 400 = 400, and the packet of size 400 is sent into the network.

Now n < size of the packet at the head of the queue, i.e. 400 < 450, so the procedure stops.
n is initialised to 1000 again on the next tick of the clock.
This procedure is repeated until all the packets are sent into the network.
Below is a C++ program that simulates a leaky bucket buffer with fixed-size packet arrivals:

// C++ program to simulate a leaky bucket buffer

#include <cstdio>

int main()
{
    int no_of_queries, storage, output_pkt_size;
    int input_pkt_size, bucket_size, size_left;

    // initial packets in the bucket
    storage = 0;

    // total no. of times the bucket content is checked
    no_of_queries = 4;

    // total no. of packets that can be accommodated in the bucket
    bucket_size = 10;

    // no. of packets that enter the bucket at a time
    input_pkt_size = 4;

    // no. of packets that exit the bucket at a time
    output_pkt_size = 1;

    for (int i = 0; i < no_of_queries; i++) {
        // space left in the bucket
        size_left = bucket_size - storage;
        if (input_pkt_size <= size_left) {
            // update storage
            storage += input_pkt_size;
        }
        else {
            printf("Packet loss = %d\n", input_pkt_size);
        }
        printf("Buffer size= %d out of bucket size= %d\n",
               storage, bucket_size);
        storage -= output_pkt_size;
    }
    return 0;
}

Output

Buffer size= 4 out of bucket size= 10
Buffer size= 7 out of bucket size= 10
Buffer size= 10 out of bucket size= 10
Packet loss = 4
Buffer size= 9 out of bucket size= 10
Some advantages of the token bucket over the leaky bucket:
• If the bucket is full in the token bucket, tokens are discarded, not packets, while in the leaky bucket, packets are discarded.
• The token bucket can send large bursts at a faster rate, while the leaky bucket always sends packets at a constant rate.
Token Bucket algorithm
The token bucket algorithm is one of the techniques for congestion control. When too many packets are present in the network, they cause packet delay and packet loss, which degrades the performance of the system. This situation is called congestion.
The network layer and transport layer share the responsibility for handling congestion. One of the most effective ways to control congestion is to reduce the load that the transport layer is placing on the network; to achieve this, the network and transport layers have to work together.
The Token Bucket Algorithm is diagrammatically represented as follows −

With too much traffic, performance drops sharply.

Token Bucket Algorithm

The leaky bucket algorithm enforces an output pattern at the average rate, no matter how bursty the traffic is. So, to deal with bursty traffic, we need a more flexible algorithm so that data is not lost. One such approach is the token bucket algorithm.
Let us understand this algorithm step-wise, as given below:
• Step 1 − At regular intervals, tokens are thrown into the bucket.
• Step 2 − The bucket has a maximum capacity; tokens arriving at a full bucket are discarded.
• Step 3 − If a packet is ready, a token is removed from the bucket, and the packet is sent.
• Step 4 − If there is no token in the bucket, the packet cannot be sent.

Example

Let us understand the Token Bucket Algorithm with an example −


In figure (a) the bucket holds two tokens, and three packets are waiting to be sent out of the interface.
In figure (b) two packets have been sent out by consuming two tokens, and 1 packet is still left.
Compared to the leaky bucket, the token bucket algorithm is less restrictive, meaning it allows more traffic. The limit on burstiness is set by the number of tokens available in the bucket at a particular instant of time.
The implementation of the token bucket algorithm is easy − a variable is used to count the tokens. The counter is incremented every t seconds and decremented whenever a packet is sent. When the counter reaches zero, no further packets are sent.
This is shown in the diagram below.
Differences Between the Leaky Bucket and Token Bucket Methods of Traffic Shaping

1. The token bucket is token-dependent; the leaky bucket is token-independent.
2. If the bucket is full, the token bucket discards tokens but not packets; the leaky bucket discards packets or data.
3. The token bucket allows large bursts to be sent faster by speeding up the output; the leaky bucket sends packets at an average rate.
4. The token bucket allows saving up tokens (permissions) to send large bursts; the leaky bucket does not allow saving, as a constant rate is maintained.
5. In the token bucket, packets can only be transmitted when there are enough tokens; in the leaky bucket, packets are transmitted continuously.
6. The token bucket saves tokens; the leaky bucket does not.
