Module 4 - Transport Layer
In the OSI model there are 7 layers, and each layer provides different services. Here we look in detail at one of these layers, the transport layer, which provides process-to-process communication in the OSI model.
The transport layer in computer networks is an end-to-end layer used to deliver messages from a process on one host to a process on another. In the OSI model, the transport layer is located between the session layer and the network layer.
Scope
This article discusses the transport layer and how it works in data communication. We will also see the services provided by the transport layer and the transport layer protocols.
The transport layer is the fourth layer from the top in the OSI model; it provides communication services to the application processes running on different hosts.
The transport layer provides services to the session layer and receives services from the network layer.
The services provided by the transport layer include error control as well as segmenting and reassembling data before and after it is sent on the network.
The transport layer also provides flow control and ensures that segmented data is delivered across the network in the right sequence.
The transport layer uses a port number to deliver segmented data to the correct process among the multiple processes running on a particular host. A port number is a 16-bit address used by the transport layer to identify a client or server program.
The transport layer provides a multiplexing service to improve transmission efficiency in data communication. At the receiver side, demultiplexing is required to deliver the data coming from different processes to the right one. The transport layer provides upward and downward multiplexing:
Upward multiplexing means that multiple transport layer connections utilize the same network connection. The transport layer transmits several flows bound for the same destination along the same network path.
Downward multiplexing means that one transport layer connection utilizes multiple network connections. This allows the transport layer to split one connection among several paths to improve throughput in the network.
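The role of port numbers in multiplexing and demultiplexing can be sketched with Python's standard socket module. In this hypothetical illustration, two UDP sockets on the same host stand in for two application processes; the addresses and messages are made up for the example:

```python
import socket

# Two UDP sockets bound to different ports on the same host stand in for two
# application processes; the OS demultiplexes each incoming datagram to the
# right socket using the destination port number in its header.
proc_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proc_a.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
proc_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proc_b.bind(("127.0.0.1", 0))
proc_a.settimeout(5)
proc_b.settimeout(5)

# The sender multiplexes two logical flows onto one interface, distinguishing
# them only by the destination port number.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for process A", proc_a.getsockname())
sender.sendto(b"for process B", proc_b.getsockname())

msg_a, _ = proc_a.recvfrom(1024)   # delivered only to the socket on port A
msg_b, _ = proc_b.recvfrom(1024)   # delivered only to the socket on port B
print(msg_a, msg_b)
```

Each datagram reaches only the socket whose bound port matches its destination port, which is exactly the demultiplexing decision described above.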
3. Flow Control
Flow control makes sure that data is transmitted at a rate that is acceptable to both the sender and the receiver by managing the data flow.
The transport layer provides an end-to-end flow control service between the sending and receiving hosts. It uses the sliding window protocol to provide flow control.
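The sliding window idea can be sketched with a small simulation. This is a simplified, hypothetical model (one segment is acknowledged per round and losses are ignored), not a full protocol implementation:

```python
# Hypothetical sketch of window-based flow control: the sender may have at
# most `window` unacknowledged segments outstanding at any time.
def sliding_window_send(segments, window):
    """Simulate sending `segments` under a window limit; return the event order."""
    events = []
    base = 0          # sequence number of the oldest unacknowledged segment
    next_seq = 0      # sequence number of the next segment to send
    while base < len(segments):
        # Send new segments while the window is not full.
        while next_seq < len(segments) and next_seq - base < window:
            events.append(("send", next_seq))
            next_seq += 1
        # Simulate the receiver acknowledging the oldest outstanding segment,
        # which slides the window forward by one.
        events.append(("ack", base))
        base += 1
    return events

events = sliding_window_send(["s0", "s1", "s2", "s3"], window=2)
print(events)
```

Tracing the events shows that at no point are more than two segments in flight: each acknowledgement opens the window for the next send.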
4. Data integrity
The transport layer helps ensure that messages arrive uncorrupted, without loss or duplication, using checksums, acknowledgements and retransmissions.
5. Congestion avoidance
In a network, if the load on the network is greater than the network's capacity, congestion may occur.
Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity.
The transport layer recognizes overloaded nodes, reduces the flow rate, and takes proper steps to overcome congestion.
Let us understand the transport layer with the help of an example: sending an email.
When we send an email, each layer in the OSI model communicates with the corresponding layer at the receiver.
When the mail reaches the transport layer on the sender side, it is broken down into small segments. These segments are handed to the network layer, and the transport layer also specifies the source and destination ports.
At the receiver side, the transport layer reassembles all the segments to recover the data and uses the port number to identify the application to which the data should be delivered.
The transport layer receives services from the network layer and provides services to the session layer.
At the sender's side: the transport layer collects the data (the message) from the application layer, performs segmentation to divide the message into segments, adds the source and destination port numbers in the header, and passes the segments to the network layer.
At the receiver's side: the transport layer collects the data from the network layer, reassembles the segments, and reads the port number in the header to deliver the message to the appropriate process via the session layer.
UDP
UDP is one of the simplest transport layer protocols; it provides non-sequenced data transmission.
UDP is a connectionless transport layer protocol.
It is preferred when speed and low overhead are more important than reliability.
It is an end-to-end transport-level protocol that adds transport-level addresses, checksum error control, and length information to the data received from the upper layer.
A user datagram is the packet constructed by the UDP protocol.
A user datagram has a fixed-size header of 8 bytes, divided into four fields:
Source port: a 16-bit field carrying the port number of the sending process.
Destination port: a 16-bit field carrying the port number of the receiving process.
Total length: a 16-bit field defining the total length of the user datagram, i.e. the header plus the data, in bytes.
Checksum: a 16-bit field carrying the optional error-detection data.
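Since all four header fields are 16-bit integers, the 8-byte header can be built and parsed with Python's struct module. This is a sketch with made-up port numbers; the checksum is left as 0, which in UDP over IPv4 means no checksum was computed:

```python
import struct

# The UDP header: four 16-bit big-endian fields, 8 bytes total ("!HHHH").
def build_udp_header(src_port, dst_port, payload, checksum=0):
    total_length = 8 + len(payload)        # header plus data, in bytes
    return struct.pack("!HHHH", src_port, dst_port, total_length, checksum)

header = build_udp_header(53000, 53, b"dns query")   # illustrative ports
src, dst, length, checksum = struct.unpack("!HHHH", header)
print(src, dst, length, checksum)   # 53000 53 17 0
```

Here the total length comes out as 17 because the 9-byte payload is added to the fixed 8-byte header.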
UDP Services
UDP delivers the basic functions required for the end-to-end transmission of data.
Disadvantages
It does not use any sequencing and does not identify the damaged packet when reporting an error.
UDP can detect that an error has happened, but it cannot identify which packet has been lost.
TCP
Connection-oriented protocol
Reliable protocol
Provides error and flow control
Let us now see how a TCP connection is established. Before getting into the details, let us look at some basics. TCP stands for Transmission Control Protocol, which indicates that it does something to control the transmission of data in a reliable way.
Communication between devices over the internet happens according to the TCP/IP suite (a stripped-down version of the OSI reference model). The application layer sits at the top of the TCP/IP stack; from there, networked applications such as web browsers on the client side establish a connection with the server. From the application layer, the information is passed to the transport layer, where our topic comes into the picture. The two important protocols of this layer are TCP and UDP (User Datagram Protocol), of which TCP is prevalent, since it provides reliability for the established connection. However, you can find an application of UDP in querying a DNS server to resolve a domain name to its IP address.
TCP provides reliable communication using Positive Acknowledgement with Re-transmission (PAR). The Protocol Data Unit (PDU) of the transport layer is called a segment. A device using PAR resends the data unit until it receives an acknowledgement. If the data unit received at the receiver's end is damaged (the receiver checks the data with the checksum functionality of the transport layer, which is used for error detection), the receiver discards the segment, so the sender has to resend any data unit for which a positive acknowledgement has not been received. From this mechanism you can see that three segments are exchanged between the sender (client) and the receiver (server) for a reliable TCP connection to be established. Let us delve into how this mechanism works:
Step 1 (SYN): the client wants to establish a connection with the server, so it sends a segment with the SYN (Synchronize Sequence Number) flag set, which informs the server that the client intends to start communication and with what sequence number its segments will start.
Step 2 (SYN + ACK): the server responds to the client's request with the SYN and ACK flag bits set. ACK signifies the response to the segment the server received, and SYN signifies with what sequence number the server's own segments will start.
Step 3 (ACK): finally, the client acknowledges the server's response, and both establish a reliable connection over which the actual data transfer will start.
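The handshake itself is carried out by the operating system's TCP implementation. In the sketch below (using Python's standard socket module, with an illustrative message), the client's connect() call returns only after the SYN, SYN+ACK and ACK exchange has completed:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS choose a free port
server.listen(1)
server.settimeout(5)
addr = server.getsockname()

def serve():
    conn, _ = server.accept()   # completes once the handshake has finished
    conn.sendall(b"hello over a reliable connection")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)            # triggers SYN -> SYN+ACK -> ACK in the kernel
data = client.recv(1024)
print(data)
client.close()
t.join()
server.close()
```

Applications never build the three segments themselves; they simply call connect() and accept(), and the kernel performs the exchange described in the steps above.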
SCTP
SCTP stands for Stream Control Transmission Protocol.
SCTP is a connection-oriented transport layer protocol.
It allows data to be transmitted between sender and receiver in full-duplex mode.
This protocol makes it simpler to establish connections over wireless networks and to manage multimedia data transmission.
Features of SCTP
Multi-streaming: several independent streams of messages can be carried within one association.
Multi-homing: an endpoint can be reachable through multiple IP addresses, improving fault tolerance.
Message-oriented, reliable delivery with TCP-like congestion control.
Conclusion:
The transport layer is the fourth layer of the TCP/IP suite; it provides process-to-process communication.
The transport layer provides process-to-process communication, data integrity, flow control, congestion avoidance, multiplexing and demultiplexing services.
UDP is a transport layer protocol that provides connectionless service.
TCP and SCTP are transport layer protocols that provide connection-oriented service.
Acknowledgment: in TCP, an acknowledgment segment is present; in UDP, there is no acknowledgment segment.
Retransmission: retransmission of lost packets is possible in TCP, but there is no retransmission of lost packets in the User Datagram Protocol (UDP).
QoS Concepts
Integrated Services(IntServ) –
1. An architecture for providing QoS guarantees in IP networks for individual
application sessions.
2. Relies on resource reservation, and routers need to maintain state information of
allocated resources and respond to new call setup requests.
3. Network decides whether to admit or deny a new call setup request.
IntServ QoS Components –
Resource reservation: call setup signaling, traffic, QoS declaration, per-element
admission control.
QoS-sensitive scheduling, e.g. the WFQ queue discipline.
QoS-sensitive routing algorithm (e.g. QOSPF).
QoS-sensitive packet discard strategy.
RSVP - Internet Signaling –
RSVP creates and maintains distributed reservation state. Reservations are initiated by the receiver, and the protocol scales for multicast. The reservation state is soft state: it needs to be refreshed periodically, otherwise the reservation times out.
The latest paths are discovered through "PATH" messages (forward direction) and used by RESV messages (reverse direction).
Call Admission –
A session must first declare its QoS requirement and characterize the traffic it will send through the network.
R-specification: defines the QoS being requested, i.e. what kind of bound we want on the delay, what kind of packet loss is acceptable, etc.
T-specification: defines the traffic characteristics, such as the burstiness of the traffic.
A signaling protocol is needed to carry the R-spec and T-spec to the routers where the reservation is required.
Routers admit calls based on their R-spec and T-spec and on the resources currently allocated at the routers to other calls.
Diff-Serv –
Differentiated Services is a reduced-state solution: routers do not keep a separate state for each individual flow. By maintaining state only for larger-granularity traffic aggregates rather than for end-to-end flows, it tries to achieve the best of both worlds.
Intended to address the following difficulties with IntServ and RSVP:
1. Flexible Service Models:
IntServ has only two classes; we want to provide more qualitative service classes and 'relative' service distinctions.
2. Simpler signaling:
Many applications and users may only want to specify a more qualitative notion
of service.
Assume that the network has committed a bandwidth of 3 Mbps for a host. The leaky bucket shapes the input traffic to make it conform to this commitment. Suppose the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is then silent for 5 s, and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths out the traffic by sending out data at a constant rate of 3 Mbps during the same 10 s.
Without the leaky bucket, the initial burst may have hurt the network by consuming more bandwidth than is set aside for this host. The leaky bucket may therefore also prevent congestion.
A simple leaky bucket algorithm can be implemented using a FIFO queue that holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits instead.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat until n is smaller than the size of the packet at the head of the queue (or the queue is empty):
   1. Pop the packet at the head of the queue, say P.
   2. Send the packet P into the network.
   3. Decrement the counter by the size of packet P.
3. At the next tick, reset the counter and go to step 1.
Note: In the example below, the head of the queue is at the rightmost position and the tail of the queue at the leftmost position.
Example: Let n = 1000 and the queue contain packets of sizes (tail to head): 450 | 400 | 200.
Since n > size of the packet at the head of the queue, i.e. 1000 > 200:
n = 1000 - 200 = 800, and the packet of size 200 is sent into the network.
Again n > size of the packet at the head of the queue, i.e. 800 > 400:
n = 800 - 400 = 400, and the packet of size 400 is sent into the network.
Now n < size of the packet at the head of the queue, i.e. 400 < 450, so the procedure stops for this tick.
On the next tick of the clock, n is initialized to 1000 again.
This procedure is repeated until all the packets have been sent into the network.
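The walkthrough above can be sketched in Python as follows. This is a minimal, hypothetical implementation of the variable-length algorithm; it assumes every packet fits within the per-tick budget n:

```python
from collections import deque

def leaky_bucket(packet_sizes, n):
    """Simulate the variable-length leaky bucket.

    packet_sizes are queued in arrival order; on each tick, whole packets
    are sent while the byte counter n still covers the packet at the head
    of the queue. Returns one list of sent packet sizes per tick.
    """
    queue = deque(packet_sizes)        # head of the queue = left end here
    ticks = []
    while queue:
        counter = n                    # step 1: initialize the counter to n
        sent = []
        # step 2: send while the counter covers the head-of-queue packet
        while queue and queue[0] <= counter:
            pkt = queue.popleft()      # step 2.1: pop the head packet P
            counter -= pkt             # step 2.3: decrement by size of P
            sent.append(pkt)           # step 2.2: P goes into the network
        ticks.append(sent)             # step 3: wait for the next tick
    return ticks

# The worked example: n = 1000 with packets of 200, 400 and 450 bytes queued
# (200 at the head): [200, 400] leave on the first tick, [450] on the next.
print(leaky_bucket([200, 400, 450], n=1000))   # [[200, 400], [450]]
```

The output matches the walkthrough: the 450-byte packet must wait for the second tick because only 400 bytes of budget remain on the first.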
The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the traffic is. To deal with bursty traffic we need a more flexible algorithm, so that data is not lost. One such approach is the token bucket algorithm.
Let us understand this algorithm step by step:
Step 1 − At regular intervals, tokens are added to the bucket.
Step 2 − The bucket has a maximum capacity; tokens arriving when the bucket is full are discarded.
Step 3 − When a packet is ready, a token is removed from the bucket and the packet is sent.
Step 4 − If there is no token in the bucket, the packet cannot be sent.
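The steps above can be sketched with a small simulation. This is a hypothetical model that processes time in discrete ticks; the rates, capacities and arrival pattern are illustrative:

```python
def token_bucket(packet_arrivals, rate, capacity):
    """Simulate a token bucket.

    packet_arrivals[t] = number of packets that become ready at tick t.
    Returns the number of packets actually sent at each tick.
    """
    tokens = 0
    waiting = 0
    sent_per_tick = []
    for arrivals in packet_arrivals:
        tokens = min(capacity, tokens + rate)  # steps 1-2: add tokens, cap at capacity
        waiting += arrivals
        sent = min(waiting, tokens)            # steps 3-4: one token per packet sent
        tokens -= sent
        waiting -= sent
        sent_per_tick.append(sent)
    return sent_per_tick

# A burst of 5 packets at tick 0, with a rate of 2 tokens/tick and capacity 4:
print(token_bucket([5, 0, 0], rate=2, capacity=4))   # [2, 2, 1]
```

Unlike the leaky bucket, idle ticks let tokens accumulate (up to the capacity), so a later burst can be sent faster than the long-term average rate.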
Example