CN Unit4


Department of Applied Computational Science and Engineering

Course Code: KCS 603


Course Name: Computer Networks
Topics to be Covered

⮚ UNIT-4 (Transport Layer)


⮚ Process-to-process delivery
⮚ Transport layer protocols (UDP and TCP)
⮚ Multiplexing
⮚ Connection management
⮚ Flow control and retransmission
⮚ Window management
⮚ TCP Congestion control
⮚ Quality of service

Transport Layer

• The transport layer is the 4th layer from the top.


• Its job is to provide communication services directly to the application processes running on different hosts.
• The transport layer protocols are implemented in the end systems but not in the network routers.
• A computer network provides more than one transport protocol to the network applications. For example, TCP and
UDP are two transport layer protocols that provide different sets of services to the application layer.
• All transport layer protocols provide a multiplexing/demultiplexing service. A transport protocol may also provide other services
such as reliable data transfer, bandwidth guarantees, and delay guarantees.
• Each of the applications in the application layer has the ability to send a message by using TCP or UDP. The
application communicates by using either of these two protocols. Both TCP and UDP will then communicate
with the internet protocol in the internet layer. The applications can read and write to the transport layer.
Therefore, we can say that communication is a two-way process.
Services provided by the Transport Layer
• The services provided by the transport layer are similar to those of the data link layer.
• The data link layer provides the services within a single network while the transport layer provides the
services across an internetwork made up of many networks.
• The data link layer controls the physical layer while the transport layer controls all the lower layers.

The services provided by the transport layer protocols can be divided into five
categories:
• End-to-end delivery
• Addressing
• Reliable delivery
• Flow control
• Multiplexing
End-to-end delivery:

• The transport layer transmits the entire message to the destination. Therefore, it ensures
the end-to-end delivery of an entire message from a source to the destination.
• Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged
packets.

Reliable delivery has four aspects:


• Error control
• Sequence control
• Loss control
• Duplication control
Error Control
• The primary role of reliability is Error Control. In reality, no transmission will be 100 percent error-free.
Therefore, transport layer protocols are designed to provide transmission that is as error-free as possible.
• The data link layer also provides the error handling mechanism, but it ensures only node-to-node error-free
delivery. However, node-to-node reliability does not ensure end-to-end reliability.
• The data link layer checks for errors on each individual link. If an error is introduced inside one of the routers,
then this error will not be caught by the data link layer. It only detects those errors that have been introduced
between the beginning and end of a single link.
• Therefore, the transport layer performs the checking for the errors end-to-end to ensure that the packet has
arrived correctly.
Sequence Control
• The second reliability aspect is sequence control implemented at the transport layer.
• On the sending end, the transport layer is responsible for ensuring that the data units received from the upper layers are
divided into segments that can be used by the lower layers.
• On the receiving end, it ensures that the various segments of a transmission can be correctly reassembled.

Loss Control
Loss Control is a third aspect of reliability. The transport layer ensures that all the fragments of a transmission arrive at the
destination, not some of them. On the sending end, all the fragments of transmission are given sequence numbers by a
transport layer. These sequence numbers allow the receiver’s transport layer to identify the missing segment.

Duplication Control
Duplication Control is the fourth aspect of reliability. The transport layer guarantees that no duplicate data arrive at the
destination. Just as sequence numbers allow the receiver to identify lost packets, they also allow it to identify and discard
duplicate segments.
Flow Control
• Flow control is used to prevent the sender from overwhelming the receiver.
• If the receiver is overloaded with too much data, then the receiver discards the packets and asks for the retransmission of
packets.
• This increases network congestion and thus, reduces the system performance. The transport layer is responsible for flow
control. It uses the sliding window protocol that makes the data transmission more efficient as well as controls the flow of
data so that the receiver does not become overwhelmed. Sliding window protocol is byte-oriented rather than frame
oriented.
Multiplexing
The transport layer uses multiplexing to improve transmission efficiency.
Multiplexing can occur in two ways:
• Upward multiplexing: Upward multiplexing means multiple transport layer connections use the same network connection.
To make it more cost-effective, the transport layer sends several transmissions bound for the same destination along the same
path; this is achieved through upward multiplexing.
Downward multiplexing:
• It means one transport layer connection uses multiple network connections.
• It allows the transport layer to split a connection among several paths to improve the throughput.
• This type of multiplexing is used when networks have a low or slow capacity.

Addressing
• According to the layered model, the transport layer interacts with the functions of the session layer. Many protocols
combine session, presentation, and application layer protocols into a single layer known as the application layer.
• In these cases, delivery to the session layer means delivery to the application layer. Data generated by an application on
one machine must be transmitted to the correct application on another machine. In this case, addressing is provided by the
transport layer.
• The transport layer provides the user address which is specified as a station or port. The port variable represents a
particular TS-user of a specified station known as a Transport Service access point (TSAP). Each station has only one
transport entity.
• The transport layer protocols need to know which upper-layer protocols are communicating.
Transport Layer protocols
• The transport layer is represented by two protocols: TCP and UDP.
• The IP protocol in the network layer delivers a datagram from a source host to the destination host.
• Modern operating systems support multiuser and multiprocessing environments; an executing program is called a
process. When a host sends a message to another host, it really means that a source process is sending a message to a destination
process. The transport layer protocols deliver messages to individual ports, known as protocol ports.
• An IP protocol is a host-to-host protocol used to deliver a packet from the source host to the destination host while
transport layer protocols are port-to-port protocols that work on top of the IP protocols to deliver the packet from the
originating port to the IP services, and from IP services to the destination port.
• Each port is defined by a positive integer address, and it is 16 bits.
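As an informal illustration of this port-to-port delivery (not part of the original notes), the short Python sketch below lets one local process send a datagram to another through a port; the port number 50321 and the message are arbitrary placeholders.

import socket

PORT = 50321  # hypothetical port identifying the receiving process

# Receiving process: bind to a port so the transport layer can demultiplex to it.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", PORT))

# Sending process: the OS picks an ephemeral source port automatically.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"port-to-port delivery", ("127.0.0.1", PORT))

data, (src_ip, src_port) = recv_sock.recvfrom(1024)
print(data, src_ip, src_port)   # the message plus the sender's IP address and port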
UDP
• UDP stands for User Datagram Protocol.
• UDP is a simple protocol and it provides non-sequenced transport functionality.
• UDP is a connectionless protocol.
• This type of protocol is used when reliability and security are less important than speed and size.
• UDP is an end-to-end transport-level protocol that adds transport-level addresses, checksum error control, and length
information to the data from the upper layer.
• The packet produced by the UDP protocol is known as a user datagram.
User Datagram Format

The user datagram has an 8-byte header, which is shown below:

Where,
• Source port address: It defines the address of the application process that has delivered the message. The source port
address is a 16-bit field.
• Destination port address: It defines the address of the application process that will receive the message. The destination
port address is a 16-bit address.
• Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
• Checksum: The checksum is a 16-bit field that is used in error detection.
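For illustration only, the four 16-bit fields of this 8-byte header can be packed with Python's struct module; the port numbers, payload, and zero checksum below are hypothetical values (a checksum of 0 simply means "not computed" for UDP over IPv4).

import struct

# Hypothetical field values for illustration only.
src_port = 5000            # 16-bit source port
dst_port = 53              # 16-bit destination port (e.g. DNS)
payload = b"hello"
length = 8 + len(payload)  # total length = 8-byte header + data
checksum = 0               # 0 means "checksum not computed" in IPv4 UDP

# "!HHHH" = network byte order, four unsigned 16-bit fields.
udp_header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(len(udp_header))     # 8 bytes, matching the UDP header size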
Disadvantages of UDP protocol

• UDP provides basic functions needed for the end-to-end delivery of a transmission.
• It does not provide any sequencing or reordering functions and does not specify the damaged packet when reporting an
error.

• UDP can discover that an error has occurred, but it does not specify which packet has been lost as it does not contain an ID
or sequencing number of a particular data segment.
PROTOCOLS SUPPORTED BY TCP AND UDP:
Protocols supported by UDP are:
• Dynamic Host Configuration Protocol (DHCP)
• Domain Name System (DNS)
• Trivial File Transfer Protocol (TFTP)
• Voice over Internet Protocol (VoIP)

Protocols supported by TCP are:


• File Transfer Protocol (FTP)
• Hyper Text Transfer Protocol (HTTP)
• Secure Shell (SSH)
TCP
• TCP stands for Transmission Control Protocol.
• It provides full transport layer services to applications.
• It is a connection-oriented protocol, which means that a connection is established between both ends before transmission.
To create the connection, TCP generates a virtual circuit between the sender and receiver for the duration of
a transmission.

Well-known ports (0-1023)


• Well-known ports are port numbers assigned to common services such as web (HTTP/HTTPS), email, and Telnet.
• RFC 6335 outlines the registration procedures for these services and port numbers.
• The table below shows some well-known port numbers, the transport layer protocol that they use, and
their applications. These port numbers are assigned as listed in RFC 6335.
Port number    Protocol    Application
20             TCP         FTP data
21             TCP         FTP control
22             TCP         SSH
23             TCP         Telnet
53             UDP         DNS
67             UDP         DHCP server
Features Of TCP protocol
• Stream data transfer: The TCP protocol transfers data in the form of a contiguous
stream of bytes.
TCP groups the bytes into TCP segments and then passes them to the IP layer
for transmission to the destination. TCP itself segments the data and forwards it to
IP.
• Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgment from the receiving TCP. If ACK is not received within a
timeout interval, then the data is retransmitted to the destination.
The receiving TCP uses the sequence number to reassemble the segments if they
arrive out of order or to eliminate the duplicate segments.
• Flow Control: The receiving TCP sends an acknowledgment back to the sender
indicating the number of bytes it can receive without overflowing its internal buffer.
The number of bytes is sent in the ACK in the form of the highest sequence number that it
can receive without any problem. This mechanism is also referred to as the window
mechanism.
Features Of TCP protocol
• Multiplexing: Multiplexing is the process of accepting data from different applications and
forwarding it to the corresponding applications on different computers. At the receiving end, the data
is delivered to the correct application; this reverse process is known as demultiplexing. TCP
delivers a segment to the correct application by using logical channels known as ports.
• Logical Connections: The combination of sockets, sequence numbers, and window sizes, is
called a logical connection. Each connection is identified by the pair of sockets used by
sending and receiving processes.
• Full Duplex: TCP provides Full Duplex service, i.e., the data flow in both directions at the
same time. To achieve Full Duplex service, each TCP should have sending and receiving
buffers so that the segments can flow in both directions. TCP is a connection-oriented
protocol. Suppose process A wants to send and receive the data from process B. The
following steps occur:
• Establish a connection between two TCPs.
• Data is exchanged in both directions.
• The Connection is terminated.
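A minimal Python sketch of these three steps, assuming the standard socket API, is shown below; the port number and message are placeholders. The three-way handshake happens inside connect() and accept(), and the termination exchange happens when the sockets are closed.

import socket, threading

PORT = 50007  # arbitrary port chosen for this sketch

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", PORT))
srv.listen(1)                              # passive open: ready to accept a connection

def handle():
    conn, _ = srv.accept()                 # three-way handshake completes here
    with conn:
        data = conn.recv(1024)             # data transfer phase
        conn.sendall(data.upper())         # full duplex: reply on the same connection

threading.Thread(target=handle, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))       # active open: SYN, SYN+ACK, ACK
    cli.sendall(b"process A to process B")
    print(cli.recv(1024))                  # b'PROCESS A TO PROCESS B'
srv.close()                                # termination: FIN exchange as the sockets close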
TCP Segment Format

Where,
• Source port address: It is used to define the address of the application program in a source computer. It is a 16-bit field.
• Destination port address: It is used to define the address of the application program in a destination computer. It is a 16-bit
field.
• Sequence number: A stream of data is divided into two or more TCP segments. The 32-bit sequence number field
represents the position of the segment's data in the original data stream.
• Acknowledgement number: This 32-bit field acknowledges data received from the other communicating
device. If the ACK flag is set to 1, it specifies the sequence number that the receiver is expecting to receive next.
• Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The minimum size of the header is 5
words, and the maximum size of the header is 15 words. Therefore, the maximum size of the TCP header is 60 bytes, and
the minimum size of the TCP header is 20 bytes.
• Reserved: It is a six-bit field that is reserved for future use.
• Control bits: Each bit of a control field functions individually and independently. A control bit defines the use of a segment
or serves as a validity check for other fields.
As defined in Request For Comments (RFC) 793, TCP has the following features:
• Connection establishment and termination.
• Multiplexing using ports.
• Flow control using windowing.
• Error recovery.
• Ordered data transfer and data segmentation.
Differences b/w TCP & UDP
Basis for Comparison   TCP                                              UDP
Definition             TCP establishes a virtual circuit before         UDP transmits the data directly to the
                       transmitting the data.                           destination computer without verifying
                                                                        whether the receiver is ready to receive
                                                                        it or not.
Connection Type        It is a connection-oriented protocol.            It is a connectionless protocol.
Speed                  Slow                                             High
Reliability            It is a reliable protocol.                       It is an unreliable protocol.
Header size            20 bytes (more overhead)                         8 bytes (less overhead)
Transmission           Slow transmission                                Fast as compared to TCP
Acknowledgement        It waits for the acknowledgement of data and     It neither takes an acknowledgement nor
                       has the ability to resend the lost packets.      retransmits damaged frames.
A TCP Connection
• TCP is connection-oriented. A connection-oriented transport protocol establishes a virtual path between the
source and destination.
• All the segments belonging to a message are then sent over this virtual path.
• Using a single virtual pathway for the entire message facilitates the acknowledgment process as well as the
retransmission of damaged or lost frames.
• The point is that a TCP connection is virtual, not physical.
• TCP operates at a higher level.
• TCP uses the services of IP to deliver individual segments to the receiver, but it controls the connection
itself.
• If a segment is lost or corrupted, it is retransmitted. Unlike TCP, IP is unaware of this retransmission.
• If a segment arrives out of order, TCP holds it until the missing segments arrive; IP is unaware of this
reordering.
• In TCP, connection-oriented transmission requires three phases: connection establishment, data transfer, and
connection termination.
1. Connection Establishment
• TCP transmits data in full-duplex mode.
• This implies that each party must initialize communication and get approval from the other party before any data are
transferred.
Three-Way Handshaking
• The connection establishment in TCP is called three-way handshaking.
• In our example, an application program, called the client, wants to make a connection with another application program,
called the server, using TCP as the transport layer protocol.
• The process starts with the server.
• The server program tells its TCP that it is ready to accept a connection. This is called a request for a passive open.
• Although the server TCP is ready to accept any connection from any machine in the world, it cannot make the connection
itself.
• The client program issues a request for an active open.
• A client that wishes to connect to an open server tells its TCP that it needs to be connected to that particular server.
• TCP can now start the three-way handshaking process.
• To show the process, we use two timelines: one at each site.
• Each segment has values for all its header fields and perhaps for some of its option fields, too.
Figure Connection establishment using three-way handshaking
The three steps in this phase are as follows.
1. The client sends the first segment, an SYN segment, in which only the SYN flag is set. This segment is for the
synchronization of sequence numbers. When the data transfer starts, the sequence number is incremented by 1. We can
say that the SYN segment carries no real data, but we can think of it as containing 1 imaginary byte.
2. The server sends the second segment, an SYN +ACK segment, with 2 flag bits set: SYN and ACK. This segment has a
dual purpose. It is an SYN segment for communication in the other direction and serves as the acknowledgment for the
SYN segment. It consumes one sequence number.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the receipt of the second segment
with the ACK flag and acknowledgment number field.
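As a worked example with made-up initial sequence numbers: suppose the client chooses ISN 8000 and the server chooses ISN 15000. The client's SYN carries seq = 8000; the server's SYN+ACK carries seq = 15000 and ack = 8001 (the SYN consumed one sequence number); the client's final ACK carries seq = 8001 and ack = 15001. An ACK segment that carries no data consumes no sequence number.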
Simultaneous Open
• A rare situation, called a simultaneous open, may occur when both processes issue an active open.
• In this case, both TCPs transmit an SYN + ACK segment to each other, and one single connection is established between
them.
SYN Flooding Attack
•The connection establishment procedure in TCP is susceptible to a serious security problem called the SYN
flooding attack. It happens when a malicious attacker sends a large number of SYN segments to a server,
pretending that each of them is coming from a different client
by faking the source IP addresses in the datagrams.
•The server, assuming that the clients are issuing an active open, allocates the necessary resources, such as creating
communication tables and setting timers.
•The TCP server then sends the SYN+ACK segments to the fake clients, which are lost.
2. Data Transfer
• After a connection is established, bidirectional data transfer can take place. The client and server can both send data and
acknowledgments.
• In this example, after the connection is established (not shown in the figure), the client sends 2000 bytes of data in two
segments. The server then sends 2000 bytes in one segment.
• The client sends one more segment.
• The first three segments carry both data and acknowledgment, but the last segment carries only an acknowledgment
because there are no more data to be sent.
• Note the values of the sequence and acknowledgment numbers.
• The data segments sent by the client have the PSH (push) flag set so that the server TCP knows to deliver data to the
server process as soon as they are received.
• The segment from the server, on the other hand, does not set the push flag.
• Most TCP implementations have the option to set or not set this flag.
Figure Data transfer
3. Connection Termination

• Either of the two parties involved in exchanging data (the client or the server) can close the connection, although it is usually
initiated by the client.
• Most implementations today allow two options for connection termination: three-way handshaking and four-way
handshaking with a half-close option.
Most implementations today allow three-way handshaking for connection termination as shown in Figure.
1. In a normal situation, the client TCP, after receiving a close command from the client process, sends the first segment, a
FIN segment in which the FIN flag is set.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends the second segment, a
FIN +ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the
closing of the connection in the other direction.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment from the TCP
server. This segment contains the acknowledgment number, which is 1 plus the sequence number received in the FIN
segment from the server.
Figure Connection termination using three-way handshaking
Flow control using windowing

• Because the network host has limited resources such as limited space and
processing power, TCP implements a mechanism called flow control using a
window concept. This is applied to the amount of data that can be awaiting
acknowledgment at any one point in time.
• The receiving device uses the windowing concept to inform the sender how much
data it can receive at any given time. This allows the sender to either speed up or
slow down the sending of segments through a window-sliding process.
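The toy Python sketch below (an illustration, not real TCP code; the window size and segment size are arbitrary) shows the basic windowing idea: the sender may have at most rwnd unacknowledged bytes outstanding, and each acknowledgment slides the window forward.

def sliding_window_send(data: bytes, rwnd: int, seg_size: int):
    """Toy model: at most `rwnd` unacknowledged bytes may be outstanding."""
    base = 0          # oldest unacknowledged byte
    next_seq = 0      # next byte to send
    while base < len(data):
        # Send as long as the window (rwnd) is not full.
        while next_seq < len(data) and next_seq - base < rwnd:
            seg = data[next_seq:next_seq + seg_size]
            print(f"send bytes {next_seq}-{next_seq + len(seg) - 1}")
            next_seq += len(seg)
        # Pretend the receiver acknowledges one segment; the window slides forward.
        base = min(base + seg_size, next_seq)
        print(f"ACK received, window now covers {base}-{base + rwnd - 1}")

sliding_window_send(b"x" * 4000, rwnd=2000, seg_size=1000)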
There are a total of six flags in the control field:
• URG: The URG flag indicates that the data in the segment is urgent.
• ACK: When the ACK flag is set, it validates the acknowledgment number.
• PSH: The PSH flag is used to inform the sender that higher throughput is needed, so, if possible, data must be pushed through with
higher throughput.
• RST: The reset bit is used to reset the TCP connection when any confusion occurs in the sequence numbers.
• SYN: The SYN flag is used to synchronize the sequence numbers in three types of segments: connection request,
connection confirmation (with the ACK bit set), and confirmation acknowledgment.
• FIN: The FIN flag is used to inform the receiving TCP module that the sender has finished sending data. It is used in
connection termination in three types of segments: termination request, termination confirmation, and
acknowledgment of termination confirmation.
• Window Size: The window is a 16-bit field that defines the size of the window.
• Checksum: The checksum is a 16-bit field used in error detection.
• Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset from the sequence number that
points to the last urgent data byte.
• Options and padding: It defines the optional fields that convey the additional information to the receiver.
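As a sketch (not from the original notes), the fixed 20-byte TCP header described above can be assembled with Python's struct module; every field value here, the ports, sequence and acknowledgment numbers, and window, is a hypothetical placeholder.

import struct

src_port, dst_port = 49152, 80       # hypothetical ports
seq, ack = 1000, 2000                # hypothetical sequence/acknowledgment numbers
data_offset = 5                      # header length in 32-bit words (5 words = 20 bytes)
flags = 0b000010000                  # only the ACK bit set
offset_and_flags = (data_offset << 12) | flags
window, checksum, urg_ptr = 65535, 0, 0

# "!HHIIHHHH" = two 16-bit ports, 32-bit seq, 32-bit ack,
# 16 bits of offset/reserved/flags, then window, checksum, urgent pointer.
tcp_header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                         offset_and_flags, window, checksum, urg_ptr)
print(len(tcp_header))               # 20 bytes: the minimum TCP header size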
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion,
before it happens or remove it after it happens. In general, we can divide congestion control
mechanisms into two broad categories: open-loop congestion control (prevention) and closed-
loop congestion control (removal).
• In open-loop congestion control, policies are applied to prevent congestion before it
happens. In these mechanisms, congestion control is handled by either the source or the
destination.
• Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
Congestion Policy
• TCP's general policy for handling congestion is based on three phases: slow start, congestion avoidance, and
congestion detection.
1. Slow Start
• This algorithm is based on the idea that the size of the congestion window (cwnd) starts with one maximum
segment size (MSS).
• The MSS is determined during connection establishment by using an option of the same name.
• The size of the window increases one MSS each time an acknowledgment is received.
2. Congestion Avoidance
• When the slow-start threshold is reached, the growth of the congestion window is slowed down to avoid congestion.
• If we start with the slow-start algorithm, the size of the congestion window increases exponentially.
• To avoid congestion before it happens, one must slow down this exponential growth.
• TCP defines another algorithm called congestion avoidance, which undergoes an additive increase instead of an
exponential one.
• When the size of the congestion window reaches the slow-start threshold, the slow-start phase stops and the
additive phase begins.
Congestion Policy

3. Congestion Detection
• When congestion is detected, the sender goes back to the slow-start or congestion avoidance phase,
depending on how the congestion was detected.
• If congestion occurs, the congestion window size must be decreased.
• The only way the sender can guess that congestion has occurred is by the need to retransmit a
segment.
• However, retransmission can occur in one of two cases: when a retransmission timer times out or when
three duplicate ACKs are received.
• In both cases, the size of the threshold is dropped to one-half, a multiplicative decrease.
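The following toy Python trace (illustrative numbers only, not the full TCP algorithm) shows the three phases working together: exponential growth of cwnd during slow start, additive increase during congestion avoidance, and the multiplicative decrease of the threshold when a loss is detected.

def tcp_congestion_trace(rounds_before_loss=8, ssthresh=8):
    """Toy trace of cwnd (in MSS units) across slow start / congestion avoidance."""
    cwnd = 1                              # slow start begins at 1 MSS
    for rtt in range(1, rounds_before_loss + 1):
        phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
        print(f"RTT {rtt}: cwnd = {cwnd} MSS ({phase})")
        if cwnd < ssthresh:
            cwnd *= 2                     # exponential growth per RTT
        else:
            cwnd += 1                     # additive increase per RTT
    # Congestion detected (timeout or duplicate ACKs): multiplicative decrease.
    ssthresh = max(cwnd // 2, 2)
    cwnd = 1                              # timeout case: go back to slow start
    print(f"loss detected: ssthresh = {ssthresh}, cwnd reset to {cwnd} MSS")

tcp_congestion_trace()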
QUALITY OF SERVICE
• Quality of service (QoS) is an internetworking issue that has been discussed more than
defined. We can informally define quality of service as something a flow seeks to attain.
Flow Characteristics
• Traditionally, four types of characteristics are attributed to a flow: reliability, delay,
jitter, and bandwidth, as shown in Figure
TECHNIQUES TO IMPROVE QoS
We briefly discuss four common methods: scheduling, traffic shaping,
admission control, and resource reservation.
1. Scheduling
• Packets from different flows arrive at a switch or router for processing.
• A good scheduling technique treats the different flows in a fair and
appropriate manner.
• Several scheduling techniques are designed to improve the quality of service.
• We discuss three of them here: FIFO queuing, priority queuing, and weighted
fair queuing.
2. Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of traffic sent
to the network. Two techniques can shape traffic: leaky bucket and token
bucket.
TECHNIQUES TO IMPROVE QoS
1. Leaky Bucket
• If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket.
• The rate at which the water leaks does not depend on the rate at which the water is input to
the bucket unless the bucket is empty.
• The input rate can vary, but the output rate remains constant.
• Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic.
• Bursty chunks are stored in the bucket and sent out at an average rate.
• Figure 24.19 shows a leaky bucket and its effects.
Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm:

• When the host wants to send a packet, a packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.

• Bursty traffic is converted to uniform traffic by the leaky bucket.


• In practice the bucket is a finite queue that outputs at a finite rate.
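A small Python sketch of these steps is given below (purely illustrative; the tick length, bucket size, and output rate are arbitrary): a burst of packets is queued in the bucket and drained at a constant rate.

from collections import deque

def leaky_bucket(arrivals, bucket_size, out_rate):
    """arrivals[t] = packets arriving at tick t; output is at most out_rate per tick."""
    bucket = deque()
    for t, n in enumerate(arrivals):
        for _ in range(n):
            if len(bucket) < bucket_size:
                bucket.append(t)           # queue the packet in the bucket
            else:
                print(f"tick {t}: packet dropped (bucket full)")
        sent = 0
        while bucket and sent < out_rate:  # constant-rate output
            bucket.popleft()
            sent += 1
        print(f"tick {t}: sent {sent}, queued {len(bucket)}")

# A burst of 5 packets followed by silence is smoothed to 2 packets per tick.
leaky_bucket([5, 0, 0, 0], bucket_size=10, out_rate=2)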
Token bucket Algorithm
Need of token bucket Algorithm:-
The leaky bucket algorithm enforces output patterns at the average rate, no matter
how bursty the traffic is. So in order to deal with the bursty traffic we need a
flexible algorithm so that the data is not lost. One such algorithm is the token
bucket algorithm.
The steps of this algorithm can be described as follows:
• At regular intervals, tokens are thrown into the bucket.
• The bucket has a maximum capacity.
• If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
• If there is no token in the bucket, the packet cannot be sent.
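A toy Python sketch of these steps might look as follows (illustrative only; the token rate, bucket capacity, and arrival pattern are made up). Idle ticks let tokens accumulate, so a later burst can be sent out at once, which is exactly the flexibility the leaky bucket lacks.

def token_bucket(arrivals, rate, capacity):
    """arrivals[t] = packets ready at tick t; `rate` tokens are added per tick,
    the bucket holds at most `capacity` tokens; one token is spent per packet sent."""
    tokens = capacity                           # assume the bucket starts full
    for t, n in enumerate(arrivals):
        tokens = min(capacity, tokens + rate)   # tokens thrown in at regular intervals
        sent = min(n, tokens)                   # a packet needs one token to go out
        tokens -= sent
        print(f"tick {t}: {sent} of {n} packets sent, {tokens} tokens left")

token_bucket([0, 0, 5, 5], rate=1, capacity=3)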
Let’s understand with an example,

• In Figure (A) we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet to be
transmitted, it must capture and destroy one token. In Figure (B) We see that three of the five packets have gotten
through, but the other two are stuck waiting for more tokens to be generated.

• Ways in which token bucket is superior to leaky bucket:


The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very
conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm,
tokens are generated at each tick (up to a certain limit). For an incoming packet to be transmitted, it must capture a token,
and the transmission then takes place at that rate. Hence some of the bursty packets can be transmitted at a higher rate when
tokens are available, which introduces some amount of flexibility into the system.

• Formula: M * s = C + ρ * s, which gives s = C / (M − ρ)
where s – length of the burst (the time taken)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket in bytes
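For instance (with purely illustrative numbers), if the bucket capacity is C = 250 KB, the maximum output rate is M = 25 MB/s, and tokens arrive at ρ = 2 MB/s, then s = C / (M − ρ) = 250 KB / 23 MB/s ≈ 11 ms, which is the longest burst the host can send at the full rate M.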
TECHNIQUES TO IMPROVE QoS
3. Resource reservation:
The Resource Reservation Protocol (RSVP) is a transport layer protocol that reserves
resources across a network and can be used to deliver specific levels of QoS for application
data streams. Resource reservation enables businesses to divide network resources by
traffic of different types and origins, define limits, and guarantee bandwidth.

4. Admission Control:
It refers to the mechanism used by a router, or a switch, to accept or reject a flow based on
predefined parameters called flow specifications. Before a router accepts a flow for
processing, it checks the flow specifications to see if its capacity and its previous
commitments to other flows can handle the new flow.
Questions
Ten Questions related to the topic:
1.What is the transport layer? Explain in brief. (KCS 603.4, K2)
2.Draw the diagram of the TCP header and explain the use of the following: (KCS 603.4, K3)
1. Source and destination port address.
2. Sequence and acknowledgement numbers.
3. Code bits 4. Window bits 5. Urgent pointer.
3. Explain three-way handshaking(KCS 603.4, K3)
4. What is the difference between TCP and UDP? (KCS 603.4, K2)
5. Define UDP. What is the maximum and minimum size of a UDP datagram? Also discuss the use of
UDP. (KCS 603.4, K2)
6. Explain the header format of TCP. (KCS 603.4, K3)
7. Explain the header format of UDP. (KCS 603.4, K3)
8. Compare the TCP header with the UDP header. (KCS 603.4, K2)
9. What is meant by quality of service? (KCS 603.4, K2)
10. What are the two categories of QoS attributes? (KCS 603.4, K2)

This set of Computer Networks Multiple Choice Questions & Answers (MCQs) focuses on “Transport
Layer”.
1. Transport layer aggregates data from different applications into a single stream before passing it to ____________
a) network layer
b) data link layer
c) application layer
d) physical layer
Answer: a
Explanation: The flow of data in the OSI model proceeds in the following manner: Application -> Presentation -> Session ->
Transport -> Network -> Data Link -> Physical. Each layer has its own set of functions and protocols to ensure
efficient network performance.
2. Which of the following are transport layer protocols used in networking?
a) TCP and FTP
b) UDP and HTTP
c) TCP and UDP
d) HTTP and FTP
Answer: c
Explanation: Both TCP and UDP are transport layer protocols in networking. TCP is an abbreviation for Transmission
Control Protocol and UDP is an abbreviation for User Datagram Protocol. TCP is connection-oriented whereas UDP is
connectionless.
3. User datagram protocol is called connectionless because _____________
a) all UDP packets are treated independently by transport layer
b) it sends data as a stream of related packets
c) it is received in the same order as sent order
d) it sends data very quickly

Answer: a
Explanation: UDP is an alternative to TCP and is used for purposes where speed matters most and some
loss of data is not a problem. UDP is connectionless whereas TCP is connection-oriented.

4. Transmission control protocol ___________


a) is a connection-oriented protocol
b) uses a three way handshake to establish a connection
c) receives data from application as a single stream
d) all of the mentioned
Answer: d
Explanation: TCP provides reliable and ordered delivery of a stream of bytes between hosts communicating via an IP
network. Major internet applications like WWW, email, file transfer, etc. rely on TCP. TCP is connection-oriented and is
optimized for accurate delivery rather than timely delivery.
5. An endpoint of an inter-process communication flow across a computer network is called __________
a) socket
b) pipe
c) port
d) machine
Answer: a
Explanation: A socket is one end point of a two-way communication link in the network. The TCP layer can identify the
application to which data is destined by using the port number that is bound to the socket.

6. Socket-style API for windows is called ____________


a) wsock
b) winsock
c) wins
d) sockwi
Answer: b
Explanation: Winsock is a programming interface which deals with input/output requests for internet applications
in the Windows OS. It defines how Windows network software should access network services.
7. Which one of the following is a version of UDP with congestion control?
a) datagram congestion control protocol
b) stream control transmission protocol
c) structured stream transport
d) user congestion control protocol

Answer: a
Explanation: The Datagram Congestion Control Protocol (DCCP) is a transport layer protocol which deals with reliable connection setup,
teardown, congestion control, explicit congestion notification, and feature negotiation. It is used in modern systems
where there are high chances of congestion. The protocol was last updated in the year 2008.
8. A _____ is a TCP name for a transport service access point.
a) port
b) pipe
c) node
d) protocol

Answer: a
Explanation: Just as the IP address identifies the computer, the network port identifies the application or service running on
the computer. A port number is 16 bits. The combination of an IP address and a port number is called a socket
address.
9. Transport layer protocols deals with ____________
a) application to application communication
b) process to process communication
c) node to node communication
d) man to man communication

Answer: b
Explanation: The transport layer is the 4th layer in the TCP/IP model and the OSI reference model. It deals with logical communication
between processes and is responsible for delivering messages between network hosts.
10. Which of the following is a transport layer protocol?
a) stream control transmission protocol
b) internet control message protocol
c) neighbor discovery protocol
d) dynamic host configuration protocol

Answer: a
Explanation: The Stream Control Transmission Protocol (SCTP) is a transport layer protocol used in networking system
where streams of data are to be continuously transmitted between two connected network nodes. Some of the other transport
layer protocols are RDP, RUDP, TCP, DCCP, UDP etc.
If a WAN link is 2 Mbps and the RTT between source and destination is 300 msec, what
would be the optimal TCP window size needed to fully utilize the line?
1. 60,000 bits
2. 75,000 bytes
3. 75,000 bits
4. 60,000 bytes

Solution-

Given-
• Bandwidth = 2 Mbps
• RTT = 300 msec
Optimal window size = bandwidth × RTT
= 2 × 10^6 bits/sec × 300 × 10^-3 sec
= 600,000 bits
= 75,000 bytes
Hence option 2 (75,000 bytes) is correct.
Thank You!
