Network Layer

Static Routing
Static Routing is also known as Non-adaptive Routing.

It is a technique in which the administrator manually adds the routes to the routing table.

A router sends packets toward the destination along the route defined by the administrator.

In this technique, routing decisions are not made based on the condition or topology of the network.
Default Routing
Default Routing is a technique in which a router is configured to send all packets to the same next-hop
device, regardless of which network the destination belongs to. Every packet is forwarded to the device
configured as the default route.

Default Routing is used when a network has a single exit point.

It is also useful when the bulk of the traffic has to be forwarded to the same next-hop device.

When a specific route is present in the routing table, the router will choose that specific route rather than
the default route. The default route is chosen only when no specific route is found in the routing
table.
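
As a rough illustration (not router software), the Python sketch below uses made-up networks and next-hop addresses: routes are entered by hand as in static routing, 0.0.0.0/0 acts as the default route, and the most specific matching route is preferred over the default.

# Minimal sketch: a statically configured routing table with a default route,
# using longest-prefix match. Networks and next hops are invented.
import ipaddress

routing_table = {
    ipaddress.ip_network("192.168.10.0/24"): "10.0.0.2",   # hypothetical next hops
    ipaddress.ip_network("192.168.20.0/24"): "10.0.0.3",
    ipaddress.ip_network("0.0.0.0/0"):       "10.0.0.1",   # default route
}

def next_hop(destination: str) -> str:
    """Pick the most specific (longest-prefix) route; fall back to the default."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)      # specific beats default
    return routing_table[best]

print(next_hop("192.168.10.7"))   # -> 10.0.0.2 (specific route wins)
print(next_hop("8.8.8.8"))        # -> 10.0.0.1 (only the default route matches)
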
Dynamic Routing
It is also known as Adaptive Routing.

It is a technique in which a router automatically adds or updates routes in the routing table in response to
changes in the condition or topology of the network.

Dynamic routing protocols are used to discover new routes to reach the destination.

In Dynamic Routing, RIP and OSPF are the protocols used to discover the new routes.

If any route goes down, an automatic adjustment is made to reach the destination.
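
The toy sketch below illustrates only the general idea of adapting routes automatically; it is a simplified distance-vector-style update (not actual RIP or OSPF), and the router names, networks and hop counts are invented.

# Simplified sketch of the dynamic-routing idea: the table is rebuilt
# automatically from whatever routes neighbours currently advertise.
INFINITY = 16  # RIP treats 16 hops as "unreachable"

table = {"NetA": ("R2", 2), "NetB": ("R3", 3)}   # destination -> (next hop, hop count)

def on_advertisement(neighbour, advertised, link_cost=1):
    """Update the table when a neighbour advertises its distances."""
    for dest, hops in advertised.items():
        new_cost = min(hops + link_cost, INFINITY)
        current = table.get(dest)
        # Take the route if it is new, cheaper, or replaces a route through
        # this same neighbour (e.g. that neighbour's old path went down).
        if current is None or new_cost < current[1] or current[0] == neighbour:
            table[dest] = (neighbour, new_cost)

on_advertisement("R4", {"NetB": 1})   # R4 now offers a shorter path to NetB
print(table["NetB"])                  # -> ('R4', 2): route adjusted automatically
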
Network Layer Protocols
Address Resolution Protocol (ARP)
It is used to associate an IP address with a MAC address.

If a host wants to know the physical address of
another host on its network, it sends an ARP
query packet that includes the target's IP address and
broadcasts it over the network. Every host on the
network receives and processes the ARP packet,
but only the intended recipient recognizes its own IP
address and sends back its physical address. The
host holding the datagram adds this physical
address to its cache memory and to the datagram
header, and then sends the datagram.
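
A small Python sketch of this flow, with the broadcast simulated by a lookup table of made-up hosts and MAC addresses; the point is the cache-then-query behaviour, not real packet handling.

# Illustrative sketch of ARP resolution: check the cache first, otherwise
# "broadcast" a query (simulated here) and cache the answer.
arp_cache = {}                                    # IP -> MAC, the "cache memory"

hosts_on_lan = {"10.0.0.5": "aa:bb:cc:00:00:05",  # what each host would answer
                "10.0.0.9": "aa:bb:cc:00:00:09"}

def resolve(ip):
    if ip in arp_cache:                           # already resolved earlier
        return arp_cache[ip]
    # Broadcast "who has <ip>?" -- only the intended recipient replies.
    mac = hosts_on_lan.get(ip)
    if mac is not None:
        arp_cache[ip] = mac                       # remember it for later datagrams
    return mac

print(resolve("10.0.0.5"))   # triggers the (simulated) broadcast
print(resolve("10.0.0.5"))   # answered from the cache this time
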
RARP
• RARP stands for Reverse Address Resolution Protocol.

• If a host wants to know its own IP address, it broadcasts a
RARP query packet that contains its physical address to the
entire network. A RARP server on the network recognizes the
RARP packet and responds with the host's IP address.
• The protocol used to obtain an IP address from a server
is known as the Reverse Address Resolution Protocol.
• The message format of the RARP protocol is similar to that of the ARP
protocol.
• Like an ARP frame, a RARP frame is sent from one machine to
another encapsulated in the data portion of a frame.
ICMP
• ICMP stands for Internet Control Message Protocol.

• ICMP is a network layer protocol used by hosts and routers to send notifications
of IP datagram problems back to the sender.
• ICMP uses an echo test/reply to check whether the destination is reachable and responding.

• ICMP handles both control and error messages, but its main function is to report errors,
not to correct them.
• An IP datagram contains the addresses of both the source and the destination, but it does not
carry the address of the previous router through which it has passed. For this
reason, ICMP can only send messages to the source, not to the intermediate routers.
• The ICMP protocol communicates error messages to the sender; ICMP messages cause the
errors to be reported back to the user processes.
• ICMP messages are transmitted within IP datagrams.
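
For illustration, the sketch below builds an ICMP Echo Request (type 8, code 0) with the standard Internet checksum; actually transmitting it would need a raw socket and administrator privileges, so only the packet construction is shown.

# Sketch of what an ICMP message looks like on the wire: an 8-byte header
# (type, code, checksum, identifier, sequence) followed by data.
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)  # checksum = 0 first
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(0x1234, 1, b"ping")
print(packet.hex())
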
Error Reporting by ICMP

The ICMP protocol reports the error messages to the sender.

Five types of errors are handled by the ICMP protocol:

• Destination unreachable
• Source quench
• Time exceeded
• Parameter problems
• Redirection
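
For reference, these error classes correspond to ICMPv4 type numbers as follows (a small lookup table, purely illustrative):

# ICMPv4 type numbers for the five error-reporting message classes above.
ICMP_ERRORS = {
    3:  "Destination unreachable",
    4:  "Source quench",
    5:  "Redirection",
    11: "Time exceeded",
    12: "Parameter problem",
}

def describe(icmp_type: int) -> str:
    return ICMP_ERRORS.get(icmp_type, "not an ICMP error-reporting type")

print(describe(11))   # e.g. TTL reached zero in transit -> "Time exceeded"
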
IGMP
• IGMP stands for Internet Group Management Protocol.

• The IP protocol supports two types of communication:

• Unicasting: It is communication between one sender and one
receiver; therefore, we can say that it is one-to-one communication.
• Multicasting: Sometimes the sender wants to send the same
message to a large number of receivers simultaneously. This process
is known as multicasting, which is one-to-many communication.

• The IGMP protocol is used by hosts and routers to support
multicasting.
• The IGMP protocol is used by hosts and routers to identify the
hosts in a LAN that are members of a multicast group.
IGMP
• Membership Query message: This message is sent by a router to all hosts on a local area
network to determine the set of all multicast groups that have been joined by the hosts.

• Membership Report message: Membership Report messages can also be generated by a
host when it wants to join a multicast group without waiting for a Membership Query
message from the router.

• Membership Report messages are received by the router as well as by all the hosts on an attached
interface.

• Each Membership Report message includes the multicast address of a single group that the
host wants to join.

• The IGMP protocol does not care which host has joined the group or how many hosts are
present in a single group. It only cares whether one or more attached hosts belong to a
particular multicast group.

• Leave Report: When a host stops sending Membership Report messages, it means that the
host has left the group. Because it has left the group, it does not report the group even when it
receives the next query; once no host reports a group, the router stops forwarding traffic for it.
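
A simplified sketch of how a router might track membership from these messages: a report refreshes a per-group timer, and a group that is not reported before the (made-up) timeout is assumed to have no members left. Real IGMP timer values and per-host details are omitted.

# Toy sketch of router-side group tracking from membership reports.
import time

MEMBERSHIP_INTERVAL = 10.0        # illustrative timeout, not the real IGMP value
groups = {}                       # multicast address -> time of last report

def on_membership_report(group_addr):
    groups[group_addr] = time.monotonic()     # at least one member is present

def on_query_timeout():
    """Called after a Membership Query: drop groups nobody reported."""
    now = time.monotonic()
    for addr in list(groups):
        if now - groups[addr] > MEMBERSHIP_INTERVAL:
            del groups[addr]                  # no members left; stop forwarding

on_membership_report("224.1.1.1")
on_query_timeout()
print(groups)                                 # '224.1.1.1' is still joined
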
DHCP
Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to
dynamically assign an IP address to any device, or node, on a network so that it can
communicate using IP (Internet Protocol).
DHCP is based on a client-server model in which servers manage a pool of unique IP
addresses, as well as information about client configuration parameters, and assign addresses
out of those address pools.

The DHCP lease process works as follows:

 First of all, a client (network device) must be connected to the network.
 The DHCP client requests an IP address. Typically, the client broadcasts a query for this information.
 The DHCP server responds to the client request by providing an IP address and other configuration
information. This configuration information also includes a time period, called a lease, for which the
allocation is valid.
 When refreshing an assignment, a DHCP client requests the same parameters, but the DHCP server
may assign a new IP address, based on the policies set by the administrator.
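
The toy sketch below walks through this exchange in the usual Discover/Offer/Request/Ack order; the pool, lease time, router and DNS values are all made up, and no real DHCP packets are built.

# Toy simulation of the lease steps above; all addresses and times are invented.
pool = ["192.168.1.100", "192.168.1.101", "192.168.1.102"]
LEASE_TIME = 3600                          # seconds the allocation is valid

def dhcp_exchange():
    # Client broadcasts a DISCOVER; the server answers with an OFFER.
    offered = pool[0]
    # Client broadcasts a REQUEST for the offered address; the server sends an ACK
    # with the address, the lease time and other configuration parameters.
    pool.remove(offered)
    return {"ip": offered, "lease": LEASE_TIME,
            "router": "192.168.1.1", "dns": "192.168.1.53"}

lease = dhcp_exchange()
print(lease)                               # the client configures itself from this
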
Components of DHCP

• DHCP Server: A DHCP server is a networked device running the DHCP
service that holds IP addresses and related configuration information. It
is typically a server or a router but could be anything that acts as a host,
such as an SD-WAN appliance.
• DHCP client: A DHCP client is the endpoint that receives configuration
information from a DHCP server. It can be any device, such as a computer,
laptop, IoT endpoint, or anything else that requires connectivity to the
network. Most devices are configured to receive DHCP information
by default.
• IP address pool: The IP address pool is the range of addresses that are
available to DHCP clients. IP addresses are typically handed out
sequentially from lowest to highest.
Components of DHCP

• Subnet: A subnet is a partitioned segment of an IP network. Subnets are
used to keep networks manageable.
• Lease: The lease is the length of time for which a DHCP client holds the IP
address information. When a lease expires, the client has to renew it.
• DHCP relay: A host or router that listens for client messages being
broadcast on the network and then forwards them to a configured server.
The server then sends responses back to the relay agent, which passes them
along to the client. A DHCP relay can be used to centralize DHCP servers
instead of having a server on each subnet.
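
A minimal sketch of the address-pool and lease components described here: addresses are handed out from the low end of the pool, renewals keep the same address, and an expired lease returns its address to the pool. All addresses and times are illustrative.

# Minimal pool/lease manager sketch (not a real DHCP server).
import time

class LeasePool:
    def __init__(self, addresses, lease_seconds):
        self.free = sorted(addresses)          # handed out lowest-first
        self.leases = {}                       # client id -> (ip, expiry time)
        self.lease_seconds = lease_seconds

    def allocate(self, client_id):
        self.expire()
        if client_id in self.leases:           # renewal: usually the same address
            ip, _ = self.leases[client_id]
        else:
            ip = self.free.pop(0)
        self.leases[client_id] = (ip, time.monotonic() + self.lease_seconds)
        return ip

    def expire(self):
        now = time.monotonic()
        for cid, (ip, expiry) in list(self.leases.items()):
            if expiry < now:                   # lease ran out and was not renewed
                del self.leases[cid]
                self.free.append(ip)
                self.free.sort()

pool = LeasePool(["10.0.0.10", "10.0.0.11", "10.0.0.12"], lease_seconds=3600)
print(pool.allocate("client-aa"))              # -> 10.0.0.10 (lowest first)
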
Congestion Control
Congestion
 Congestion is a situation in communication networks in which too many
packets are present in a part of the subnet and performance degrades.
 Congestion in a network may occur when the load on the network (i.e., the
number of packets sent to the network) is greater than the capacity of the
network (i.e., the number of packets a network can handle).
 Network congestion occurs in case of traffic overloading.

 Causes of Congestion:

 The input traffic rate exceeds the capacity of the output lines.

 The routers are too slow to perform bookkeeping tasks (queuing buffers, updating
tables, etc.).

 The routers' buffers are too limited.

 Congestion in a subnet can occur if the processors are slow.


Open Loop Congestion Control
(policies are used to prevent the congestion before it happens)
Retransmission Policy:- The sender retransmits a packet if it feels that the packet it has sent is lost or corrupted. To prevent
congestion, retransmission timers must be designed to prevent congestion and also to optimize efficiency.

Window Policy:- To implement the window policy, the selective reject window method is used for congestion control. The Selective
Reject method is preferred over the Go-back-n window because in the Go-back-n method, when the timer for a packet times out, several
packets are resent, although some may have arrived safely at the receiver; this duplication may make congestion worse.

Acknowledgement Policy:- The acknowledgement policy imposed by the receiver may also affect congestion. If the receiver
does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. The receiver should
send an acknowledgment only if it has to send a packet or a timer expires.

Discarding Policy:- A router may discard less sensitive packets, or partially discard packets, when congestion is likely to
happen.

Admission Policy:- An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit
networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network. A router can deny
establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
Closed Loop Congestion Control
Closed loop congestion control mechanisms try to remove the congestion after it happens.

Backpressure:- Backpressure is a node-to-node congestion control technique that starts with a congested node and
propagates in the opposite direction of the data flow.

Choke Packet:- In this method of congestion control, the congested router or node sends a special
type of packet, called a choke packet, to the source to inform it about the congestion.

Implicit Signaling:- In implicit signaling, there is no communication between the congested node
or nodes and the source. The source guesses that there is congestion somewhere in the network
when it does not receive any acknowledgment. Therefore the delay in receiving an
acknowledgment is interpreted as congestion in the network.

Explicit Signaling:- In this method, the congested nodes explicitly send a signal to the source or
destination to inform about the congestion.
Leaky Bucket Algorithm

Leaky Bucket Algorithm mainly controls the total amount and the rate of the traffic sent to the
network.

Step 1 − Imagine a bucket with a small hole at the bottom. The rate at which water is
poured into the bucket is not constant and can vary, but water leaks from the bucket at a constant rate.

Step 2 − So, as long as there is water in the bucket, the rate at which the water leaks does not depend
on the rate at which the water is poured into the bucket.

Step 3 − If the bucket is full, additional water entering the bucket spills over the sides
and is lost.

Step 4 − The same concept is applied to packets in the network. Consider that data is coming
from the source at variable speeds. Suppose a source sends data at 10 Mbps for 4 seconds, then
sends nothing for 3 seconds, and then transmits at 8 Mbps for 2 seconds, so 56 Mbits arrive within 9 seconds.

If a leaky bucket algorithm is used, this bursty input is drained at a constant rate (roughly 6.2 Mbps
sustained over those 9 seconds), so the constant flow is maintained.
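
A minimal sketch of the algorithm, counting whole packets per tick rather than bits per second: bursts fill the bucket, output leaves at a fixed rate, and arrivals that would overflow the bucket are dropped.

# Leaky bucket sketch: constant outflow regardless of how bursty arrivals are.
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity          # max packets the bucket can hold
        self.leak_rate = leak_rate        # packets released per tick (constant)
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)     # accepted into the bucket
            return True
        return False                      # bucket full: packet "spills" and is lost

    def tick(self):
        """Release at most leak_rate packets per tick."""
        released = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            released.append(self.queue.popleft())
        return released

bucket = LeakyBucket(capacity=5, leak_rate=2)
for i in range(8):                        # a burst of 8 packets arrives at once
    bucket.arrive(i)                      # the last 3 are dropped
print(bucket.tick())                      # -> [0, 1]: constant outflow per tick
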
Token Bucket
The leaky bucket algorithm allows only an average (constant) rate of data flow. Its
major problem is that it cannot deal with bursty data.

• A leaky bucket algorithm does not consider the idle time of the host. For example, if
the host was idle for 10 seconds and now wants to send data at a very high speed
for another 10 seconds, the total transmission is spread over 20 seconds and only the
average data rate is maintained. The host gains no advantage from having sat idle for
10 seconds.

• To overcome this problem, a token bucket algorithm is used. A token bucket algorithm
allows bursty data transfers.

• A token bucket algorithm is a modification of the leaky bucket algorithm in which the
bucket contains tokens.

• In this algorithm, tokens are generated at every clock tick. For a packet to be
transmitted, the system must remove a token (or tokens) from the bucket.

• Thus, a token bucket algorithm allows idle hosts to accumulate credit for the future in
the form of tokens.
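
A minimal sketch of the token bucket, again counting packets and tokens per tick: tokens accumulate while the host is idle (up to the bucket size), so a later burst can be sent at once.

# Token bucket sketch: idle time builds up credit, which permits bursts later.
class TokenBucket:
    def __init__(self, rate, bucket_size):
        self.rate = rate                  # tokens added per tick
        self.bucket_size = bucket_size    # cap on saved-up credit
        self.tokens = 0

    def tick(self):
        self.tokens = min(self.tokens + self.rate, self.bucket_size)

    def send(self, packets):
        """Transmit up to 'packets', spending one token per packet."""
        allowed = min(packets, self.tokens)
        self.tokens -= allowed
        return allowed

tb = TokenBucket(rate=1, bucket_size=10)
for _ in range(10):                       # host sits idle for 10 ticks...
    tb.tick()
print(tb.send(8))                         # ...then bursts: all 8 go out at once
print(tb.send(8))                         # only 2 tokens left -> burst limited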
