CN Unit 2
Network software encompasses a broad range of software used for the design, implementation,
operation, and monitoring of computer networks. Traditional networks were hardware based,
with the software embedded in the devices. With the advent of Software-Defined Networking (SDN),
the software is separated from the hardware, making the network more adaptable to its
ever-changing nature.
What is a Protocol?
A protocol is a set of rules and regulations for data communication. Rules are defined for
every step and process of communication between two or more computers, and networks must
follow these protocols to transmit the data successfully. A protocol may be implemented in
hardware, software, or a combination of both. There are three aspects of a protocol, given
below (a small sketch follows the list):
• Syntax – It describes the format of the data to be sent or received.
• Semantics – It describes the meaning of each section of bits that is transferred.
• Timing – It describes when data should be transferred and how fast it can be transferred.
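To make these three aspects concrete, the short Python sketch below packs and unpacks a made-up message format: the byte layout defined by `struct` is the syntax, and the agreed meaning of each field (ports, sequence number) is the semantics. The header fields are invented for illustration, not taken from any real protocol.

```python
import struct

# Hypothetical 8-byte header for illustration: 2-byte source port,
# 2-byte destination port, 4-byte sequence number, big-endian
# ("network byte order"), as real protocols specify.
HEADER_FORMAT = "!HHI"

def build_message(src_port: int, dst_port: int, seq: int, payload: bytes) -> bytes:
    """Encode the header (the protocol's syntax) in front of the payload."""
    return struct.pack(HEADER_FORMAT, src_port, dst_port, seq) + payload

def parse_message(message: bytes) -> tuple[int, int, int, bytes]:
    """Decode the header; what each field means is the protocol's semantics."""
    header_size = struct.calcsize(HEADER_FORMAT)
    src, dst, seq = struct.unpack(HEADER_FORMAT, message[:header_size])
    return src, dst, seq, message[header_size:]

msg = build_message(5000, 80, 1, b"hello")
print(parse_message(msg))  # (5000, 80, 1, b'hello')
```

Both sides must agree on the same format string; if one side packs and the other parses with a different layout, communication fails, which is exactly why protocols fix the syntax in advance.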
Protocol Hierarchies
Computer networks contain a large amount of hardware and software. To manage this
complexity, networks are organized as a stack of layers, one on top of another. The number,
name, content, and function of each layer may differ from one network to another, but the
main purpose of each layer is the same: to provide services to the layer above it. Every
layer has some particular task or function. Networks are organized into layers simply to
reduce the complexity of network software design.
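As a rough illustration of this layered organization, the Python sketch below mimics encapsulation: each layer prepends its own header before handing the data to the layer below, and the receiver strips the headers off in reverse order. The layer names and header strings are placeholders, not a real protocol stack.

```python
# Each layer adds its own header and hands the result to the layer below
# (encapsulation); the receiver peels the headers off in reverse order.

def encapsulate(app_data: bytes) -> bytes:
    segment = b"TRANSPORT|" + app_data   # transport layer adds its header
    packet = b"NETWORK|" + segment       # network layer adds its header
    frame = b"LINK|" + packet            # data link layer adds its header
    return frame

def decapsulate(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"LINK|")
    segment = packet.removeprefix(b"NETWORK|")
    return segment.removeprefix(b"TRANSPORT|")

frame = encapsulate(b"hello")
print(frame)               # b'LINK|NETWORK|TRANSPORT|hello'
print(decapsulate(frame))  # b'hello'
```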
Design Issues for the Layers
1. Reliability
Reliability is a cornerstone design issue in computer networks. Networks are composed of various
components, and some of these components may be inherently unreliable, leading to potential data
loss during transmission. Ensuring that data is transferred without distortion or corruption is
paramount. Robust error detection and correction mechanisms are essential for preserving data
integrity, especially in the face of unreliable communication channels.
2. Addressing
Since many processes on many machines use the network simultaneously, every layer needs a
mechanism for identifying the senders and receivers involved in a particular message.
3. Error Control
The inherent imperfections in physical communication circuits necessitate error control as a vital
design issue. To safeguard data integrity, error-detecting and error-correcting codes are employed.
However, it's imperative that both the sending and receiving ends reach a consensus on the specific
error detection and correction codes to be used, ensuring effective data packet protection.
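As one concrete example of an error-detecting code, the Python sketch below computes a 16-bit ones'-complement checksum in the style used by IP, TCP, and UDP (a simplified version for illustration, not the exact wire-format procedure).

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of 16-bit words, then complemented."""
    if len(data) % 2:              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

packet = b"example payload"
checksum = internet_checksum(packet)
# The sender transmits the checksum with the data; the receiver recomputes
# it and flags the packet as corrupted if the values disagree.
assert internet_checksum(packet) == checksum
```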
4. Flow Control
Maintaining an equilibrium between data senders and receivers is essential to prevent data loss due to
speed mismatches. A fast sender transmitting data to a slower receiver necessitates the
implementation of a flow control mechanism. Several approaches are used, such as increasing buffer
sizes at receivers or slowing down the fast sender. Additionally, the network should handle
processes that cannot accommodate arbitrarily long messages by disassembling, transmitting, and
reassembling messages as required.
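A minimal way to see flow control at work is a bounded buffer between a fast producer and a slow consumer: when the receiver's buffer fills up, the sender is forced to wait. The Python sketch below simulates this with two threads; the buffer size and delay are arbitrary illustration values.

```python
import queue
import threading
import time

# A bounded queue models the receiver's buffer: when it is full, the fast
# sender blocks, which is exactly the "slow the sender down" idea.
buffer = queue.Queue(maxsize=4)

def sender():
    for i in range(10):
        buffer.put(i)        # blocks while the receiver's buffer is full
        print(f"sent {i}")

def receiver():
    for _ in range(10):
        time.sleep(0.1)      # the receiver is slower than the sender
        print(f"received {buffer.get()}")

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
```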
5. Multiplexing and De-multiplexing
Setting up a separate connection for every pair of communicating processes is neither practical
nor cost-effective, so many conversations must share the same transmission medium. To address
this, multiplexing is employed at the sender's end, allowing data from multiple sources to be
combined into a single transmission stream. De-multiplexing is then performed at the receiver's
end to separate the data and direct it to the appropriate recipients (see the sketch below).
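The Python sketch below illustrates the idea with tagged messages: several logical streams are interleaved onto one shared channel and separated again at the far end. The stream names are invented for illustration; in real transport protocols, port numbers play the role of these tags.

```python
def multiplex(streams: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Interleave messages from all sources onto one channel, tagged by source."""
    channel = []
    for source, messages in streams.items():
        channel.extend((source, m) for m in messages)
    return channel

def demultiplex(channel: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Separate the shared channel back into per-stream message lists."""
    streams: dict[str, list[str]] = {}
    for source, message in channel:
        streams.setdefault(source, []).append(message)
    return streams

shared = multiplex({"email": ["msg1", "msg2"], "web": ["page1"]})
print(demultiplex(shared))  # {'email': ['msg1', 'msg2'], 'web': ['page1']}
```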
6. Scalability
As networks expand in size and complexity, new challenges inevitably arise. Scalability is crucial to
ensuring that networks can continue to function effectively as they grow. The network's design should
accommodate increasing sizes, reducing the risk of congestion and compatibility issues when new
technologies are introduced. Scalability is a cornerstone for ensuring the network's long-term viability.
7. Routing
Routing is a critical function within the network layer. When multiple paths exist between a source
and a destination, the network must select the best route for data transmission. Various routing
algorithms are used to make this determination, with the aim of minimizing cost and time and
thereby ensuring efficient and reliable data transfer (see the sketch below).
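As an illustration, the sketch below runs Dijkstra's shortest-path algorithm, the computation at the heart of link-state routing protocols such as OSPF, over a small invented topology with arbitrary link costs.

```python
import heapq

def dijkstra(graph: dict[str, dict[str, int]], source: str) -> dict[str, int]:
    """Least-cost distance from source to every reachable router."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        for neighbor, cost in graph[node].items():
            new_dist = d + cost
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(pq, (new_dist, neighbor))
    return dist

# Link costs between four routers (illustrative topology).
network = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(network, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```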
8. Security
The security of a network is critical. Confidentiality mechanisms protect against risks like
eavesdropping and prevent unauthorized parties from accessing sensitive data. Data integrity is
also crucial, since it protects against tampering and unauthorized changes to messages during
transmission.
9. Quality of Service (QoS):
QoS refers to a network's ability to deliver varying levels of service to different types of traffic. Video
streaming, VoIP, and data transmission all have varying bandwidth, latency, and reliability needs. It is a
difficult challenge to ensure that the network can prioritize and distribute resources effectively to
satisfy these objectives.
10. Network Management
Network management includes monitoring and maintaining the health and performance of different
network components such as routers, switches, and servers. Device configuration, fault detection,
performance analysis, and security monitoring all need network management tools and protocols.
Effective network administration is critical for detecting and resolving problems in real time,
optimizing resource utilization, and maintaining a positive user experience.
11. Load Balancing
In scenarios where a network has multiple servers or paths to handle incoming traffic, load balancing
becomes critical. The challenge is to distribute network traffic evenly across these resources to prevent
overloads and optimize resource utilization. Load balancing can be achieved through hardware or
software solutions, and it may require advanced algorithms to make intelligent decisions based on
factors like server health and current traffic loads.
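A minimal sketch of one common policy, round-robin with a health check, is shown below; the server names and health flags are invented for illustration. Production balancers combine this with live health probes and load metrics.

```python
import itertools

servers = ["server-1", "server-2", "server-3"]
healthy = {"server-1": True, "server-2": False, "server-3": True}

rotation = itertools.cycle(servers)

def next_server() -> str:
    """Return the next healthy server in round-robin order."""
    for _ in range(len(servers)):
        candidate = next(rotation)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no healthy servers available")

for request_id in range(4):
    print(f"request {request_id} -> {next_server()}")
# Requests alternate between server-1 and server-3 while server-2 is down.
```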
12. Network Topology
The choice of network topology can significantly impact the network's performance, scalability, and
fault tolerance. Designing the right topology for a given scenario involves considering factors such as
cost, reliability, ease of expansion, and fault tolerance. For example, a star topology might be suitable
for a small office network, while a mesh or hybrid topology could be preferred for a large-scale data
center.
13. Energy Efficiency
With increasing concerns about energy consumption and its environmental impact, designing energy-
efficient networks is essential. This includes using energy-efficient hardware, optimizing network
protocols, and implementing strategies for turning off or reducing power to unused network
components during periods of low demand. Energy-efficient network design helps reduce operational
costs and minimizes the carbon footprint.
14. Interoperability:
Networks are built from hardware and software supplied by many different vendors, and ensuring
that these components function together seamlessly is a huge task. Interoperability is achieved
through adherence to industry standards and protocols, together with testing and certification
processes. It is crucial to ensure that data can flow smoothly between diverse network elements.
15. Network Virtualization
Network virtualization involves creating virtual instances of network components and services, such as
virtual routers and firewalls. Managing these virtual networks, ensuring their security, and dynamically
scaling resources to meet changing demands is a complex task. Network Function Virtualization (NFV)
extends this concept by virtualizing network functions like firewalls and load balancers, enabling
flexible and cost-effective service delivery.
16. Mobility and Wireless Networking
As the use of mobile devices and wireless connections continues to grow, designing networks that
provide seamless connectivity as users move between different access points is a challenge. This
involves implementing mobility management protocols, handover procedures, and efficient spectrum
management to prevent interference and optimize wireless performance.
17. Integration of Legacy Systems
Many existing networks include legacy systems and technologies that must be integrated with
modern networking solutions. This can be complex because older systems may not support the latest
standards and security protocols. Network designers must ensure compatibility while maintaining
security during the integration process.
18. Resilience and Disaster Recovery
Planning for network resilience in the face of disasters, equipment failures, or cyberattacks is critical.
Redundancy, failover mechanisms, and disaster recovery strategies must be in place to maintain
network continuity. This involves duplicating critical components, creating backup data centers, and
implementing data backup and recovery solutions.
19. IoT and Edge Computing
With the proliferation of Internet of Things (IoT) devices and the adoption of edge computing,
networks must handle a massive number of connected devices and process data at the edge of the
network. This presents challenges related to device management, data processing, and ensuring
security and privacy for IoT devices.
20. Regulatory Compliance
Networks often need to comply with specific regulations and industry-specific standards, such as data
privacy laws (e.g., GDPR) or compliance requirements for industries like healthcare or finance. Meeting
these requirements involves implementing security measures, data encryption, and auditing processes
to ensure network compliance while avoiding legal and financial penalties.
OSI Model
o OSI stands for Open Systems Interconnection. It is a reference model that describes how
information from a software application in one computer moves through a physical medium
to the software application in another computer.
o OSI consists of seven layers, and each layer performs a particular network function.
o The OSI model was developed by the International Organization for Standardization (ISO) in 1984,
and it is now considered an architectural model for inter-computer communications.
o OSI model divides the whole task into seven smaller and manageable tasks. Each layer is
assigned a particular task.
o Each layer is self-contained, so that the task assigned to it can be performed independently.
o The OSI model is divided into two groups of layers: upper layers and lower layers.
o The upper layers of the OSI model mainly deal with application-related issues, and they are
implemented only in software. The application layer is closest to the end user. Both the end
user and the application layer interact with the software applications. An upper layer refers
to the layer just above another layer.
o The lower layers of the OSI model deal with data transport issues. The data link layer and
the physical layer are implemented in hardware and software. The physical layer is the lowest
layer of the OSI model and is closest to the physical medium; it is mainly responsible for
placing the information on the physical medium.
7 Layers of OSI Model
The seven OSI layers are listed below; each layer has its own functions:
1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
1) Physical layer
o The main functionality of the physical layer is to transmit the individual bits from one node to
another node.
o It is the lowest layer of the OSI model.
o It establishes, maintains and deactivates the physical connection.
o It specifies the mechanical, electrical and procedural network interface specifications.
2) Data-Link Layer
o The data link layer is responsible for the error-free transfer of data frames between two
nodes connected by a physical link.
o It packages the bits received from the physical layer into frames and handles framing,
physical addressing, flow control, and error control.
3) Network Layer
o It is layer 3; it manages device addressing and tracks the location of devices on the network.
o It determines the best path to move data from source to destination based on the network
conditions, the priority of service, and other factors.
o The network layer is responsible for routing and forwarding the packets.
o Routers are layer 3 devices; they are specified in this layer and used to provide routing
services within an internetwork.
o The protocols used to route the network traffic are known as network layer protocols.
Examples of such protocols are IPv4 and IPv6.
4) Transport Layer
o The transport layer is layer 4; it ensures that messages are transmitted in the order in which
they are sent and that there is no duplication of data.
o The main responsibility of the transport layer is to transfer the data completely.
o It receives the data from the upper layer and converts them into smaller units known as
segments.
o This layer can be termed as an end-to-end layer as it provides a point-to-point connection
between source and destination to deliver the data reliably.
5) Session Layer
o The session layer establishes, maintains, and synchronizes the interaction between
communicating devices.
o It manages dialog control and adds synchronization points (checkpoints) into long streams
of data.
6) Presentation Layer
o A Presentation layer is mainly concerned with the syntax and semantics of the information
exchanged between the two systems.
o It acts as a data translator for a network.
o This layer is a part of the operating system that converts the data from one presentation
format to another format.
o The Presentation layer is also known as the syntax layer.
7) Application Layer
o An application layer serves as a window for users and application processes to access network
services.
o It handles issues such as network transparency, resource allocation, etc.
o An application layer is not an application, but it performs the application layer functions.
o This layer provides the network services to the end-users.
TCP/IP model
o The TCP/IP model was developed prior to the OSI model.
o The TCP/IP model is not exactly the same as the OSI model.
o The TCP/IP model consists of five layers: the application layer, transport layer, network layer,
data link layer and physical layer.
o The first four layers provide physical standards, network interface, internetworking, and
transport functions that correspond to the first four layers of the OSI model. The three
topmost layers of the OSI model (session, presentation, and application) are represented in
the TCP/IP model by a single layer called the application layer.
o TCP/IP is a hierarchical protocol suite made up of interactive modules, and each of them
provides specific functionality.
Here, hierarchical means that each upper-level protocol is supported by one or more lower-level
protocols.
Internet Layer
o An internet layer is the second layer of the TCP/IP model.
o An internet layer is also known as the network layer.
o The main responsibility of the internet layer is to send packets from any network and have
them arrive at the destination irrespective of the route they take.
The following protocols are used in this layer:
IP Protocol: IP protocol is used in this layer, and it is the most significant part of the entire TCP/IP
suite.
o IP Addressing: This protocol implements logical host addresses known as IP addresses. The
IP addresses are used by the internet and higher layers to identify the device and to provide
internetwork routing.
o Host-to-host communication: It determines the path through which the data is to be
transmitted.
o Data Encapsulation and Formatting: The IP protocol accepts data from the transport layer
protocol and encapsulates it into a message known as an IP datagram, adding the header
information needed for delivery. (IP itself is a best-effort protocol; it does not guarantee
secure or reliable delivery.)
o Fragmentation and Reassembly: The limit imposed on the size of the IP datagram by the data
link layer protocol is known as the Maximum Transmission Unit (MTU). If the size of the IP
datagram is greater than the MTU, the IP protocol splits the datagram into smaller units so
that they can travel over the local network. Fragmentation can be done by the sender or an
intermediate router; at the receiver side, all the fragments are reassembled to form the
original message (a simplified sketch appears after this list).
o Routing: When an IP datagram is sent within the same local network (for example, a LAN),
this is known as direct delivery. When the source and destination are on distant networks,
the IP datagram is sent indirectly by routing it through intermediate devices such as routers.
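The sketch below shows fragmentation and reassembly in simplified form: it splits a payload at MTU boundaries and tags each piece with a byte offset and a more-fragments flag. Real IP additionally measures offsets in 8-byte units and carries an identification field to group fragments of the same datagram; those details are omitted here.

```python
def fragment(datagram: bytes, mtu: int) -> list[tuple[int, bool, bytes]]:
    """Split a datagram into (offset, more_fragments, data) pieces fitting the MTU."""
    fragments = []
    for offset in range(0, len(datagram), mtu):
        chunk = datagram[offset:offset + mtu]
        more = offset + mtu < len(datagram)   # like IP's "more fragments" flag
        fragments.append((offset, more, chunk))
    return fragments

def reassemble(fragments: list[tuple[int, bool, bytes]]) -> bytes:
    """Rejoin fragments in offset order at the receiver."""
    return b"".join(chunk for _, _, chunk in sorted(fragments))

data = b"A" * 100 + b"B" * 100
frags = fragment(data, mtu=80)
print([(off, more, len(chunk)) for off, more, chunk in frags])
# [(0, True, 80), (80, True, 80), (160, False, 40)]
assert reassemble(frags) == data
```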
ARP Protocol
ARP (Address Resolution Protocol) maps a known IP address to the corresponding physical (MAC)
address on the local network.
ICMP Protocol
ICMP (Internet Control Message Protocol) reports errors and carries diagnostic queries between
hosts and routers; for example, the ping utility uses ICMP echo request and echo reply messages.
Transport Layer
The transport layer is responsible for the reliability, flow control, and error correction of data
being sent over the network.
The two protocols used in the transport layer are the User Datagram Protocol (UDP) and the
Transmission Control Protocol (TCP).
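To see UDP's datagram style concretely, the sketch below sends a single datagram over the loopback interface using Python's standard socket API. The port number is arbitrary; unlike TCP, no connection is established and delivery is not guaranteed.

```python
import socket

# Create a UDP "server" socket bound to localhost and a client socket.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", 9999))   # fire-and-forget datagram

data, addr = server.recvfrom(1024)            # receive one datagram
print(data, addr)                             # b'ping' ('127.0.0.1', <port>)

client.close()
server.close()
```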
Application Layer
o An application layer is the topmost layer in the TCP/IP model.
o It is responsible for handling high-level protocols and issues of representation.
o This layer allows the user to interact with the application.
o When one application layer protocol wants to communicate with another application layer, it
forwards its data to the transport layer.
o An ambiguity arises in the application layer: not every application can be placed inside the
application layer, only those that interact with the communication system. For example, a
text editor is not considered part of the application layer, whereas a web browser is, because
it uses the HTTP protocol to interact with the network, and HTTP is an application layer
protocol.
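As a small illustration of an application layer protocol in use, the sketch below issues one HTTP GET request with Python's standard http.client module; example.com is just a placeholder host, and the request rides on TCP underneath.

```python
import http.client

# Open a TCP connection to port 80 and speak HTTP over it.
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK
print(response.read()[:80])               # first bytes of the HTML body
conn.close()
```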
Transmission Media
1. Guided Media:
It is also referred to as Wired or Bounded transmission media. Signals are directed and
confined in a narrow pathway by physical links such as cables.
(i) Shielded Twisted Pair (STP) –
This type of twisted pair cable contains a special jacket (a metallic shield) to block
external interference.
Advantages:
⇢ Better performance at a higher data rate in comparison to UTP
⇢ Eliminates crosstalk
⇢ Comparatively faster
Disadvantages:
⇢ Comparatively difficult to install and manufacture
⇢ More expensive
⇢ Bulky
Applications:
The shielded twisted pair type of cable is most frequently used in extremely cold
climates, where the additional outer covering makes it suitable for withstanding such
temperatures and for shielding the interior components.
(ii) Coaxial Cable –
It has an outer plastic covering containing an insulation layer made of PVC or Teflon
and two concentric conductors: a central core conductor and an outer braided shield,
separated by insulation. The coaxial cable transmits information in two modes: Baseband
mode (the cable's bandwidth is dedicated to a single channel) and Broadband mode (the
cable's bandwidth is split into separate ranges). Cable TV and analog television
networks widely use coaxial cables.
Advantages:
• High Bandwidth
• Better noise Immunity
• Easy to install and expand
• Inexpensive
Disadvantages:
• Single cable failure can disrupt the entire network
Applications:
Radio frequency signals are sent over coaxial wire. It can be used for cable television
signal distribution, digital audio (S/PDIF), computer network connections (like
Ethernet), and feedlines that connect radio transmitters and receivers to their
antennas.
(iii) Optical Fiber Cable –
It guides light by total internal reflection through a core made up of glass or plastic.
The core is surrounded by a less dense glass or plastic covering called the cladding. It
is used for the transmission of large volumes of data.
The cable can be unidirectional or bidirectional. The WDM (Wavelength Division
Multiplexer) supports two modes, namely unidirectional and bidirectional mode.
Advantages:
• Increased capacity and bandwidth
• Lightweight
• Less signal attenuation
• Immunity to electromagnetic interference
• Resistance to corrosive materials
Disadvantages:
• Difficult to install and maintain
• High cost
• Fragile
Applications:
• Medical Purpose: Used in several types of medical instruments.
• Defence Purpose: Used in transmission of data in aerospace.
• For Communication: This is largely used in formation of internet cables.
• Industrial Purpose: Used for lighting purposes and safety measures in designing the
interior and exterior of automobiles.
2. Unguided Media:
It is also referred to as Wireless or Unbounded transmission media. No physical
medium is required for the transmission of electromagnetic signals.
Features:
• The signal is broadcasted through air
• Less Secure
• Used for larger distances
There are 3 types of Signals transmitted through unguided media:
(i) Radio waves –
These are easy to generate and can penetrate through buildings. The sending and
receiving antennas need not be aligned. Frequency range: 3 kHz – 1 GHz. AM and FM
radios and cordless phones use radio waves for transmission.
(ii) Microwaves –
Microwave transmission is line-of-sight, i.e. the sending and receiving antennas need to
be properly aligned with each other. Frequency range: 1 GHz – 300 GHz. Microwaves are
widely used for mobile phone communication and television distribution.
(iii) Infrared –
Infrared waves are used for very short-distance communication. They cannot penetrate
through obstacles, which prevents interference between systems. Frequency range:
300 GHz – 400 THz. It is used in TV remotes, wireless mice, keyboards, printers, etc.
Electromagnetic spectrum
The electromagnetic spectrum is the range of frequencies, wavelengths, and photon energies of
electromagnetic waves, covering frequencies from below 1 hertz to above 10^25 Hz, corresponding
to wavelengths from thousands of kilometres down to a fraction of the size of an atomic nucleus.
In a vacuum, all electromagnetic waves travel at the speed of light; they differ in wavelength,
frequency, and photon energy.
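The frequency and wavelength figures above are tied together by the wave relation; a quick check with rounded values confirms the stated range:

```latex
\lambda = \frac{c}{f}, \qquad c \approx 3\times10^{8}\ \mathrm{m/s}

f = 1\ \mathrm{Hz} \;\Rightarrow\; \lambda \approx 3\times10^{8}\ \mathrm{m}
\quad \text{(hundreds of thousands of kilometres)}

f = 10^{25}\ \mathrm{Hz} \;\Rightarrow\; \lambda \approx 3\times10^{-17}\ \mathrm{m}
\quad \text{(well below the } \sim 10^{-15}\ \mathrm{m} \text{ scale of a nucleus)}
```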
The electromagnetic spectrum spans all electromagnetic radiation and contains many subranges,
commonly referred to as portions, such as infrared radiation, visible light, and ultraviolet
radiation.
Let us look into the uses of electromagnetic waves in our daily life.
Radio: A radio basically captures radio waves that are transmitted by radio stations. Radio waves can
also be emitted by gases and stars in space. Radio waves are mainly used for TV/mobile
communication.
Microwave: This type of radiation is found in microwaves and helps in cooking at home/office. It is
also used by astronomers to determine and understand the structure of nearby galaxies and stars.
Infrared: It is used widely in night vision goggles. These devices can read and capture the infrared
light emitted by our skin and objects with heat. In space, infrared light helps to map interstellar dust.
X-ray: X-rays can be used in many instances. For example, a doctor can use an X-ray machine to
take an image of our bones or teeth. Airport security personnel use it to see through and check bags.
X-rays are also given out by hot gases in the universe.
Gamma-ray: It has a wide application in the medical field. Gamma-ray imaging is used to see inside
our bodies. Interestingly, the universe is the biggest gamma-ray generator of all.
Ultraviolet: The Sun is the main source of ultraviolet radiation. It causes skin tanning and burns. Hot
materials that are in space also emit UV radiation.
Visible: Visible light can be detected by our eyes. Light bulbs, stars, etc., emit visible light.
Spectroscopy: Spectroscopy is used to study the way different electromagnetic waves interact with
matter.
We can learn about a substance by analysing the EM spectrum it gives off. When light scatters or
passes through matter, it interacts with molecules and atoms. Since atoms and molecules have
resonance frequencies, they interact directly with light waves of exactly those frequencies.
When atoms and molecules in an excited state (for example, after collisions) return to lower
energy levels, they emit light with a certain set of characteristic frequencies. This results in
a line spectrum: only light with discrete wavelengths is produced, and the spectrum is not
continuous but consists of a set of emission lines.
In cases where light with continuous wavelengths passes through a low-density material, the atoms
and molecules of the material will absorb light waves with the same set of characteristic frequencies.
This results in the production of the absorption spectrum, which is a nearly continuous spectrum with
missing lines.
Nonetheless, the main significance of the electromagnetic spectrum is that it can be used to classify
electromagnetic waves and arrange them according to their different frequencies or wavelengths.
● The visible light portion of the electromagnetic spectrum is the reason for all visual aids in daily life.
This is the portion of the electromagnetic spectrum that helps us to see all objects, including colours.
● The X-rays discovered by Roentgen proved to be useful in medicine for detecting many ailments or
deformities in bones.
● High-energy ultraviolet radiation can ionise atoms, causing chemical reactions.
● The gamma rays discovered by Paul Villard are useful for ionisation purposes and nuclear
medicine.