Datalink Layer


Chapter 3

DATALINK LAYER
Introduction
> It is the second layer of the OSI model.
> It is the intermediate layer between the physical layer and the network layer.
> It transfers data over a reliable link between adjacent nodes.
> This layer is responsible for the error-free transfer of data frames.
> It defines the format of the data on the network.
> It provides reliable and efficient communication between two or more devices.
> It is mainly responsible for the unique identification of each device that resides on a local network.
> It is responsible for converting the outgoing data stream into individual bits and sending them over the underlying hardware.
> At the receiving end, it picks up data from the hardware in the form of electrical signals, assembles the bits into a recognizable frame format, and hands the frame to the upper layer.
Functions:
a. Frame Synchronization:
The data link layer packs the physical layer's raw bit stream into units known as frames. It adds a header and a trailer to each frame; the header contains the hardware destination and source addresses.
b. Flow control:
Flow control is a core function of the data link layer. It is the technique through which a manageable data rate is maintained on both sides so that no data gets lost. It ensures that a transmitting station, such as a server with higher processing speed, does not overwhelm a receiving station with lower processing speed.
c. Error Control:
Error control is achieved by adding a calculated value, the CRC (cyclic redundancy check), to the data link layer's trailer, which is added to the message frame before it is passed to the physical layer. If an error is detected, the receiver requests retransmission of the corrupted frame (for example, with a negative acknowledgment).
d. Addressing:
The frame header carries the physical (hardware) addresses of the source and destination, so that when two or more devices are connected to the same communication channel, each frame can be delivered to the intended device.
e. Control and data on same link:
Control information and data are combined into a single frame and transmitted from source to destination. The destination must be able to distinguish the control information from the data being transmitted.
f. Link management:
It manages the initialization, maintenance, and termination of the link between source and destination, as required to exchange data effectively.
g. Half-duplex and full duplex:
The layer supports both modes: in full-duplex, both stations can transmit simultaneously; in half-duplex, only one station transmits at a time.
h. Congestion Control:
The layer can help limit the amount of traffic injected into the link when the receiver or the network is overloaded.
i. Services provided to network layer:
These are delivered through two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).
Logical Link Control Layer
It is responsible for delivering packets to the network layer of the receiving station.
It identifies the network layer protocol from the header.
It also provides flow control.

Media Access Control Layer

The Media Access Control layer is the link between the Logical Link Control layer and the network's physical layer.
It is used for transferring packets over the network.
It has two forms: distributed and centralized.
It determines where one frame of data ends and the next one starts.
There are four means of doing this: time gaps, character count, byte stuffing, and bit stuffing.
Error Detection
The data link layer uses error control techniques to ensure that frames, i.e., bit streams of data, are transmitted from the source to the destination with a certain extent of accuracy.

At the DLL, if a frame is corrupted between two nodes, it needs to be corrected before it continues its journey to other nodes.

Errors:
When bits are transmitted over the computer network, they are subject to corruption due to interference and network problems. The corrupted bits lead to spurious data being received by the destination and are called errors.

Whenever bits flow from one point to another, they are subject to unpredictable change due to interference, noise, distortion, and attenuation.
Types of Error :
There are three types of errors:
a. Single Bit Error:
In the received frame, only one bit has been corrupted, i.e., changed either from 0 to 1 or from 1 to 0.

b. Multiple Bits Error:
In the received frame, more than one (not necessarily consecutive) bit is corrupted.

c. Burst Error:
In the received frame, more than one consecutive bit is corrupted.
Error detection code:
> The central concept in detecting or correcting errors is redundancy.
> We must send extra bits with the data to detect or correct errors.
> These redundant bits are added by the sender and removed by the receiver, which uses them to check whether the data is corrupted or not.
> It is implemented either in the data link layer or in the transport layer.
> While data is transmitted, it can get scrambled by noise and become corrupted.
> To avoid this problem, we use error-detecting codes: additional bits sent with the data to determine whether the data was correctly received by the receiver.
> The basic approach used for error detection is the use of redundancy bits.
Error detection Technique:
Single Parity check:
> Single parity checking is a simple and inexpensive mechanism to detect errors.
> In this technique, a redundant bit, known as a parity bit, is appended at the end of the data unit so that the number of 1s becomes even. For an 8-bit data unit, the total number of transmitted bits would therefore be 9.
> If the number of 1 bits is odd, then parity bit 1 is appended; if the number of 1 bits is even, then parity bit 0 is appended at the end of the data unit.
> At the receiving end, the parity bit is calculated from the received data bits and compared with the received parity bit.
> This technique makes the total number of 1s even, so it is known as even-parity checking.
Drawbacks Of Single Parity Checking

> It can only detect errors affecting an odd number of bits.

> If two bits are interchanged or flipped, it cannot detect the error.
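The even-parity scheme above, including its two-bit blind spot, can be sketched in a few lines of Python (the function names are illustrative, not from any library):

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s becomes even."""
    return bits + [sum(bits) % 2]

def check_parity(frame):
    """Receiver side: the frame passes if its count of 1s is even."""
    return sum(frame) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1, 0]      # 8 data bits -> 9 transmitted bits
frame = add_parity(data)
assert check_parity(frame)            # no error: accepted

frame[2] ^= 1                         # a single-bit error flips the parity
assert not check_parity(frame)        # detected

frame[2] ^= 1                         # undo the error
frame[0], frame[1] = frame[1], frame[0]   # two interchanged bits...
assert check_parity(frame)                # ...slip through undetected
```

The last assertion demonstrates the drawback: swapping two bits leaves the 1s count unchanged, so the parity check still passes.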
Two Dimensional Parity check:
> Performance can be improved by using a two-dimensional parity check, which organizes the data in the form of a table.
> Parity check bits are computed for each row, which is equivalent to the single-parity check.
> In addition, the block of bits is divided into rows, and a redundant row of (column) parity bits is added to the whole block.
> At the receiving end, the parity bits are compared with the parity bits computed from the received data.
DRAWBACKS OF 2D PARITY CHECK

> If two bits in one data unit are corrupted and two bits in exactly the same positions in another data unit are also corrupted, then the 2D parity checker will not be able to detect the error.

> In some cases, this technique cannot detect errors of 4 bits or more.
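A minimal sketch of the row-plus-column parity scheme in Python (names are illustrative):

```python
def parity_2d(block):
    """Append an even-parity bit to each row, then a parity row over all columns."""
    rows = [row + [sum(row) % 2] for row in block]
    parity_row = [sum(col) % 2 for col in zip(*rows)]
    return rows + [parity_row]

def check_2d(coded):
    """All rows and all columns must have an even number of 1s."""
    return all(sum(row) % 2 == 0 for row in coded) and \
           all(sum(col) % 2 == 0 for col in zip(*coded))

block = [[1, 0, 1, 1],
         [0, 1, 0, 1],
         [1, 1, 1, 0]]
coded = parity_2d(block)
assert check_2d(coded)

coded[1][2] ^= 1        # one corrupted bit fails both a row and a column check
assert not check_2d(coded)
```

Because a single error breaks one row parity and one column parity, their intersection even locates the bad bit; the drawback above arises when corrupted bits pair up and cancel in both dimensions.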
Checksum:
> A checksum is an error detection technique based on the concept of redundancy.
> It is divided into two parts:
- Checksum Generator
- Checksum Checker
Checksum Generator:
> A checksum is generated at the sending side.
> The checksum generator subdivides the data into equal segments of n bits each, and all these segments are added together using one's complement arithmetic.
> The sum is complemented and appended to the original data; this is known as the checksum field.
> The extended data is transmitted across the network.

Checksum Checker:
> The checksum is verified at the receiving side.
> The receiver subdivides the incoming data into equal segments of n bits each, adds all these segments together, and then complements the sum.
> If the complement of the sum is zero, the data is accepted; otherwise the data is rejected.
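The generator and checker described above can be sketched as follows, using 16-bit segments (the function names are illustrative):

```python
def ones_complement_sum(segments, bits=16):
    """Add n-bit segments in one's complement arithmetic (wrap carries around)."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # fold the carry back in
    return total

def make_checksum(segments, bits=16):
    """Sender: complement of the one's complement sum of all segments."""
    return ~ones_complement_sum(segments, bits) & ((1 << bits) - 1)

def verify(segments_with_checksum, bits=16):
    """Receiver: sum of data plus checksum must complement to zero."""
    s = ones_complement_sum(segments_with_checksum, bits)
    return (~s & ((1 << bits) - 1)) == 0

data = [0x4500, 0x003C, 0x1C46, 0x4000]
ck = make_checksum(data)
assert verify(data + [ck])                               # intact data accepted
assert not verify([0x4501, 0x003C, 0x1C46, 0x4000, ck])  # corrupted data rejected
```

This is the same wrap-around addition used by the Internet checksum (RFC 1071), though real implementations work over byte streams rather than a list of integers.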
Cyclic Redundancy Check (CRC):

> CRC is a redundancy-based technique used to detect errors.

> It is based on binary division. Instead of adding bits to achieve a desired parity, the redundant CRC remainder is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number (the divisor).
Following are the steps used in CRC
for error detection:
> In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the number of bits in a predetermined binary number known as the divisor (i.e., the divisor has n + 1 bits).

> Secondly, the newly extended data is divided by the divisor using a process known as binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.

> Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This newly generated unit is sent to the receiver.

> The receiver receives the data followed by the CRC remainder. The receiver treats this whole unit as a single unit and divides it by the same divisor that was used to find the CRC remainder.
If the remainder of this division is zero, the data has no detectable error and is accepted.
If the remainder of this division is not zero, the data contains an error and is therefore discarded.
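The modulo-2 division in the steps above can be sketched in Python; the data/divisor values are a worked example, and the function name is illustrative:

```python
def crc_remainder(data_bits, divisor_bits):
    """Modulo-2 (XOR) long division; returns the CRC remainder as a bit string."""
    n = len(divisor_bits) - 1                 # number of appended 0s
    dividend = list(map(int, data_bits + "0" * n))
    divisor = list(map(int, divisor_bits))
    for i in range(len(data_bits)):
        if dividend[i] == 1:                  # only subtract (XOR) when leading bit is 1
            for j in range(len(divisor)):
                dividend[i + j] ^= divisor[j]
    return "".join(map(str, dividend[-n:]))

data, divisor = "100100", "1101"
rem = crc_remainder(data, divisor)            # remainder "001"
codeword = data + rem                         # sender transmits data + CRC

# Receiver: dividing the whole codeword by the same divisor leaves remainder 0
assert crc_remainder(codeword, divisor) == "0" * (len(divisor) - 1)
```

Note that the sender appends n zeros before dividing, while the receiver divides the codeword (data plus CRC) directly; both operations use the same XOR-based division.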
Error Correction:
Error correction codes are used to detect and correct errors when data is transmitted from the sender to the receiver.

There are two basic approaches to designing the channel code and protocol for an error correction system:

Automatic Repeat-Request (ARQ):
The transmitter sends the data along with an error-detection code, which the receiver checks for errors.
The receiver sends an acknowledgement (ACK) for correctly received data, and the transmitter re-sends the data if an acknowledgement is not received within the timeout period.
E.g., Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.

Forward Error Correction (FEC):
The transmitter encodes the data with an error-correction code and sends the coded message.
The receiver never sends any message back to the transmitter; instead, it decodes the message and corrects errors itself.
E.g., Hamming code.
Hamming Codes:
It is used to determine the position of the bit which is in error.
It can detect and correct single-bit errors only.
It was developed by R.W. Hamming.
It can be applied to any length of data unit and uses the relationship between the data bits and the redundant bits.

The number of redundant bits r for m data bits is calculated by

2^r >= m + r + 1
Steps for Hamming Codes:
1. 'm' information bits are combined with 'r' redundant bits to form an (m + r)-bit unit.

2. The location of each of the (m + r) digits is assigned a decimal value.

3. The 'r' parity bits are placed at positions 2^0, 2^1, ..., 2^(k-1).

4. At the receiver end, the parity bits are recalculated to determine the position of an error.
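The steps above can be sketched for the classic Hamming(7,4) case, where m = 4 and r = 3 (2^3 >= 4 + 3 + 1). The parity bits sit at positions 1, 2, and 4; the function names are illustrative:

```python
def hamming_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword (even parity)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]    # positions 1..7

def hamming_syndrome(code):
    """Recompute the parities; the result is the 1-based error position (0 = none)."""
    c = [None] + code          # shift to 1-based indexing
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    return s4 * 4 + s2 * 2 + s1

code = hamming_encode([1, 0, 1, 1])
assert hamming_syndrome(code) == 0    # clean codeword
code[4] ^= 1                          # corrupt position 5 (0-based index 4)
assert hamming_syndrome(code) == 5    # syndrome points at the error position
code[4] ^= 1                          # flipping that bit back corrects the error
assert hamming_syndrome(code) == 0
```

The syndrome works because each parity bit checks exactly the positions whose binary representation contains its power of two, so the failing checks spell out the error position in binary.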
Framing

The physical layer transmits data in the form of bits from source to destination.

The DLL packs these bits into frames: it takes the packets from the network layer and encapsulates them into frames.

Sometimes a frame would be too large, so it is further divided into smaller frames.
Parts of a Frame

a. Frame Header: It contains the source and destination addresses of the frame, along with the information needed to process the frame.

b. Payload field: It is the main message to be delivered in the network. It is much larger than the header.

c. Trailer: It contains the error detection and error correction bits.

d. Flag: It marks the beginning and end of the frame.

Problems in Framing:
a. Detecting the start of a frame: the receiver must recognize a special starting pattern or flag.

b. How a station detects a frame: every station continuously scans the medium for that starting sequence.

c. Detecting the end of a frame: the receiver must know where the frame stops, e.g., from a length field or an ending flag.

Types of Framing:
It is divided into two types:
> Fixed sized framing
> Variable sized framing

Fixed sized Framing:
- The frame size is fixed.
- It does not require additional boundary bits to identify the start and end of the frame.
- E.g., ATM cells.
- Drawback: it suffers from internal fragmentation if the data size is less than the frame size.
- Padding can be the solution for this.
Variable sized framing
- The frame size is not fixed; it varies.
- It requires an additional mechanism to identify the start and end of the frame.
- Two ways are defined:

- Length field: it states the size of the frame. It is used in Ethernet (IEEE 802.3).

- End Delimiter (ED): a fixed pattern is used as a delimiter to mark the end of the frame. It is used in Token Ring. If the pattern occurs inside the message itself, two approaches are used to avoid ambiguity:

Byte Stuffing: also called character-oriented framing. An escape byte is stuffed into the message to differentiate data bytes from the delimiter.

Bit Stuffing: also called bit-oriented framing. An extra bit is stuffed into the message (e.g., a 0 after five consecutive 1s) so that the data never mimics the delimiter pattern.
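Byte stuffing can be sketched as follows; the choice of `~` as flag and `}` as escape is an illustrative assumption, not a standard:

```python
FLAG, ESC = b"~", b"}"

def byte_stuff(payload: bytes) -> bytes:
    """Escape any flag/escape bytes in the payload, then frame it with flags."""
    out = bytearray(FLAG)
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC                  # prefix special bytes with the escape byte
        out.append(b)
    out += FLAG
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the framing flags and undo the escaping."""
    body = frame[1:-1]
    out, i = bytearray(), 0
    while i < len(body):
        if bytes([body[i]]) == ESC:
            i += 1                      # skip the escape, keep the next byte as data
        out.append(body[i])
        i += 1
    return bytes(out)

msg = b"data~with}specials"
assert byte_unstuff(byte_stuff(msg)) == msg
```

Bit stuffing works analogously at the bit level: the sender inserts a 0 after five consecutive 1s, and the receiver removes it, so the flag pattern 01111110 can never appear inside the payload.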
Flow Control:
Flow control is a set of procedures that restrict the amount of data a sender may send before it waits for some acknowledgment from the receiver.

The receiver may have limited speed and memory to store the data, so it must be able to tell the sending device to stop transmission temporarily before those limits are reached.

Two methods are used to control the flow of data:
> Stop and Wait
> Sliding Window
Stop-and-wait protocol
The stop-and-wait protocol works under the assumption that the communication channel is noiseless and transmissions are error-free.

Working:
The sender sends data to the receiver.
The sender stops and waits for the acknowledgment.
The receiver receives the data and processes it.
The receiver sends an acknowledgment for the above data to the sender.
The sender sends the next data to the receiver after receiving the acknowledgment of the previously sent data.
The process is unidirectional and continues until the sender sends the end of transmission (EOT) frame.
Sliding window protocol:
The sliding window protocol is a flow control protocol for noisy channels that allows the sender to send multiple frames even before acknowledgments are received. It is called a sliding window because the sender slides its window forward upon receiving the acknowledgments for the sent frames.

Working:
- The sender and receiver each have a "window" of frames. A window is a buffer that holds multiple frames. In the simplest (Go-Back-N) form, the size of the window on the receiver side is 1.
- Each frame is sequentially numbered from 0 to n - 1, where n is the window size at the sender side.
- The sender sends as many frames as fit in its window.
- After receiving the desired number of frames, the receiver sends an acknowledgment. The acknowledgment (ACK) carries the number of the next expected frame.
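A minimal trace of the sliding-window behaviour over a lossless link (the function name and trace format are illustrative):

```python
from collections import deque

def sliding_window_send(frames, window_size):
    """Trace a lossless sliding-window exchange: at most `window_size`
    unacknowledged frames are in flight at any moment."""
    in_flight = deque()
    log = []
    next_to_send = 0
    total = len(frames)
    while next_to_send < total or in_flight:
        # fill the window with as many frames as fit
        while next_to_send < total and len(in_flight) < window_size:
            in_flight.append(next_to_send)
            log.append(f"send {next_to_send}")
            next_to_send += 1
        # the receiver ACKs the oldest outstanding frame; the window slides
        acked = in_flight.popleft()
        log.append(f"ack {acked + 1}")   # ACK carries the next expected frame
    return log

trace = sliding_window_send(range(4), window_size=3)
assert trace[:4] == ["send 0", "send 1", "send 2", "ack 1"]
assert trace[-1] == "ack 4"
```

Note how three frames leave before the first ACK arrives; with stop-and-wait the sender would have idled after "send 0".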
Error Control:
Error Control is a combination of both error detection and error correction.
It ensures that the data received at the receiver end is the same as the one sent
by the sender.

Error detection is the process by which the receiver informs the sender about any
erroneous frame (damaged or lost) sent during transmission.

The most common techniques for error control are based on some or all of the
following:
Error detection: using a parity check or a CRC check.
Positive acknowledgement: the destination returns a positive acknowledgement for successfully received, error-free frames.
Re-transmission after timeout: the source re-transmits a frame that has not been acknowledged after a predetermined amount of time.
Negative acknowledgement and re-transmission: the destination returns a negative acknowledgement for frames in which an error is detected. The source re-transmits such frames.
Stop-and-wait ARQ
Stop-and-wait ARQ is a technique used to retransmit the data in case of a damaged or lost frame.

This technique works on the principle that the sender will not transmit a new frame until it receives the acknowledgement for the last transmitted frame.

Features of retransmission:
> The sender keeps a copy of the last transmitted frame until the ACK is received.
> Both the data frames and the ACKs are numbered alternately 0 and 1.
> If an error occurs, a NAK is received by the sender, which retransmits the last frame.
> It works with a timer: if the ACK is not received by the sender within the allotted time, the sender retransmits the frame.
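The retransmission behaviour can be sketched with a toy simulation over a lossy link. The loss model and function name are illustrative assumptions; real timers and NAKs are collapsed into a single "lost, try again" branch:

```python
import random

def stop_and_wait(num_frames, loss_rate=0.3, seed=42):
    """Simulate stop-and-wait ARQ: alternate sequence numbers 0/1 and
    retransmit the same frame whenever it (or its ACK) is lost."""
    rng = random.Random(seed)
    delivered, seq, attempts = [], 0, 0
    for frame in range(num_frames):
        while True:
            attempts += 1
            if rng.random() < loss_rate:     # frame or ACK lost -> timeout
                continue                     # retransmit the same frame
            delivered.append((seq, frame))   # receiver got it, ACK returns
            seq ^= 1                         # alternate 0 and 1
            break
    return delivered, attempts

delivered, attempts = stop_and_wait(4)
assert [f for _, f in delivered] == [0, 1, 2, 3]   # all frames, in order
assert [s for s, _ in delivered] == [0, 1, 0, 1]   # alternating sequence numbers
assert attempts >= 4                               # losses force extra attempts
```

The alternating 0/1 numbering is what lets the real protocol tell a retransmitted frame apart from a new one when an ACK, rather than a data frame, was lost.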
Multiple Access Protocol
Multiple Access Protocols are mechanisms that regulate how multiple devices
communicate over a common channel or network.

When nodes or stations are connected and use a common link, called a
multipoint or broadcast link, we need a multiple access protocol to coordinate
access to the link.
Many formal protocols have been devised to handle access to a shared link.
All of these protocols belong to a sublayer of the data link layer called MAC (Medium Access Control).
We categorize them into three groups: random access, controlled access, and channelization protocols.
Random Access Protocol
In random access or contention methods, no station is superior to another station and none is assigned control over another.
No station grants or denies another station permission to send.
At each instant, a station that has data to send uses a procedure defined by the protocol to decide whether or not to send.
This decision depends on the state of the medium (idle or busy).
It has two features:
i) No fixed time for sending data.
ii) No fixed sequence of stations sending the data.
ALOHA
It was developed as part of a project at the University of Hawaii.
It proposes how multiple terminals can access the medium without interference or collision.
It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium. Using this method, any station can transmit data across the network whenever it has a data frame available for transmission.
Aloha Rules
> Any station can transmit data on the channel at any time.
> It does not require any carrier sensing.
> Frames may collide and be lost when multiple stations transmit at overlapping times.
> Acknowledgments of the frames exist in Aloha; there is no collision detection.
> It requires retransmission of data after some random amount of time.
Pure ALOHA
Whenever a station has data available for sending over the channel, it simply transmits it; this is pure Aloha.
In pure Aloha, each station transmits data on the channel without checking whether the channel is idle or busy, so collisions may occur and data frames can be lost.
When a station transmits a data frame, it waits for the receiver's acknowledgment.
If the acknowledgment does not arrive within the specified time, the station assumes the frame has been lost or destroyed. It then waits for a random amount of time, called the backoff time (Tb), and retransmits the frame; this repeats until the frame is successfully delivered.
The total vulnerable time of pure Aloha is 2 * Tfr.
Maximum throughput occurs when G = 1/2 and is about 18.4%.
The probability of successful transmission of a data frame is S = G * e^(-2G),
where G is the average number of frames generated by all stations in one frame transmission time Tfr.
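The throughput formula can be checked numerically (the function name is illustrative):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): expected successful transmissions per frame time."""
    return G * math.exp(-2 * G)

peak = pure_aloha_throughput(0.5)     # maximum at G = 1/2
assert abs(peak - 1 / (2 * math.e)) < 1e-12   # 1/(2e) ~ 18.4 %
assert round(peak, 3) == 0.184

# Throughput falls off on either side of the optimum offered load
assert pure_aloha_throughput(0.1) < peak
assert pure_aloha_throughput(2.0) < peak
```

The 2*Tfr vulnerable time is what produces the factor 2 in the exponent: a frame collides with anything sent in the frame time before or during its own transmission.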
Slotted ALOHA
Slotted Aloha was designed to overcome pure Aloha's low efficiency, since pure Aloha has a very high probability of frame collision.
In slotted Aloha, the shared channel is divided into fixed time intervals called slots.
If a station wants to send a frame on the shared channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed to be sent in each slot.
If a station misses the beginning of a slot, it must wait until the beginning of the next slot.
However, the possibility of a collision remains when two or more stations try to send a frame at the beginning of the same time slot.

Maximum throughput occurs in slotted Aloha when G = 1 and is about 36.8%.
The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
The total vulnerable time required in slotted Aloha is Tfr.
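Halving the vulnerable time (Tfr instead of 2*Tfr) removes the factor 2 from the exponent, which doubles the achievable peak; a quick numerical check (function name illustrative):

```python
import math

def slotted_aloha_throughput(G):
    """S = G * e^(-G): a frame succeeds if no other frame starts in its slot."""
    return G * math.exp(-G)

peak = slotted_aloha_throughput(1.0)          # maximum at G = 1
assert abs(peak - 1 / math.e) < 1e-12         # 1/e ~ 36.8 %

# Exactly twice pure Aloha's peak of 1/(2e)
pure_peak = 0.5 * math.exp(-1.0)
assert abs(peak / pure_peak - 2) < 1e-9
```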
CSMA (Carrier Sense Multiple Access)
Carrier Sense Multiple Access ensures fewer collisions as the station is
required to first sense the medium (for idle or busy) before transmitting
data.
It means that if the channel is idle, the station can send data to the
channel. Otherwise, it must wait until the channel becomes idle.
Hence, it reduces the chances of a collision on a transmission medium.
CSMA Access modes:
1-persistent: The node senses the channel; if idle, it sends the data. Otherwise it continuously keeps checking the medium and transmits unconditionally (with probability 1) as soon as the channel becomes idle.
Non-persistent: The node senses the channel; if idle, it sends the data. Otherwise it checks the medium again after a random amount of time (not continuously) and transmits when the channel is found idle.

P-persistent: The node senses the medium; if idle, it sends the data with probability p. If the data is not transmitted (probability 1 - p), it waits for some time and checks the medium again; if idle, it again sends with probability p. This repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.

O-persistent: The priority of the nodes is decided beforehand, and transmission occurs in that order. If the medium is idle, a node waits for its assigned time slot to send data.
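The p-persistent decision rule can be sketched as a single step function (the names and the three-way outcome encoding are illustrative):

```python
import random

def p_persistent_attempt(channel_idle, p, rng):
    """One decision step of p-persistent CSMA: transmit with probability p
    when the channel is idle, otherwise defer to the next slot."""
    if not channel_idle:
        return "wait"
    return "transmit" if rng.random() < p else "defer"

rng = random.Random(7)
decisions = [p_persistent_attempt(True, 0.25, rng) for _ in range(1000)]
rate = decisions.count("transmit") / len(decisions)
assert 0.15 < rate < 0.35            # empirical rate close to p = 0.25
assert p_persistent_attempt(False, 0.25, rng) == "wait"
```

With p = 1 this degenerates to 1-persistent behaviour; smaller p trades channel idle time for a lower collision probability when several stations are waiting.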
CSMA/CD (Carrier Sense Multiple Access with
Collision Detection)
The CSMA method does not tell us what to do in case there is a collision.
In Carrier sense multiple access with collision detection method, a station
monitors the medium after it sends a frame to see if the transmission was
successful.
If so, the transmission is completed. However, if there is a collision, the frame
is sent again.
The basic idea behind CSMA/CD is that a station needs to be able to receive
while transmitting, to detect a collision. When there is no collision, the station
receives one signal; its own signal. When there is a collision, the station
receives two signals: its own signal and the signal transmitted by a second
station.
To distinguish between these two cases, the received signals in these two
cases must be significantly different.
In other words, the signal from the second station needs to add a significant
amount of energy to the one created by the first station.
Collision detection involves the sender monitoring the medium while it transmits.
If there is just one signal (its own), the data is being sent successfully; if there are two signals (its own and the one it has collided with), a collision has occurred.
To distinguish between these two cases, a collision must have a significant impact on the received signal: the second signal must add a significant amount of energy to the first.
However, this applies only to wired networks, where the received signal has almost the same energy as the sent signal.
In wireless networks, much of the transmitted energy is lost in transmission, so the received signal has very little energy and a collision may add only 5 to 10 percent additional energy.
This is not enough for effective collision detection. On wireless networks we therefore need to avoid collisions, because they cannot reliably be detected.
CSMA/CA (Carrier Sense Multiple Access with
Collision Avoidance)
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) was invented for wireless networks.
In contrast to the Carrier Sense Multiple Access/Collision Detection (CSMA/CD) protocol, which handles transmissions only after a collision has taken place, CSMA/CA works to avoid collisions before they occur. Collisions are avoided through CSMA/CA's three strategies, as shown in the figure below.
• Interframe space (IFS): the station waits for the medium to become idle, and even if it is found idle it does not immediately send data (to avoid a collision due to propagation delay); rather it waits for a period of time called the interframe space, or IFS. After this time, it again checks that the medium is idle. The IFS can also be used to define the priority of a station or a frame: the higher the IFS, the lower the priority.
• Contention window: an amount of time divided into slots. A station that is ready to send frames chooses a random number of slots as its wait time.
• Acknowledgement: positive acknowledgements and a time-out timer help guarantee successful transmission of the frame.
Channelization protocols
Channelization is a way to provide multiple access by sharing the available bandwidth in time, frequency, or code between source and destination nodes. Channelization protocols can be classified as:

FDMA (Frequency Division Multiple Access)
TDMA (Time Division Multiple Access)
CDMA (Code Division Multiple Access)
FDMA (Frequency Division Multiple Access)
In this technique, the bandwidth is divided into frequency bands, and each frequency band is allocated to a particular station to transmit its data.
The frequency band allocated to a station is reserved for it.
Each station uses a band-pass filter to confine its transmissions to its assigned frequency band.
The frequency bands are separated by small gaps, called guard bands, to prevent interference between adjacent bands.
TDMA (Time Division Multiple Access)
TDMA is another technique to enable multiple access in a shared medium.
The stations share the channel's bandwidth time-wise.
Every station is allocated a fixed time to transmit its signal.
The data link layer tells its physical layer to use the allotted time.
TDMA requires synchronization between stations.
There is a time gap between the time intervals, called guard time, which is
assigned for the synchronization between stations.
The rate of data in TDMA is greater than FDMA but lesser than CDMA.
CDMA (Code Division Multiple Access)
In the CDMA technique, communication happens using codes. Different stations can transmit their signals on the same channel at the same time using different codes.
There is only one channel in CDMA, and it carries all the signals.
CDMA is based on a coding technique where each station is assigned a code (a sequence of numbers called chips).
It differs from TDMA in that all the stations can transmit simultaneously; there is no time sharing.
And it differs from FDMA in that a single channel occupies the whole bandwidth.
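The chip idea can be sketched with orthogonal (Walsh-style) codes; the station names and 4-chip codes below are illustrative assumptions:

```python
# Mutually orthogonal chip sequences: the dot product of any two is zero
CODES = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(station_bits):
    """Each station multiplies its data bit (+1 or -1) by its chip sequence;
    the shared channel simply adds all the signals together."""
    channel = [0] * 4
    for station, bit in station_bits.items():
        for i, chip in enumerate(CODES[station]):
            channel[i] += bit * chip
    return channel

def decode(channel, station):
    """Correlate with the station's code: inner product divided by code length."""
    code = CODES[station]
    return sum(c * x for c, x in zip(code, channel)) // len(code)

signal = transmit({"A": +1, "B": -1, "C": +1})   # three simultaneous senders
assert decode(signal, "A") == +1
assert decode(signal, "B") == -1
assert decode(signal, "C") == +1
```

Because the codes are orthogonal, correlating the summed channel with one station's code cancels every other station's contribution, which is why all stations can share the full bandwidth at the same time.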
Controlled access protocols
Controlled access protocols grant permission to send to only one node at a time, thus avoiding collisions on the shared medium.
No station can send data unless it has been authorized by the other stations.
The protocols in the controlled access category are as follows:
Reservation
Polling
Token Passing
Reservation:
In the reservation method, a station needs to make a reservation before
sending data.
Time is divided into intervals. In each interval, a reservation frame precedes the
data frames sent in that interval.
If there are N stations in the system, there are exactly N reservation mini slots
in the reservation frame. Each mini slot belongs to a station. When a station
needs to send a data frame, it makes a reservation in its own mini slot. The
stations that have made reservations can send their data frames after the
reservation frame.
Figure shows a situation with five stations and a five-mini slot reservation
frame. In the first interval, only stations 1, 3, and 4 have made reservations. In
the second interval, only station 1 has made a reservation.
Polling:
Polling works with topologies in which one device is served as a primary station
and the other devices are secondary stations. All data exchanges must be made
through the primary device even when the destination is a secondary device.
The primary device controls the link; the secondary devices follow its
instructions. It is up to the primary device to determine which device is allowed
to use the channel at a given time. The primary device, therefore, is always the
initiator of a session.
The primary device sends and receives data by using the Select and Poll functions.
Select:
The select function is used whenever the primary device has something to send. Before sending data, the primary creates and transmits a select (SEL) frame that includes the address of the intended secondary; this alerts the secondary to the upcoming transmission, and the primary waits for an acknowledgment of the secondary's ready status.
Poll:
The poll function is used whenever the primary device has something to receive.
When the primary is ready to receive data,
it must ask (poll) each device if it has
anything to send. When the first secondary
is approached, it responds either with a
NAK frame if it has nothing to send or with
data frame if it does. When the response
is a data frame, the primary reads the
frame and returns an acknowledgment
(ACK frame), verifying its receipt.
Token Passing:
In the token-passing method, the stations in a network are organized in a logical
ring. In other words, for each station, there is a predecessor and a successor.
In this method, a special packet called a token circulates through the ring. The
possession of the token gives the station the right to access the channel and
send its data.
When a station has some data to send, it waits until it receives the token from
its predecessor. It then holds the token and sends its data.
When the station has no more data to send, it releases the token, passing it to the next logical station in the ring. The station cannot send data again until it receives the token in the next round. If a station receives the token and has no data to send, it simply passes the token to the next station (its successor).
The high-speed token ring networks FDDI (Fiber Distributed Data Interface) and CDDI (Copper Distributed Data Interface) use this method. The Token Bus LAN (a bus with a logical ring), standardized by IEEE, and the Token Ring LAN, designed by IBM, also use it.
Overview of IEEE Standard 802:
In 1985, the Computer Society of the IEEE started a project, called Project 802, to set
standards to enable intercommunication among equipment from a variety of
manufacturers.
Project 802 does not seek to replace any part of the OSI or the Internet model. Instead, it
is a way of specifying functions of the physical layer and the data link layer of major LAN
protocols.

IEEE 802 comprises standards with separate working groups that regulate different communication networks, including IEEE 802.1 for bridging (bottom sublayer), 802.2 for Logical Link Control (upper sublayer), 802.3 for Ethernet, 802.5 for Token Ring, 802.11 for Wi-Fi, 802.15 for Wireless Personal Area Networks, 802.15.1 for Bluetooth, 802.16 for Wireless Metropolitan Area Networks, etc.
Ethernet
The original Ethernet was created in 1976 and has since gone through four generations.

Ethernet is a family of computer networking technologies commonly used in local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs).
Systems using Ethernet communication divide data streams into packets, which are known as frames.
Frames include source and destination address information, as well as mechanisms used to detect errors in transmitted data and to request retransmission.
An Ethernet cable is the physical, encased wiring over which the data travels.
Compared to wireless LAN technology, Ethernet is typically less vulnerable to disruption. It can also offer a greater degree of network security and control than wireless technology, as devices must connect using physical cabling, making it difficult for outsiders to access network data or hijack bandwidth for unsanctioned devices.
Token Ring
Token ring is the IEEE 802.5 standard for a token-passing ring in Communication
networks.
A ring consists of a collection of ring interfaces connected by point-to-point lines i.e. ring
interface of one station is connected to the ring interfaces of its left station as well as
right station. Internally, signals travel around the Communication network from one
station to the next in a ring.
These point-to-point links can be created with twisted pair, coaxial cable or fiber optics.
Each bit arriving at an interface is copied into a 1-bit buffer, where it is checked, possibly modified, and then copied out to the ring again.
This copying of the bit into the buffer introduces a 1-bit delay at each interface.
Token Ring is a LAN protocol defined in the IEEE 802.5 where all stations are connected
in a ring and each station can directly hear transmissions only from its immediate
neighbor. Permission to transmit is granted by a message (token) that circulates around
the ring. A token is a special bit pattern (3 bytes long).
There is only one token in the network. Token-passing networks move a small frame,
called a token, around the network; possession of the token grants the right to transmit. If
a node that receives the token has data to transmit, it seizes the token, alters one bit of the
token (which turns the token into a start-of-frame sequence), appends the information
that it wants to transmit, and sends this information to the next station on the ring. Since
only one station can possess the token and transmit data at any given time,
there are no collisions.
Token Bus:
Token Bus (IEEE 802.4) is a standard for implementing token passing over a virtual ring in
LANs. The physical medium has a bus or a tree topology and uses coaxial cable. A virtual
ring is created among the nodes/stations, and the token is passed from one node to the next
in sequence along this virtual ring. Each node knows the address of its preceding
station and its succeeding station. A station can only transmit data when it holds the token.

The working principle of Token Bus is similar to that of Token Ring.
Token Passing Mechanism in Token Bus
A token is a small message that circulates among the stations of a computer network
providing permission to the stations for transmission. If a station has data to transmit
when it receives a token, it sends the data and then passes the token to the next station;
otherwise, it simply passes the token to the next station.
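The token-passing rule above can be sketched as a small simulation. The station names and frames are hypothetical, and a real token bus also handles ring maintenance, lost tokens, and priorities; this only shows the "send if you have data, then pass the token" rule:

```python
from collections import deque

def token_bus_round(stations):
    """One full circulation of the token around the virtual ring.

    `stations` maps each station name to a queue of pending frames; the token
    visits the stations in order, standing in for the logical ring. Returns a
    log of what happened at each station.
    """
    log = []
    for name, queue in stations.items():   # token arrives at each station in turn
        if queue:                          # station has data: transmit, then pass token
            frame = queue.popleft()
            log.append(f"{name} sends {frame}, then passes the token")
        else:                              # nothing to send: just pass the token on
            log.append(f"{name} passes the token")
    return log

ring = {"A": deque(["frame1"]), "B": deque(), "C": deque(["frame2"])}
for line in token_bus_round(ring):
    print(line)
```

Because only the token holder may transmit, no two stations ever send at once, which is why token-passing networks are collision-free.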
Bluetooth:
Bluetooth is a short-range wireless communication technology that allows devices such
as mobile phones, computers, and peripherals to transmit data or voice wirelessly over a
short distance.
The purpose of Bluetooth is to replace the cables that normally connect devices, while
still keeping the communications between them secure.
It creates a 10-meter radius wireless network, called a personal area network (PAN) or
piconet, which can network between two and eight devices. Bluetooth uses less power
and costs less to implement than Wi-Fi.
Its lower power also makes it far less prone to suffering from or causing interference with
other wireless devices in the same 2.4GHz radio band.
There are some downsides to Bluetooth. The first is that it can be a drain on battery
power for mobile wireless devices like smartphones, though as the technology (and
battery technology) has improved, this problem is less significant than it used to be.
Also, the range is fairly limited, usually extending only about 30 feet, and as with all
wireless technologies, obstacles such as walls, floors, or ceilings can reduce this range
further. The pairing process may also be difficult, depending on the devices involved, the
manufacturers, and other factors, all of which can cause frustration when attempting to
connect.
Bluetooth defines two types of networks: the piconet and the scatternet.
Wi-Fi:
The IEEE 802.11 wireless LAN, also known as Wi-Fi, is a popular wireless networking
technology that uses radio waves to provide wireless high-speed Internet and network
connections.
Wi-Fi networks have no physical wired connection between sender and receiver; instead,
they use radio frequency (RF) technology (a frequency within the electromagnetic spectrum
associated with radio-wave propagation). When an RF current is supplied to an antenna,
an electromagnetic field is created that can then propagate through space.
There are several 802.11 standards for wireless LAN technology, including 802.11b,
802.11a, and 802.11g.
Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to seamlessly
interwork with its wired sister protocol Ethernet. Devices that can use Wi-Fi technologies
include desktops and laptops, smartphones and tablets, smart TVs, printers, digital audio
players, digital cameras, cars and drones. Compatible devices can connect to each other
over Wi-Fi through a wireless access point as well as to connected Ethernet devices and
may use it to access the Internet. Such an access point (or hotspot) has a range of about
20 meters (66 feet) indoors and a greater range outdoors. Hotspot coverage can be as
small as a single room with walls that block radio waves, or as large as many square
kilometers achieved by using multiple overlapping access points.
Data Link Layer Protocols (DLL):
Data Link Layer protocols are generally responsible to simply ensure and confirm that the
bits and bytes that are received are identical to the bits and bytes being transferred. It is
basically a set of specifications that are used for implementation of data link layer just
above the physical layer of the OSI Model.

High-Level Data Link Control (HDLC):
HDLC is a protocol that is now regarded as an umbrella under which many
wide-area protocols reside. It is used to connect remote devices to mainframe
computers at central locations, in either point-to-point or multipoint connections. It is
also used to ensure that data units arrive correctly and flow properly from
one network point to the next. It provides both best-effort (unreliable) service and
reliable service. HDLC is a bit-oriented protocol that is applicable to both point-to-point
and multipoint communications.
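Because HDLC is bit-oriented, it uses bit stuffing so that its flag pattern 01111110 can never appear inside the payload: the sender inserts a 0 after every run of five consecutive 1s, and the receiver removes it. A minimal sketch, representing the bit stream as a string of '0'/'1' characters:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s (sender side)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")    # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the 0 that follows every run of five 1s (receiver side)."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:               # discard the stuffed 0
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "0111111101111110"
print(bit_stuff(data))                      # → 011111011011111010
print(bit_unstuff(bit_stuff(data)) == data) # → True: round-trips exactly
```

After stuffing, six consecutive 1s can only ever occur in the real flag bytes that delimit the frame, so the receiver can always find frame boundaries.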
Point to Point Protocol (PPP):
PPP is a protocol that is basically used to frame IP packets, adding framing bytes so the
receiver can find packet boundaries.
It is a data link control facility required for transferring IP packets, typically
between an Internet Service Provider (ISP) and a home user. It is a robust protocol that
can also transport other types of packets along with IP packets. It is a byte-oriented
protocol that also supports error detection.
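Because PPP is byte-oriented, its HDLC-like framing uses byte stuffing: the payload is delimited by the flag byte 0x7E, and any flag or escape (0x7D) byte inside the payload is replaced by 0x7D followed by the original byte XORed with 0x20. A minimal sketch of the escaping (real PPP also carries protocol fields and an FCS):

```python
FLAG, ESC = 0x7E, 0x7D

def ppp_stuff(payload: bytes) -> bytes:
    """Escape flag/escape bytes in the payload, then wrap it in flag bytes."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])  # escape marker, then toggle bit 5
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def ppp_unstuff(frame: bytes) -> bytes:
    """Receiver side: strip the flags and undo the escaping."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)

data = bytes([0x01, 0x7E, 0x02, 0x7D, 0x03])
framed = ppp_stuff(data)
print(ppp_unstuff(framed) == data)  # → True: round-trips exactly
```

After stuffing, the flag byte 0x7E never occurs inside the frame body, so the receiver can locate frame boundaries simply by scanning for it.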

Difference Between High-level Data Link Control (HDLC) and Point-to-Point Protocol (PPP):
HDLC is a bit-oriented protocol, whereas PPP is a byte-oriented protocol.
Another difference is that HDLC can be used in both point-to-point and multipoint
configurations, while PPP is used only in point-to-point configuration.
