Ad-Hoc & Sensor Networks IV Year II Sem


Ad-Hoc & Sensor Networks

IV year II sem
INFORMATION TECHNOLOGY

UNIT - II
ISSUES IN DESIGNING A MAC PROTOCOL FOR AD HOC
WIRELESS NETWORKS

The main issues that need to be addressed while designing a MAC protocol for ad hoc wireless networks are:

Bandwidth efficiency is defined as the ratio of the bandwidth used for actual data
transmission to the total available bandwidth. The MAC protocol for ad hoc networks
should maximize it. For example, if 1.4 Mbps of a 2 Mbps channel actually carries data,
the bandwidth efficiency is 0.7.

Quality of service (QoS) support is essential for time-critical applications. The MAC protocol
for ad hoc networks should take the constraints of ad hoc networks into account while providing such support.

Synchronization can be achieved by exchange of control packets.


Hidden and exposed terminal problems:

Figure: (a) The hidden station problem. (b) The exposed station problem.
ISSUES IN DESIGNING A MAC PROTOCOL FOR AD HOC
WIRELESS NETWORKS

Error-Prone Shared Broadcast Channel:


• When a node is receiving data, no other node in its neighborhood, apart from the
sender, should transmit.
• A node should get access to the shared medium only when its transmissions do not
affect any ongoing session.
• Since multiple nodes may contend for the channel simultaneously, the possibility of
packet collisions is quite high in wireless networks.
ISSUES IN DESIGNING A MAC PROTOCOL FOR AD HOC
WIRELESS NETWORKS

Distributed Nature/Lack of Central Coordination:


• In cellular networks, the base stations act as central coordinating nodes and allocate
bandwidth to the mobile terminals.
• This is not possible in an ad hoc network, where nodes keep moving continuously.
Mobility of Nodes :
• Important factor affecting the performance (throughput) of the protocol.
• The protocol design must take this mobility factor into consideration so that the
performance of the system is not significantly affected due to node mobility.
Design goals of a MAC Protocol

Design goals of a MAC protocol for ad hoc wireless networks

 The operation of the protocol should be distributed.

The protocol should provide QoS support for real-time traffic.

The access delay, which refers to the average delay experienced by any packet to get transmitted, must be kept low.


 The available bandwidth must be utilized efficiently.

The protocol should ensure fair allocation of bandwidth to nodes.


Design goals of a MAC Protocol

 Control overhead must be kept as low as possible.

The protocol should minimize the effects of hidden and exposed terminal
problems.
 The protocol must be scalable to large networks.

 It should have power control mechanisms.

The protocol should have mechanisms for adaptive data rate control.

 It should try to use directional antennas.

The protocol should provide synchronization among nodes.


Classifications of MAC protocols

Ad hoc network MAC protocols can be classified into three types:
 Contention-based protocols
 Contention-based protocols with reservation mechanisms
 Contention-based protocols with scheduling mechanisms
Classifications of MAC
protocols
Contention Based Protocols
• A node does not make any resource reservation a priori.

• Whenever it receives a packet to be transmitted, it contends with its neighbor nodes for access to the shared channel.

• Cannot provide QoS guarantees to sessions since nodes are not guaranteed
regular access to the channel.
Contention Based Protocols (contd..)
• Random access protocols can be further divided into two types:

1. Sender-initiated protocols: Packet transmissions are initiated by the sender node.

2. Receiver-initiated protocols: The receiver node initiates the contention resolution protocol.
Contention Based Protocols (contd..)
• Sender-initiated protocols can be further divided into two types:

I. Single-channel sender-initiated protocols:

The total available bandwidth is used as it is, without being divided.

A node that wins the contention to the channel can make use of the entire bandwidth.

II. Multichannel sender-initiated protocols:

The available bandwidth is divided into multiple channels.

This enables several nodes to simultaneously transmit data, each using a separate
channel.
MACAW: A Media Access Protocol for Wireless LANs

MACAW: A Media Access Protocol for Wireless LANs is based on the MACA (Multiple Access Collision Avoidance) protocol.
• MACA
When a node wants to transmit a data packet, it first transmits an RTS (Request To Send) frame.


The receiver node, on receiving the RTS packet, transmits a CTS (Clear To Send) packet if it is ready to receive the data packet.


MACA (contd..)

Once the sender receives the CTS packet without any error, it starts
transmitting the data packet.

If a packet transmitted by a node is lost, the node uses the binary
exponential back-off (BEB) algorithm to back off a random interval of
time before retrying.

The binary exponential back-off mechanism used in MACA may sometimes starve flows. This problem is solved by MACAW.
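As a rough sketch of the exchange described above (the function names and message format here are assumptions for illustration, not part of the MACA specification), the sender transmits the data packet only after it receives an error-free CTS; otherwise it backs off and retries later:

```python
# Minimal sketch of the MACA-style RTS-CTS-DATA exchange (illustrative only).

def make_receiver(ready_to_receive: bool):
    """Receiver side: answer an RTS with a CTS only when ready to receive."""
    def on_rts(rts):
        if ready_to_receive:
            return {"type": "CTS", "for": rts["src"]}
        return None                          # no CTS -> sender must back off
    return on_rts

def send_packet(node_id: str, data: str, on_rts) -> str:
    rts = {"type": "RTS", "src": node_id}    # step 1: transmit RTS
    cts = on_rts(rts)                        # step 2: wait for CTS
    if cts is not None and cts["for"] == node_id:
        return f"{node_id} transmits DATA: {data}"   # step 3: transmit the data packet
    return f"{node_id} received no CTS, backs off and retries later"

print(send_packet("A", "hello", make_receiver(ready_to_receive=True)))
print(send_packet("A", "hello", make_receiver(ready_to_receive=False)))
```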
MACA
Protocol

Figure: The MACA protocol. (a) A sending an RTS to B. (b) B responding with a CTS to A.
MACA
examples

[Figure: A transmits an RTS to B; B's CTS is heard by both A and C.]

 MACA avoids the problem of hidden terminals
 A and C want to send to B
 A sends RTS first
 C waits after receiving CTS from B
MACA
examples
 MACA avoids the problem of exposed terminals
 B wants to send to A, C to another terminal
 now C does not have to wait, since it cannot receive the CTS from A

[Figure: B's RTS is heard by both A and C; A's CTS reaches only B.]
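A minimal sketch of the deferral rule the two examples above illustrate (the function name is an assumption): overhearing a CTS means a neighbor is about to receive, so the node must defer; overhearing only an RTS, as in the exposed-terminal case, does not force deferral.

```python
# Deferral decision in a MACA-style protocol, based on overheard control packets.

def must_defer(overheard_control_packets) -> bool:
    """Return True if this node has to stay silent for the ongoing exchange."""
    return "CTS" in overheard_control_packets

print(must_defer({"CTS"}))   # hidden-terminal case: C hears B's CTS -> defer
print(must_defer({"RTS"}))   # exposed-terminal case: C hears only B's RTS -> free to send
```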
MACA

Packet transmission in MACA


MACA (contd..)
• If a packet transmitted by a node is lost, the node uses the binary
exponential back-off (BEB) algorithm to back off for a random
interval of time before retrying.

• In the binary exponential back-off mechanism, each time a collision is detected, the node doubles its maximum back-off window.
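A minimal sketch of the BEB rule described above; the window limits and the random draw are assumptions made for illustration, not values taken from MACA.

```python
import random

class BinaryExponentialBackoff:
    """Double the maximum back-off window on every collision (BEB)."""
    def __init__(self, min_window=2, max_window=1024):
        self.min_window = min_window
        self.max_window = max_window        # assumed cap, not part of the slide
        self.window = min_window

    def on_collision(self):
        self.window = min(self.window * 2, self.max_window)

    def on_success(self):
        self.window = self.min_window       # BEB resets to the minimum window

    def backoff_interval(self):
        return random.randint(1, self.window)   # random wait before retrying

beb = BinaryExponentialBackoff()
for _ in range(3):
    beb.on_collision()
print(beb.window, beb.backoff_interval())       # window = 16 after 3 collisions
```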
MACAW
• The binary exponential back-off mechanism used in MACA at times starves flows. For
example, consider two nodes S1 and S2, both of which keep generating a high volume of traffic.
• The node that first captures the channel (say, node S1) starts transmitting packets. The
packets transmitted by the other node S2 collide, and S2 keeps incrementing its
back-off window according to the BEB algorithm.
• As a result, the probability of node S2 acquiring the channel keeps decreasing, and over a
period of time it gets completely blocked.
MACAW (contd..)
• To prevent such large variations in the back-off values, a multiplicative
increase and linear decrease (MILD) back-off mechanism is used in
MACAW.

• Here, upon a collision, the back-off is increased by a multiplicative factor (1.5), and upon a successful transmission, it is decremented by one.

• This avoids large oscillations in the back-off values, and hence long contention periods
after every successful transmission, while still providing a reasonably quick escalation
in the back-off values when contention is high.
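A minimal sketch of the MILD rule, mirroring the BEB sketch above; the window bounds are again assumptions used only for illustration.

```python
class MILDBackoff:
    """Multiplicative increase (x1.5 on collision), linear decrease (-1 on success)."""
    def __init__(self, min_window=2, max_window=1024):
        self.min_window = min_window
        self.max_window = max_window      # assumed bounds for the sketch
        self.window = min_window

    def on_collision(self):
        self.window = min(int(self.window * 1.5), self.max_window)

    def on_success(self):
        # decrement by one instead of resetting, avoiding large oscillations
        self.window = max(self.window - 1, self.min_window)

mild = MILDBackoff(min_window=8)
mild.on_collision(); mild.on_collision()   # window: 8 -> 12 -> 18
mild.on_success()                          # window: 18 -> 17
print(mild.window)
```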
MACAW (contd..)
• MACAW implements per flow fairness as opposed to the per node fairness in
MACA.

• This is done by maintaining multiple queues at every node, one for each
data stream, and running the back-off algorithm independently for each queue (a sketch of this per-flow organization is given below).

• In addition to the RTS and CTS control packets used in MACA, MACAW uses
a new control packet called the acknowledgment (ACK) packet.

• In MACA, the responsibility of recovering from transmission errors lies with the transport layer, where recovery takes about 0.5 seconds.
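A minimal sketch of the per-flow organization mentioned above, assuming a simple per-flow state record (the flow key and window values are illustrative): each destination gets its own queue and its own back-off window, so a collision on one flow does not penalize the others.

```python
from collections import defaultdict, deque

class FlowState:
    """One packet queue and one independent back-off window per flow."""
    def __init__(self, min_window=2):
        self.queue = deque()
        self.backoff_window = min_window    # maintained per flow, not per node

flows = defaultdict(FlowState)              # key: (source, destination) flow id

flows[("S1", "R")].queue.append("pkt-1")
flows[("S2", "R")].queue.append("pkt-1")

# A collision on S1's flow enlarges only S1's back-off window (MILD: x1.5).
flows[("S1", "R")].backoff_window = int(flows[("S1", "R")].backoff_window * 1.5)
print(flows[("S1", "R")].backoff_window, flows[("S2", "R")].backoff_window)
```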
MACAW (contd..)
• In MACAW, the error recovery responsibility is given to the data link layer
(DLL), where the recovery process can be made quicker.

• In MACAW, after successful reception of each data packet, the receiver node
transmits an ACK packet.

• If the sender does not receive the ACK packet, it reschedules the same data
packet for transmission.

• The back-off counter is incremented if the ACK packet is not received by the
sender.
MACAW (contd..)
• If the ACK packet got lost in transmission, the sender would retry by transmitting an RTS for
the same packet.

• But now the receiver, instead of sending back a CTS, sends an ACK for the packet received,
and the sender moves on to transmit the next data packet.
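A minimal sketch of this receiver-side rule (the class name and the use of sequence numbers are assumptions of the sketch): a retransmitted RTS for an already-received packet is answered with an ACK instead of a CTS, so the sender can move on.

```python
class MacawReceiver:
    """Answer RTS with CTS normally, but with ACK if the packet was already received."""
    def __init__(self):
        self.received = set()          # sequence numbers of packets received so far

    def on_rts(self, src, seq):
        if seq in self.received:
            return ("ACK", seq)        # earlier ACK was lost; do not ask for the data again
        return ("CTS", seq)

    def on_data(self, src, seq):
        self.received.add(seq)
        return ("ACK", seq)

rx = MacawReceiver()
print(rx.on_rts("S", 7))   # ('CTS', 7) -> sender transmits DATA
print(rx.on_data("S", 7))  # ('ACK', 7) -> suppose this ACK is lost
print(rx.on_rts("S", 7))   # ('ACK', 7) -> sender moves on to the next packet
```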
MACAW (contd..)
Summary of MACAW
The MACAW protocol has been designed based on four main observations.
1. Congestion at the receiver node is solved by using the RTS-CTS-DS-DATA-ACK exchange
mechanism.
2. Instead of characterizing back-off by a single back-off parameter, separate back-off
parameters have been introduced for each flow.
3. Learning about congestion at various nodes must be a collective enterprise. Therefore, the
notion of copying back-off values from overheard packets has been introduced in MACAW.
4. The synchronization information needs to be propagated to the concerned nodes at
appropriate times. This is done in MACAW through the DS and RRTS packets.
Floor Acquisition Multiple
Access Protocols
(FAMA)
FAMA
• The floor acquisition multiple access (FAMA) protocols are based on a
channel access discipline which consists of a carrier-sensing operation
and a collision-avoidance dialogue between the sender and the receiver.

• Floor acquisition refers to the process of gaining control of the channel.

• At any given point of time, the control of the channel is assigned to only
one node, and this node is guaranteed to transmit one or more data
packets to different destinations without suffering from packet collisions.
FAMA (contd..)
• Carrier-sensing by the sender, followed by the RTS-CTS control packet
exchange, enables the protocol to perform as efficiently as MACA in the
presence of hidden terminals.

• In FAMA, a node that wishes to transmit packets must first acquire the floor (channel) before starting to transmit the packets.

• The floor is acquired by means of exchanging control packets.


FAMA (contd..)
• Two FAMA protocol variants:
• RTS-CTS exchange with no carrier sensing (MACA).
• RTS-CTS exchange with non-persistent carrier sensing.

• The first variant uses the ALOHA protocol for transmitting RTS packets, while the
second uses non-persistent CSMA.
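A minimal sketch of the non-persistent carrier-sensing rule used by the second variant (the function names, attempt limit, and back-off range are assumptions): before sending an RTS, the node senses the channel; if it is busy, the node waits a random time and senses again rather than waiting for the channel to go idle.

```python
import random

def nonpersistent_csma_send_rts(channel_is_busy, send_rts, max_attempts=5):
    """Non-persistent CSMA: on a busy channel, back off a random time and re-sense."""
    for attempt in range(max_attempts):
        if not channel_is_busy():
            send_rts()                      # channel sensed idle: transmit the RTS
            return True
        wait = random.uniform(0.0, 0.01)    # assumed back-off range (seconds)
        # in a real node this would arm a timer; here we only report the wait
        print(f"attempt {attempt}: channel busy, re-sensing after {wait * 1000:.1f} ms")
    return False                            # gave up after max_attempts

# usage with stub functions standing in for the radio
print(nonpersistent_csma_send_rts(lambda: random.random() < 0.5,
                                  lambda: print("RTS sent")))
```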
CONTENTION-BASED
PROTOCOLS WITH RESERVATION
MECHANISMS
CONTENTION-BASED PROTOCOLS WITH
RESERVATION
MECHANISMS

• Although these protocols are contention-based, contention occurs only during the resource (bandwidth) reservation phase.

• Once the bandwidth is reserved, the node gets exclusive access to the
reserved bandwidth.

• Hence, QoS support can be provided for real-time traffic.


Distributed Packet Reservation
Multiple Access
(D-PRMA)
D-PRMA
• D-PRMA is a TDMA-based scheme.

• The channel is divided into fixed- and equal-sized frames along the time
axis.

• Each frame is composed of s slots, and each slot consists of m mini-slots.

• Each mini-slot can be further divided into two control fields, RTS/BI and
CTS/BI (BI stands for busy indication), as shown in the figure.
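A minimal sketch of this frame structure as Python data classes; the numbers of slots and mini-slots (s = 8, m = 6) are purely illustrative assumptions used to make the structure concrete.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MiniSlot:
    rts_bi: str = ""      # RTS/BI control field (BI = busy indication)
    cts_bi: str = ""      # CTS/BI control field

@dataclass
class Slot:
    # m = 6 mini-slots per slot (illustrative value)
    minislots: List[MiniSlot] = field(default_factory=lambda: [MiniSlot() for _ in range(6)])

@dataclass
class Frame:
    # s = 8 slots per frame (illustrative value)
    slots: List[Slot] = field(default_factory=lambda: [Slot() for _ in range(8)])

frame = Frame()
print(len(frame.slots), len(frame.slots[0].minislots))   # 8 slots x 6 mini-slots
```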
D-PRMA (contd..)

Figure: Frame structure in D-PRMA.


D-PRMA (contd..)
• All nodes having packets ready for transmission contend for the first minislot
of each slot.

• The remaining (m - 1) minislots are granted to the node that wins the
contention.

• Also, the same slot in each subsequent frame can be reserved for this winning
terminal until it completes its packet transmission session.

• If no node wins the first minislot, then the remaining minislots are
continuously used for contention, until a contending node wins any minislot.
D-PRMA (contd..)
• Within a reserved slot, communication between the source and receiver
nodes takes place by means of either time division duplexing (TDD) or
frequency division duplexing (FDD).

• Two rules are followed in D-PRMA.

• First Rule: Voice nodes are allowed to start contending from minislot 1 with
probability p = 1; data nodes can start contending only with probability p < 1.
For the remaining (m - 1) minislots, both voice nodes and data nodes are
allowed to contend with probability p < 1.
D-PRMA (contd..)
• Second Rule:

If the node winning the minislot contention is a voice node, it is permitted to
reserve the same slot in each subsequent frame until the end of the session.

If a data node wins the contention, then it is allowed to use only one slot, that is, the
current slot, and it has to make fresh reservations for each subsequent slot.

• Nodes that are located within the radio coverage of the receiver should not be
permitted to transmit simultaneously.
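A minimal sketch of the two contention rules described on the preceding slides; the contention probability p and the random draw are the only assumptions added here.

```python
import random

def may_contend(node_type: str, minislot_index: int, p: float = 0.3) -> bool:
    """Rule 1: voice nodes contend in minislot 1 with probability 1; otherwise probability p."""
    if node_type == "voice" and minislot_index == 1:
        return True
    return random.random() < p

def reservation_after_win(node_type: str) -> str:
    """Rule 2: a winning voice node keeps the slot in subsequent frames; a data node does not."""
    if node_type == "voice":
        return "same slot reserved in every subsequent frame until the session ends"
    return "only the current slot; a fresh reservation is needed for the next one"

print(may_contend("voice", 1), reservation_after_win("voice"))
print(may_contend("data", 1), reservation_after_win("data"))
```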
D-PRMA (contd..)
• In D-PRMA, when a node wins the contention in minislot 1, other terminals
must be prevented from using any of the remaining (m - 1) minislots in the
same slot for contention (requirement 1).

• Also, when a slot is reserved in subsequent frames, other nodes should be prevented from contending for those reserved slots (requirement 2).
Collision Avoidance Time Allocation
(CATA)
CATA
• The collision avoidance time allocation protocol (CATA) is based on dynamic
topology dependent transmission scheduling.

• Nodes contend for and reserve time slots by means of a distributed reservation
and handshake mechanism.

• CATA supports broadcast, unicast, and multicast transmissions simultaneously.
CATA (contd..)
• The operation of CATA is based on two basic principles:

1. The receiver(s) of a flow must inform the potential source nodes about the
reserved slot on which it is currently receiving packets. Similarly, the source
node must inform the potential destination node(s) about interferences in
the slot.

2. Usage of negative acknowledgments for reservation requests, and control
packet transmissions at the beginning of each slot, for distributing slot
reservation information to senders of broadcast or multicast sessions.
• Time is divided into equal-sized frames, and each frame consists of S slots.
CATA (contd..)
• Each slot is further divided into five minislots.

• The first four minislots are used for transmitting control packets and are called
control minislots (CMS1, CMS2, CMS3, and CMS4).

• The fifth and last minislot, called data minislot (DMS), is meant for data
transmission.

• The data minislot is much longer than the control minislots as the control
packets are much smaller in size compared to data packets.
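A minimal sketch of the CATA slot layout; only the split into four control minislots plus one data minislot comes from the description above, while the frame size (S = 10) and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CataSlot:
    # four control minislots followed by one (much longer) data minislot
    control_minislots: List[str] = field(
        default_factory=lambda: ["CMS1", "CMS2", "CMS3", "CMS4"])
    data_minislot: str = "DMS"

@dataclass
class CataFrame:
    # S = 10 slots per frame (illustrative value)
    slots: List[CataSlot] = field(default_factory=lambda: [CataSlot() for _ in range(10)])

frame = CataFrame()
print(len(frame.slots), frame.slots[0].control_minislots, frame.slots[0].data_minislot)
```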
CATA (contd..)

Figure : Frame format in CATA.


CONTENTION-BASED MAC
PROTOCOLS WITH
SCHEDULING MECHANISMS
CONTENTION-BASED MAC PROTOCOLS WITH
SCHEDULING MECHANISMS

• These protocols focus on packet scheduling at the nodes and transmission scheduling of the nodes.

• Scheduling decisions may take into consideration various factors such as delay targets of packets, laxities of packets, traffic load at nodes, and remaining battery power at nodes.
Distributed Priority Scheduling (DPS)
Distributed Priority Scheduling

• The DPS protocol piggy-backs the priority tag of a node's current and head-of-line packets on the control and data packets.

• By retrieving information from such packets transmitted in its neighborhood, a node builds a scheduling table from which it determines its rank (information regarding its position as per the priority of the packet to be transmitted next) compared to other nodes in its neighborhood.
Distributed Priority Scheduling (contd..)

• This rank is incorporated into the back-off calculation mechanism in order to provide an approximate schedule based on the ranks of the nodes.

• DPS provides a scheme called multi-hop coordination.

• The downstream nodes in the path to the destination increase the relative
priority of a packet in order to compensate for the excessive delays incurred
by the packet at the upstream nodes.
Distributed Priority Scheduling (contd..)

• DPS uses the same basic RTS-CTS-DATA-ACK packet exchange mechanism.

• The RTS packet transmitted by a ready node carries the priority tag/priority
index for the current DATA packet to be transmitted.

• On receiving the RTS packet, the intended receiver node responds with a CTS
packet.

• The receiver node copies the priority tag from the received RTS packet and
piggybacks it along with the source node id, on the CTS packet.
Distributed Priority Scheduling (contd..)

• Neighbor nodes receiving the RTS or CTS packets (including the hidden
nodes) retrieve the piggy-backed priority tag information and make a
corresponding entry for the packet to be transmitted, in their scheduling
tables (STs).

• Each node maintains an ST holding information about packets, which was originally piggy-backed on control and data packets.

• The entries in the ST are ordered according to their priority tag values.
Distributed Priority Scheduling (contd..)
• When the source node transmits a DATA packet, its head-of-line packet information
(consisting of the destination and source ids along with the priority tag) is piggy-backed on
the DATA packet (head-of-line packet of a node refers to the packet to be transmitted next by
the node).

• This information is copied by the receiver onto the ACK packet it sends in response to the
received DATA packet.

• Neighbor nodes receiving the DATA or ACK packets retrieve the piggy-backed information
and update their STs accordingly.

• When a node hears an ACK packet, it removes from its ST any entry made earlier for the
corresponding DATA packet.
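A minimal sketch of this scheduling-table bookkeeping (the class and field names, the ordering convention "lower tag = higher priority", and matching ACKs by source/destination pair are assumptions of the sketch): entries are added when piggy-backed priority tags are overheard on RTS/CTS/DATA packets and removed when the matching ACK is overheard.

```python
class SchedulingTable:
    """Entries ordered by priority tag; lower tag value = higher priority (assumed)."""
    def __init__(self):
        self.entries = []                      # list of (priority_tag, src, dst)

    def on_overheard_tag(self, priority_tag, src, dst):
        self.entries.append((priority_tag, src, dst))
        self.entries.sort()                    # keep the table ordered by priority

    def on_overheard_ack(self, src, dst):
        # drop the entry made earlier for the DATA packet that was just acknowledged
        self.entries = [e for e in self.entries if (e[1], e[2]) != (src, dst)]

    def rank_of(self, priority_tag):
        # position of this node's own packet among all known pending packets
        return sum(1 for e in self.entries if e[0] < priority_tag)

st = SchedulingTable()
st.on_overheard_tag(5, "A", "B")               # learned from an overheard RTS/CTS
st.on_overheard_tag(2, "C", "D")               # learned from an overheard DATA/ACK
print(st.rank_of(4))                           # 1 -> one higher-priority packet ahead
st.on_overheard_ack("C", "D")
print(st.rank_of(4))                           # 0 -> now at the head
```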
Figure: Piggy-backing and scheduling table update mechanism in DPS.
