Unit2A Final
• The single channel assumption is the heart of the model. It is, however, not a very good model of real systems.
What is Multiple Access?
• A broadcast link, as used in a LAN, consists of multiple sending and receiving nodes connected to, or using, a single shared link.
G = Average number of frames generated by the system (all stations) during one frame
transmission time.
Flow Diagram for Pure ALOHA protocol
Slotted ALOHA:
• Time is divided into slots equal to a frame transmission time (Tfr)
• A station can transmit at the beginning of a slot only
• If a station misses the beginning of a slot, it has to wait until the beginning of the next time
slot.
• A central clock or station informs all stations about the start of each slot
• Maximum channel utilization is 37%
Vulnerable or Critical time for Slotted ALOHA protocol
Efficiency of Aloha:
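The slide does not reproduce the formulas, but the standard throughput expressions are S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA. A minimal Python sketch (function names are ours) that evaluates both at their maximizing loads recovers the roughly 18% and 37% figures:

```python
import math

def pure_aloha_throughput(G):
    # S = G * e^(-2G): a frame succeeds only if no other frame starts
    # within the 2 * Tfr vulnerable period around it.
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    # S = G * e^(-G): the vulnerable period shrinks to a single slot (Tfr).
    return G * math.exp(-G)

# The maxima occur at G = 0.5 (pure) and G = 1 (slotted).
print(pure_aloha_throughput(0.5))   # ~0.184 -> about 18% utilization
print(slotted_aloha_throughput(1))  # ~0.368 -> about 37% utilization
```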
CARRIER SENSE MULTIPLE ACCESS (CSMA)
• To improve performance, avoid transmissions that are certain to cause collisions
• Based on the fact that in a LAN the propagation time is very small
• If a frame has been sent by a station, all stations know about it almost immediately, so they can wait before starting to send
– A station with frames to send should sense the medium for the presence of another transmission (carrier) before it starts its own transmission
• This can reduce the possibility of collision but it cannot eliminate it.
– Collisions can only happen when more than one station begins transmitting within a short time (the propagation time period).
Vulnerable time for CSMA is the maximum propagation time
The longer the propagation delay, the worse the performance of the protocol, because the vulnerable period is longer (see the sketch below).
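As a small illustration of this claim, the sketch below (hypothetical start times and propagation delay, in microseconds) decides whether two carrier-sensing stations collide: a collision can occur only if the second station starts within one propagation time of the first.

```python
def csma_collision(start_a, start_b, prop_delay):
    """Return True if two CSMA stations collide.

    Both stations sense before sending, but carrier sensing cannot help
    if the second station starts before the first station's signal has
    reached it, i.e. within the propagation delay of the first start.
    """
    return abs(start_a - start_b) < prop_delay

# Hypothetical numbers, in microseconds:
print(csma_collision(0.0, 3.0, prop_delay=5.0))  # True: B started inside the vulnerable window
print(csma_collision(0.0, 8.0, prop_delay=5.0))  # False: B senses A's carrier and defers
```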
Restrictions:
Packet transmission time should be at least as long as the time needed to
detect a collision (2 * maximum propagation delay + jam sequence
transmission time).
Otherwise, CSMA/CD does not have an advantage over CSMA
Flow diagram for the CSMA/CD
CSMA/CA
Flow chart for CSMA/CA
Performance of Random Access Protocols
• Simple and easy to implement
• Decentralized (no central device that can fail and bring down the entire system)
• However, throughput is limited, and under heavier traffic packet delay is unbounded.
• In some cases, a station may never have a chance to transfer its packet. (unfair
protocol)
• A node that has frames to transmit can transmit continuously at the full channel rate (R) if it is the only node with frames
• If M nodes want to transmit, many collisions can occur, and the average rate for each node will be less than R/M
Collision-Free Protocols
• Can we avoid collision all together, even during the contention period ?
Yes. But how ?
One assumption: there are N stations, each with a unique address from 0 to N−1 "wired" into it.
Which station gets the channel after a successful transmission ?
A BASIC BIT-MAP PROTOCOL:
• Each contention period consists of exactly N slots, with one slot time being at least 2t.
• If station i (0 ≤ i ≤ N−1) has a frame to send, it transmits a 1 bit during the ith slot; otherwise, it transmits a 0 bit during the ith slot.
• After all slots have passed by, stations begin transmitting in numerical order.
• After the last ready station has transmitted its frame, another N-bit contention period is begun.
• Protocols like this in which the desire to transmit is broadcast before the actual transmission
are called reservation protocols.
• Low load situation: the bit map repeats over and over.
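A minimal sketch of one reservation round of the basic bit-map protocol (N and the set of ready stations are made-up inputs):

```python
def bitmap_round(ready, N):
    """Simulate one contention period of the basic bit-map protocol.

    ready: set of station indices (0..N-1) that currently have a frame queued.
    Returns the N contention bits and the order in which stations transmit.
    """
    # Each ready station i asserts a 1 bit in its own contention slot.
    contention_slots = [1 if i in ready else 0 for i in range(N)]
    # After all N slots have passed, ready stations transmit in numerical order.
    order = [i for i in range(N) if contention_slots[i] == 1]
    return contention_slots, order

slots, order = bitmap_round(ready={1, 3, 7}, N=8)
print(slots)   # [0, 1, 0, 1, 0, 0, 0, 1]
print(order)   # [1, 3, 7]: stations transmit in numerical order
```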
TOKEN PASSING
• Form a circular list. Pass a token around. Whoever has
the token can transmit i.e., the token represents
permission to send.
• Only a station that wants to transmit seizes the token, and it releases the token after a successful transmission.
• Possession of the token allows a station to use the bus to
send one frame, as before. This protocol is called token
bus.
• The performance of token passing is similar to that of
the bit-map protocol, though the contention slots and
frames of one cycle are now intermingled. After sending
a frame, each station must wait for all N stations
(including itself) to send the token to their neighbors and
the other N − 1 stations to send a frame, if they have
one.
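A rough sketch of one token circulation on a logical ring (the per-station queues are hypothetical; each station may send one frame per token visit, as described above):

```python
from collections import deque

def token_ring_round(queues):
    """One full circulation of the token around the logical ring.

    queues: list of per-station frame queues; the token visits stations in
    list order, and a station holding the token may send one frame before
    passing the token on to its neighbour.
    """
    sent = []
    for station, q in enumerate(queues):
        if q:                              # seize the token only if a frame is waiting
            sent.append((station, q.popleft()))
        # otherwise the token is passed on immediately
    return sent

queues = [deque(["A1"]), deque(), deque(["C1", "C2"]), deque()]
print(token_ring_round(queues))            # [(0, 'A1'), (2, 'C1')]
```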
THE BINARY COUNTDOWN PROTOCOL
A problem with the basic bit-map protocol is that
overhead is 1 contention bit slot per station. We can
do better than that by using binary station addresses.
1. Each station has a binary address. All addresses
are the same length.
2. To transmit, a station broadcasts its address as a binary bit string, starting with the high-order bit.
3. The bits in each address position from different stations are Boolean ORed together by the channel (hence the name binary countdown).
4. As soon as a station sees that a high-order bit
position that is 0 in its address has been
overwritten with a 1, it gives up.
5. After the winning station has transmitted its frame, there is no information available telling how many other stations want to send, so the algorithm begins all over with the next frame.
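A minimal sketch of binary-countdown arbitration under the rules above (addresses and bit width are made up; the channel is modelled as a Boolean OR of whatever the surviving stations broadcast):

```python
def binary_countdown(contenders, addr_bits):
    """Return the winner among a set of contending station addresses.

    Addresses are broadcast bit by bit from the high-order end; the channel
    ORs the bits together, and a station drops out as soon as it sees a 1 in
    a position where its own address has a 0.
    """
    still_in = set(contenders)
    for pos in range(addr_bits - 1, -1, -1):         # high-order bit first
        channel = 0
        for station in still_in:
            channel |= (station >> pos) & 1           # the medium ORs the bits together
        if channel == 1:
            # stations that sent a 0 in this position give up
            still_in = {s for s in still_in if (s >> pos) & 1 == 1}
    (winner,) = still_in                              # the highest-numbered contender wins
    return winner

print(binary_countdown({0b0010, 0b0100, 0b1001, 0b1010}, addr_bits=4))   # 10, i.e. 1010
```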
ETHERNET
• Types of Ethernets
1. Classical Ethernet
2. Switched Ethernet
• Physical layer
• MAC sublayer protocol
1. Preamble – 8 bytes
• The preamble is 8 bytes, each containing the bit pattern 10101010 (with the exception of the last byte, in which the last 2 bits are set to 11). This last byte is called the Start of Frame delimiter for 802.3.
• The Manchester encoding of this pattern produces a 10-MHz square wave for 6.4 μsec, allowing the receiver's clock to synchronize with the sender's.
• The last two 1 bits tell the receiver that the rest of the frame is about to start.
2. Two addresses each 6 bytes – destination + source
The first bit of the destination address is 0 for ordinary addresses and 1 for group addresses.
Group addresses allow multiple destinations to listen to a single address (multicasting).
The special address consisting of all 1s is reserved for broadcasting.
Uniqueness of the addresses:
The first 3 bytes are used for the OUI (Organizationally Unique Identifier).
Manufacturers are assigned blocks of 2^24 addresses. The manufacturer assigns the last 3 bytes of the address and programs the complete address into the NIC.
3. Type or Length field.
– The interpretation depends on whether the frame is Ethernet or IEEE 802.3.
– Ethernet uses a Type field to tell the receiver what to do with the frame.
– Multiple network-layer protocols may be in use at the same time on the same machine, so when an Ethernet frame arrives, the operating system has to know which one to hand the frame to. The Type field specifies which process to give the frame to, e.g. 0x0800 indicates that the frame contains an IPv4 packet.
– The length of the frame could be carried in this field as well.
– With a Type field, the frame length had to be determined by looking inside the data – a layer violation.
– IEEE 802.3 instead used a Length field and added another header, for the Logical Link Control (LLC) protocol, within the data. It uses 8 bytes to convey the 2 bytes of protocol type information.
– Rule: any value less than or equal to 0x600 is interpreted as a Length, and any value greater than 0x600 is interpreted as a Type (see the sketch below).
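A small sketch of that rule (values and the helper name are ours; the code follows the cutoff exactly as stated in these notes):

```python
def classify_type_length(value):
    """Classify the 2-byte Type/Length field using the rule given in these notes:
    values <= 0x600 are read as a Length, values > 0x600 as a Type (EtherType)."""
    if value <= 0x600:
        return ("length", value)
    return ("type", hex(value))

print(classify_type_length(0x0800))   # ('type', '0x800') -> hand the frame to IPv4
print(classify_type_length(46))       # ('length', 46)    -> IEEE 802.3 length field
```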
4. Data Field and Pad Field
– Up to 1500 bytes.
– Minimum frame length – valid frames must be at least 64 bytes long – from destination
address to checksum.
– If data portion is less than 46 bytes the Pad field is used to fill out the frame to the
minimum size.
– The minimum frame length also serves one very important role: it prevents a sender from completing its transmission before the first bit has arrived at the destination.
– In addition to there being a maximum frame length, there is also a minimum frame length.
– While a data field of 0 bytes is sometimes useful, it causes a problem.
– When a transceiver detects a collision, it truncates the current frame, which means that stray bits and pieces of frames appear on the cable all the time.
– To make it easier to distinguish valid frames from garbage, Ethernet requires that valid frames must be at least 64 bytes long, from destination address to checksum, including both.
– If the data portion of a frame is less than 46 bytes, the Pad field is used to fill out the frame to the minimum size.
– For a 10-Mbps LAN with a maximum length of 2500 m and four repeaters, the worst-case round-trip time has been determined to be nearly 50 μsec.
– The shortest allowed frame must take at least this long to transmit.
– At 10 Mbps a bit takes 100 nsec, so 500 bits are needed (10 Mbps × 50 μsec), rounded up to 512 bits = 64 bytes (see the sketch below).
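The arithmetic above can be checked directly (a back-of-the-envelope sketch using the 50 μsec worst-case round trip quoted in the notes):

```python
bit_rate = 10_000_000            # 10 Mbps, so one bit lasts 100 nsec
round_trip = 50e-6               # worst-case round-trip time, about 50 microseconds

min_bits = bit_rate * round_trip # 10 Mbps x 50 usec = 500 bits
print(min_bits)                  # 500.0
# Rounded up (with some margin) to 512 bits = 64 bytes, the Ethernet minimum frame length.
print(512 / 8)                   # 64.0 bytes
```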
5. Checksum
– It is a 32-bit CRC, defined by a standard generator polynomial.
– CRC is an error detecting code that is used to determine if the bits of the frame
have been received correctly.
– It just does error detection, with the frame dropped if an error is detected.
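The FCS polynomial is the standard CRC-32, which is the same CRC that Python's zlib module computes; a hedged sketch over a made-up frame:

```python
import zlib

def ethernet_fcs(frame_bytes):
    """Compute a 32-bit CRC over the frame contents.

    Ethernet's checksum uses the standard CRC-32 generator polynomial,
    which is the same CRC computed by zlib.crc32.
    """
    return zlib.crc32(frame_bytes) & 0xFFFFFFFF

# Hypothetical frame: broadcast destination, made-up source, Type 0x0800, padded payload.
frame = bytes.fromhex("ffffffffffff0011223344550800") + b"\x00" * 46
print(hex(ethernet_fcs(frame)))
# The receiver recomputes the CRC; on a mismatch the frame is simply dropped.
```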
Reason for having a minimum length frame?
• Another (and more important) reason for having a minimum
length frame is to prevent a station from completing the
transmission of a short frame before the first bit has even reached
the far end of the cable, where it may collide with another frame.
This is illustrated in the figure.
• At time 0, station A, at one end of the network, sends off a
frame. Let us call the propagation time for this frame to reach the
other end τ. Just before the frame gets to the other end (i.e., at
time τ − ε), the most distant station, B, starts transmitting.
• When B detects that it is receiving more power than it is putting
out, it knows that a collision has occurred, so it aborts its
transmission and generates a 48-bit noise burst to warn all other
stations.
• In other words, it jams the ether to make sure the sender does not
miss the collision.
• At about time 2τ, the sender sees the noise burst and aborts its
transmission, too. It then waits a random time before trying
again.
• If a station tries to transmit a very short frame, it is conceivable
that a collision will occur, but the transmission will have
completed before the noise burst gets back to the station at 2τ.
• The sender will then incorrectly conclude that the frame was
successfully sent. To prevent this situation from occurring, all
frames must take more than 2τ to send so that the transmission
is still taking place when the noise burst gets back to the sender.
CSMA/CD with Binary Exponential Backoff
• Classic Ethernet uses the 1-persistent CSMA/CD algorithm.
• It means that stations sense the medium when they have a frame to send and send the
frame as soon as the medium becomes idle.
• They monitor the channel for collisions as they send. If there is a collision, they abort
the transmission with a short jam signal and retransmit after a random interval.
• In this method a random interval is determined when a collision occurs. The model is shown in the figure below.
• After a collision, time is divided into discrete slots whose length is equal to the worst-
case roundtrip propagation time on the ether (2τ). To accommodate the longest path
allowed by Ethernet, the slot time has been set to 512 bit times, or 51.2 μsec.
• After the first collision, each station waits either 0 or 1 slot times at random before trying
again. If two stations collide and each one picks the same random number, they will
collide again.
• After the second collision, each one picks either 0, 1, 2, or 3 at random and waits that number of slot times.
• If a third collision occurs (the probability of this happening is 0.25), the next time the number of slots to wait is chosen at random from the interval 0 to 2^3 − 1.
• In general, after i collisions, a random number between 0 and 2^i − 1 is chosen, and that
number of slots is skipped. However, after 10 collisions have been reached, the randomization
interval is frozen at a maximum of 1023 slots. After 16 collisions, the controller throws in the
towel and reports failure back to the computer. Further recovery is up to higher layers.
• This algorithm, called binary exponential backoff, was chosen to dynamically adapt to the
number of stations trying to send. If the randomization interval for all collisions were 1023,
the chance of two stations colliding for a second time would be negligible, but the average
wait after a collision would be hundreds of slot times, introducing significant delay.
• On the other hand, if each station always delayed for either 0 or 1 slots, then if 100 stations
ever tried to send at once they would collide over and over until 99 of them picked 1 and the
remaining station picked 0. This might take years.
• By having the randomization interval grow exponentially as more and more consecutive
collisions occur, the algorithm ensures a low delay when only a few stations collide but also
ensures that the collisions are resolved in a reasonable interval when many stations collide.
Truncating the backoff at 1023 keeps the bound from growing too large.
• If there is no collision, the sender assumes that the frame was probably successfully
delivered.
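A minimal sketch of the binary exponential backoff rule described above (slot time of 512 bit times; the randomization interval is frozen after 10 collisions and the attempt is abandoned after 16):

```python
import random

SLOT_TIME = 51.2e-6        # 512 bit times at 10 Mbps, in seconds
GIVE_UP_AFTER = 16         # collisions before failure is reported to higher layers

def backoff_slots(collision_count):
    """Number of slot times to wait after the given number of consecutive collisions."""
    if collision_count >= GIVE_UP_AFTER:
        raise RuntimeError("giving up: report failure to the higher layers")
    k = min(collision_count, 10)             # interval frozen at 0..1023 after 10 collisions
    return random.randint(0, 2 ** k - 1)     # uniform over 0 .. 2^k - 1 slots

for i in range(1, 5):
    print(i, backoff_slots(i) * SLOT_TIME)   # wait time in seconds after the i-th collision
```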
Figure 802.3 MAC frame
The least significant bit of the first byte defines the type of address.
If the bit is 0, the address is unicast;
otherwise, it is multicast.
The broadcast destination address is a special case of the multicast address in
which all bits are 1s.
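A small sketch of the address-type test (the sample addresses are made up; the address is taken as six raw bytes):

```python
def address_type(mac: bytes) -> str:
    """Classify a 6-byte MAC address as unicast, multicast, or broadcast."""
    if mac == b"\xff" * 6:
        return "broadcast"        # all bits 1: the special case of multicast
    if mac[0] & 0x01:
        return "multicast"        # least significant bit of the first byte is 1
    return "unicast"              # least significant bit of the first byte is 0

print(address_type(bytes.fromhex("4a3010211a1a")))   # unicast   (0x4A has LSB 0)
print(address_type(bytes.fromhex("473010211a1a")))   # multicast (0x47 has LSB 1)
print(address_type(b"\xff" * 6))                     # broadcast
```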
Example
Solution
The address is sent left-to-right, byte by byte; for each byte,
it is sent right-to-left, bit by bit, as shown below:
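A hedged sketch of that transmission order (hypothetical address): bytes go out left to right, while the bits inside each byte go out least-significant bit first.

```python
def wire_bit_order(mac: bytes) -> str:
    """Show the order in which the bits of a MAC address appear on the wire.

    Bytes are sent left to right, but within each byte the bits are sent
    right to left (least-significant bit first), so each byte is reversed.
    """
    return " ".join(format(b, "08b")[::-1] for b in mac)

addr = bytes.fromhex("4a3010211a1a")                  # hypothetical address 4A:30:10:21:1A:1A
print(" ".join(format(b, "08b") for b in addr))       # as written
print(wire_bit_order(addr))                           # as transmitted on the wire
```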
Data Link Layer Switching
• Uses of bridges
• Learning bridges
• Spanning tree bridges
• Repeaters, hubs, bridges, switches,
routers, and gateways
• Virtual LANs
DATA LINK LAYER SWITCHING
• Many organizations have multiple LANs, and it would be convenient to join the LANs together to make a larger LAN.
• These connections are made with devices called bridges. Ethernet switch is a modern name for a bridge; switches provide functionality that goes beyond classic Ethernet and Ethernet hubs and make it easy to join multiple LANs into a larger and faster network.
• Bridges operate in the data link layer; they examine the data link layer addresses to forward frames. Since they do not examine the payload field of the frames they forward, they can handle IP packets just as well as any other kind of packet.
• In contrast, routers examine the addresses in packets and route based on them. They
only work with the protocols that they are designed to handle.
Uses of Bridges
• Some common situations in which bridges are used.
• The three basic reasons:
– First, many university and corporate departments have their own LANs to connect their
own personal computers, servers, and devices such as printers. Since the goals of the
various departments differ, different departments may set up different LANs, without
regard to what other departments are doing. There is a need for interaction, so bridges are
needed. In this example, multiple LANs come into existence due to the autonomy of their
owners.
– Second, the organization may be geographically spread over several buildings separated
by considerable distances. It may be cheaper to have separate LANs in each building and
connect them with bridges and a few long-distance fiber optic links than to run all the
cables to a single central switch. Even if laying the cables is easy to do, there are limits
on their lengths (e.g., 200 m for twisted-pair gigabit Ethernet). The network would not
work for longer cables due to the excessive signal attenuation or round-trip delay. The
only solution is to partition the LAN and install bridges to join the pieces to increase the
total physical distance that can be covered.
– Third, it may be necessary to split what is logically a single LAN into separate LANs
(connected by bridges) to accommodate the load. At many large universities, for
example, thousands of workstations are available for student and faculty computing.
Companies may also have thousands of employees. The scale of this system precludes
putting all the workstations on a single LAN—there are more computers than ports on
any Ethernet hub and more stations than allowed on a single classic Ethernet.
• All of the stations share the same, fixed amount of bandwidth. The more stations there are,
the less average bandwidth per station.
• Two separate LANs have twice the capacity of a single LAN. Bridges let the LANs be joined together while keeping this capacity: each LAN can run at full speed. This behavior also increases reliability.
• By deciding what to forward and what not to forward, bridges act like fire doors in a
building, preventing a single node that has gone berserk from bringing down the entire
system.
• Ideally bridges are completely transparent. It should be possible to go out and buy bridges,
plug the LAN cables into the bridges, and have everything work perfectly, instantly.
• There should be no hardware changes required, no software changes required, no setting of
address switches, no downloading of routing tables or parameters, nothing at all. Just plug in
the cables and walk away.
• Further, the operation of the existing LANs should not be affected by the bridges at all. As far
as the stations are concerned, there should be no observable difference whether or not they
are part of a bridged LAN.
• It should be as easy to move stations around the bridged LAN as it is to move them around a
single LAN.
• Actually it is possible to create bridges that are transparent by using two algorithms.
– a backward learning algorithm to stop traffic being sent where it is not needed; and
– a spanning tree algorithm to break loops that may be formed when switches are cabled together.
LEARNING BRIDGES
Figure: (a) Bridge connecting two multidrop LANs. (b) Bridges (and a hub) connecting seven point-to-point stations. (c) Protocol processing at a bridge.
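A minimal sketch of the backward-learning idea (station and port names are made up): the bridge records which port each source address was last seen on, forwards to that port when the destination is known, and floods otherwise.

```python
class LearningBridge:
    """Toy backward-learning bridge: a hash table maps each learned source
    address to the port it was last seen on."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}                       # MAC address -> port number

    def handle_frame(self, src, dst, in_port):
        self.table[src] = in_port             # backward learning from the source address
        out_port = self.table.get(dst)
        if out_port is None:
            # Unknown destination: flood to every port except the arrival port.
            return [p for p in range(self.num_ports) if p != in_port]
        if out_port == in_port:
            return []                         # destination is on the arrival port: filter the frame
        return [out_port]

bridge = LearningBridge(num_ports=3)
print(bridge.handle_frame(src="A", dst="B", in_port=0))   # flood: [1, 2]
print(bridge.handle_frame(src="B", dst="A", in_port=1))   # forward to port 0 only: [0]
```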
SPANNING TREE BRIDGES