Networking 2


Ethernet is a family of networking technologies used in local area
networks (LANs), defined under the IEEE 802.2 and 802.3 standards. It is
the most widely used LAN technology family today.

Ethernet standards define both Layer 2 and Layer 1 technologies. At the
data-link layer, Ethernet relies on two separate sublayers to operate: the
Logical Link Control (LLC) sublayer and the Media Access Control (MAC)
sublayer.

LLC sublayer
The LLC sublayer is used to communicate with the upper protocol layers of
the OSI model. It takes the protocol data units (PDUs) from the upper
layers, which are typically IPv4 packets, and adds control information to
help deliver the data to its destination.
The LLC sublayer is implemented in software, and its implementation is
hardware agnostic. A good example of the LLC is the network
driver software of a server's NIC. The NIC driver is a software program that
interacts directly with the NIC hardware and passes the data between the
MAC sublayer and the physical media.
MAC sublayer
MAC constitutes the lower sublayer of the data link layer. The MAC sublayer
is implemented in hardware, typically in the server's NIC. The Ethernet MAC
sublayer has two primary functions:
Data encapsulation and decapsulation
- Frame delimiting
- Addressing
- Error detection
Media access control
- Control of media access
- Media recovery
Data Encapsulation
The data encapsulation process includes forming the frame, adding the
Ethernet header and trailer, and decapsulating the frame upon reception. It
provides three primary functions:
Frame delimiting - This process provides bit-level synchronization between
the sending and receiving nodes. It also signals the receiving node about the
start of a new frame.
Addressing - This process adds an Ethernet header to the frame. The header
contains the physical addresses (MAC addresses) that are used by the
network devices.
Error detection - Every frame has a trailer with a cyclic redundancy check
(CRC) of the frame contents. The receiving node calculates the CRC again
and compares it to the one in the frame. If the two CRC values match, the
frame is assumed to have been received without error.
Types of LANs
SOHO LANs
One of the most common local-area deployments is the Small-Office /
Home-Office (SOHO) LAN. It is a small computer network usually built of
one Ethernet switch, one router, and one wireless access point. The LAN
uses Ethernet cables to connect different end devices to the switch ports.
Figure 1 shows a diagram of a SOHO Ethernet LAN with one switch, one
router, and one access point. Some of the end devices are connected to
the access switch with Ethernet cables and some of the mobile devices
are connected via wireless. The access point acts as an Ethernet switch,
the only difference being that the clients are connected with radio waves
instead of cables, using the IEEE 802.11 standards. Typical SOHO users
primarily consume public services such as email and social media, so the
traffic pattern is primarily from the Internet to the end clients.
Although in figure 1 the switch, router, and AP are shown as separate
devices, many networking vendors combine them in one integrated
network device specifically built for the SOHO LAN market.
These types of devices, shown in figure 2, are typically referred to as
"wireless routers", but they combine a 4-port Ethernet switch, a wireless
access point, an IP router, and a firewall into an all-in-one device. Usually,
these types of devices are easy to set up and ready to go after
unboxing, but the downside is that they have lower performance and
availability and, most importantly, they don't scale as well as
enterprise-grade dedicated devices. For example, the integrated device
shown in figure 2 has only one routing port and 4 switch ports. Imagine
if the company has three Internet providers, or 30 PCs, or is spread
across two building floors. For that kind of scale, enterprise-grade
network devices are required.
Enterprise LANs
Enterprise networks are much larger in scale than a typical SOHO LAN. The
network devices used are enterprise-grade, usually racked in wiring
closets. Clients typically connect to the access switches through the
building's structured cabling, and there is wireless access as well.

Figure 3 shows a typical part of an enterprise LAN. Each office has an
Ethernet switch and a wireless access point (AP). To allow communication
between the offices, all access switches connect to one centralized
aggregation switch. Note that if a client in office 1 wants to communicate
with a client in office 2, the data path goes from switch 1 to the
aggregation switch to switch 2. This is a very common pattern in
enterprise LAN networks. Access switches typically don't have connections
between each other but connect to a centralized aggregation switch, also
referred to as a distribution switch.
Copper cabling
Ethernet LANs are most widely built using UTP cables. A UTP cable is made
of 8 copper wires, grouped together in four twisted pairs. Each pair has a
color scheme in which one wire is solid colored and the other one is the
same color but striped.
A typical Ethernet UTP cable has RJ-45 connectors on both ends. Each RJ-45
connector has eight pins into which the eight wires are inserted.
Based on the scheme used to determine which wire goes into which pin,
UTP cables are classified as straight-through or crossover cables.

Different cable types are required when you connect different devices
because some devices transmit data on pins 1 and 2 and receive on pins 3
and 6, while others transmit on pins 3 and 6 and receive on pins 1 and 2. So
in order to connect the transmit pair of pins on one device to the receive
pair of pins on the other, you have to use the correct cable type.
UTP Cabling Pinouts
In order to understand the difference between the two main types of
Ethernet cables, straight-through and crossover, we must first understand
how different types of devices transmit electrical signals on their RJ-45
ports. As an example, we will use the rules for the 10BASE-T and 100BASE-T
Ethernet standards.
Let's first have a look at the first group of devices, such as computers, routers,
and wireless access points. These devices use the pair of pins at positions 1
and 2 to transmit data in the form of electrical signals and the pair at
positions 3 and 6 to receive data.
The other group of devices, such as Ethernet hubs, bridges, and switches, use
pins 3 and 6 to transmit data and the pair at positions 1 and 2 to receive
data. If you look closer, you will see that this is exactly the opposite of the
devices shown in Figure 1.
A straight-through cable, as the name implies, connects the wire at pin 1 on
one end of the cable straight to pin 1 at the other end of the cable; the wire
at pin 2 to pin 2 on the other end of the cable; pin 3 on one end connects to
pin 3 on the other, and so on, as shown in Figure 3.
So let's look at what happens when we connect a device that transmits
on pins 1 and 2 with a device that receives on pins 1 and 2. For example, a
PC connected to a LAN switch using a straight-through UTP cable. As
shown in Figure 4, everything works correctly because the devices on
the right use the opposite pins to transmit and receive electrical signals.
But let's look at what will happen if we connect two like devices with a
straight-through cable, as shown in Figure 5. For example, a router
connected to a router, or a computer's NIC connected directly to a router.
The figure shows what happens on a link between the devices. The two
routers both transmit on the pair at pins 1 and 2, and they both
receive on the pair at pins 3 and 6. So the signal being transmitted on
both sides can't get to the respective receiving end and
communication is not possible.
The solution to this problem is to cross-connect the cable
wires in such a way that the transmitting pins on one side
connect to the receiving pins on the other side and vice
versa. Since some of the wires are crossed, the cable is not
"straight" anymore; that's why it is called a crossover cable.
So in summary, the logic in choosing the correct cable to connect Ethernet
devices is:
Crossover cable: If both devices transmit on the same pin pair
Straight-through cable: If both devices transmit on different pin pairs
NOTE Nowadays, if you connect two Cisco devices together using either
cable type, the link will still work because there is a feature called auto-
MDIX that notices when the wrong cable is used and automatically swaps its
transmit/receive logic to make the link work.
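As a minimal sketch (the interface name GigabitEthernet0/1 is just an example), auto-MDIX can be enabled per interface on most Catalyst switches; note that it relies on speed and duplex autonegotiation:

  interface GigabitEthernet0/1
   ! auto-MDIX needs autonegotiation of speed and duplex
   speed auto
   duplex auto
   ! detect the cable type and swap transmit/receive pairs if needed
   mdix auto
  end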
Fiber-Optic Cabling
Fiber-optic cabling is widely used for high-speed Ethernet links over relatively long
distances. It uses glass or plastic fiber as a medium through which light is "guided"
to the other end of the link. The fiber-optic cable itself has several layers made from
different materials and having different functions. The most important layer is
the core, which is the very center of the cable. A light source, called a transmitter
or Tx, shines light into the core. The core itself is surrounded by optical material
called the cladding that keeps the light in the core using an optical technique
called total internal reflection. Together the cladding and core create the
environment to allow transmission of light over the cable.

Depending primarily on the diameter of the core, fiber-optics are separated into
two main types: single-mode fiber (SMF) and multimode fiber (MMF).
Multi-Mode Fiber Optics
Due to its bigger core, some of the light beams in a multimode fiber may
travel a direct route, whereas others bounce off the cladding, as shown in
Figure 3. These alternate paths cause the different groups of light beams,
referred to as modes, to arrive separately at the end of the link. Because of
this, the strength of the light is reduced over long distances.
Due to the large core size of multimode fiber, some low-cost light sources like
LEDs (light-emitting diodes) and VCSELs (vertical-cavity surface-emitting lasers)
are typically used. Because of this, transmission system costs (transmitters and
receivers) are lower than single-mode fiber. Typical light wavelengths used are
850 nm and 1300 nm.
In summary, multimode fiber gives high bandwidth at high speeds over medium
distances (up to 1km) at a lower cost.

Single-Mode Fiber Optics
Single-mode fiber cables have a core diameter up to 5 times smaller than
multimode cables. The smaller core allows only one mode of light to
propagate through it, as shown in Figure 4. Because of this, the number of
light reflections is lower than in multimode fiber, so the signal can travel
further.
Single-mode fiber typically uses a laser or laser diodes to emit light into the
core. Because of this, transmission system costs (transmitters and receivers)
are higher than for multimode fiber. Typical light wavelengths used are 1310
nm and 1550 nm. Single-mode fiber bandwidth is theoretically unlimited
because it allows only one light mode to pass through at a time.
In summary, single-mode fiber gives very high bandwidth at very high
speeds over long distances (up to 80 km) at a high cost.

How can I differentiate single-mode from multimode cable?
Single-mode cables are coated with a yellow outer sheath, while multimode
cables are coated with either an orange or an aqua jacket.
Can single-mode and multimode fiber be mixed?
If you mix them together in one link, for example a multimode cable
connected to a single-mode patch panel with another multimode cable on
the other side, the result will be a link that is unstable, flapping, or
completely down. Either way, different types of cabling must never be
mixed.
Collision Domains
To understand what a collision domain is, we have to look a little bit into the
past, when Ethernet LANs were built using devices like hubs and bridges.
Ethernet LAN with Hubs
An Ethernet hub is a network device used to connect multiple nodes and
make them act as if connected to a single network segment. It works
purely at the physical layer of the OSI model. It has multiple ports, and
the incoming electrical signals on one port are repeated at the output of
every other port. There is no forwarding logic at all.
A hub makes all connected devices part of one single network segment
because every electrical signal on every cable is replicated to all other cables.
This creates a single shared medium, and a network collision occurs when
more than one device attempts to send a frame on the segment at the same
time.
Carrier-sense multiple access with Collision Detection (CSMA/CD)
So at that point, you may be wondering: if collisions happen all the time, how
are devices connected to an Ethernet hub even able to communicate? There is
a media access control method called CSMA/CD that devices use when trying to
communicate over a shared medium. CSMA/CD stands for Carrier Sense
Multiple Access with Collision Detection. The key here is the Collision
Detection. When a device wants to transmit a frame, it checks to see if the
segment is free. If the segment is not free, the device waits until it becomes
free before transmitting. If the network segment is free and two devices
send frames at the same time, their signals collide. When the collision is
detected, they both stop and wait a random amount of time before re-
transmitting.
To understand what is behind Carrier Sense Multiple Access with Collision
Detection, let's look at each component individually:
Carrier sense (CS): The idea that nodes may only send data over the network if
the shared medium is free.

Multiple access (MA): Several nodes share a network segment, so they need an
access method to resolve collisions.

Collision detection (CD): If a collision does occur, it will be detected and the
transmission will be retried after a random amount of time.
The concept of collision domains also applies to wireless networks, because the
radio signals traverse a shared medium, the Wi-Fi radio spectrum. So all the
things we have said so far apply to wireless networks as well - only one node in
a wireless LAN may transmit at any one time; otherwise, a collision occurs.
Ethernet Bridges
Ethernet bridges are the predecessors of modern LAN switches. They were
introduced to resolve the scaling problem with shared segments and
collisions. Bridges are Layer 2 devices, which means they can read the
Ethernet header of the frames they forward and make decisions based on
the information in the headers. This eliminated the need to send all frames
out all ports, which practically means repeating all electrical signals out
all ports. Therefore, an Ethernet bridge splits a network segment into two
collision domains.
Ethernet Switches
LAN switches completely resolve the problem with collisions. They
operate at Layer 2 of the OSI model, meaning that they look at the
Ethernet header and trailer. Their main advantage is that all their ports
can operate in full-duplex, meaning they can simultaneously transmit
and receive frames on any given port at any given time. Because of this,
the media access algorithm for collision detection (CSMA/CD) is no
longer required and is disabled on full-duplex ports. Another big advantage
of switches is that they forward frames based on MAC addresses, so a
given frame doesn't need to be sent out all ports as a hub does.
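On modern switch ports, speed and duplex are autonegotiated by default; as an illustrative sketch (the interface name and values are examples), they can also be fixed manually:

  interface GigabitEthernet0/2
   ! force full duplex at 1 Gbps instead of autonegotiation
   duplex full
   speed 1000
  end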
Broadcast Domains
What is a broadcast domain?
In Ethernet LANs, a broadcast is one-to-all communication, which means that
if a node sends a broadcast frame, everybody receives a copy of it. At the
Ethernet layer, broadcast frames have a destination MAC address of FF-FF-FF-
FF-FF-FF. When a switch receives a frame with this MAC, it sends a copy of the
frame out all its interfaces, except the one it received the broadcast on. An
example of this behavior is shown in Figure 1.

A broadcast domain is a segment of the network where all devices receive a
copy of every broadcast frame sent. To better understand the concept, let's
look at the example in Figure 1. PC1 sends a broadcast frame with a
destination MAC address of ff-ff-ff-ff-ff-ff. When the switch receives the frame,
it looks in the Ethernet header of the frame and understands, based on the
destination MAC, that this is a broadcast frame. The switch then floods the
frame out all its ports, except the one it received the frame on (the port
towards PC1). In the end, the broadcast frame reaches all devices in the LAN -
PC1 originated the frame, and PC2, PC3, and PC4 get a copy of it through the
switch. Therefore, PC1, PC2, PC3, and PC4 are in one broadcast domain.
At this point, you may be wondering where this flood of broadcasts ends and
whether a whole network environment is one broadcast domain. If we want to
break down a broadcast domain into smaller domains, we use a router. Routers
don't flood broadcast frames but instead decapsulate the Ethernet frames and
act upon the Layer 3 information within the IP packets.
LAN switch logic
A LAN switch's ultimate role is to forward Ethernet frames. To achieve that goal,
switches make decisions based on the source and destination MAC addresses in
the Ethernet frames.
Figure 1. Ethernet Header Structure
When a switch receives a frame, it follows a set of rules and ultimately decides
out of which port or ports to forward the frame. The switch logic can be
summarized in a few steps:
Receive an Ethernet frame, examine the source MAC address, and update the MAC
address table.
Decide where and how to forward the frame based on the destination MAC
address.
Forward the frame out a single port, or forward a copy of the frame out multiple
ports if the frame is an unknown unicast or a broadcast.
Let's start with the process of receiving frames and learning MAC addresses.
Learning MAC addresses
Switches build their MAC address tables by examining the source MAC address
of incoming Ethernet frames. When a frame is received on a switch port and
the source MAC address is not known, the switch creates a new entry in the
MAC table.
It is important to mention that the terms switchport and switch interface are
used interchangeably. Also, a switch's MAC address table is also called the
switching table or the CAM table (Content-Addressable Memory table).
The process starts when Client 1 sends an Ethernet frame to Client 4. Let's
look closely at each step the switch takes:
An Ethernet frame is received on switchport Gi0/1. Each frame starts with a
7-byte preamble and a 1-byte start frame delimiter (SFD), as shown in figure
1. These first 8 bytes of the frame are used to get the attention of the
receiving node. Essentially, they tell the receiving node to get ready to
receive a new frame.
The switch examines the source MAC address, which is the physical address
of Client 1 - 1111.1111.1111.
The switch then checks this MAC address against its MAC address table. If it
is not found in the table, the switch creates a new entry.
Then the switch checks the destination MAC address. If there is an entry in
the MAC address table for this address, the switch sends the frame out that
interface.
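The learning process can be verified from the switch CLI. A minimal sketch, reusing the MAC address and interface from the example above (the VLAN number is an assumption):

  ! privileged EXEC: show the dynamically learned entries
  show mac address-table dynamic
  ! privileged EXEC: remove the dynamic entries so the switch relearns them
  clear mac address-table dynamic
  ! global configuration: add a static entry manually (VLAN 1 assumed)
  mac address-table static 1111.1111.1111 vlan 1 interface GigabitEthernet0/1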
If you recall from the previous lesson, when a switch receives a frame, it
checks the destination MAC address against its MAC table, and if there is
no matching entry, it forwards the frame out all interfaces except the incoming
interface. This process is often referred to as flooding, and a frame whose
destination MAC is unknown is called an unknown unicast.

The main idea here is simple - if you don't know where exactly to deliver a
frame, send it out everywhere and the recipient will eventually get it. The
receiver will likely reply, so the switch will learn both nodes' MAC
addresses and handle future forwarding as known unicast (without
flooding the frames).

Switches also flood two other types of frames:
broadcast frames - ones destined to the Ethernet broadcast address FF-FF-FF-FF-
FF-FF
multicast frames - ones destined to a group MAC address, for example the
addresses starting with 01-00-5E that are used for IPv4 multicast
Ethernet Loops (Broadcast storms)
If we apply this flooding logic to a switching topology with redundant links, a
strange effect takes place. Let's look at the example shown in Figure 2. PC1
sends out a broadcast frame. When switch 1 receives the broadcast, it sends it
out all ports, except the incoming one. Therefore, it sends a copy of the frame
to switch 2 and switch 3. The same happens when the copies are received by
SW2 and SW3. They see that this is a broadcast and send a copy of it out all
ports except the incoming one. In the end, the flooding of this broadcast results
in the frame repeatedly rotating around the three switches indefinitely until one
of them crashes because of high CPU or one of the links gets completely
congested and unusable. This effect is referred to as Ethernet Loop, Layer 2
Loop, or Broadcast Storm.
A redundant topology like the one in Figure 2 is necessary for high availability,
but switches need to prevent the bad effects of those looping broadcast frames.
To stop these loops, Cisco switches use a protocol called Spanning Tree (STP)
that causes some of the redundant links to go into a blocking state. Blocking
means that the interface doesn't receive or forward frames until a network
failure occurs and the link needs to be used.
KEY TOPIC LAN switching doesn't work in looped topologies (networks with
redundant links) without a mechanism that breaks the topology into a loop-free
one. The most widely used loop-preventing techniques are Spanning Tree (STP)
and link aggregation, but others exist as well.
Shown in Figure 3 is an example of the same network but with a mechanism that
breaks the looped topology. Note that the link between switch 2 and switch 3 is
not used for frame forwarding, and therefore there is no way for the broadcast
frames to loop around indefinitely.
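STP runs by default on Cisco switches. As a hedged sketch (the interface name is an example), a common refinement is to enable the rapid version and protect ports that face end hosts:

  ! run Rapid PVST+ instead of the legacy per-VLAN spanning tree
  spanning-tree mode rapid-pvst
  !
  interface GigabitEthernet0/10
   ! host-facing edge port: skip the listening/learning delay
   spanning-tree portfast
   ! err-disable the port if a switch (BPDU) shows up on it
   spanning-tree bpduguard enable
  end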
VLAN Concept
LAN switches and BUM traffic
Before understanding the VLAN concept, you must first understand two core
concepts of the Ethernet standard - what a broadcast domain is and what BUM
traffic is. Let's start with the BUM data type. BUM stands for
broadcast, unknown unicast, and multicast. When a LAN switch receives a frame
that belongs to one of these types, it sends the frame to all its ports except the
port it received the frame on.
A broadcast domain includes all connected devices that get a copy of any
broadcast, unknown unicast, or multicast (BUM) frame being sent. In the
above figure, the blue LAN on the left is one broadcast domain and the green
LAN on the right side is another broadcast domain. A general rule of thumb is
that a single LAN is equal to a Broadcast Domain is equal to a Subnet.
LAN = Broadcast Domain = Subnet
By default, all interfaces on a Cisco switch are in the same broadcast domain.
Therefore, when a broadcast frame is received on any switch port, the switch
forwards it out all its other ports. With that logic in mind, to create two
separate LANs (like one for servers and one for users), you must use two
different switches, as shown in figure 1. This approach is not scalable: imagine
if your organization wants to have a thousand separate LANs, it would have to
have a thousand physical switches. This scaling limitation is the reason why
Virtual LANs were introduced.
By using VLANs, a single switch can act as two logical switches, creating
two broadcast domains. This is done on a port-by-port basis. Using figure 2
as an example, the ports where the users are connected are configured to
be part of VLAN 10 (or, in other words, to be connected to virtual switch 10),
and the ports where the servers are connected are configured to be
part of VLAN 20 (or, in other words, to be connected to virtual switch 20).
The switch will then never forward a frame sent by any user to any of the
servers and vice versa, because they are part of different broadcast
domains.
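A minimal configuration sketch of this port-by-port assignment (the VLAN names and interface ranges are examples, not taken from the figure):

  ! create the two VLANs
  vlan 10
   name USERS
  vlan 20
   name SERVERS
  !
  ! user-facing ports become access ports in VLAN 10
  interface range GigabitEthernet0/1 - 4
   switchport mode access
   switchport access vlan 10
  !
  ! server-facing ports become access ports in VLAN 20
  interface range GigabitEthernet0/5 - 8
   switchport mode access
   switchport access vlan 20
  end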
Benefits of using VLANs
Using VLANs not only improves the scaling of the campus LAN. It has many more
advantages, such as:
It improves security by reducing the number of end stations that receive
copies of BUM traffic.
It creates smaller fault domains by isolating different groups of devices in separate
broadcast domains.
It reduces the CPU overhead on each device in the LAN by limiting the number of
broadcast frames received.
It improves network performance and the speed of failure recovery.
VLAN Trunking
Multiswitch broadcast domains
In the previous lesson, we explained that when a broadcast frame is received on
any switch port, the switch forwards it out all its other ports. Having that in
mind, if we connect two switches with default settings, as shown in figure 1, any
broadcast frame received by either switch is forwarded to the other one and
then out all its ports. Therefore a broadcast domain is not limited to one switch
only; it includes all devices that get a copy of any broadcast frame, even if they
are connected to other switches. If we scale this logic to a LAN with tens of
interconnected switches, we could have a broadcast domain consisting of
hundreds of end devices. At some point this can congest the network with BUM
traffic to the point that the LAN becomes unusable. Thus, splitting the single
broadcast domain into multiple smaller ones is even more important in large
topologies of interconnected switches.
VLAN on multiple switches
So if we apply the logic from the previous lesson, using virtual LANs
we can split the switch topology into multiple broadcast domains, as shown in
figure 2. There are multiple ways of doing this, but let's start with the simplest
one: configuring ports 1 through 4 of both switches into VLAN 10 and ports 5
through 9 into VLAN 20. Although this is a valid design and it works, it
simply does not scale very well. It requires a physical link between the switches
per VLAN. If the topology has to have 10+ VLANs, it would need 10+ physical
cables between the switches, and you would use 10+ switch ports (on each
switch) for those links.

Obviously, this design is applicable only in topologies where there are a few
VLANs. Nowadays, in modern enterprise networks, there are tens of VLANs, so
this way of spanning VLANs between switches is not applicable at scale.
VLAN Trunking
In order to overcome this scaling limitation, we can use another Ethernet
technology called VLAN trunking. It creates only one link between the switches
that supports as many VLANs as needed. At the same time, it also keeps the VLAN
traffic separate, so frames from VLAN 20 won't go to devices in VLAN 10 and vice
versa. An example can be seen in figure 3. The link between switch 1 and switch
2 is a trunk link, and you can see that both VLAN 10 and VLAN 20 pass through
the link.
Trunking protocols
Two trunking protocols have been used on Cisco switches over the years -
Inter-Switch Link (ISL) and IEEE 802.1Q. ISL was a Cisco-proprietary tagging
protocol and the predecessor of 802.1Q; it has been deprecated and is not used
anymore. IEEE 802.1Q is the industry-standard trunking encapsulation at
present and is typically the only one supported on modern switches.
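A hedged sketch of an 802.1Q trunk configuration on the inter-switch link (the interface and VLAN numbers are examples); on older platforms that still support ISL, the encapsulation has to be selected explicitly:

  interface GigabitEthernet0/24
   ! only needed on platforms that also support ISL
   switchport trunk encapsulation dot1q
   ! make the port a trunk
   switchport mode trunk
   ! carry only the VLANs that are actually needed
   switchport trunk allowed vlan 10,20
  end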

VLAN Tagging
VLAN trunking allows switches to forward frames from different VLANs over a
single link called a trunk. This is done by adding a small additional header, called
a tag, to the Ethernet frame. The process of adding this header is called VLAN
tagging. If you look at Figure 4, end-station 1 is sending a broadcast frame.
When switch 1 receives the frame, it knows that this is a broadcast frame and
it has to send it out all its ports. However, switch 1 must tell switch 2 that this
frame belongs to VLAN 10. So before sending the frame to switch 2, SW1 adds a
VLAN header to the original Ethernet frame, with VLAN number 10.
When switch 2 receives the frame, it sees that the frame belongs to VLAN 10. It
then removes the header and forwards the original Ethernet frame out all
its interfaces configured in VLAN 10.
So in the given example, when the Ethernet frames are sent between the
switches over the trunk link, they are tagged with a VLAN header. When the
receiving switch gets them, it removes the VLAN tag and sends them untagged
to the clients in the VLAN.
Switch interface modes
Each switch interface can operate as an access or a trunk port. Because a typical
LAN deployment has hundreds or even thousands of switch ports, there is a
protocol called Dynamic Trunking Protocol (DTP) that helps network
administrators set the operational mode of interfaces automatically. By default,
all Cisco switch ports are in administrative mode dynamic auto, which means
that DTP listens and tries to understand what is configured on the other side of
the cable, and based on that decides whether to become an access or a trunk
port. For example, if we have a link between SW1 and SW2 and we configure the
interface on SW1 to be a trunk port, DTP will advertise this to the other side,
the interface on SW2 will automatically set itself to trunk mode, and a trunk
link will be formed between the switches.
Table 1. Switchport modes

switchport mode dynamic auto - DEFAULT MODE for Layer 2 interfaces of Cisco
switches. Passively waits to convert the port into a trunk (DTP listens for
messages from the far side saying "let's form a trunk"). Becomes a trunk if the
other side of the link is configured with trunk or dynamic desirable mode.

switchport mode dynamic desirable - Actively tries to convert the link to a
trunk (DTP actively sends messages to the far side saying "let's form a trunk").
Becomes a trunk if the other side of the link is configured with trunk, dynamic
desirable, or dynamic auto mode.

switchport mode access - The interface becomes an access port. DTP
negotiates the link as a nontrunk link.

switchport mode trunk - The interface becomes a trunk port. DTP negotiates
the link as a trunk link (DTP actively sends messages to the far side saying
"let's form a trunk").

switchport nonegotiate - Disables the Dynamic Trunking Protocol (DTP) on the
interface. The interface mode must then be configured manually as access or
trunk.
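Because DTP negotiation is often unwanted on important links, a common hedged practice is to set the mode statically and disable DTP (the interface name is an example):

  interface GigabitEthernet0/24
   ! statically configure the port as a trunk
   switchport mode trunk
   ! stop sending and processing DTP messages on this port
   switchport nonegotiate
  end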
Forwarding Data Between VLANs
Connectivity between VLANs
LAN switches forward frames based on Layer 2 logic. This means that, when a
switch receives an Ethernet frame, it looks at the destination MAC address and
forwards the frame out another interface, or out multiple interfaces if it is a BUM
frame. This type of switch is often called a Layer 2 switch.

Layer 2 forwarding logic is performed per VLAN. For example, in figure 1, all
end-stations on the left are configured in VLAN10 which is a separate broadcast
domain and different subnet. The servers on the right are configured in VLAN20
and are in their own broadcast domain and different subnet from VLAN10.
Because VLAN10 and VLAN20 are different broadcast domains, frames from one
VLAN will never leak over to the other. Therefore, the switch acts like two
separate switches as shown in figure 1.
Routing between VLANs with a router
Ultimately, when we design networks, we want to have any-to-any connectivity
between all devices. Following the logic that we have learned in the previous
lessons, that
VLAN = Broadcast Domain = Subnet
enabling connectivity between two VLANs means enabling connectivity between
IP subnets. Therefore, we need a device that acts as a router. There are
two possible solutions: we can use an actual router to do the routing, or the
switch itself can perform routing functions. Switches that can perform Layer 3
routing functions are called Layer 3 switches or multilayer switches.
In the following example, we are using a router to route data between VLAN 10
and VLAN 20. The router has one physical interface connected to a switchport in
VLAN 10 and one physical interface connected to a switchport in VLAN 20. Thus,
the router has one interface in subnet 192.168.1.0/24 and one interface in
subnet 10.1.0.0/24, and it does what all routers do - route IP packets between
subnets.
The downside of this approach for forwarding data between VLANs is that
the router must have a physical interface for every VLAN. The above example is
a feasible design option, but if we have 10+ VLANs, for example, it will
obviously not scale well because we would use 10+ interfaces on both the
router and the switch.
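A minimal router-side sketch for this design, using the two subnets from the example (the interface names and the .1 gateway addresses are assumptions):

  ! physical interface toward the switchport in VLAN 10
  interface GigabitEthernet0/0
   ip address 192.168.1.1 255.255.255.0
   no shutdown
  !
  ! physical interface toward the switchport in VLAN 20
  interface GigabitEthernet0/1
   ip address 10.1.0.1 255.255.255.0
   no shutdown
  end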

Router on a stick (ROAS)
Almost all networks in the world use Virtual Local Area Networks nowadays.
In our lesson about VLANs, we learned a general rule of thumb:
VLAN = Broadcast Domain = Subnet
This means that if we have nodes connected in different VLANs, they most
probably are part of different subnets. If we take the topology shown in
Figure 1 for example, we have four clients in VLAN 10 with addresses in
subnet 192.168.1.0/24 and four servers in VLAN 20 with addresses in
10.1.0.0/24. Therefore, the communication between the two VLANs must be
handled by a device that can perform IP routing between subnets. This
device must have an IP address in each network, and clients then use these
addresses as their respective default gateways.
There are two typical devices that are used to perform routing between VLANs:
Multilayer switch (Layer 3 switch) - MLS switches work at both Layer 2 and Layer 3 of
the OSI model. They can switch frames and perform IP routing between VLANs. We
examine this InterVLAN routing technique in our next lesson.
Router - There are two ways to use a router as the device that performs IP routing
between VLANs:
Connecting a separate router interface to each VLAN and giving each interface an
IP address from the respective VLAN subnet. Then it is just regular routing
between networks. We have already discussed this technique in detail in one of
our previous lessons.
Connecting a router with a single link to a switch trunk port and defining sub-
interfaces for each VLAN. An IP address from the respective VLAN is then
configured on each sub-interface. This technique is called router-on-a-stick
(ROAS) because there is only one physical link between the router and the
switch, as you can see in figure 1.
What is router-on-a-stick?
Router-on-a-stick (ROAS) is a technique that connects a router with a single
physical link to a switch and performs IP routing between VLANs. From the
switch's perspective, this physical link is configured as a trunk port allowing all
VLANs that are going to be routed. From the router's perspective, this physical
interface is represented as multiple virtual sub-interfaces, one for each VLAN. An
IP address from each VLAN is then configured on each sub-interface, and the
router performs IP routing between the connected networks.
Compared to the other scenario, where we use a physical interface for each
VLAN, it is obviously a better and more scalable technique to use a single trunk
link between the switch and the router, as shown in figure 1.
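A hedged router-on-a-stick sketch for the two VLANs from figure 1 (the subinterface numbers and the .1 gateway addresses are assumptions); the switch end of the link is simply an 802.1Q trunk:

  ! router side: one physical link, one sub-interface per VLAN
  interface GigabitEthernet0/0
   no shutdown
  !
  interface GigabitEthernet0/0.10
   ! tag traffic on this sub-interface with VLAN 10
   encapsulation dot1Q 10
   ip address 192.168.1.1 255.255.255.0
  !
  interface GigabitEthernet0/0.20
   encapsulation dot1Q 20
   ip address 10.1.0.1 255.255.255.0
  end

  ! switch side: the port toward the router is a trunk carrying both VLANs
  interface GigabitEthernet0/24
   switchport mode trunk
   switchport trunk allowed vlan 10,20
  end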
