HPE Related
#storage
SAN switches fall into two main categories: Fibre Channel (FC) and Ethernet. Fibre
Channel switches are the most common.
Ethernet-based SAN has been growing in popularity, especially with the
proliferation of 10 Gigabit Ethernet (GbE). Ethernet switches and other Ethernet
equipment are cheaper and easier to deploy and maintain because they don't require the specialized hardware or administrative skills that Fibre Channel does. In addition, 1 GbE switch ports can be aggregated to deliver higher
throughput, providing more deployment flexibility.
Ethernet networks also support iSCSI (Internet Small Computer Systems Interface),
a common storage protocol built on top of TCP/IP (Transmission Control
Protocol/Internet Protocol). Comparisons between FC SAN and Ethernet SAN
typically come down to Fibre Channel vs. iSCSI, as it’s implemented on an Ethernet
network.
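To make iSCSI's TCP/IP roots concrete, here is a small sketch. Only the IQN naming convention and the IANA default TCP port 3260 come from the iSCSI standard; the target name itself is made up for illustration:

```python
import re

# iSCSI default TCP port (IANA-assigned)
ISCSI_PORT = 3260

# IQN format: iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>]
IQN_RE = re.compile(r"^iqn\.(\d{4})-(\d{2})\.([a-z0-9.-]+)(?::(.+))?$")

def parse_iqn(name: str) -> dict:
    """Split an iSCSI Qualified Name into its parts."""
    m = IQN_RE.match(name)
    if not m:
        raise ValueError(f"not a valid IQN: {name}")
    year, month, authority, unique = m.groups()
    return {
        "year": int(year),
        "month": int(month),
        "naming_authority": authority,
        "unique_name": unique,
    }

# Hypothetical target name, for illustration only.
target = parse_iqn("iqn.2024-01.com.example:storage.array1")
print(target["naming_authority"])  # com.example
```

Because iSCSI rides on ordinary TCP/IP, an initiator reaches a target at an IP address and port like any other network service, which is exactly why it runs on commodity Ethernet gear.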
FC Switches:
Fibre Channel has a reputation for delivering much better performance than Ethernet, especially if the Ethernet SAN is shared with non-storage traffic.
Port Types
Fibre Channel (FC) switches primarily utilize two port types to connect devices within a Storage
Area Network (SAN):
F-Ports (Fabric Ports) – connect end devices (N_Ports), such as server HBAs and storage arrays, to the fabric
E-Ports (Expansion Ports) – connect switches to each other over Inter-Switch Links (ISLs)
Fibre Channel over Ethernet (FCoE) Switches: FCoE switches enable the
convergence of Fibre Channel storage traffic and Ethernet data traffic onto a single
network infrastructure. They are used in environments where both Fibre Channel
and Ethernet connectivity are required.
Standard Ethernet Ports (Host Facing): Ethernet ports for connecting FCoE-capable servers or CNAs. Connector: RJ-45, Category 5e/6 cabling. Function: connects the FCoE switch to host devices.
What is FCoE?
https://www.flackbox.com/fcoe-fibre-channel-ethernet-overview
FCoE (Fibre Channel over Ethernet) is a storage protocol (or language) that lets Fibre Channel communications run over Ethernet, specifically 10 Gigabit Ethernet.
FCoE became possible with the advent of 10Gb Ethernet which has enough
bandwidth to support both data and storage traffic on the same adapter.
This has gained popularity because it allows both the regular traffic and the storage traffic to be consolidated onto just one network (instead of having one set of switches for Fibre Channel and one for Ethernet). This also reduces the number of cables and switches, thus reducing power and cooling costs as well.
Basically (and very simply), you will use converged network adapters (CNAs), a new FCoE switch, and a lossless Ethernet protocol. You will be able to use your 10G Ethernet network while preserving the Fibre Channel protocol. Servers connect to the FCoE switch with CNAs, which combine both Fibre Channel HBA and Ethernet NIC functionality on a single adapter card.
One of the benefits of FCoE is that it preserves the high-speed data connections that a SAN requires. Additionally, since you are consolidating your SAN and network infrastructures, your operational expenses will be lower and you will use less power, cabling and data center space.
One of the main benefits we get from FCoE is the savings we get in our
network infrastructure. When we have redundant native Fibre Channel
storage and Ethernet data networks, there are 4 adapters on our hosts and 4
cables that are connected to 4 switches.
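The savings described above can be put into a quick back-of-envelope calculation. The native counts (4 adapters, 4 cables, 4 switches) come from the text; the converged counts are the usual redundant FCoE layout and are an assumption here:

```python
# Redundant native design, per the text: 2 Ethernet NICs + 2 FC HBAs
# per host, each cabled to its own switch.
native = {"adapters": 2 + 2, "cables": 4, "switches": 4}

# Converged FCoE design (assumed): 2 CNAs per host,
# each cabled to one FCoE switch.
fcoe = {"adapters": 2, "cables": 2, "switches": 2}

savings = {k: native[k] - fcoe[k] for k in native}
print(savings)  # {'adapters': 2, 'cables': 2, 'switches': 2}
```

Halving the adapter, cable, and switch counts per host is where the power, cooling, and space savings mentioned earlier come from.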
FCoE Networks
In FCoE, both the storage and the data traffic use the same shared physical
interface on our hosts – the Converged Network Adapter (CNA). The CNA
replaces the traditional Network Interface Card (NIC) in the host. (See the
SAN and NAS Adapter Card Types post for a quick review.)
The storage traffic uses FCP, so it requires a WWPN. The data traffic requires a MAC address. Ethernet data traffic and FCP storage traffic work in completely different ways, so how can we support both on the same physical interface?
The answer is we virtualize the physical interface into two virtual interfaces: a
virtual NIC with a MAC address for the data traffic and a virtual HBA with a
WWPN for the storage traffic. The storage and the data traffic are split into
two different VLANs, a data VLAN and a storage VLAN.
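A minimal sketch of that virtualization, one virtual NIC and one virtual HBA carved out of a single physical CNA. The MAC, WWPN, and VLAN IDs below are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VirtualNIC:
    name: str
    mac: str      # data traffic is addressed by MAC
    vlan: int     # data VLAN

@dataclass
class VirtualHBA:
    name: str
    wwpn: str     # FCP storage traffic is addressed by WWPN
    vlan: int     # storage VLAN

@dataclass
class CNA:
    """One physical converged adapter split into two virtual interfaces."""
    vnic: VirtualNIC
    vhba: VirtualHBA

# Hypothetical addresses and VLAN IDs, for illustration only.
DATA_VLAN, STORAGE_VLAN = 10, 20
cna1 = CNA(
    vnic=VirtualNIC("vNIC-1", "00:25:b5:00:00:01", DATA_VLAN),
    vhba=VirtualHBA("vHBA-1", "20:00:00:25:b5:00:00:01", STORAGE_VLAN),
)
print(cna1.vnic.vlan != cna1.vhba.vlan)  # True: kept on separate VLANs
```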
In the diagram below we have a single server, Server 1. It’s got two physical
interface cards, CNA1 and CNA2. Both CNAs are split into separate virtual
adapters for data and storage.
For the data traffic, we've got virtual NIC-1 on CNA1, and virtual NIC-2 on
CNA2. Those virtual NICs will both have MAC addresses assigned to them.
On the switches, we're trunking the data VLAN down to the physical port on the CNA. We cross-connect our switches for the data VLAN traffic.
FCoE Data VLAN
We also have virtual HBAs on the CNAs: virtual HBA-1 on CNA1 and virtual HBA-2 on CNA2. Each virtual HBA has a WWPN. We're
trunking the storage VLAN down from the switches to the virtual HBAs on the
converged network adapters.
If we put the whole thing together, you can see that we're running both our
data and our storage traffic over the same shared infrastructure.
We're trunking both the data and the storage VLANs down to a single physical port on each CNA, and then that traffic is split out into a virtual NIC for the data traffic, with its MAC address, and a virtual HBA for the storage traffic, with its WWPN.
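One way to picture the split is a classifier that steers each tagged frame arriving on the shared physical port to the right virtual adapter based on its VLAN ID. The VLAN numbers are hypothetical:

```python
# Hypothetical VLAN IDs; both VLANs are trunked down the same physical port.
DATA_VLAN, STORAGE_VLAN = 10, 20

def classify(frame_vlan: int) -> str:
    """Decide which virtual adapter a VLAN-tagged frame belongs to."""
    if frame_vlan == DATA_VLAN:
        return "vNIC"   # regular Ethernet data, addressed by MAC
    if frame_vlan == STORAGE_VLAN:
        return "vHBA"   # FCoE/FCP storage, addressed by WWPN
    return "drop"       # VLAN not trunked to this port

print(classify(10), classify(20), classify(30))  # vNIC vHBA drop
```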
FCoE Network
Ethernet is not lossless. TCP uses acknowledgments from the receiver back to the sender to confirm that traffic reaches its destination. If an acknowledgment is not received, the sender resends that packet. FCP, however, expects a lossless transport, which is why FCoE depends on the lossless Ethernet enhancements mentioned earlier (Data Center Bridging, including Priority Flow Control).
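That acknowledge-and-retransmit behavior can be sketched as a toy stop-and-wait loop; the loss rate, retry limit, and seed are all made up for illustration:

```python
import random

def send_with_retransmit(packet: str, max_tries: int = 5,
                         loss_rate: float = 0.5, seed: int = 0) -> int:
    """Stop-and-wait: resend until an acknowledgment comes back."""
    rng = random.Random(seed)  # seeded so the sketch is repeatable
    for attempt in range(1, max_tries + 1):
        delivered = rng.random() > loss_rate  # the network may drop the packet
        if delivered:
            return attempt  # receiver ACKs; sender stops
        # no ACK before the timeout expires: retransmit and try again
    raise TimeoutError("gave up after retransmissions")

print(send_with_retransmit("SCSI WRITE"))
```

TCP can afford this retry loop because it tolerates delay; FCP cannot, which is why FCoE needs the underlying Ethernet to avoid dropping frames in the first place.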