HPE Related


1. What are the two different types of SAN switch, and how do they work?

#storage
SAN switches fall into two main categories: Fibre Channel (FC) and Ethernet. Fibre
Channel switches are the most common.
Ethernet-based SAN has been growing in popularity, especially with the
proliferation of 10 Gigabit Ethernet (GbE). Ethernet switches and other Ethernet
equipment are cheaper and easier to deploy and maintain because they don’t
require specialized hardware or administrative skills, as is the case with Fibre
Channel. In addition, 1 GbE switch ports can be aggregated to deliver higher
throughput, providing more deployment flexibility.

Ethernet networks also support iSCSI (Internet Small Computer Systems Interface),
a common storage protocol built on top of TCP/IP (Transmission Control
Protocol/Internet Protocol). Comparisons between FC SAN and Ethernet SAN
therefore typically come down to Fibre Channel vs. iSCSI, since iSCSI is the
storage protocol usually implemented on an Ethernet network.

FC Switches:
Fibre Channel has a reputation for delivering much better performance than
Ethernet, especially if the Ethernet SAN is being shared with non-storage traffic.
Port Types
Fibre Channel (FC) switches primarily utilize two port types to connect devices within a Storage
Area Network (SAN):
F-Ports (Fabric Ports) – connect the switch to end devices (the N-Ports on hosts and storage arrays)
E-Ports (Expansion Ports) – connect switches to each other, forming inter-switch links (ISLs)
Fibre Channel over Ethernet (FCoE) Switches: FCoE switches enable the
convergence of Fibre Channel storage traffic and Ethernet data traffic onto a single
network infrastructure. They are used in environments where both Fibre Channel
and Ethernet connectivity are required.

| Port Type | Description | Connector/Cable | Use Case |
| --- | --- | --- | --- |
| Ethernet Ports (Host Facing) | Standard Ethernet ports for connecting FCoE-capable servers or CNAs | RJ-45 connector, Category 5e/6 cabling | Connects FCoE switch to host devices |
| FC Ports (Storage Facing) (Optional) | FC ports for connecting to a Fibre Channel SAN (optional on some models) | SFP+ or QSFP+ transceivers, fiber optic cables | Connects FCoE switch to Fibre Channel SAN for broader network connectivity |

HP 2408 FCoE Converged Network Switch


InfiniBand Switches: InfiniBand switches are high-speed, low-latency switches
often used in high-performance computing (HPC) and large-scale SAN deployments.
They provide a high-bandwidth, low-latency interconnect for storage and server
communication.
iSCSI Switches: iSCSI switches are specialized Ethernet switches optimized for
iSCSI storage traffic. They are used in SANs that utilize iSCSI protocol for block-level
storage access over IP networks.

What is FCoE?
https://www.flackbox.com/fcoe-fibre-channel-ethernet-overview

FCoE (Fibre Channel over Ethernet) is a storage protocol (or language) that lets Fibre
Channel communications run over Ethernet, specifically 10 Gigabit Ethernet.

FCoE became possible with the advent of 10Gb Ethernet which has enough
bandwidth to support both data and storage traffic on the same adapter.
This has gained popularity because it allows both the regular traffic and the storage
traffic to consolidate onto just one network (instead of having one set of switches for
Fibre Channel and one for Ethernet). This also reduces the number of cables and
switches, thus reducing power and cooling costs as well.
Basically (and very simply), you will use converged network adapters (CNAs), a new
FCoE switch, and a lossless Ethernet protocol. You will be able to use your 10G
Ethernet network while preserving the Fibre Channel protocol. Servers connect to
FCoE with CNAs, which combine both Fibre Channel HBA and Ethernet NIC
functionality on a single adapter card.
One of the benefits of FCoE is that it preserves the high-speed data connections,
which a SAN requires. Additionally, since you are consolidating your SAN and
network infrastructures, your operational expenses will be lower and you will use
less power, cabling and data center space.
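The encapsulation itself can be sketched in a few lines. The frame layout below follows the FC-BB-5 model (Ethernet header, 14-byte FCoE header carrying a start-of-frame code, the untouched FC frame, then an end-of-frame trailer); the specific SOF/EOF code values and the zeroed MAC addresses and FC payload are illustrative assumptions:

```python
import struct

ETHERTYPE_FCOE = 0x8906   # IANA-assigned EtherType for FCoE
SOF_I3 = 0x2E             # start-of-frame code (assumed FC-BB-5 class-3 value)
EOF_T = 0x42              # end-of-frame code (assumed FC-BB-5 value)

def encapsulate(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an FCoE Ethernet frame.

    Layout per FC-BB-5: 14-byte Ethernet header, 14-byte FCoE header
    (4-bit version + reserved bits + 1-byte SOF), the FC frame itself,
    then a 1-byte EOF plus 3 reserved bytes. The FCS is left to the NIC.
    """
    eth_hdr = dst_mac + src_mac + struct.pack(">H", ETHERTYPE_FCOE)
    fcoe_hdr = bytes(13) + bytes([SOF_I3])   # version 0 + reserved, then SOF
    trailer = bytes([EOF_T]) + bytes(3)      # EOF + reserved padding
    return eth_hdr + fcoe_hdr + fc_frame + trailer

# zeroed MACs and an empty 28-byte FC frame, purely for illustration
frame = encapsulate(fc_frame=bytes(28), dst_mac=bytes(6), src_mac=bytes(6))
print(len(frame))   # 14 + 14 + 28 + 4 = 60
```

The key point the sketch shows is that the FC frame rides inside the Ethernet frame unmodified, which is why the Fibre Channel protocol stack above it is preserved.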

What is the Primary Benefit of FCoE?

One of the main benefits we get from FCoE is the savings we get in our
network infrastructure. When we have redundant native Fibre Channel
storage and Ethernet data networks, there are 4 adapters on our hosts and 4
cables that are connected to 4 switches.

Native Fibre Channel


With FCoE, we run the data and storage traffic through shared switches and
shared ports on our hosts. Now we just have 2 adapters, 2 cables, and 2
switches. The required infrastructure is cut in half. We save on the hardware
costs because of this and also require less rack space, less power, and less
cooling, which gives us more savings.

FCoE Networks
In FCoE, both the storage and the data traffic use the same shared physical
interface on our hosts – the Converged Network Adapter (CNA). The CNA
replaces the traditional Network Interface Card (NIC) in the host. (See the
SAN and NAS Adapter Card Types post for a quick review.)

The storage traffic uses FCP, so it requires a WWPN. The data traffic requires
a MAC address. Ethernet data traffic and FCP storage traffic work in totally
different ways, so how can we support them both on the same physical
interface?

The answer is we virtualize the physical interface into two virtual interfaces: a
virtual NIC with a MAC address for the data traffic and a virtual HBA with a
WWPN for the storage traffic. The storage and the data traffic are split into
two different VLANs, a data VLAN and a storage VLAN.

In the diagram below we have a single server, Server 1. It’s got two physical
interface cards, CNA1 and CNA2. Both CNAs are split into separate virtual
adapters for data and storage.

For the data traffic, we have virtual NIC-1 on CNA1 and virtual NIC-2 on
CNA2. Those virtual NICs will both have MAC addresses assigned to them.
On the switches, we're trunking the data VLAN down to the physical port on
the CNA. We cross-connect our switches for the data VLAN traffic.
FCoE Data VLAN

We also have virtual HBAs on the CNAs: virtual HBA-1 on CNA1 and virtual
HBA-2 on CNA2. We have WWPNs on our virtual HBAs. We're trunking the
storage VLAN down from the switches to the virtual HBAs on the converged
network adapters.

This time we do not cross-connect our switches, because we need to comply
with the SAN best practice of physically separate Fabric A and Fabric B.
FCoE Storage VLAN

If we put the whole thing together, you can see that we're running both our
data and our storage traffic over the same shared infrastructure.

We're trunking both the data and the storage VLANs down to single physical
ports on our CNAs, and then that traffic is split out into a virtual NIC for the
data traffic with our MAC address on it, and a virtual HBA for the storage
traffic with a WWPN on it.
FCoE Network
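The split described above can be sketched as a simple data model; the MAC addresses, WWPNs, and VLAN IDs below are made-up illustrative values, not anything a real deployment would mandate:

```python
from dataclasses import dataclass

# Hypothetical VLAN IDs for illustration; real deployments choose their own.
DATA_VLAN, STORAGE_VLAN = 10, 20

@dataclass
class VirtualNIC:            # data-traffic personality: addressed by MAC
    mac: str
    vlan: int = DATA_VLAN

@dataclass
class VirtualHBA:            # storage-traffic personality: addressed by WWPN
    wwpn: str
    vlan: int = STORAGE_VLAN

@dataclass
class CNA:
    """One physical converged adapter split into two virtual interfaces."""
    vnic: VirtualNIC
    vhba: VirtualHBA

server1 = [
    CNA(VirtualNIC("00:25:b5:00:00:01"), VirtualHBA("20:00:00:25:b5:00:00:01")),  # CNA1 -> Fabric A
    CNA(VirtualNIC("00:25:b5:00:00:02"), VirtualHBA("20:00:00:25:b5:00:00:02")),  # CNA2 -> Fabric B
]
print([cna.vhba.vlan for cna in server1])   # [20, 20]
```

Each physical CNA carries both VLANs on its single port, but the two personalities never mix: the vNICs live only on the data VLAN and the vHBAs only on the storage VLAN.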

Another thing we need to discuss is lossless FCoE. Fibre Channel is a
lossless protocol, meaning it ensures that no frames are lost in transit
between the initiator and the target. It uses buffer-to-buffer credits to do this.

Ethernet is not lossless. TCP uses acknowledgments from the receiver back
to the sender to check that traffic reaches its destination. If an
acknowledgment is not received then the sender will resend that packet.
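A toy simulation can show why credit-based flow control never drops frames: the sender simply stops transmitting when its credits run out and resumes when the receiver returns an R_RDY. This is a conceptual sketch, not how a real HBA is implemented:

```python
class BufferToBufferLink:
    """Toy model of Fibre Channel buffer-to-buffer credit flow control.

    The sender starts with the credit count advertised by the receiver
    and may only transmit while credits remain; each R_RDY from the
    receiver restores one credit, so frames are never sent into a full
    buffer and nothing is dropped.
    """
    def __init__(self, credits: int):
        self.credits = credits

    def send_frame(self) -> bool:
        if self.credits == 0:
            return False          # must wait for R_RDY: the frame is held, not dropped
        self.credits -= 1
        return True

    def receive_r_rdy(self) -> None:
        self.credits += 1         # receiver freed a buffer

link = BufferToBufferLink(credits=2)
sent = [link.send_frame() for _ in range(3)]
print(sent)                       # [True, True, False] -> sender stalls, not drops
link.receive_r_rdy()
print(link.send_frame())          # True
```

Contrast this with TCP above: TCP detects loss after the fact and retransmits, while credits prevent the loss from happening in the first place.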

FCoE uses FCP, which assumes a lossless network, so we need a way to
ensure our storage packets are not lost while traversing the Ethernet network.
Priority Flow Control (PFC), a Data Center Bridging extension to Ethernet, is
used to ensure that lossless delivery.
PFC works on a hop-by-hop rather than end-to-end basis, so we don't just
need to support it on our end hosts. Each NIC and switch in the path between
initiator and target must be FCoE capable. You can't use just standard 10Gb
Ethernet NICs and switches; you need to use CNAs, and the switches have to
be FCoE capable.
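To make the PFC mechanism concrete, the sketch below builds a PFC pause frame as defined in IEEE 802.1Qbb: a MAC Control frame (EtherType 0x8808, opcode 0x0101) carrying a priority-enable vector and one pause timer per priority. The choice of priority 3 for the storage class is a common convention, not a requirement:

```python
import struct

MAC_CONTROL_ETHERTYPE = 0x8808   # IEEE 802.3 MAC Control frames
PFC_OPCODE = 0x0101              # Priority-based Flow Control (802.1Qbb)
PFC_DEST = bytes.fromhex("0180c2000001")   # reserved multicast for pause frames

def build_pfc_frame(src_mac: bytes, pause_priorities: dict) -> bytes:
    """Build a PFC pause frame that pauses only the given priorities.

    pause_priorities maps priority (0-7) -> pause time in 512-bit-time
    quanta. Unlike classic PAUSE, priorities not listed keep flowing,
    which is how FCoE's storage class is made lossless without stalling
    ordinary data traffic.
    """
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in pause_priorities.items():
        enable_vector |= 1 << prio
        timers[prio] = quanta
    payload = struct.pack(">HH8H", PFC_OPCODE, enable_vector, *timers)
    return PFC_DEST + src_mac + struct.pack(">H", MAC_CONTROL_ETHERTYPE) + payload

# pause priority 3 (a common choice for the FCoE class) for 0xFFFF quanta
frame = build_pfc_frame(bytes(6), {3: 0xFFFF})
print(len(frame))   # 14-byte header + 20-byte control payload = 34 (before padding/FCS)
```

When a switch buffer assigned to the storage priority fills, the switch sends a frame like this one hop upstream; the neighbor pauses only that priority, so losslessness propagates hop by hop, exactly as described above.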

What is HPE Performance Cluster Manager?


What is CloudPhysics, and What Are Its Uses?
HPE CloudPhysics is a software-as-a-service (SaaS)-based Big Data analytics
platform for virtualized infrastructures. HPE CloudPhysics provides data-driven
insights to improve IT performance, return on investment (ROI), and data
protection through analysis of IT metadata.
What is the major difference between the FC & SCSI protocols?
