Storage Magazine, June 2009
Dueling directors
Fibre Channel directors have powered storage networks for years, but soon they’ll have to contend with converged Ethernet as well.
By Jacob Gsoedl
FIBRE CHANNEL (FC) as a technology has been relatively static over the
past 10 years, and FC switch innovation has been incremental—from
bandwidth support and additional features to increased resilience.
The Brocade DCX Backbone and Cisco MDS 9500 Series have much
in common. They’re both chassis based and can be scaled by adding or
changing hot-swappable line cards. With all components redundant and
hot-swappable (blades, fans, power supplies), they present no single point
of failure. From 1 Gbps/2 Gbps /4 Gbps /8 Gbps Fibre Channel to FICON,
FC over Internet Protocol (FCIP) and Internet Protocol over FC (IPFC), and
connectivity options for iSCSI (DCX via an iSCSI gateway and the MDS 9500 via its IP storage services modules), both platforms cover a wide range of protocols.
PRODUCT OVERVIEW
In an attempt to establish a product category that resides above tradi-
tional directors, Brocade doesn’t categorize its DCX Backbone as a director but positions it as a backbone-class platform.
FEATURE COMPARISON
                     BROCADE DCX BACKBONE                  CISCO SYSTEMS INC. MDS 9500
Models               *DCX Backbone (eight port blades)     *MDS 9513 (11 port blades)
                     *DCX-4S Backbone (four port blades)   *MDS 9509 (seven port blades)
                                                           *MDS 9506 (four port blades)
Maximum port count   384                                   512
THE HIGH COST OF DIRECTORS
Brocade DCX-4S with four 48-port 8 Gbps blades and full redundant configuration: $328,000
Cisco MDS 9506 with four 48-port 8 Gbps blades and full redundant configuration: $320,000
(List prices provided by a Brocade/Cisco reseller)

“[...] architectures that last for many years,” noted Bob Laliberte, an analyst at Enterprise Strategy Group (ESG) in Milford, Mass.

A case in point is the DCX Backbone. “The main difference between the 48000 and DCX is the separation and rearchitecture of the core switching and control processor blades into separate blades, which required a new chassis design,” said Bill Dunmire, senior product marketing manager at Brocade.
COMPARING ARCHITECTURES
The Brocade DCX Backbone is based on a shared memory architecture
where data moves from switching ASIC to switching ASIC along multiple
internal ISLs that make up the path from an ingress port to an egress
port. To load balance between these inter-ASIC links within the switch,
the DCX Backbone relies on either exchange- or port-based routing.
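To make the two schemes concrete, here’s a minimal Python sketch of how a switch might pick one of several inter-ASIC links; the link count, hash and field choices are illustrative assumptions, not Brocade’s actual implementation.

import zlib

INTER_ASIC_LINKS = 8  # hypothetical number of internal links between ASICs

def port_based_route(ingress_port: int) -> int:
    # Port-based routing: every frame arriving on a given port always takes
    # the same internal link, so frame order is preserved end to end, but a
    # busy port cannot spread its traffic across links.
    return ingress_port % INTER_ASIC_LINKS

def exchange_based_route(s_id: int, d_id: int, ox_id: int) -> int:
    # Exchange-based routing: the link is chosen per Fibre Channel exchange
    # (source ID, destination ID, originator exchange ID), which balances
    # load across links but only guarantees order within a single exchange.
    key = f"{s_id:06x}-{d_id:06x}-{ox_id:04x}".encode()
    return zlib.crc32(key) % INTER_ASIC_LINKS

The trade-off resurfaces later in the discussion of in-order delivery: hashing per exchange spreads load more evenly, but frames from different exchanges can arrive out of order relative to one another.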
“Besides fewer components on blades, which reduces the likelihood of
failure, in a shared memory architecture ASICs on the core switching
blades talk to ASICs on port blades using the same protocol, minimizing [...]”

The Cisco MDS 9500 Series, by contrast, is built around a crossbar switch fabric with central arbitration between ingress and egress ports, according to Omar Sultan, solution manager, data center switching, data center solutions at Cisco.
Even though each vendor claims its architecture is superior, they
each have their pros and cons. With the exception of a few vendor
specific peculiarities, both platforms can be used to power the most
mission-critical and largest SANs with comparable results and user
experience; this is substantiated by Brocade and Cisco splitting the
director market almost evenly. “The two products work very well and [...]”

A capability Brocade touts is local switching: traffic between ports on the same port blade is switched on the blade itself, reducing the amount of traffic that has to pass through the core switching
blades. Although Cisco rebuffs the local switching benefit, emphasizing
bigger latency variances as a result of local switching, support for local
switching in its latest Nexus platform suggests that the lack of local
switching support in the MDS 9500 is a disadvantage.
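As a rough illustration of why local switching matters, the following Python sketch shows the forwarding decision; the port-to-ASIC mapping is an assumption for illustration, not the DCX’s real layout.

PORTS_PER_ASIC = 16  # illustrative only; not the DCX's real port-to-ASIC layout

def forwarding_path(ingress_port: int, egress_port: int) -> str:
    # Ports served by the same port-blade ASIC can be switched on the blade
    # itself; any other pair has to cross the backplane to a core switching
    # blade and back, consuming slot bandwidth.
    if ingress_port // PORTS_PER_ASIC == egress_port // PORTS_PER_ASIC:
        return "local switching: frame stays on the port blade"
    return "core path: port blade -> core switching blade -> port blade"

print(forwarding_path(2, 14))   # same ASIC, switched locally
print(forwarding_path(2, 40))   # different ASICs, traverses the core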
In addition to reliability, performance and throughput are the most
relevant attributes of a director platform. The Brocade DCX Backbone
currently wins the raw throughput comparison with 256 Gbps through-
put per slot vs. 96 Gbps for the Cisco MDS 9500. When combined with
local switching, it can concurrently [...] and QoS makes the throughput difference less significant. In the past,
increases in port and chassis throughput benefited mostly ISLs and, to
a lesser degree, servers; but now the proliferation of virtual server envi-
ronments definitely makes bandwidth capacity more relevant. “Server
virtualization is a game changer, making oversubscription more prob-
lematic because physical servers running many virtual machines are
more likely to fully utilize a SAN link,” Gartner’s Passmore said. Cisco
confirmed that it’s working on a next-generation switch fabric module
that will match the DCX’s 256 Gbps slot throughput; existing customers
will be able to upgrade by simply replacing the existing switch fabric
module. “Replacing the switch fabric module costs an order of magnitude [...]”
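To put the slot-throughput figures in perspective, here’s a back-of-the-envelope calculation (mine, not the article’s) using the published numbers and assuming fully populated 48-port 8 Gbps blades.

def oversubscription(ports: int, port_gbps: int, slot_gbps: int) -> float:
    # Bandwidth the blade's front-panel ports could demand, divided by what
    # the slot can actually carry into the switch fabric.
    return (ports * port_gbps) / slot_gbps

print(oversubscription(48, 8, 256))  # Brocade DCX: 384/256 Gbps -> 1.5:1 per slot
print(oversubscription(48, 8, 96))   # Cisco MDS 9500: 384/96 Gbps -> 4.0:1 per slot

Local switching on the DCX keeps some of that front-panel traffic off the slot’s fabric bandwidth, which is why the two capabilities are usually discussed together.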
Benchmarks like the December 2008 Miercom report (Report 081215B) have
shown slower performance if the switch is used with port-based routing
instead of the default exchange-based routing; and some array vendors
advise their customers to stay away from the DCX’s default exchange-
based routing for some of their arrays.
“HP does not typically make specific recommendations regarding
switch routing, but we recommend using port-based routing with the
StorageWorks Continuous Access EVA solution since exchange-based
routing doesn’t guarantee in-order frame delivery all the time across
exchanges,” said Kyle Fitze, marketing director for the StorageWorks division at HP.

With FCoE, servers use converged network adapters (CNAs) in place of separate Fibre Channel HBAs and Ethernet NICs, and the lossless Ethernet underneath is known as Converged Enhanced Ethernet (CEE) in Brocade’s terminology and Data Center Ethernet (DCE) in Cisco’s. Instead of connecting two HBAs to a Fibre Channel switch, the two CNAs terminate into a
CEE/DCE-capable switch that delivers Ethernet traffic to the LAN and FC
traffic to the SAN. Although FCoE and CEE/DCE are expected to eventually
be used from core to edge, their initial use is primarily at the access layer
to connect servers to CEE/DCE-capable switches.
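As a simplified illustration of that split, a converged switch can classify traffic by EtherType; the FCoE (0x8906) and FIP (0x8914) values are the registered ones, while the forwarding logic below is only a conceptual Python sketch.

FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE data frames
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol (discovery and login)

def forward(ethertype: int) -> str:
    # FCoE and FIP frames are handed to the Fibre Channel side of the switch;
    # everything else is treated as ordinary Ethernet bound for the LAN.
    if ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
        return "SAN side: de-encapsulate and forward into the FC fabric"
    return "LAN side: forward as regular Ethernet"

print(forward(0x8906))  # FCoE frame -> SAN
print(forward(0x0800))  # IPv4 frame -> LAN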
Both Brocade and Cisco are committed to FCoE, but with different
strategies. Brocade won’t ship Converged Enhanced Ethernet products
until the standard is ratified; at that point, Brocade will support FCoE and
CEE in its DCX Backbone via new blades. Older Brocade Fibre Channel
products, such as the 48000 Director, will connect through the DCX Back-
bone or a new top-of-rack switch to interface with CEE components.
With the Nexus 5000 Series top-of-rack switch, Cisco is the first
vendor to offer a pre-standard FCoE product. For the MDS 9500 director
family, as well as the Nexus 2000 Series Fabric Extenders and Nexus
7000 Series switches, DCE and FCoE support won’t be available until [...]
MAKING A CHOICE
With much in common, including pricing (see “The high cost of directors”), the most important director selection criterion is which platform fits best into your [...]
“[...] off Brocade to Cisco for the sole purpose of taking advantage of the
modularity of the MDS switches. It’s much more cost-effective to get
to the next version with the Cisco platform. Unlike Brocade, it doesn’t
require expensive forklift upgrades,” Turner said.
Fernando Mejia, senior manager of IT infrastructure at the Independent
Purchasing Cooperative (IPCoop) Inc., the purchasing arm of the Subway
franchise in Miami, acquired a Cisco Nexus 7000 instead of a Catalyst 6500
because of its high performance, scalability and the ability to replace his
stackable Brocade FC switches once FCoE becomes available.
MATURING DIRECTORS
Regardless of whose product you choose, both platforms will reliably
power your SAN, as confirmed by the myriad storage-area networks
currently powered by Brocade and Cisco. Both vendors are embracing the
converged Ethernet paradigm in their product roadmaps, but unless you’re
willing to debug the initial CEE/DCE flaws as an early adopter, you’re well
advised to wait for at least another year until the standard and products
have matured.