Docu 95765
Version 3.4
Hardware Guide
08 February 2020
Copyright © 2019-2020 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com
Figures
Tables
Welcome to ECS
Chapter 2 Switches
    Dell EMC S5148F switch
        Front-end switch pair
        Back-end switch pair
        Front-end and back-end switch connections

Tables

1   Rack ID
2   Default node names
3   Switch port numbers
4   Switch port numbers
5   EX500 hardware components
6   EX500 configurations
7   EX500 Disk Upgrade Kit
8   Standard features of EX500 servers
9   Decoding of LEDs in light bar
10  EX500 0U PDU single-phase zone A/B mapping
11  EX500 PDU three-phase Delta and WYE zone A/B mapping
12  EX500 2U PDU single-phase zone A/B mapping
13  EX500 2U PDU three-phase Delta and Wye zone A/B mapping
14  EX500 node iDRAC port to BE1 port mapping
15  EX300 hardware components
16  EX300 configurations
17  Standard features of EX300 servers
18  Decoding of LEDs in light bar
19  Server Panel, Ports, and Slots
20  EX300 0U PDU cabling
21  EX300 2U PDU cabling
22  EX300 node iDRAC port to BE1 port mapping
23  EX3000 hardware components
24  EX3000S single node chassis configurations
25  EX3000D dual node chassis configurations
26  EX3000 physical dimensions
27  Server indicators, buttons, or connectors
28  Server indicators, buttons, or connectors
29  EX3000S PDU cabling
30  EX3000D PDU cabling
31  EX3000S node iDRAC port to BE1 port mapping
32  EX3000D node iDRAC port to BE1 port mapping
33  Legend for back-end switch, front-end switch, and iDRAC ports on EX3000S node
ECS provides a complete software-defined cloud storage platform that supports the storage,
manipulation, and analysis of unstructured data on a massive scale on commodity hardware. ECS
can be deployed as a turnkey storage appliance or as a software product that can be installed on
qualified commodity servers and disks. ECS offers all the cost advantages of commodity
infrastructure with the enterprise reliability, availability, and serviceability of traditional arrays.
The ECS online documentation comprises the following guides:
• Administration Guide
• Monitoring Guide
• Data Access Guide
• Hardware Guide
• API Guide
Administration Guide
The Administration Guide supports the initial configuration of ECS and the provisioning of
storage to meet requirements for availability and data replication. Also, it supports the
ongoing management of tenants and users, and the creation and configuration of buckets.
Monitoring Guide
The Monitoring Guide supports the ECS administrator's use of the ECS Portal to monitor the
health and performance of ECS and to view its capacity utilization.
Hardware Guide
The Hardware Guide describes the supported hardware configurations and upgrade paths and
details the rack cabling requirements.
API Guide
The API Guide describes how to use the ECS Management API to configure, manage, and monitor ECS.
PDF versions of these online guides and links to other PDFs, such as the ECS Security Configuration
Guide and the ECS Release Notes, are available from support.emc.com.
• Introduction
• Rack and node host names
Introduction
This guide describes the hardware components that make up the ECS appliance Generation 3
(Gen3) hardware models.
ECS Gen3 appliance series
The ECS Gen3 appliance series includes:
• EX500 series: A dense object storage solution of hyper-converged nodes for small to medium-sized ECS deployments.
  The EX500 supports node expansion in increments of one node when the capacity matches the previous node. If the capacity differs from the previous node, the EX500 supports expansion in a minimum of five-node increments; five nodes is the recommended expansion. The EX500 series supports from 5 to 16 nodes per rack. With different drive sizes and quantities and the flexibility of node additions, this platform can scale from 480 TB RAW to 4.6 PB RAW per rack.
• EX300 series: A dense object storage solution of hyper-converged nodes for small to medium-sized ECS deployments. With different drive sizes and the flexibility of single-node addition, this platform can scale from 60 TB RAW to 1.5 PB RAW per rack.
• EX3000 series: An ultra-dense object storage solution of hyper-converged nodes for medium to large-sized ECS deployments. This platform starts at a 2.2 PB RAW minimum configuration and scales to 8.6 PB RAW per rack.
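The RAW capacity figures quoted above follow directly from nodes x disks per node x disk size. A minimal sketch of that arithmetic (the function name is illustrative, not part of ECS):

```python
def raw_capacity_tb(nodes: int, disks_per_node: int, disk_tb: int) -> int:
    """Raw rack capacity in TB: nodes x disks per node x disk size."""
    return nodes * disks_per_node * disk_tb

# EX500: 5 nodes x 12 disks x 8 TB = 480 TB; 16 nodes x 24 disks x 12 TB = 4608 TB (~4.6 PB)
# EX300: 5 nodes x 12 disks x 1 TB = 60 TB; 16 nodes x 12 disks x 8 TB = 1536 TB (~1.5 PB)
```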
Note: In this document, the term node is used interchangeably with server, and the term appliance
refers to a cluster of nodes running ECS software.
Hardware generations
ECS appliances are characterized by hardware generation.
Gen3
• EX500 Gen3 models featuring 8 TB or 12 TB disks (12 or 24 HDDs per node) became available in September 2019.
• EX300 Gen3 models featuring 1 TB, 2 TB, 4 TB, or 8 TB disks (12 HDDs per 2U node) became available in August 2018.
• EX3000 Gen3 models featuring 12 TB disks (4U chassis with single or dual node configurations) became available in August 2018.
Gen2
For documentation on Gen2 hardware, see the Dell EMC ECS D- and U-Series Hardware Guide.
• U-Series Gen2 models featuring 12 TB disks became available in March 2018.
• The D-Series was introduced in October 2016 featuring 8 TB disks. D-Series models featuring 10 TB disks became available in March 2017.
• The original U-Series appliance (Gen1) was replaced in October 2015 with second-generation hardware (Gen2).
Table 1 Rack ID
17  silver
34  eggplant
Nodes are assigned node names based on their order within the server chassis and within the rack
itself. The following table lists the default node names.
1 provo 9 boston
2 sandy 10 chicago
3 orem 11 houston
4 ogden 12 phoenix
5 layton 13 dallas
6 logan 14 detroit
7 lehi 15 columbus
8 murray 16 austin
Nodes that are positioned in the same slot in different racks at a site have the same node name.
For example, node 4 is called ogden, assuming that you use the default node names.
The getrackinfo command identifies nodes by a unique combination of node name and rack name. For example, node 4 in rack 1 is identified as ogden-red and can be pinged using its NAN-resolvable (through mDNS) name: ogden-red.nan.local.
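As a sketch, the default naming scheme can be expressed as a small helper (the function is hypothetical; the name list is the default node-name table in this chapter):

```python
# Default node names (Table 2), indexed by node position 1-16.
DEFAULT_NODE_NAMES = [
    "provo", "sandy", "orem", "ogden", "layton", "logan", "lehi", "murray",
    "boston", "chicago", "houston", "phoenix", "dallas", "detroit",
    "columbus", "austin",
]

def nan_hostname(node_number: int, rack_name: str) -> str:
    """NAN-resolvable (mDNS) name, e.g. node 4 in rack 'red' -> ogden-red.nan.local."""
    if not 1 <= node_number <= len(DEFAULT_NODE_NAMES):
        raise ValueError("node positions run from 1 to 16")
    return f"{DEFAULT_NODE_NAMES[node_number - 1]}-{rack_name}.nan.local"
```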
41-44  In from EX-Series rack when the ECS system has more than one rack (10/25 GbE)
45-48  Out to EX-Series rack when the ECS system has more than one rack (10/25 GbE)
41-44  In from EX-Series rack when the ECS system has more than one rack (10/25 GbE)
45-48  Out to EX-Series rack when the ECS system has more than one rack (10/25 GbE)
Figure 4 Connections between front-end and back-end switches within an EX-Series rack
For back-end switch connections between EX-Series racks, port 41 on the back-end switches is
used for the inbound connection and port 45 is used for the outbound connection. These ports are
used for linear and ring topology rack-to-rack connectivity. For more information, see Network
connections between multiple ECS appliances in a single site on page 50.
Figure 5 Back-end switch connections between EX-Series racks
Component: Back-end (BE) switches for private network connection
• Two Dell EMC S5148F 25 GbE 1U Ethernet switches with 48 x 25 GbE SFP ports and 6 x 100 GbE uplink ports.
• 2 x 100 GbE VLT cables per HA pair.

Component: Front-end (FE) switches for customer network connection
• Two optional Dell EMC S5148F 25 GbE 1U Ethernet switches can be obtained for public network connection, or the customer can provide their own 10 GbE or 25 GbE HA pair for the front end.
• If the customer provides their own front-end switches, they must supply all VLT cables, SFPs, or external connection cables.
• If Dell EMC S5148F 25 GbE front-end switches are used, the 25 GbE ports are configured to run at 25 GbE to connect to the EX500 nodes, and 2 x 100 GbE VLT cables are provided.
EX500 configurations
Learn about the EX500 ECS appliance configurations.
The front view of an EX500 rack with the minimum node configuration and an EX500 rack with the
maximum node configuration is shown in the following diagram. The rear view requires a brace
wherever there is empty space above or below an EX500 server.
Figure 7 EX500 minimum and maximum configurations
The EX500 appliance is available in the following configurations within a Dell EMC rack or a
customer-provided rack.
The EX500 requires a minimum of 5 nodes in the same configuration. You can combine 8 TB and 12 TB disks, with a minimum of 5 nodes of the same disk configuration.
The following table lists the nodes, disks in each node, disk size, and the RAW storage capacity.
Table 6 EX500 configurations

Nodes                        Disks per node   8 TB disks   12 TB disks
5 (minimum configuration)    12               96 TB        144 TB
                             24               192 TB       288 TB
6-15                         12               96 TB        144 TB
                             24               192 TB       288 TB
16 (maximum configuration)   12               96 TB        144 TB
                             24               192 TB       288 TB

Capacities shown are per node; each node count from 5 through 16 uses the same per-node values.
EX500 server
Learn about the EX500 server's standard features.
The following table lists the features of EX500 servers.
Table 8 Standard features of EX500 servers

Features (8 TB and 12 TB models)
Riser configuration:
• Each EX500 server supports up to eight PCI Express (PCIe) Gen3 expansion cards that can be installed on the system board using three expansion card risers.
• Config 1 (1B + 2B)
• Four x8 slots and rear storage
PCIe Slot 2: BOSS controller card with one 480 GB M.2 stick, LP
HDDs in front slots: 8 TB 7.2k RPM 512e SATA HDD (8 TB model); 12 TB 7.2k RPM 512e SATA HDD (12 TB model)
LED indicators are on the left and right side of the server front panels.
Figure 10 Left control panel
The left control panel LED behavior is broken into two subsets, the light bar and the status LEDs.
The light bar also functions as a button. The light bar indicates chassis health and also functions as
System ID when pressed.
The following table lists the status of the LEDs in the light bar.
Table 9 Decoding of LEDs in light bar
Status ID button
There are two status LEDs to indicate and identify any failed hardware components.
Figure 11 Status LEDs decoded view
1. Power button
2. USB 2.0-compliant port
3. Micro-USB for iDRAC Direct
4. iDRAC LED indicator
The EX500 appliance connections to 0U PDU and 2U PDU outlets are listed in the following tables.
The table describes EX500 0U PDU single-phase zone A/B mapping.
Table 10 EX500 0U PDU single-phase zone A/B mapping
37 12 Empty 3
36 12 Empty 3
35 12 Empty 3
34 12 Empty 3
33 11 Node 16 3
32 11 Node 15 3
31 11 Node 14 3
30 10 Empty 2
29 10 Empty 2
28 10 Empty 2
27 9 Empty 2
26 9 Empty 2
25 9 Node 13 2
24 8 Node 12 2
23 8 Node 11 2
22 8 Node 10 2
21 7 Node 9 2
20 7 Node 8 2
19 7 Node 7 2
18 6 Empty 1
17 6 Empty 1
16 6 Empty 1
15 5 Empty 1
14 5 Empty 1
13 5 Empty 1
12 4 FE Switch 2 1
11 4 FE Switch 1 1
10 4 Tray/Light Bar 1
9 3 BE Switch 2 1
8 3 BE Switch 1 1
7 3 Empty 1
6 2 Node 6 1
5 2 Node 5 1
4 2 Node 4 1
3 1 Node 3 1
2 1 Node 2 1
1 1 Node 1 1
The table describes EX500 PDU three-phase Delta and WYE zone A/B mapping.
Table 11 EX500 PDU three-phase Delta and WYE zone A/B mapping
37 12 Empty 1
36 12 Empty 1
35 12 Empty 1
34 12 Empty 1
33 11 Empty 1
32 11 Empty 1
31 11 Empty 1
30 10 Empty 1
29 10 Empty 1
28 10 Empty 1
27 9 Empty 1
26 9 Empty 1
25 9 Empty 1
24 8 Empty 1
23 8 Empty 1
22 8 Empty 1
21 7 Node 16 1
20 7 Node 15 1
19 7 Node 14 1
18 6 Node 13 1
17 6 Node 12 1
16 6 Node 11 1
15 5 Node 10 1
14 5 Node 9 1
13 5 Node 8 1
12 4 FE Switch 2 1
11 4 FE Switch 1 1
10 4 Tray/Light Bar 1
9 3 BE Switch 2 1
8 3 BE Switch 1 1
7 3 Node 7 1
6 2 Node 6 1
5 2 Node 5 1
4 2 Node 4 1
3 1 Node 3 1
2 1 Node 2 1
1 1 Node 1 1
The table describes EX500 2U PDU single-phase zone A/B mapping.
Table 12 EX500 2U PDU single-phase zone A/B mapping
24 6 Empty 3
23 6 Empty 3
22 6 Empty 3
21 6 Node 16 3
20 5 Node 11 2
19 5 Node 10 2
18 5 Node 9 2
17 5 Node 8 2
16 4 Node 4 1
15 4 Node 3 1
14 4 FE Switch 2 1
13 4 FE Switch 1 1
12 3 Node 15 3
11 3 Node 14 3
10 3 Node 13 3
9 3 Node 12 3
8 2 Tray/Light Bar 2
7 2 Node 7 2
6 2 Node 6 2
5 2 Node 5 2
4 1 Node 2 1
3 1 Node 1 1
2 1 BE Switch 2 1
1 1 BE Switch 1 1
The table describes EX500 2U PDU three-phase Delta and Wye zone A/B mapping.
Table 13 EX500 2U PDU three-phase Delta and Wye zone A/B mapping
24 6 Empty 1
23 6 Empty 1
22 6 Empty 1
21 6 Node 16 1
20 5 Node 11 1
19 5 Node 10 1
18 5 Node 9 1
17 5 Node 8 1
16 4 Node 4 1
15 4 Node 3 1
14 4 FE Switch 2 1
13 4 FE Switch 1 1
12 3 Node 15 1
11 3 Node 14 1
10 3 Node 13 1
9 3 Node 12 1
8 2 Tray/Light Bar 1
7 2 Node 7 1
6 2 Node 6 1
5 2 Node 5 1
4 1 Node 2 1
3 1 Node 1 1
2 1 BE Switch 2 1
1 1 BE Switch 1 1
In the following diagrams, the switches plug into the front of the rack, and their cabling routes through the rails to the rear.
Figure 15 EX500 Single Phase AC Cabling Diagram
iDRAC port
The EX500 node iDRAC port to the Fox switch (BE1) port connections are listed in the following
table.
Table 14 EX500 node iDRAC port to BE1 port mapping
The front-end switch and back-end switch connections to an EX500 node are shown in the
following diagram.
Figure 20 Front-end and back-end switch connections to an EX500 node
The numbered front-end switch ports used for connecting to the ports on the EX500 nodes are
shown in the following diagram. Port 1 on the Hare switch (FE2) connects to port 4 on Node 1.
Port 2 on the Hare switch (FE2) connects to port 4 on Node 2, and so on. Similarly, Port 1 on the
Rabbit switch (FE1) connects to port 3 on Node 1. Port 2 on the Rabbit switch (FE1) connects to
port 3 on Node 2, and so on.
Figure 21 Node ports on front-end switches
[Figure shows the port layout of the Hare Switch (FE2) and Rabbit Switch (FE1): data ports 1-48, VLT1/VLT2 uplinks, and customer ports 51-54.]
The numbered back-end switch ports used for connecting to the ports on the EX500 nodes are
shown in the following diagram. Port 1 on the Hound switch (BE2) connects to port 2 on Node 1.
Port 2 on the Hound switch (BE2) connects to port 2 on Node 2, and so on. Similarly, Port 1 on the
Fox switch (BE1) connects to port 1 on Node 1. Port 2 on the Fox switch (BE1) connects to port 1
on Node 2, and so on.
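The cabling rule in the two passages above, where switch port n always lands on node n and the node-side port is fixed per switch, can be sketched as follows (the helper and dictionary are illustrative, not ECS tooling):

```python
# Node-side port used by each intra-rack switch (switch names as in this guide).
NODE_PORT_BY_SWITCH = {
    "Fox (BE1)": 1,
    "Hound (BE2)": 2,
    "Rabbit (FE1)": 3,
    "Hare (FE2)": 4,
}

def node_connection(switch: str, switch_port: int) -> tuple:
    """Return (node number, node port) for a given switch data port."""
    return (switch_port, NODE_PORT_BY_SWITCH[switch])
```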
The EX500 node port to the iDRAC, back-end switch, and front-end switch port connections are
listed in the following table.
Figure 24 EX500 node network port cabling connections
The EX500 node network cable labeling is listed in the following table.
Figure 25 EX500 node network cable labeling
The ECS intra-rack back-end management networks are connected to create the inter-rack topology. Connecting port channel 100 or 101 to a private switch in another ECS intra-rack network creates the inter-rack network. Through these connections, nodes in any intra-rack network can communicate with any other node on the inter-rack network. There are three types of topologies you can use to connect the intra-rack LANs into an inter-rack network:
l Daisy chain or line topology
l Ring topology
l Star topology
Linear or daisy chain topology
The simplest topology for connecting the intra-rack networks does not require any extra equipment. All the private switches can be connected in a linear, or daisy-chain, fashion as shown below.
Figure 27 Linear or Daisy Chain topology
This linear or daisy-chain topology is the least dependable setup and is easily susceptible to split-
brain topologies as demonstrated below.
Figure 28 Split-brain topology
The inter-rack linear topology between EX-Series racks is shown in the following figure.
Figure 29 Inter-rack switch connectivity - linear topology (daisy-chain) between EX Series racks
The inter-rack linear topology between an EX-Series rack and a Gen2 rack is shown in the
following figure. For a mixed Gen2 and Gen3 environment, use Turtle switch port 51 for inbound,
and port 52 for outbound. On the Gen3 Fox switch, use port 39 for inbound and port 40 for
outbound. This is a requirement for mixing Gen2 and Gen3 racks in the same VDC. Ports 39 and 40
are not used in an all Gen3 environment with EX-Series racks.
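Taken together with the EX-Series rule earlier in this chapter (back-end port 41 inbound, port 45 outbound), the inter-rack port assignments can be captured in a small lookup table. This is a sketch; the dictionary keys are illustrative:

```python
# Inbound/outbound ports for linear (daisy-chain) rack-to-rack links.
INTER_RACK_PORTS = {
    "gen3_backend": {"inbound": 41, "outbound": 45},        # EX-Series to EX-Series
    "gen3_fox_mixed_gen2": {"inbound": 39, "outbound": 40},  # Gen3 Fox in a mixed VDC
    "gen2_turtle": {"inbound": 51, "outbound": 52},          # Gen2 Turtle in a mixed VDC
}
```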
Figure 30 Inter-rack switch connectivity - linear topology (daisy-chain) between an EX-Series rack and
a Gen2 U-Series, D-Series, or C-Series rack
Ring topology
For a more reliable network, the ends of the daisy-chain topology can be connected to create a ring network, as shown below. Two physical link breaks are required before the ring topology creates a split-brain condition in the private.4 network.
The ring topology is similar to the daisy-chain (line) topology, but it is more robust: it takes two points of failure to break the topology and cause a split-brain condition.
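The robustness claim can be illustrated with a short connectivity check (a sketch, not ECS code): a ring of rack switches survives any single link cut, while a daisy chain does not.

```python
from collections import deque

def is_connected(racks: int, links: set) -> bool:
    """True if every rack switch can still reach rack 0 over the given links."""
    adj = {i: [] for i in range(racks)}
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    seen, todo = {0}, deque([0])
    while todo:
        for nxt in adj[todo.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return len(seen) == racks

chain = {(i, i + 1) for i in range(3)}  # 4 racks daisy-chained
ring = chain | {(3, 0)}                 # same racks with the ends joined
assert not is_connected(4, chain - {(1, 2)})  # one cut splits the chain
assert is_connected(4, ring - {(1, 2)})       # the ring stays connected
```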
The inter-rack ring topology between EX-Series racks is shown in the following figure.
Figure 32 Inter-rack switch connectivity - ring topology
Star topology
The limitation of the daisy-chain and ring topologies is that they do not scale well for large installations. For ten or more ECS racks, add one or two aggregation switches to support the installation. For high availability, the recommended topology uses two aggregation switches with port-channel (VLT, vPC, MLAG) connectivity between them. If you use a single aggregation switch, both the Fox switch (BE1) and the Hound switch (BE2) connect to that switch; the impact is the loss of high availability at the aggregation layer. For connectivity to aggregation switches, use inbound port 41 on each back-end switch to link to the aggregation switch(es).
By using aggregation switches to connect all the intra-rack networks, the star topology provides better protection against split-brain conditions than either the linear or ring topology. With aggregation switches, link failures are isolated to a single intra-rack network in the private.4 network.
The aggregation switch(es) connecting to the intra-rack networks must be set up as a trunk and
allow VLAN traffic to flow between all ports in the inter-rack network.
The inter-rack star topology between EX-Series racks is shown in the following figure.
Figure 34 Inter-rack switch connectivity - star topology
Component: Back-end (BE) switches for private network connection
• Two Dell EMC S5148F 25 GbE 1U Ethernet switches with 48 x 25 GbE SFP ports and 6 x 100 GbE uplink ports.
• 2 x 100 GbE VLT cables per HA pair.

Component: Front-end (FE) switches for customer network connection
• Two optional Dell EMC S5148F 25 GbE 1U Ethernet switches can be obtained for public network connection, or the customer can provide their own 10 GbE or 25 GbE HA pair for the front end.
• If the customer provides their own front-end switches, they must supply all VLT cables, SFPs, or external connection cables.
• If Dell EMC S5148F 25 GbE front-end switches are used, the 25 GbE ports are configured to run at 10 GbE to connect to the EX300 nodes, and 2 x 100 GbE VLT cables are provided.
EX300 configurations
Learn about the EX300 ECS appliance configurations.
The front view of an EX300 rack with the minimum node configuration and an EX300 rack with the
maximum node configuration is shown in the following diagram. The rear view requires a brace
wherever there is empty space above or below an EX300 server.
Figure 35 EX300 minimum and maximum configurations
The EX300 appliance is available in the following configurations within a Dell EMC rack or a
customer-provided rack.
The following table lists the nodes, disks in each node, disk size, and RAW storage capacity.

Table 16 EX300 configurations

Nodes                        Disks per node   1 TB disks   2 TB disks   4 TB disks   8 TB disks
5 (minimum configuration)    12               60 TB        120 TB       240 TB       480 TB
6                            12               72 TB        144 TB       288 TB       576 TB
7                            12               84 TB        168 TB       336 TB       672 TB
8                            12               96 TB        192 TB       384 TB       768 TB
9                            12               108 TB       216 TB       432 TB       864 TB
10                           12               120 TB       240 TB       480 TB       960 TB
11                           12               132 TB       264 TB       528 TB       1.06 PB
12                           12               144 TB       288 TB       576 TB       1.15 PB
13                           12               156 TB       312 TB       624 TB       1.25 PB
14                           12               168 TB       336 TB       672 TB       1.34 PB
15                           12               180 TB       360 TB       720 TB       1.44 PB
16 (maximum configuration)   12               192 TB       384 TB       768 TB       1.54 PB
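The values in the EX300 configuration table above are consistent with nodes x 12 disks x disk size, with totals of 1000 TB or more shown as decimal petabytes rounded to two places. A quick sketch to check (the function name is illustrative):

```python
def ex300_raw(nodes: int, disk_tb: int) -> float:
    """RAW capacity as tabulated: TB below 1000 TB, otherwise PB to 2 places."""
    tb = nodes * 12 * disk_tb
    return round(tb / 1000, 2) if tb >= 1000 else float(tb)

# e.g. 11 nodes x 12 disks x 8 TB = 1056 TB, tabulated as 1.06 PB
```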
EX300 server
EX300 servers have the following standard features:
Features (1 TB, 2 TB, 4 TB, and 8 TB models)
Riser configuration:
• Each EX300 server supports up to eight PCI Express (PCIe) Gen3 expansion cards that can be installed on the system board using three expansion card risers.
• Config 1 (1B + 2B)
• Four x8 slots and rear storage
Processors: 1 x Intel Xeon Bronze 3106, 8 cores/8 threads, 1.7 GHz, 85 W, 11 MB cache, 2400 MHz DIMMs
PCIe Slot 1: BOSS controller card with one 480 GB M.2 stick, FH
HDDs in front slots: 1 TB 7.2k RPM 512n SATA HDD (1 TB model); 2 TB 7.2k RPM 512n SATA HDD (2 TB model); 4 TB 7.2k RPM 512n SATA HDD (4 TB model); 8 TB 7.2k RPM 512e SATA HDD (8 TB model)
LED indicators are on the left and right side of the server front panels.
The left control panel LED behavior is broken into two subsets, the light bar and the status LEDs.
The light bar also functions as a button. The light bar indicates chassis health and also functions as
System ID when pressed.
Status ID button
There are five status LEDs to indicate and identify any failed hardware components.
Figure 39 Status LEDs decoded view
1  Full-height PCIe expansion card slot (3): The PCIe expansion card slot (riser 1) connects up to three full-height PCIe expansion cards to the system. The Boot Optimized Storage Subsystem (BOSS) controller card with one M.2 stick is in the top PCIe slot (slot 1).
2  Half-height PCIe expansion card slot: The PCIe expansion card slot (riser 2) connects one half-height PCIe expansion card to the system.
3  Rear handle: The rear handle can be removed to enable external cabling of PCIe cards that are installed in PCIe expansion card slot 6.
4  Drive slots: The two rear 3.5-inch drive slots contain fillers.
6  NIC ports: The NIC ports that are integrated on the network daughter card (NDC) provide network connectivity. The two left 10 GbE data ports of each node connect to one of the data ports on the back-end switches. The two right 10 GbE data ports of each node connect to one of the data ports on the front-end switches.
7  USB port (2): The USB ports are 9-pin and 3.0-compliant. These ports enable you to connect USB devices to the system.
10 iDRAC9 dedicated port: Enables you to remotely access iDRAC. For more information, see the Integrated Dell Remote Access Controller 9 (iDRAC9) User's Guide.
11 System identification button: The System Identification (ID) button is available on the front and back of the systems. Press the button to identify a system in a rack by turning on the system ID button. You can also use the system ID button to reset iDRAC and to access BIOS using the step-through mode.
The EX300 appliance connections to 0U PDU and 2U PDU outlets are listed in the following tables.
PDU cabling   PS1 outlet numbers (Zone B)   PS2 outlet numbers (Zone A)   Line cord per zone (single phase)
Node 16 21 21 2
Node 15 20 20 2
Node 14 19 19 2
Node 13 18 18 1
Node 12 17 17 1
Node 11 16 16 1
Node 10 15 15 1
Node 9 14 14 1
Node 8 13 13 1
FE 2 12 12 1
FE 1 11 11 1
BE 2 9 9 1
BE 1 8 8 1
Node 7 7 7 1
Node 6 6 6 1
Node 5 5 5 1
Node 4 4 4 1
Node 3 3 3 1
Node 2 2 2 1
Node 1 1 1 1
2U PDU cabling   PS1 outlet numbers (Zone B)   PS2 outlet numbers (Zone A)   Line cord per zone (single phase)
Node 16 21 21 3
Node 15 12 12 3
Node 14 11 11 3
Node 13 10 10 3
Node 12 9 9 3
Node 11 20 20 2
Node 10 19 19 2
Node 9 18 18 2
Node 8 17 17 2
FE 2 13 13 1
FE 1 1 1 1
BE 2 14 14 1
BE 1 2 2 1
Node 7 7 7 2
Node 6 6 6 2
Node 5 5 5 2
Node 4 16 16 1
Node 3 15 15 1
Node 2 4 4 1
Node 1 3 3 1
In the following diagrams, the switches plug into the front of the rack, and their cabling routes through the rails to the rear.
iDRAC port
The EX300 node iDRAC port to the Fox switch (BE1) port connections are listed in the following
table.
The front-end switch and back-end switch connections to an EX300 node are shown in the
following diagram.
Figure 51 Front-end and back-end switch connections to an EX300 node
The numbered front-end switch ports used for connecting to the ports on the EX300 nodes are
shown in the following diagram. Port 1 on the Hare switch (FE2) connects to port 4 on Node 1.
Port 2 on the Hare switch (FE2) connects to port 4 on Node 2, and so on. Similarly, Port 1 on the
Rabbit switch (FE1) connects to port 3 on Node 1. Port 2 on the Rabbit switch (FE1) connects to
port 3 on Node 2, and so on.
Figure 52 Node ports on front-end switches
[Figure shows the port layout of the Hare Switch (FE2) and Rabbit Switch (FE1): data ports 1-48, VLT1/VLT2 uplinks, and customer ports 51-54.]
Figure 53 Back-end switch, front-end switch, and iDRAC ports on an EX300 node
The numbered back-end switch ports used for connecting to the ports on the EX300 nodes are
shown in the following diagram. Port 1 on the Hound switch (BE2) connects to port 2 on Node 1.
Port 2 on the Hound switch (BE2) connects to port 2 on Node 2, and so on. Similarly, Port 1 on the
Fox switch (BE1) connects to port 1 on Node 1. Port 2 on the Fox switch (BE1) connects to port 1
on Node 2, and so on.
Figure 54 Node ports on back-end switches
[Figure shows the port layout of the Hound Switch (BE2) and Fox Switch (BE1): data ports 1-48, VLT1/VLT2 uplinks, and ports 51-54.]
The EX300 node port to the iDRAC, back-end switch, and front-end switch port connections are
listed in the following table.
Figure 55 EX300 node network port cabling connections
The EX300 node network cable labeling is listed in the following table.
Figure 56 EX300 node network cable labeling
The ECS intra-rack back-end management networks are connected to create the inter-rack topology. Connecting port channel 100 or 101 to a private switch in another ECS intra-rack network creates the inter-rack network. Through these connections, nodes in any intra-rack network can communicate with any other node on the inter-rack network. There are three types of topologies you can use to connect the intra-rack LANs into an inter-rack network:
l Daisy chain or line topology
l Ring topology
l Star topology
Linear or daisy chain topology
The simplest way to connect the intra-rack networks requires no extra equipment: all the private switches are connected end to end in a linear or daisy chain fashion, as shown in the following figure.
Figure 58 Linear or Daisy Chain topology
This linear or daisy chain topology is the least dependable setup: a single link failure partitions the network into the split-brain condition shown in the following figure.
Figure 59 Split-brain topology
The inter-rack linear topology between EX-Series racks is shown in the following figure.
Figure 60 Inter-rack switch connectivity - linear topology (daisy-chain) between EX Series racks
The inter-rack linear topology between an EX-Series rack and a Gen2 rack is shown in the
following figure. For a mixed Gen2 and Gen3 environment, use Turtle switch port 51 for inbound,
and port 52 for outbound. On the Gen3 Fox switch, use port 39 for inbound and port 40 for
outbound. This is a requirement for mixing Gen2 and Gen3 racks in the same VDC. Ports 39 and 40
are not used in an all Gen3 environment with EX-Series racks.
Figure 61 Inter-rack switch connectivity - linear topology (daisy-chain) between an EX-Series rack and
a Gen2 U-Series, D-Series, or C-Series rack
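The inbound/outbound port rule above can be summarized in a small lookup. This is an illustrative sketch only, not an ECS tool; the dictionary keys are mine:

```python
# Inter-rack daisy-chain uplink ports by rack generation, per the rule above.
# In an all-Gen3 EX-Series environment, Fox ports 39/40 are not used.
INTER_RACK_PORTS = {
    "Gen2 Turtle": {"inbound": 51, "outbound": 52},
    "Gen3 Fox": {"inbound": 39, "outbound": 40},
}

def uplink_port(switch, direction):
    """Return the switch port to cable for an inter-rack daisy-chain link."""
    return INTER_RACK_PORTS[switch][direction]

print(uplink_port("Gen2 Turtle", "inbound"))  # 51
print(uplink_port("Gen3 Fox", "outbound"))    # 40
```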
Ring topology
For a more reliable network, the two ends of the daisy chain topology can be connected to create a ring network. The ring topology is very similar to the daisy chain topology, but it is more robust: two physical link breaks are required to partition the topology and create a split-brain issue in the private.4 network.
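This difference in resilience can be checked with a small connectivity sketch. The rack names and the four-rack example below are illustrative, not from a specific installation:

```python
from itertools import combinations

def is_split(links, racks):
    """True if some rack cannot reach every other rack over the given links."""
    adj = {r: set() for r in racks}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    # Breadth-first search from the first rack
    seen, stack = {racks[0]}, [racks[0]]
    while stack:
        for nbr in adj[stack.pop()]:
            if nbr not in seen:
                seen.add(nbr)
                stack.append(nbr)
    return seen != set(racks)

def min_cuts_to_split(links, racks):
    """Smallest number of inter-rack link failures that partitions the network."""
    for k in range(1, len(links) + 1):
        for cut in combinations(links, k):
            if is_split([l for l in links if l not in cut], racks):
                return k
    return None

racks = ["rack1", "rack2", "rack3", "rack4"]
line = [("rack1", "rack2"), ("rack2", "rack3"), ("rack3", "rack4")]
ring = line + [("rack4", "rack1")]

print(min_cuts_to_split(line, racks))  # 1: one break splits the daisy chain
print(min_cuts_to_split(ring, racks))  # 2: the ring survives any single break
```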
The inter-rack ring topology between EX-Series racks is shown in the following figure.
Figure 63 Inter-rack switch connectivity - ring topology
Star topology
The limitation of the daisy chain and ring topologies is that they do not scale well for large installations. For ten or more ECS racks, add one or two aggregation switches to support the installation. For high availability, the recommended topology uses two aggregation switches with port channel (VLT, vPC, MLAG) connectivity between them. If you use a single aggregation switch, both the Fox switch (BE1) and the Hound switch (BE2) connect to that switch; the impact is the loss of high availability at the aggregation layer. For connectivity to the aggregation switches, use inbound port 41 on each back-end switch.
By using one or two aggregation switches to connect all the intra-rack networks, the star topology provides better protection against split-brain than either the daisy chain or the ring topology: link failures are isolated to a single intra-rack network in the private.4 network.
The aggregation switch ports connecting to the intra-rack networks must be set up as trunks that allow VLAN traffic to flow between all ports in the inter-rack network.
The inter-rack star topology between EX-Series racks is shown in the following figure.
Figure 65 Inter-rack switch connectivity - star topology
Component Description

Customer-provided 40U rack: The requirements for customer-provided racks to accommodate the EX3000 nodes are described in the ECS EX3000 Third-Party Rack Installation Guide.

Back-end switches for private network connection:
l Two Dell EMC S5148F 25 GbE 1U Ethernet switches with 48 x 25 GbE SFP ports and 6 x 100 GbE uplink ports
l 2 x 100 GbE VLT cables per HA pair

Front-end switches for customer network connection:
l Two optional Dell EMC S5148F 25 GbE 1U Ethernet switches can be obtained for public network connection, or the customer can provide their own 25 GbE HA pair for the front end.
l If the customer provides their own front-end switches, they must supply all VLT cables, SFPs, or external connection cables.
l If Dell EMC S5148F 25 GbE front-end switches are used, 25 GbE ports connect to the EX3000 nodes and 2 x 100 GbE VLT cables are provided.
l EX3000S node has 2 x 1600W power supplies (hot swappable); EX3000D node has 4 x 1100W power supplies (hot swappable)
l LSI 9361-8i SAS Controller
l Each node has 4 x 25 GbE network ports
EX3000 configurations
Describes the EX3000 ECS appliance configurations.
The front views of an EX3000S appliance and an EX3000D appliance, in both Dell EMC and customer-provided racks, with the minimum and maximum node configurations, are shown in the following diagrams.
Figure 66 EX3000S minimum and maximum configurations for single node chassis
Figure 67 EX3000D minimum and maximum configurations for dual node chassis
There are five SKUs of EX3000 nodes, all with the same configuration except for drive load and number of nodes:
l EX3000S single node with 45, 60, or 90 12 TB HDDs
l EX3000D dual nodes with 60 or 90 12 TB HDDs total
EX3000S and EX3000D chassis cannot be mixed within a customer-provided rack.
The EX3000 appliance is available in the following configurations in both Dell EMC and customer-provided racks.
Lists the nodes, disks in each node, disk size, and raw storage capacity.

Nodes                       Disks in each node   Disk size   Raw storage capacity
5 (minimum configuration)   45                   12 TB       2.7 PB
                            60                   12 TB       3.6 PB
                            90                   12 TB       5.4 PB
6                           45                   12 TB       3.24 PB
                            60                   12 TB       4.32 PB
                            90                   12 TB       6.48 PB
7                           45                   12 TB       3.78 PB
                            60                   12 TB       5.04 PB
                            90                   12 TB       7.56 PB
8 (maximum configuration)   45                   12 TB       4.32 PB
                            60                   12 TB       5.76 PB
                            90                   12 TB       8.64 PB
Lists the nodes, disks in each node, disk size, and raw storage capacity.
Table 25 EX3000D dual node chassis configurations

Nodes                        Disks in each node   Disk size   Raw storage capacity
6 (minimum configuration)    30                   12 TB       2.16 PB
                             45                   12 TB       3.24 PB
8                            30                   12 TB       2.88 PB
                             45                   12 TB       4.32 PB
10                           30                   12 TB       3.60 PB
                             45                   12 TB       5.40 PB
12                           30                   12 TB       4.32 PB
                             45                   12 TB       6.48 PB
14                           30                   12 TB       5.04 PB
                             45                   12 TB       7.56 PB
16 (maximum configuration)   30                   12 TB       5.76 PB
                             45                   12 TB       8.64 PB
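The raw capacities above are simple products of node count, disks per node, and disk size. A minimal sketch of that arithmetic (the helper name is mine, not from the guide):

```python
def raw_capacity_pb(nodes, disks_per_node, disk_tb=12):
    """Raw storage in PB: nodes x disks per node x disk size (1 PB = 1000 TB)."""
    return nodes * disks_per_node * disk_tb / 1000

# EX3000S examples from the table above
print(raw_capacity_pb(5, 45))   # 2.7 PB (minimum configuration)
print(raw_capacity_pb(8, 90))   # 8.64 PB (maximum configuration)

# EX3000D examples (disks counted per node)
print(raw_capacity_pb(6, 30))   # 2.16 PB (minimum configuration)
print(raw_capacity_pb(16, 45))  # 8.64 PB (maximum configuration)
```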
EX3000 server
The EX3000 4U server contains the EX3000 chassis and either one server sled (in the EX3000S
single node configuration) or two server sleds (in the EX3000D dual node configuration). EX3000
servers have the following standard features:
l One-node or two-node servers (4U) with two CPUs per node
l Dual 8-core Broadwell CPUs per node: E5-2620 v4, 8-core/16-thread, 2.1 GHz, 20 MB cache, 85 W
l 4 x 16 GB RDIMM, 2400 MT/s, dual rank, x8 data width
l One system disk per node (480 GB SSD)
l LED indicators for each node
l Dual hot-swap chassis power supplies per node
l One SAS adapter with two SAS ports per node
The EX3000 physical dimensions are listed in the following table.
1. Server sled (one or two depending on whether the EX3000 system is single- or dual-node
chassis configuration)
2. Fan module (6)
3. 3.5-inch HDDs (up to 90)
4. PSU unit (2 for an EX3000S single node and 4 for an EX3000D dual node)
Figure 71 External view of server sled A in an EX3000D dual node chassis with two sleds
For the EX3000S single-node system, a dummy sled is installed over the bottom sled A
compartment and there are air flow covers over the two empty power supply slots.
Figure 72 Front-panel features and indicators
1 Power indicator The power indicator glows when the system is turned on.
3 Sled A HDD fault status indicator. The indicator blinks amber if an HDD
experiences an issue.
4 System board status indicator If the system is on, and in good health, the indicator glows solid blue.
The indicator blinks amber if the system is in standby, and if any
issue exists (for example, a failed fan or HDD).
5 Power button l The power button controls the PSU output to the system.
l Note: On ACPI-compliant operating systems (OSs), turning
off the system using the power button causes the system to
perform a graceful shutdown before power to the system is
turned off.
6 System identification button l The identification button can be used to locate a particular
system within a rack.
l Press to toggle the system ID on and off.
l If the system stops responding during POST, press and hold the
system ID button for more than five seconds to enter BIOS
progress mode.
l To reset iDRAC (if not disabled in F2 iDRAC setup) press and
hold the button for more than 15 seconds.
7 Power indicator The power indicator glows when the system is turned on.
9 Sled B HDD fault status indicator l The indicator blinks amber if an HDD experiences an issue.
l Note: Features of Sled B are for dual-node systems only.
10 System board status indicator If the system is on, and in good health, the indicator glows solid blue.
The indicator blinks amber if the system is in standby, and if any
issue exists (for example, a failed fan or HDD).
11 Power button l The power button controls the PSU output to the system.
l Note: On ACPI-compliant operating systems (OSs), turning
off the system using the power button causes the system to
perform a graceful shutdown before power to the system is
turned off.
12 System identification button l The identification button can be used to locate a particular
system within a rack.
l Press to toggle the system ID on and off.
l If the system stops responding during POST, press and hold the
system ID button for more than five seconds to enter BIOS
progress mode.
l To reset iDRAC (if not disabled in F2 iDRAC setup) press and
hold the button for more than 15 seconds.
6 USB connector Enables you to connect USB devices to the system. The port is USB
2.0-compliant.
7 SD vFlash card slot Provides persistent on-demand local storage and a custom
deployment environment that allows automation of server
configuration, scripts and imaging. For more information, see the
Integrated Dell Remote Access Controller 9 (iDRAC9) User’s Guide.
8 USB connector Enables you to connect USB devices to the system. The port is USB
3.0-compliant.
9 Dedicated Ethernet port Dedicated management port on the iDRAC ports card.
10 System identification button l The identification button can be used to locate a particular
system within a rack.
l Press to toggle the system ID on and off.
l If the system stops responding during POST, press and hold the
system ID button for more than five seconds to enter BIOS
progress mode.
l To reset iDRAC (if not disabled in F2 iDRAC setup) press and
hold the button for more than 15 seconds.
13 Power button l The power button controls the PSU output to the system.
l Note: On ACPI-compliant operating systems (OSs), turning
off the system using the power button causes the system to
perform a graceful shutdown before power to the system is
turned off.
16 Power supply units Two redundant power supply units (PSUs) for sled A.
17 Power supply units Two redundant power supply units (PSUs) for sled B.
Note: In the EX3000S single-node system, a dummy sled (sled B) is installed over the sled A compartment, and two dummy PSUs are installed over the PSU slots for sled B.
1 - release button
2 - 3.5-inch HDD
3 - HDD carrier handle
4 - HDD carrier
The EX3000S and EX3000D appliance connections to PDU outlets are listed in the following
tables.
Lists the EX3000S PDU cabling.
Table 29 EX3000S PDU cabling
PDU cabling   Node A - PS1 outlet numbers   Node A - PS2 outlet numbers   Switch PS2 outlet numbers   Switch PS1 outlet numbers   Line cord number per zone (single phase)
FE 2 13 13 1
FE 1 1 1 1
Chassis 8 22 22 3
Chassis 7 11 11 3
Chassis 6 9 9 3
Chassis 5 18 18 2
BE 2 14 14 1
BE 1 2 2 1
Service tray 8 2
Light Bar 20 2
Chassis 4 7 7 2
Chassis 3 5 5 2
Chassis 2 15 15 1
Chassis 1 3 3 1
PDU cabling   Node B - PS1 outlet numbers   Node B - PS2 outlet numbers   Node A - PS1 outlet numbers   Node A - PS2 outlet numbers   Switch PS2 outlet numbers   Switch PS1 outlet numbers   Line cord number per zone (single phase)
FE 2 13 13 1
FE 1 1 1 1
Chassis 8 22 23 23 22 3
Chassis 7 11 21 21 11 3
Chassis 6 9 10 10 9 3
Chassis 5 18 19 19 18 2
BE 2 14 14 1
BE 1 2 2 1
Service tray 8 2
Light Bar 20 2
Chassis 4 7 17 17 7 2
Chassis 3 5 6 6 5 2
Chassis 2 15 16 16 15 1
Chassis 1 3 4 4 3 1
Each EX3000 install kit contains 93" black and gray power cords and 118" gray and black power cords. Both the 93" and 118" cords are sent with third-party and Dell EMC expansion nodes; however, Dell EMC racks ship from the factory with 93" AC cables. Use only the appropriate cable length for the cabinet position being installed, as shown in the following diagrams. EX3000 systems shipped for a third-party rack require the extra 118" cables, while Dell EMC racks need only the 93" cables. After you complete the power cabling for the EX3000 appliance, an extra pair of unused power cords remains. In the following diagram, the switches plug into the front of the rack and route through the rails to the rear.
The legends in the following diagrams map colored cables to part numbers and cable lengths.
The EX3000D node iDRAC port to the Fox switch (BE1) port connections are listed in the following
table.
The front-end switch and back-end switch connections to an EX3000S node are shown in the
following diagram.
Figure 87 Front-end and back-end switch connections to an EX3000S node
The front-end switch and back-end switch connections to an EX3000D node are shown in the
following diagram.
Figure 88 Front-end and back-end switch connections to an EX3000D node
The numbered front-end switch ports used for connecting to the ports on the EX3000 nodes are
shown in the following diagram. Port 1 on the Hare switch (FE2) connects to port 4 on Node 1.
Port 2 on the Hare switch (FE2) connects to port 4 on Node 2, and so on. Similarly, Port 1 on the
Rabbit switch (FE1) connects to port 3 on Node 1. Port 2 on the Rabbit switch (FE1) connects to
port 3 on Node 2, and so on.
Figure 89 Node ports on front-end switches
[Faceplate diagram: two front-end switches, each with node-facing ports 1-48, VLT ports, and uplink ports 51-54; the Rabbit Switch (FE1) faceplate and the customer ports are labeled.]
Figure 90 Back-end switch, front-end switch, and iDRAC ports on an EX3000S node
Table 33 Legend for back-end switch, front-end switch, and iDRAC ports on an EX3000S node
Legend Description
5 iDRAC Port
The numbered back-end switch ports used for connecting to the ports on the EX3000 nodes are
shown in the following diagram. Port 1 on the Hound switch (BE2) connects to port 2 on Node 1.
Port 2 on the Hound switch (BE2) connects to port 2 on Node 2, and so on. Similarly, Port 1 on the
Fox switch (BE1) connects to port 1 on Node 1. Port 2 on the Fox switch (BE1) connects to port 1
on Node 2, and so on.
Figure 91 Node ports on back-end switches
[Faceplate diagram: the Hound Switch (BE2) and Fox Switch (BE1), each with node-facing ports 1-48, VLT ports, and uplink ports 51-54.]
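The cabling rule described above is uniform across the four switches: switch port N always cables to node N, and the node-side port is fixed per switch (Fox/BE1 to node port 1, Hound/BE2 to node port 2, Rabbit/FE1 to node port 3, Hare/FE2 to node port 4). A small lookup sketch of that rule (the function and table are illustrative, not an ECS tool):

```python
# Node-side port used by each switch, per the cabling rule above.
NODE_PORT_FOR_SWITCH = {
    "Fox (BE1)": 1,
    "Hound (BE2)": 2,
    "Rabbit (FE1)": 3,
    "Hare (FE2)": 4,
}

def node_connection(switch, switch_port):
    """Return (node number, node port) cabled to the given switch port."""
    if not 1 <= switch_port <= 48:
        raise ValueError("node-facing switch ports are 1-48")
    # Switch port N connects to node N, on the fixed node port for that switch.
    return switch_port, NODE_PORT_FOR_SWITCH[switch]

print(node_connection("Hound (BE2)", 1))   # (1, 2): node 1, node port 2
print(node_connection("Rabbit (FE1)", 7))  # (7, 3): node 7, node port 3
```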
The EX3000 node port connections to the iDRAC, back-end switch, and front-end switch are shown in the following figure.
Figure 92 EX3000 node network port cabling connections
The EX3000 node network cable labeling is shown in the following figure.
Figure 93 EX3000 node network cable labeling
For information on connecting multiple ECS appliances, see Network connections between
multiple ECS appliances in a single site on page 50.