FlexPod Deploy
David Antkowiak
Ramesh Isaac
Jon Benedict
Chris Reno
http://www.netapp.com/us/technology/flexpod/ http://www.cisco.com/en/US/netsol/ns964/index.html
Audience
This document describes the basic architecture of FlexPod as well as the general procedures for deploying the base FlexPod system. The intended audience for this document includes, but is not limited to, sales engineers, field consultants, professional services personnel, IT managers, partner engineering personnel, and customers who want to deploy the base FlexPod architecture.
Note
For more detailed deployment information, Cisco and NetApp partners should contact their local account teams or visit http://www.netapp.com/us/technology/flexpod/.
Corporate Headquarters: Cisco Systems, Inc., 170 West Tasman Drive, San Jose, CA 95134-1706 USA
FlexPod Overview
FlexPod Architecture
Cisco and NetApp have provided documentation around best practices for building the FlexPod shared infrastructure stack. As part of the FlexPod offering, Cisco and NetApp designed a reference architecture with a technical specifications sheet and bill of materials that is highly modular, or pod-like. Although each customer's FlexPod system may vary in its exact configuration, once a FlexPod unit is built it can easily be scaled as requirements and demand change. This includes scaling both up (adding additional resources within a FlexPod unit) and out (adding additional FlexPod units).

Specifically, FlexPod is a defined set of hardware and software that serves as a foundation for data center deployments. FlexPod includes NetApp storage, Cisco networking, and the Cisco Unified Computing System in a single package in which the computing and storage fit in one data center rack and the networking resides in a separate rack. Due to port density, the networking components can accommodate multiple instances of FlexPod systems. Figure 1 shows the FlexPod components. The solution can be scaled while still maintaining its integrity, either by adding more FlexPod units or by adding to the solution components. A number of solutions can be built on top of one or more FlexPod units, providing enterprise flexibility, supportability, and manageability.

Figure 1 and Figure 2 outline the possible NetApp storage controller interconnect choices. The topology in Figure 1 is an FCoE-only implementation, while the topology in Figure 2 adds the option of native FC connectivity. These interconnects are not interdependent and may be deployed together or separately to meet customer hypervisor or application support requirements. Both deployments are fully supported.
Figure 1
FlexPod Components (FCoE-Only Storage Connectivity)

(Figure callouts: Cisco Nexus 5548 access switches; Qty 2 Cisco UCS 6120 Fabric Interconnects with 2 x 10 GbE with FCoE per Fabric Extender; Qty 6 Cisco UCS B200 M2; Qty 9 Cisco UCS B250 M2; Qty 6 Cisco UCS 2104 Fabric Extenders; Qty 3 Cisco UCS 5108 Chassis; NetApp FAS3270 HA pair with DS2246 disk shelves of 600 GB disks.)
Figure 2
FlexPod Components (with Native FC Connectivity Option)

(Figure callouts: Cisco Nexus 5548 access switches; Qty 2 Cisco UCS 6120 Fabric Interconnects with 2 x 10 GbE with FCoE per Fabric Extender; Qty 6 Cisco UCS B200 M2; Qty 9 Cisco UCS B250 M2; Qty 6 Cisco UCS 2104 Fabric Extenders; Qty 3 Cisco UCS 5108 Chassis; EtherChannel of 2 x 10 GbE to a NetApp FAS3210 HA pair with Qty 4 DS2246 disk shelves.)
The default hardware is detailed in the FlexPod technical specifications and includes two Cisco Nexus 5548 switches, two Cisco UCS 6120 fabric interconnects, and three chassis of Cisco UCS blades with two fabric extenders per chassis. Storage is provided by a NetApp FAS3210CC (HA configuration within a single chassis) with accompanying disk shelves. All systems and fabric links feature redundancy, providing end-to-end high availability. This is the default base design, but each of the components can be scaled flexibly to support a specific customer's business requirements. For example, more (or different) blades and chassis could be deployed to increase compute capacity, additional disk shelves could be deployed to improve I/O capacity and throughput, or special hardware or software features could be added to introduce new capabilities (such as NetApp Flash Cache for dedupe-aware caching). The remainder of this document guides the reader through the steps necessary to deploy the base architecture shown above, from physical cabling through compute and storage configuration.
Cabling Information
The following information is provided as a reference for cabling the physical equipment in a FlexPod environment. The tables include both local and remote device and port locations to simplify cabling requirements.
Note
The following tables are for the prescribed and supported configuration of the FAS3210 running Data ONTAP 8.0.1. This configuration leverages a dual-port 10GbE add-on adapter and the on-board SAS ports for disk shelf connectivity. Onboard FC storage target ports are still supported for legacy implementations. For any modifications of this prescribed architecture, consult the currently available Interoperability Matrix Tool (IMT): http://now.netapp.com/matrix.
Note
See the Site Requirements guide when deploying a storage system to ensure power and cooling requirements are met: http://now.netapp.com/NOW/public/knowledge/docs/hardware/NetApp/site/pdf/site.pdf.
Note
The FlexPod deployment guide assumes that out-of-band management ports are plugged into existing management infrastructure at the deployment site.
Note
Be sure to cable as detailed below, because failure to do so will necessitate changes to the following deployment procedures as specific port locations are mentioned.
Note
It is possible to order a FAS3210A system in a different configuration than the one prescribed below. Ensure that your configuration matches the one described in the tables and diagrams below before starting.
Note
The tables below indicate recommended cabling for both FC- and FCoE-based architectures.
Note
For FCoE-based architectures, the Fibre Channel protocol is addressed with the FCoE storage target adapters as indicated in Table 1.
Note
For FC-based architectures, the Fibre Channel protocol is addressed with native FC storage target ports as indicated in Table 2.
Table 1    FlexPod Ethernet Cabling Information

Local Port    Connection    Remote Device    Remote Port (grouped by local device)

Cisco Nexus 5548 A
  Eth1/1    10GbE or FCoE    NetApp Controller A    e2a
  Eth1/2    10GbE or FCoE    NetApp Controller B    e2a
  Eth1/5    10GbE            Cisco Nexus 5548 B     Eth1/5
  Eth1/6    10GbE            Cisco Nexus 5548 B     Eth1/6
  Eth1/7    10GbE            Cisco UCS Fabric Interconnect A    Eth1/7
  Eth1/8    10GbE            Cisco UCS Fabric Interconnect B    Eth1/7
  MGMT0     100MbE           100MbE Management Switch           Any

Cisco Nexus 5548 B
  Eth1/1    10GbE or FCoE    NetApp Controller A    e2b
  Eth1/2    10GbE or FCoE    NetApp Controller B    e2b
  Eth1/5    10GbE            Cisco Nexus 5548 A     Eth1/5
  Eth1/6    10GbE            Cisco Nexus 5548 A     Eth1/6
  Eth1/7    10GbE            Cisco UCS Fabric Interconnect A    Eth1/8
  Eth1/8    10GbE            Cisco UCS Fabric Interconnect B    Eth1/8
  MGMT0     100MbE           100MbE Management Switch           Any

NetApp Controller A
  e0M    100MbE           100MbE Management Switch    Any
  e0P    1GbE             SAS shelves                 ACP port
  e2a    10GbE or FCoE    Cisco Nexus 5548 A          Eth1/1
  e2b    10GbE or FCoE    Cisco Nexus 5548 B          Eth1/1

NetApp Controller B
  e0M    100MbE           100MbE Management Switch    Any
  e0P    1GbE             SAS shelves                 ACP port
  e2a    10GbE or FCoE    Cisco Nexus 5548 A          Eth1/2
  e2b    10GbE or FCoE    Cisco Nexus 5548 B          Eth1/2

Cisco UCS Fabric Interconnect A
  Eth1/1    10GbE/FCoE    Chassis 1 FEX A    port 1
  Eth1/2    10GbE/FCoE    Chassis 1 FEX A    port 2
  Eth1/3    10GbE/FCoE    Chassis 2 FEX A    port 1
  Eth1/4    10GbE/FCoE    Chassis 2 FEX A    port 2
  Eth1/5    10GbE/FCoE    Chassis 3 FEX A    port 1
  Eth1/6    10GbE/FCoE    Chassis 3 FEX A    port 2
  Eth1/7    10GbE         Cisco Nexus 5548 A    Eth1/7
  Eth1/8    10GbE         Cisco Nexus 5548 B    Eth1/7
  MGMT0     100MbE        100MbE Management Switch    Any
  L1        1GbE          Cisco UCS Fabric Interconnect B    L1
  L2        1GbE          Cisco UCS Fabric Interconnect B    L2

Cisco UCS Fabric Interconnect B
  Eth1/1    10GbE/FCoE    Chassis 1 FEX B    port 1
  Eth1/2    10GbE/FCoE    Chassis 1 FEX B    port 2
  Eth1/3    10GbE/FCoE    Chassis 2 FEX B    port 1
  Eth1/4    10GbE/FCoE    Chassis 2 FEX B    port 2
  Eth1/5    10GbE/FCoE    Chassis 3 FEX B    port 1
  Eth1/6    10GbE/FCoE    Chassis 3 FEX B    port 2
  Eth1/7    10GbE         Cisco Nexus 5548 A    Eth1/8
  Eth1/8    10GbE         Cisco Nexus 5548 B    Eth1/8
  MGMT0     100MbE        100MbE Management Switch    Any
  L1        1GbE          Cisco UCS Fabric Interconnect A    L1
  L2        1GbE          Cisco UCS Fabric Interconnect A    L2

Cisco Nexus 1010 A (only used with vSphere)
  Eth1    1GbE    Cisco Nexus 5548 A
  Eth2    1GbE    Cisco Nexus 5548 B

Cisco Nexus 1010 B (only used with vSphere)
  Eth1    1GbE    Cisco Nexus 5548 A
  Eth2    1GbE    Cisco Nexus 5548 B
Table 2    FlexPod Fibre Channel Cabling Information

Local Port    Connection    Remote Device (grouped by local device)

Cisco Nexus 5548 A
  FC2/1    FC    NetApp Controller A
  FC2/2    FC    NetApp Controller B
  FC2/3    FC    Cisco UCS Fabric Interconnect A
  FC2/4    FC    Cisco UCS Fabric Interconnect A

Cisco Nexus 5548 B
  FC2/1    FC    NetApp Controller A
  FC2/2    FC    NetApp Controller B
  FC2/3    FC    Cisco UCS Fabric Interconnect B
  FC2/4    FC    Cisco UCS Fabric Interconnect B

NetApp Controller A
  0c    FC    Cisco Nexus 5548 A
  0d    FC    Cisco Nexus 5548 B

NetApp Controller B
  0c    FC    Cisco Nexus 5548 A
  0d    FC    Cisco Nexus 5548 B

Cisco UCS Fabric Interconnect A
  FC2/1    FC    Cisco Nexus 5548 A
  FC2/2    FC    Cisco Nexus 5548 A

Cisco UCS Fabric Interconnect B
  FC2/1    FC    Cisco Nexus 5548 B
  FC2/2    FC    Cisco Nexus 5548 B
Figure 3
FlexPod Cabling

(Figure shows the physical cabling among the two Cisco Nexus 5548 switches, the two Cisco UCS 6120 fabric interconnects, the Cisco UCS 5108 chassis with UCS 2104XP fabric extenders, and the NetApp controllers with their disk shelves. Legend: used FCoE ports, used 1 GbE ports, and used SAS ports for disk shelf connectivity.)
The NetApp storage configuration results in the following:

- Establishment of a functional Data ONTAP 8.0.1 failover cluster with proper licensing
- Creation of data aggregates
- Creation of flexible volumes
- Configuration of NFS exports, if using NFS for infrastructure volumes
- Creation of the infrastructure vFiler unit

The following actions are necessary to configure the NetApp storage controllers for use in a FlexPod environment:

- Assign the storage controller disk ownership.
- Ensure Data ONTAP 8.0.1 is installed.
Note
Upgrade or downgrade to Data ONTAP 8.0.1 if necessary.

- Set up Data ONTAP 8.0.1.
- Install Data ONTAP to the on-board flash storage.
- Install the required licenses.
- Start the FCP service and enable the proper FC port configuration.
- Enable the active-active configuration between the two storage systems.
- Create the data aggregate aggr1.
- Enable 802.1q VLAN trunking and add the NFS VLAN.
- Harden storage system logins and security.
- Create the SNMP requests role and assign SNMP login privileges.
- Create the SNMP management group and assign the SNMP request role to it.
- Create the SNMP user and assign it to the SNMP management group.
- Enable SNMP on the storage controllers.
- Delete SNMP v1 communities from the storage controllers.
- Set the SNMP contact information for each of the storage controllers.
- Set the SNMP location information for each of the storage controllers.
- Establish SNMP trap destinations.
- Reinitialize SNMP on the storage controllers.
- Enable Flash Cache.
- Create the necessary infrastructure volumes (flexible volumes) for infrastructure services.
- Create the infrastructure IP space.
- Create the infrastructure vFiler units.
- Map the necessary infrastructure volumes to the infrastructure vFiler units.
- Set the priority levels for the volumes.
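The following is a minimal Data ONTAP 7-Mode command sketch of several of these steps, assuming the interface and VLAN values used elsewhere in this document (e2a/e2b toward the Nexus switches, NFS VLAN 3101, and the vFiler name infrastructure_1_vfiler). The license codes, aggregate disk count, volume names and sizes, ipspace name, and IP address are placeholders to be replaced with the values recorded in the worksheet tables:

Licensing, FCP, and controller failover:
  controller-A> license add <cluster_license_code>
  controller-A> license add <fcp_license_code>
  controller-A> license add <nfs_license_code>
  controller-A> fcp start
  controller-A> cf enable

Data aggregate, VLAN trunking, and Flash Cache:
  controller-A> aggr create aggr1 24
  controller-A> ifgrp create lacp vif0 -b ip e2a e2b
  controller-A> vlan create vif0 3101
  controller-A> options flexscale.enable on

Infrastructure volumes, IP space, and vFiler unit:
  controller-A> vol create infra_root aggr1 100m
  controller-A> vol create infra_datastore_1 aggr1 500g
  controller-A> ipspace create infra_ipspace
  controller-A> ipspace assign infra_ipspace vif0-3101
  controller-A> vfiler create infrastructure_1_vfiler -s infra_ipspace -i 192.168.101.10 /vol/infra_root
  controller-A> vfiler add infrastructure_1_vfiler /vol/infra_datastore_1

Repeat the equivalent commands on controller B with its own values.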
The Cisco Nexus 5548 configuration results in the following:

- Establish a functional pair of Cisco Nexus 5548 switches with proper licensing and features enabled.
- Establish connectivity between FlexPod elements, including the use of traditional and virtual port channels.
- Establish connectivity to existing data center infrastructure.
The following actions are necessary to configure the Cisco Nexus 5548 switches for use in a FlexPod environment.
- Execute the Cisco Nexus 5548 setup script.
- Enable the appropriate Cisco Nexus features and licensing.
- Set global configurations.
- Create necessary VLANs including NFS and management.
- Add individual port descriptions for troubleshooting.
- Create necessary port channels including the vPC peer-link.
- Add Port Channel configurations.
- Configure virtual Port Channels (vPCs) to UCS fabric interconnects and NetApp controllers.
- Configure uplinks into existing network infrastructure, preferably by using vPC.
- Save the configuration.
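As an illustration, the fragment below sketches the feature, VLAN, and vPC peer-link portion of this configuration for the first switch in NX-OS syntax. It mirrors the sample running configuration in the appendix of this document (vPC domain 23, management VLAN 186, NFS VLAN 3101, peer link on port channel 10 using ports Eth1/5-6); the peer-keepalive addresses shown are the example mgmt0 addresses from that configuration and must be replaced with site-specific values:

feature lacp
feature vpc
feature fcoe
feature npiv
!
vlan 186
  name MGMT-VLAN
vlan 3101
  name NFS-VLAN
!
vpc domain 23
  role priority 10
  peer-keepalive destination 10.61.185.70 source 10.61.185.69
!
interface port-channel10
  description vPC peer-link
  switchport mode trunk
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186,3101-3104
  vpc peer-link
  spanning-tree port type network
!
interface Ethernet1/5-6
  description vPC peer-link members to Cisco Nexus 5548 B
  switchport mode trunk
  channel-group 10 mode active
  no shutdown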
The Cisco UCS configuration results in the following:

- Creates a functional Cisco UCS fabric cluster
- Creates the logical building blocks for the UCS management model, including MAC, WWNN, WWPN, UUID, and server pools, vNIC and vHBA templates, and VLANs and VSANs, via UCS Manager
- Defines policies enforcing inventory discovery, network control, and server boot rules via UCS Manager
- Creates service profile templates
- Instantiates service profiles by associating templates with physical blades

The following actions are necessary to configure the Cisco Unified Computing System for use in a FlexPod environment:

- Execute the initial setup of the Cisco UCS 6100 Fabric Interconnects.
- Log into the Cisco UCS Manager via a Web browser.
- Edit the Chassis Discovery Policy to reflect the number of links from the chassis to the fabric interconnects.
- Enable the Fibre Channel, server, and uplink ports.
- Create an organization that manages the FlexPod infrastructure and owns the logical building blocks.
- Create MAC address pools under the infrastructure organization.
- Create global VLANs, including the NFS and OS data VLANs.
- Create a network control policy under the infrastructure organization.
- Set jumbo frames in the UCS fabric.
- Create global VSANs.
- Create a WWNN pool under the infrastructure organization.
- Create WWPN pools under the infrastructure organization.
- Create vNIC templates under the infrastructure organization using the previously defined pools.
- Create vHBA templates for Fabric A and B under the infrastructure organization.
- Create the necessary Ethernet and SAN uplink port channels to the Cisco Nexus 5548 switches.
- Create boot policies under the infrastructure organization.
- Create server pools under the infrastructure organization.
- Create UUID suffix pools under the infrastructure organization.
- Create service profile templates under the infrastructure organization.
- Create service profiles under the infrastructure organization.
- Add a block of IP addresses for KVM access.
- Back up the configuration of the running system, taking into consideration the backup location, the types of backup operations, the methods of backing up the configuration, and the need for scheduled backups.
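For reference, the pools can also be defined from the UCS Manager CLI. The sketch below creates a Fabric A MAC address pool inside the infrastructure organization; the organization name (FlexPod_Infrastructure), the pool name, and the address block are placeholders, with 00:25:B5 as the commonly used UCS MAC prefix and 0A in the second-to-last octet per the recommendation in Table 11:

scope org /
  scope org FlexPod_Infrastructure
    create mac-pool MAC_Pool_A
      create block 00:25:B5:00:0A:00 00:25:B5:00:0A:3F
      commit-buffer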
Table 3    NetApp Controller FC Target Port WWPNs (record for each controller)

NetApp Controller A:    0c or 2a: ______    0d or 2b: ______
NetApp Controller B:    0c or 2a: ______    0d or 2b: ______
Note
On each NetApp controller, use the fcp show adapters command to gather the above information.
Table 4
vHBA_A WWPN
vHBA_B WWPN
The SAN configuration of the Cisco Nexus 5548 switches results in the following:

- Creates VSANs and VFCs, assigns FC ports to SAN port channels and the appropriate VSANs, and turns on the FC ports
- Defines Fibre Channel aliases for service profiles and NetApp controller target ports
- Establishes Fibre Channel zoning and working sets

The following actions are necessary:

- Create VSANs for fabric A or B on the respective Nexus platform.
- Create the necessary SAN port channels to be connected to the UCS fabric interconnect.
- Assign the appropriate FC interfaces to the VSAN or, alternatively for FCoE use, create vFC ports and map them to the defined VSANs.
- Create device aliases on each Cisco Nexus 5548 for each service profile using the corresponding fabric PWWN.
- Create device aliases on each Cisco Nexus 5548 for each NetApp controller using the corresponding fabric PWWN.
- Create zones for each service profile and assign devices as members by using the Fibre Channel aliases.
- Activate the zoneset.
- Save the configuration.
- Back up the configuration of the running system, taking into consideration the backup location, the types of backup operations, the methods of backing up the configuration, and the need for scheduled backups.
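The fragment below sketches these steps for fabric A in NX-OS syntax, reusing the VSAN number and device aliases from the appendix configuration of this document; the vFC interface number, zone name, and zoneset name are illustrative placeholders:

vsan database
  vsan 101 name Fabric_A
!
interface san-port-channel 1
  channel mode active
!
interface vfc11
  bind interface port-channel11
  no shutdown
vsan database
  vsan 101 interface vfc11
!
device-alias database
  device-alias name ice3270-1a_2a pwwn 50:0a:09:81:8d:dd:92:bc
  device-alias name esxi41_host_ice3270-1a_2a1_A pwwn 20:00:00:25:b5:00:0a:0f
device-alias commit
!
zone name esxi41_host_zone vsan 101
  member device-alias esxi41_host_ice3270-1a_2a1_A
  member device-alias ice3270-1a_2a
zoneset name FlexPod_Fabric_A vsan 101
  member esxi41_host_zone
zoneset activate name FlexPod_Fabric_A vsan 101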
The NetApp storage configuration for SAN boot results in the following:

- Fibre Channel target ports defined
- Fibre Channel interface groups (igroups) defined for each service profile
- Boot LUNs allocated for each Cisco UCS service profile
- Boot LUNs mapped to the associated Cisco UCS service profiles

The following actions are necessary:

- Create the necessary volume for booting the UCS hosts.
- Create LUNs for booting the UCS hosts and house them within the newly created volume.
- Create any necessary igroups. For operating systems that support ALUA, NetApp recommends enabling ALUA on the igroups for the host.
- Map the newly created igroups to their respective LUNs in a 1:1 fashion.
- Following the necessary zoning, LUN creation, and mapping, boot the UCS host.
- Back up the configuration of the running system, taking into consideration the backup location, the types of backup operations, the methods of backing up the configuration, and the need for scheduled backups.
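A minimal Data ONTAP 7-Mode sketch of these steps for a single ESXi service profile is shown below; the boot volume name and sizes, the igroup name, and the LUN ID are placeholders, and the WWPNs are the example service profile WWPNs that appear in the appendix configuration:

controller-A> vol create esxi_boot aggr1 100g
controller-A> lun create -s 10g -t vmware /vol/esxi_boot/esxi41_host_1
controller-A> igroup create -f -t vmware esxi41_host_1 20:00:00:25:b5:00:0a:0f 20:00:00:25:b5:00:0b:0f
controller-A> igroup set esxi41_host_1 alua yes
controller-A> lun map /vol/esxi_boot/esxi41_host_1 esxi41_host_1 0

ALUA is enabled on the igroup here because ESXi supports it, as noted above.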
A Microsoft Windows 2008 system (virtual machine or bare metal) running the NetApp DataFabric Manager suite, including:

- Operations Manager
- Provisioning Manager
- Protection Manager
The following section provides the procedures for configuring NetApp Operations Manager for use in a FlexPod environment.
Install DFM on the same Windows virtual machine hosting the virtual storage controller through a Web browser (Windows).
Note
DFM is available at: http://now.netapp.com/NOW/download/software/dfm_win/Windows/.

- Generate a secure SSL key for the DFM HTTPS server.
- Enable HTTPS.
- Add a license to the DFM server.
- Enable SNMP v3 configuration.
- Configure AutoSupport information.
- Run diagnostics to verify DFM communication with the FlexPod controllers.
- Configure an SNMP trap host.
- Configure Operations Manager to generate e-mails for every Critical or higher event and send e-mails.
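Assuming the DFM command line is available on the Windows host, a few of these steps can also be performed as sketched below; the HTTPS port is an example value and ice3270-1a is the controller hostname used in the appendix configuration:

dfm ssl server setup
dfm option set httpsEnabled=Yes
dfm option set httpsPort=8443
dfm host diag ice3270-1a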
Table 5
Customized Value
Description Provide the appropriate VLAN ID used for NFS traffic throughout the FlexPod environment Provide the network address for NFS VLAN traffic in CIDR notation (that is, 192.168.30.0/24) Provide the appropriate VLAN ID used for Management traffic throughout the FlexPod environment Provide the appropriate VLAN ID that will be used for the native VLAN ID throughout the FlexPod environment. Provide the default password that will be used in initial configuration of the environment. NOTE: It is recommended to change this password as needed on each device once the initial configuration is complete. Provide the IP address of the appropriate nameserver for the environment. Provide the appropriate domain name suffix for the environment. The VSAN ID that will be associated with Fabric A. This will be associated with both FC and FCoE traffic for Fabric A. The VSAN ID that will be associated with Fabric B. This will be associated with both FC and FCoE traffic for Fabric B. Provide the VLAN ID of the VLAN that will be mapped to the FCoE traffic on fabric A. Provide the VLAN ID of the VLAN that will be mapped to the FCoE traffic on fabric B. Provide the appropriate SSL country name code. Provide the appropriate SSL state or province name. Provide the appropriate SSL locality name (city, town, etc.). Provide the appropriate SSL organization name (company name). Provide the appropriate SSL organizational unit (division).
Default password
FCoE VLAN ID for Fabric A FCoE VLAN ID for Fabric B SSL country name code SSL state or province name SSL locality name SSL organization name SSL organizational unit
Customized Value
Description Provide the hostname for NetApp FAS3210 A. Provide the hostname for NetApp FAS3210 B. Designate the appropriate interface to use for initial netboot of each controller. Interface e0M is the recommended interface. Provide the full TFTP path to the 8.0.1 Data ONTAP boot image. Provide the IP Address for the management interface on NetApp FAS3210 A Provide the IP Address for the management interface on NetApp FAS3210 B Provide the subnet mask for the management interface on NetApp FAS3210 A Provide the subnet mask for the management interface on NetApp FAS3210 B. Provide the gateway IP address for the management interface on NetApp FAS3210 A. Provide the gateway IP address for the service processor interface on NetApp FAS3210 B. Provide the IP address of the host that will be used for administering the NetApp FAS3210A. Provide a description of the physical location where the NetApp chassis resides. Provide the IP address for the service processor interface on NetApp FAS3210 A. Provide the IP address for the service processor interface on NetApp FAS3210 B. Provide the subnet mask for the service processor interface on NetApp FAS3210 A. Provide the subnet mask for the service processor interface on NetApp FAS3210 B. Provide the gateway IP address for the service processor interface on NetApp FAS3210 A.
NetApp Data ONTAP 8.0.1 Netboot kernel location NetApp FAS3210 A management interface IP address NetApp FAS3210 B management interface IP address NetApp FAS3210 A management interface subnet mask NetApp FAS3210 B management interface subnet mask NetApp FAS3210 A management interface gateway IP address NetApp FAS3210 B management interface gateway IP address NetApp FAS3210A administration host IP address NetApp FAS3210A location NetApp FAS3210 A service processor interface IP address NetApp FAS3210 B service processor interface IP address NetApp FAS3210 A service processor interface subnet mask NetApp FAS3210 B service processor interface subnet mask NetApp FAS3210 A service processor interface gateway IP address
Table 6
Name NetApp FAS3210 B service processor interface gateway IP address NetApp FAS3210A Mailhost name NetApp FAS3210A Mailhost IP address NetApp DataONTAP 8.0.1 flash image location NetApp FAS3210A administrators e-mail address NetApp FAS3210A infrastructure vFiler IP address
Customized Value
Description Provide the gateway IP address for the service processor interface on NetApp FAS3210 B. Provide the appropriate Mailhost name. Provide the appropriate Mailhost IP address. Provide the http or https Web address of the NetApp Data ONTAP 8.0.1 flash image to install the image to the on-board flash storage. Provide the e-mail address for the NetApp administrator to receive important alerts/messages via e-mail. Provide the IP address for the infrastructure vFiler unit on FAS3210A. Note: This interface will be used for the export of NFS datastores and possibly iSCSI LUNs to the necessary ESXi hosts.
Provide the IP address of the host that will be used to administer the infrastructure vFiler unit on FAS3210A. This variable might have the same IP address as the administration host IP address for the physical controllers as well. Provide the IP address for the infrastructure vFiler unit on FAS3210B. Keep in mind that this interface will be used for the export of NFS datastores and possibly iSCSI LUNs to the necessary ESXi hosts. Provide the IP address of the host that will be used to administer the infrastructure vFiler unit on FAS3210B. This variable might possibly have the same IP address as the administration host IP address for the physical controllers as well.
Table 7
Name NetApp Cluster license code NetApp Fibre Channel license code NetApp Flash Cache license code
Customized Value
Description Provide the license code to enable cluster mode within the FAS3210 A configuration. Provide the license code to enable the Fibre Channel protocol. Provide the license code to enable the installed Flash Cache adapter.
Table 7
Customized Value
Description Provide the license code to enable the NearStore capability which is required to enable deduplication. Provide the license code to enable deduplication. Provide the license code to enable the NFS protocol. Provide the license code to enable MultiStore. Provide the license code to enable FlexClone.
NetApp Deduplication license code NetApp NFS license code NetApp MultiStore license code NetApp FlexClone license code
Table 8
Customized Value
Description Number of disks assigned to controller A using software ownership. NOTE: Do not include the three disks used for the root volume in this number. Number of disks assigned to controller B using software ownership. NOTE: Do not include the three disks used for the root volume in this number. Number of disks to be assigned to aggr1 on controller A. Number of disks to be assigned to aggr1 on controller B. Each UCS server will boot using the FC protocol. Each FC LUN will be stored in a volume on either controller A or controller B. Choose the appropriate volume size depending on the environment. Each UCS server will boot using the FC protocol. Each FC LUN will be stored in a volume on either controller A or controller B. Choose the appropriate volume size depending on the environment.
NetApp FAS3210 A total disks in Aggregate 1 NetApp FAS3210 B total disks in Aggregate 1 NetApp FAS3210 A ESXi boot volume size
Table 9
Customized Value
Description Provide the hostname for the NetApp DFM server instance. Provide the IP address to be assigned to the NetApp DFM server.
Table 9
Name NetApp DFM server license key Mailhost IP address or hostname SNMP community string SNMP username SNMP password SNMP Traphost SNMP request role SNMP managers SNMP site name Enterprise SNMP trap destination
Customized Value
Description Provide the license key for the NetApp DFM server. Provide address of the mailhost that will be used to relay AutoSupport e-mails. Provide the appropriate SNMP community string. Provide the appropriate SNMP username. Provide the appropriate SNMP password. Provide the IP address or hostname for the SNMP traphost. Provides the request role for SNMP. Users who have the ability to manage SNMP. Provides the site name as required by SNMP. Provides the appropriate enterprise SNMP trap destination.
Table 10

Name Cisco Nexus 5548 A hostname Cisco Nexus 5548 B hostname Cisco Nexus 5548 A Management Interface IP Address Cisco Nexus 5548 B Management Interface IP Address Cisco Nexus 5548 A Management Interface Subnet Mask Cisco Nexus 5548 B Management Interface Subnet Mask Cisco Nexus 5548 A Management Interface Gateway IP Address Cisco Nexus 5548 B Management Interface Gateway IP Address Cisco Nexus 5548 Virtual Port Channel (vPC) Domain ID
Customized Value
Description Provide the hostname for the Cisco Nexus 5548 A. Provide the hostname for the Cisco Nexus 5548 B. Provide the IP address for the mgmt0 interface on the Cisco Nexus 5548 A. Provide the IP address for the mgmt0 interface on the Cisco Nexus 5548 B. Provide the subnet mask for the mgmt0 interface on the Cisco Nexus 5548 A. Provide the subnet mask for the mgmt0 interface on the Cisco Nexus 5548 B. Provide the gateway IP address for the mgmt0 interface on the Cisco Nexus 5548 A. Provide the gateway IP address for the mgmt0 interface on the Cisco Nexus 5548 B. Provide a unique vpc domain id for the environment.
Table 11
Name Cisco UCS Fabric Interconnect A hostname Cisco UCS Fabric Interconnect B hostname Cisco UCS Name
Customized Value
Description Provide the hostname for Fabric Interconnect A. Provide the hostname for Fabric Interconnect B. Both Cisco UCS Fabric Interconnects will be clustered together as a single Cisco UCS. Provide the hostname for the clustered system. Both Cisco UCS Fabric Interconnects will be clustered together as a single Cisco UCS. Provide the IP address for the clustered system. Provide the IP address for Fabric Interconnect As Management Interface. Provide the IP address for Fabric Interconnect Bs Management Interface. Provide the subnet mask for Fabric Interconnect As Management Interface. Provide the subnet mask for Fabric Interconnect Bs Management Interface. Provide the gateway IP address for Fabric Interconnect As Management Interface. Provide the gateway IP address for Fabric Interconnect Bs Management Interface. A Cisco UCS organization will be created for the necessary Infrastructure resources. Provide a descriptive name for this organization. A pool of MAC addresses will be created for each fabric. Depending on the environment, certain MAC addresses may already be allocated. Identify a unique MAC address as the starting address in the MAC pool for Fabric A. It is recommended, if possible, to use either 0A or 0B as the second to last octet in order to distinguish from MACs on fabric A or fabric B. A pool of MAC addresses will be created for each fabric. Depending on the environment, certain MAC addresses may already be allocated. Identify a unique MAC address as the starting address in the MAC pool for Fabric B. It is recommended, if possible, to use either 0A or 0B as the second to last octet in order to more easily distinguish from MACs on fabric A or fabric B.
Cisco UCS IP
Cisco UCS Fabric Interconnect A Management Interface IP Address Cisco UCS Fabric Interconnect B Management Interface IP Address Cisco UCS Fabric Interconnect A Management Netmask Cisco UCS Fabric Interconnect B Management Interface Netmask Cisco UCS Fabric Interconnect A Management Interface Gateway Cisco UCS Fabric Interconnect B Management Interface Gateway Cisco UCS Infrastructure Organization Starting MAC Address for Fabric A
Table 11
Customized Value
Description A pool of wwpns will be created for each fabric. Depending on the environment, certain wwpns may already be allocated. Identify a unique wwpn as the starting point in the wwpn pool for Fabric A. It is recommended, if possible, to use either 0A or 0B as the second to last octet in order to more easily distinguish from wwpns on fabric A or fabric B. A pool of wwpns will be created for each fabric. Depending on the environment, certain wwpns may already be allocated. Identify a unique wwpn as the starting point in the wwpn pool for Fabric B. It is recommended, if possible, to use either 0A or 0B as the second to last octet in order to more easily distinguish from wwpns on fabric A or fabric B.
infrastructure_1_vfiler
running
Refs 0 2 2 2 2 0
role network-admin
class-map type qos match-all Silver_Traffic
  match access-group name classify_COS_4
class-map type qos match-all Platinum_Traffic
  match access-group name classify_COS_5
class-map type queuing class-fcoe
  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
policy-map type qos Global_Classify
  class Platinum_Traffic
    set qos-group 2
  class Silver_Traffic
    set qos-group 4
  class class-fcoe
    set qos-group 1
class-map type network-qos class-fcoe
  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos Silver_Traffic_NQ
  match qos-group 4
class-map type network-qos class-ip-multicast
  match qos-group 2
class-map type network-qos Platinum_Traffic_NQ
  match qos-group 2
policy-map type network-qos Setup_QOS
  class type network-qos Platinum_Traffic_NQ
    set cos 5
    mtu 9000
  class type network-qos Silver_Traffic_NQ
    set cos 4
    mtu 9000
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    multicast-optimize
system qos
  service-policy type qos input Global_Classify
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos Setup_QOS
snmp-server user admin network-admin auth md5 0x91d2518e00e2d50e9e5d213bee818692 priv 0x91d2518e00e2d50e9e5d213bee818692 localizedkey
snmp-server enable traps entity fru
ntp server 10.61.185.11 use-vrf management
vrf context management
  ip route 0.0.0.0/0 10.61.185.1
vlan 1
vlan 101
  fcoe vsan 101
  name FCoE_Fabric_A
vlan 186
  name MGMT-VLAN
vlan 3101
  name NFS-VLAN
vlan 3102
  name vMotion-VLAN
vlan 3103
  name Packet-Control-VLAN
vlan 3104
  name VM-Traffic-VLAN
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
spanning-tree port type network default
vpc domain 23
  role priority 10
  peer-keepalive destination 10.61.185.70 source 10.61.185.69
vsan database
  vsan 101 name "Fabric_A"
device-alias database
  device-alias name ice3270-1a_2a pwwn 50:0a:09:81:8d:dd:92:bc
  device-alias name ice3270-1b_2a pwwn 50:0a:09:81:9d:dd:92:bc
  device-alias name esxi41_host_ice3270-1a_2a1_A pwwn 20:00:00:25:b5:00:0a:0f
  device-alias name esxi41_host_ice3270-1b_2b1_A pwwn 20:00:00:25:b5:00:0a:1f
device-alias commit
fcdomain fcid database
  vsan 1 wwn 20:42:00:05:9b:79:7a:80 fcid 0x800000 dynamic
  vsan 1 wwn 20:41:00:05:9b:79:7a:80 fcid 0x800001 dynamic
  vsan 101 wwn 50:0a:09:81:8d:dd:92:bc fcid 0x4e0000 dynamic
    ! [ice3270-1a_2a]
  vsan 101 wwn 50:0a:09:81:9d:dd:92:bc fcid 0x4e0001 dynamic
    ! [ice3270-1b_2a]
  vsan 101 wwn 20:42:00:05:9b:79:7a:80 fcid 0x4e0002 dynamic
  vsan 101 wwn 20:41:00:05:9b:79:7a:80 fcid 0x4e0003 dynamic
  vsan 101 wwn 20:00:00:25:b5:00:0a:0f fcid 0x4e0004 dynamic
    ! [esxi41_host_ice3270-1a_2a1_A]
  vsan 101 wwn 20:00:00:25:b5:00:0a:1f fcid 0x4e0005 dynamic
    ! [esxi41_host_ice3270-1b_2b1_A]
interface san-port-channel 1
  channel mode active
interface port-channel10
  description vPC peer-link
  switchport mode trunk
  vpc peer-link
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186,3101-3104
  spanning-tree port type network
interface port-channel11
  description ice3270-1a
  switchport mode trunk
  vpc 11
  switchport trunk native vlan 2
  switchport trunk allowed vlan 101-102,186,3101
  spanning-tree port type edge trunk
interface port-channel12
  description ice3270-1b
  switchport mode trunk
  vpc 12
  switchport trunk native vlan 2
  switchport trunk allowed vlan 101-102,186,3101
  spanning-tree port type edge trunk
interface port-channel13
  description iceucsm-2a-m
  switchport mode trunk
  vpc 13
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186,3101-3104
  spanning-tree port type edge trunk
interface port-channel14
  description iceucsm-2b-m
  switchport mode trunk
  vpc 14
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186,3101-3104
  spanning-tree port type edge trunk
interface port-channel20
  description Po20:iceds-1:Po12
  switchport mode trunk
  vpc 20
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186
  spanning-tree port type

!Command: show running-config
!Time: Wed Aug 10 11:36:57 2011
version 5.0(3)N1(1c)
feature fcoe
feature npiv
feature fport-channel-trunk
no feature telnet
no telnet server enable
cfs eth distribute
feature lacp
feature vpc
feature lldp
username admin password 5 $1$PBq/n2.b$g8jK3jqj8MelNDQKGRBD50 role network-admin
ip domain-lookup
switchname ice5548-2
system jumbomtu 9000
logging event link-status default
ip access-list classify_COS_4
  10 permit ip 192.168.102.0/24 any
  20 permit ip any 192.168.102.0/24
ip access-list classify_COS_5
  10 permit ip 192.168.101.0/24 any
  20 permit ip any 192.168.101.0/24
class-map type qos class-fcoe
class-map type qos match-all Silver_Traffic
  match access-group name classify_COS_4
class-map type qos match-all Platinum_Traffic
  match access-group name classify_COS_5
class-map type queuing class-fcoe
  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
policy-map type qos Global_Classify
  class Platinum_Traffic
    set qos-group 2
  class Silver_Traffic
    set qos-group 4
  class class-fcoe
    set qos-group 1
class-map type network-qos class-fcoe
  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos Silver_Traffic_NQ
  match qos-group 4
class-map type network-qos class-ip-multicast
  match qos-group 2
class-map type network-qos Platinum_Traffic_NQ
  match qos-group 2
policy-map type network-qos Setup_QOS
  class type network-qos Platinum_Traffic_NQ
    set cos 5
    mtu 9000
  class type network-qos Silver_Traffic_NQ
    set cos 4
    mtu 9000
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    multicast-optimize
system qos
  service-policy type qos input Global_Classify
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos Setup_QOS
snmp-server user admin network-admin auth md5 0x7021e5331f25b481ed3ad26b96ccd729 priv 0x7021e5331f25b481ed3ad26b96ccd729 localizedkey
snmp-server enable traps entity fru
ntp server 10.61.185.11 use-vrf management
vrf context management
  ip route 0.0.0.0/0 10.61.185.1
vlan 1
vlan 102
  fcoe vsan 102
  name FCoE_Fabric_B
vlan 186
  name MGMT-VLAN
vlan 3101
  name NFS-VLAN
vlan 3102
  name vMotion-VLAN
vlan 3103
  name Packet-Control-VLAN
vlan 3104
  name VM-Traffic-VLAN
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
spanning-tree port type network default
vpc domain 23
  role priority 20
  peer-keepalive destination 10.61.185.69 source 10.61.185.70
vsan database
  vsan 102 name "Fabric_B"
device-alias database
  device-alias name ice3270-1a_2b pwwn 50:0a:09:82:8d:dd:92:bc
  device-alias name ice3270-1b_2b pwwn 50:0a:09:82:9d:dd:92:bc
  device-alias name esxi41_host_ice3270-1a_2a1_B pwwn 20:00:00:25:b5:00:0b:0f
  device-alias name esxi41_host_ice3270-1b_2b1_B pwwn 20:00:00:25:b5:00:0b:1f
device-alias commit
fcdomain fcid database
  vsan 1 wwn 20:42:00:05:9b:6f:7a:40 fcid 0x590000 dynamic
  vsan 1 wwn 20:41:00:05:9b:6f:7a:40 fcid 0x590001 dynamic
  vsan 102 wwn 50:0a:09:82:9d:dd:92:bc fcid 0xae0000 dynamic
    ! [ice3270-1b_2b]
  vsan 102 wwn 50:0a:09:82:8d:dd:92:bc fcid 0xae0001
    ! [ice3270-1a_2b]
  vsan 102 wwn 20:42:00:05:9b:6f:7a:40 fcid 0xae0002
  vsan 102 wwn 20:41:00:05:9b:6f:7a:40 fcid 0xae0003
  vsan 102 wwn 20:00:00:25:b5:00:0b:0f fcid 0xae0004
    ! [esxi41_host_ice3270-1a_2a1_A]
  vsan 102 wwn 20:00:00:25:b5:00:0b:1f fcid 0xae0005
    ! [esxi41_host_ice3270-1b_2b1_A]
!
interface san-port-channel 2
  channel mode active
interface port-channel10
  description vPC peer-link
  switchport mode trunk
  vpc peer-link
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186,3101-3104
  spanning-tree port type network
interface port-channel11
  description ice3270-1a
  switchport mode trunk
  vpc 11
  switchport trunk native vlan 2
  switchport trunk allowed vlan 101-102,186,3101
  spanning-tree port type edge trunk
interface port-channel12
  description ice3270-1b
  switchport mode trunk
  vpc 12
  switchport trunk native vlan 2
  switchport trunk allowed vlan 101-102,186,3101
  spanning-tree port type edge trunk
interface port-channel13
  description iceucsm-2a-m
  switchport mode trunk
  vpc 13
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186,3101-3104
  spanning-tree port type edge trunk
interface port-channel14
  description iceucsm-2b-m
  switchport mode trunk
  vpc 14
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186,3101-3104
  spanning-tree port type edge trunk
interface port-channel20
  description Po20:iceds-1:Po12
  switchport mode trunk
  vpc 20
  switchport trunk native vlan 2
  switchport trunk allowed vlan 186
  spanning-tree port type net
Figure 5
Create an Organization
The use of organizations allows the physical UCS resources to be logically divided. Each organization can have its own policies, pools, and quality of service definitions. Organizations are hierarchical in nature, allowing sub-organizations to inherit characteristics from higher organizations or to establish their own policies, pools, and service definitions. To create an organization, go to the Main panel New menu drop-down list and select Create Organization to create the organization that manages the FlexPod infrastructure and owns the logical building blocks.
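The same organization can also be created from the UCS Manager CLI, as sketched below; FlexPod_Infrastructure is a placeholder for the organization name chosen for the environment:

scope org /
  create org FlexPod_Infrastructure
  commit-buffer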
Figure 7

(Figure outlines the Cisco UCS Manager building blocks created for the FlexPod infrastructure: pools (MAC address, WWNN, WWPN, UUID, and server), VLANs and VSANs, templates (vNIC, vHBA, and service profile), and policies (QoS, network control, pin group, boot, power control, firmware, BIOS, and adapter).)
References
Cisco Nexus 5548 Switch: http://www.cisco.com/en/US/products/ps11215/index.html

Cisco Unified Computing System: http://www.cisco.com/en/US/netsol/ns944/index.html

NetApp FAS3210 Storage Controller: http://now.netapp.com/NOW/knowledge/docs/hardware/hardware_index.shtml#Storage%20appliances%20and%20V-series%20systems/gFilers

NetApp Support (formerly the NetApp on the Web, or NOW, site): http://now.netapp.com