IBM PureFlex System and IBM Flex System Products and Technology
Describes the IBM Flex System Enterprise Chassis and compute node technology
Provides details about available I/O modules and expansion options
Explains networking and storage configurations
ibm.com/redbooks
International Technical Support Organization

IBM PureFlex System and IBM Flex System Products and Technology

February 2013
SG24-7984-01
Note: Before using this information and the product it supports, read the information in Notices on page ix.
Second Edition (February 2013)

This edition applies to:
- IBM PureFlex System
- IBM Flex System Enterprise Chassis
- IBM Flex System Manager
- IBM Flex System x220 Compute Node
- IBM Flex System x240 Compute Node
- IBM Flex System x440 Compute Node
- IBM Flex System p260 Compute Node
- IBM Flex System p24L Compute Node
- IBM Flex System p460 Compute Node
- IBM Flex System V7000 Storage Node
- IBM 42U 1100mm Enterprise V2 Dynamic Rack
- IBM PureFlex System 42U Rack and 42U Expansion Rack
© Copyright International Business Machines Corporation 2012, 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices  ix
Trademarks  x
Preface  xi
The team who wrote this book  xi
Now you can become a published author, too!  xiii
Comments welcome  xiii
Stay connected to IBM Redbooks  xiv
Summary of changes  xv
February 2013, Second Edition  xv
Chapter 1. Introduction  1
1.1 IBM PureFlex System  2
1.2 IBM Flex System  3
1.2.1 Management  4
1.2.2 Compute nodes  4
1.2.3 Storage  4
1.2.4 Networking  5
1.2.5 Infrastructure  5
1.3 IBM Flex System overview  5
1.3.1 IBM Flex System Manager  5
1.3.2 IBM Flex System Enterprise Chassis  6
1.3.3 Compute nodes  7
1.3.4 Expansion nodes  8
1.3.5 I/O modules  8
1.4 This book  9
Chapter 2. IBM PureFlex System  11
2.1 IBM PureFlex System capabilities  12
2.2 IBM PureFlex System Express  13
2.2.1 Chassis  13
2.2.2 Top of rack Ethernet switch  14
2.2.3 Top of rack SAN switch  14
2.2.4 Compute nodes  14
2.2.5 IBM Flex System Manager  17
2.2.6 IBM Storwize V7000 and IBM V7000 Storage Node  18
2.2.7 Rack cabinet  19
2.2.8 Software  20
2.2.9 Services  22
2.3 IBM PureFlex System Standard  23
2.3.1 Chassis  23
2.3.2 Top of rack Ethernet switch  24
2.3.3 Top of rack SAN switch  25
2.3.4 Compute nodes  25
2.3.5 IBM Flex System Manager  28
2.3.6 IBM Flex System V7000 Storage Node and IBM Storwize V7000  29
2.3.7 Rack cabinet  31
2.3.8 Software  32
2.3.9 Services  34
2.4 IBM PureFlex System Enterprise  34
2.4.1 Chassis  35
2.4.2 Top of rack Ethernet switch  36
2.4.3 Top of rack SAN switch  36
2.4.4 Compute nodes  36
2.4.5 IBM Flex System Manager  40
2.4.6 IBM Flex System V7000 Storage Node and IBM Storwize V7000  40
2.4.7 Rack cabinet  43
2.4.8 Software  43
2.4.9 Services  46
2.5 PureFlex services offerings  47
2.6 IBM SmartCloud Entry  48
Chapter 3. Systems management  51
3.1 Management network  52
3.2 Chassis Management Module  53
3.2.1 Overview  53
3.2.2 Interfaces  54
3.3 Security  56
3.4 Compute node management  57
3.4.1 Integrated Management Module II  57
3.4.2 Flexible service processor  58
3.4.3 I/O modules  59
3.5 IBM Flex System Manager  60
3.5.1 Overview and part numbers  60
3.5.2 Hardware overview  63
3.5.3 Software features  66
3.5.4 Supported agents, hardware, operating systems, and tasks  69
3.5.5 User interfaces  72
Chapter 4. Chassis and infrastructure configuration  75
4.1 Overview  76
4.1.1 Front of the chassis  78
4.1.2 Midplane  79
4.1.3 Rear of the chassis  80
4.1.4 Specifications  80
4.1.5 Air filter  81
4.1.6 Compute node shelves  82
4.1.7 Hot plug and hot swap components  83
4.2 Power supplies  83
4.3 Fan modules  89
4.4 Fan logic module  91
4.5 Front information panel  92
4.6 Cooling  93
4.7 Power supply and fan module requirements  98
4.7.1 Fan module population  98
4.7.2 Power supply population  99
4.8 Chassis Management Module  104
4.9 I/O architecture  107
4.10 I/O modules  114
4.10.1 I/O module LEDs  115
4.10.2 Serial access cable  116
4.10.3 I/O module naming scheme  116
4.10.4 Switch to adapter compatibility  117
4.10.5 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch  119
4.10.6 IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switch  127
4.10.7 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module  134
4.10.8 IBM Flex System EN2092 1Gb Ethernet Scalable Switch  136
4.10.9 IBM Flex System FC5022 16Gb SAN Scalable Switch  141
4.10.10 IBM Flex System FC3171 8Gb SAN Switch  149
4.10.11 IBM Flex System FC3171 8Gb SAN Pass-thru  151
4.10.12 IBM Flex System IB6131 InfiniBand Switch  153
4.11 Infrastructure planning  154
4.11.1 Supported power cords  155
4.11.2 Supported PDUs and UPS units  155
4.11.3 Power planning  156
4.11.4 UPS planning  160
4.11.5 Console planning  161
4.11.6 Cooling planning  162
4.11.7 Chassis-rack cabinet compatibility  163
4.12 IBM 42U 1100mm Enterprise V2 Dynamic Rack  164
4.13 IBM PureFlex System 42U Rack and 42U Expansion Rack  169
4.14 IBM Rear Door Heat eXchanger V2 Type 1756  172
Chapter 5. Compute nodes  177
5.1 IBM Flex System Manager  178
5.2 IBM Flex System x220 Compute Node  178
5.2.1 Introduction  178
5.2.2 Models  182
5.2.3 Chassis support  182
5.2.4 System architecture  183
5.2.5 Processor options  185
5.2.6 Memory options  185
5.2.7 Internal disk storage controllers  193
5.2.8 Supported internal drives  198
5.2.9 Embedded 1 Gb Ethernet controller  200
5.2.10 I/O expansion  200
5.2.11 Integrated virtualization  202
5.2.12 Systems management  202
5.2.13 Operating system support  206
5.3 IBM Flex System x240 Compute Node  207
5.3.1 Introduction  207
5.3.2 Models  211
5.3.3 Chassis support  211
5.3.4 System architecture  212
5.3.5 Processor  214
5.3.6 Memory  217
5.3.7 Standard onboard features  229
5.3.8 Local storage  230
5.3.9 Integrated virtualization  236
5.3.10 Embedded 10 Gb Virtual Fabric Adapter  238
5.3.11 I/O expansion  239
5.3.12 Systems management  240
5.3.13 Operating system support  244
5.4 IBM Flex System x440 Compute Node  245
5.4.1 Introduction  245
5.4.2 Models  248
5.4.3 Chassis support  249
5.4.4 System architecture  249
5.4.5 Processor options  251
5.4.6 Memory options  251
5.4.7 Internal disk storage  254
5.4.8 Embedded 10Gb Virtual Fabric  258
5.4.9 I/O expansion options  260
5.4.10 Network adapters  263
5.4.11 Storage host bus adapters  263
5.4.12 Integrated virtualization  264
5.4.13 Light path diagnostics panel  264
5.4.14 Operating systems support  266
5.5 IBM Flex System p260 and p24L Compute Nodes  266
5.5.1 Specifications  267
5.5.2 System board layout  269
5.5.3 IBM Flex System p24L Compute Node  269
5.5.4 Front panel  270
5.5.5 Chassis support  272
5.5.6 System architecture  272
5.5.7 Processor  273
5.5.8 Memory  274
5.5.9 Active Memory Expansion  277
5.5.10 Storage  280
5.5.11 I/O expansion  282
5.5.12 System management  284
5.5.13 Integrated features  285
5.5.14 Operating system support  285
5.6 IBM Flex System p460 Compute Node  286
5.6.1 Overview  286
5.6.2 System board layout  288
5.6.3 Front panel  289
5.6.4 Chassis support  290
5.6.5 System architecture  291
5.6.6 Processor  292
5.6.7 Memory  293
5.6.8 Active Memory Expansion  296
5.6.9 Storage  298
5.6.10 Local storage and cover options  298
5.6.11 Hardware RAID capabilities  300
5.6.12 I/O expansion  300
5.6.13 System management  302
5.6.14 Integrated features  303
5.6.15 Operating system support  303
5.7 IBM Flex System PCIe Expansion Node  304
5.7.1 Features  305
5.7.2 Architecture  308
5.7.3 Supported PCIe adapters  309
5.7.4 Supported I/O expansion cards  310
5.8 IBM Flex System Storage Expansion Node  311
5.8.1 Supported nodes  312
5.8.2 Features on Demand upgrades  314
5.8.3 Cache upgrades  315
5.8.4 Supported HDD and SSD  316
5.9 I/O adapters  318
5.9.1 Form factor  319
5.9.2 Naming structure  319
5.9.3 Supported compute nodes  320
5.9.4 Supported switches  320
5.9.5 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter  322
5.9.6 IBM Flex System EN4132 2-port 10Gb Ethernet Adapter  324
5.9.7 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter  325
5.9.8 IBM Flex System CN4054 10Gb Virtual Fabric Adapter  327
5.9.9 IBM Flex System CN4058 8-port 10Gb Converged Adapter  330
5.9.10 IBM Flex System EN4132 2-port 10Gb RoCE Adapter  333
5.9.11 IBM Flex System FC3172 2-port 8Gb FC Adapter  336
5.9.12 IBM Flex System FC3052 2-port 8Gb FC Adapter  337
5.9.13 IBM Flex System FC5022 2-port 16Gb FC Adapter  339
5.9.14 IBM Flex System IB6132 2-port FDR InfiniBand Adapter  341
5.9.15 IBM Flex System IB6132 2-port QDR InfiniBand Adapter  342
Chapter 6. Network integration  345
6.1 Ethernet switch module selection  346
6.2 Scalable switches  347
6.3 VLAN  348
6.4 High availability and redundancy  349
6.4.1 Redundant network topologies  350
6.4.2 Spanning Tree Protocol  351
6.4.3 Layer 2 failover  351
6.4.4 Virtual Link Aggregation Groups  352
6.4.5 Virtual Router Redundancy Protocol  353
6.4.6 Routing protocols  354
6.5 Performance  354
6.5.1 Trunking  354
6.5.2 Jumbo frames  355
6.5.3 NIC teaming  355
6.5.4 Server Load Balancing  355
6.6 IBM switch stacking  356
6.7 IBM Virtual Fabric Solution  358
6.7.1 Virtual Fabric mode vNIC  359
6.7.2 Switch independent mode vNIC  359
6.8 VMready  359
Chapter 7. Storage integration  361
7.1 IBM Flex System V7000 Storage Node  362
7.1.1 V7000 Storage Node types  366
7.1.2 Controller Modules  367
7.1.3 Expansion Modules  372
7.1.4 SAS cabling  374
7.1.5 Host interface cards  376
7.1.6 Fibre Channel over Ethernet with a V7000 Storage Node  376
7.1.7 V7000 Storage Node drive options  377
7.1.8 Features and functions  377
7.1.9 Licenses  380
7.1.10 Configuration restrictions  380
7.2 External storage  381
7.2.1 IBM Storwize V7000  382
7.2.2 IBM XIV Storage System series  383
7.2.3 IBM System Storage DS8000 series  384
7.2.4 IBM System Storage DS5000 series  384
7.2.5 IBM System Storage DS3000 series  385
7.2.6 IBM System Storage N series  385
7.2.7 IBM System Storage TS3500 Tape Library  387
7.2.8 IBM System Storage TS3310 series  387
7.2.9 IBM System Storage TS3100 Tape Library  388
7.3 Fibre Channel  388
7.3.1 Fibre Channel requirements  388
7.3.2 FC switch selection and fabric interoperability rules  389
7.4 FCoE  393
7.5 iSCSI  394
7.6 High availability and redundancy  396
7.7 Performance  397
7.8 Backup solutions  398
7.8.1 Dedicated server for centralized LAN backup  398
7.8.2 LAN-free backup for nodes  399
7.9 Boot from SAN  400
7.9.1 Implementing Boot from SAN  400
7.9.2 iSCSI SAN Boot specific considerations  401
Abbreviations and acronyms  403
Related publications and education  405
IBM Redbooks  405
IBM education  406
Online resources  406
Help from IBM  407
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Active Cloud Engine, Active Memory, AIX, BladeCenter, BNT, DB2, DS8000, Easy Tier, EnergyScale, eServer, FlashCopy, IBM Flex System, IBM Flex System Manager, IBM SmartCloud, IBM, iDataPlex, Netfinity, Power Systems, POWER6+, POWER6, POWER7+, POWER7, PowerPC, PowerVM, POWER, PureFlex, RackSwitch, Real-time Compression, Redbooks, Redbooks (logo), ServerProven, ServicePac, Storwize, System Storage, System x, Tivoli Storage Manager FastBack, Tivoli, VMready, WebSphere, X-Architecture, XIV
The following terms are trademarks of other companies:

Intel Xeon, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Linear Tape-Open, LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

SnapMirror, SnapManager, NearStore, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.
Preface
To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications. The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.

This IBM Redbooks publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products. It assumes that you have a basic understanding of blade server concepts and general IT knowledge.
Dave Ridley is the System x, BladeCenter, and IBM iDataPlex Product Manager for IBM in the United Kingdom and Ireland. His role includes product transition planning, supporting marketing events, press briefings, management of the UK loan pool, running early ship programs, and supporting the local sales and technical teams. He is based in Horsham in the United Kingdom, and has been working for IBM since 1998. In addition, he has been involved with IBM x86 products for some 27 years.
Thanks to the authors of the previous editions of this book. Authors of the first edition, IBM PureFlex System and IBM Flex System Products and Technology, published in July 2012, were:
David Watts, Randall Davis, Richard French, Lu Han, Dave Ridley, Cristian Rojas

Thanks to the following people for their contributions to this project:

From IBM marketing:
TJ Aspden, Michael Bacon, John Biebelhausen, Mark Cadiz, Bruce Corregan, Mary Beth Daughtry, Meleata Pinto, Mike Easterly, Diana Cunniffe, Kyle Hampton

From IBM development:
Mike Anderson, Sumanta Bahali, Wayne Banks, Barry Barnett, Keith Cramer, Mustafa Dahnoun, Dean Duff, Royce Espey, Kaena Freitas, Jim Gallagher, Dottie Gardner, Sam Gaver, Phil Godbolt, Mike Goodman, John Gossett, Tim Hiteshew, Andy Huryn, Bill Ilas, Don Keener, Caroline Metry, Meg McColgan, Mark McCool, Rob Ord, Greg Pruett, Mike Solheim, Fang Su, Vic Stankevich, Tan Trinh, Rochelle White, Dale Weiler, Mark Welch, Al Willard, Botond Kiss, Shekhar Mishra, Justin Nguyen, Sander Kim, Dean Parker, Hector Sanchez, David Tareen, David Walker, Randi Wood, Bob Zuber
From the International Technical Support Organization:
Kevin Barnes, Tamikia Barrow, Mary Comianos, Shari Deiana, Cheryl Gera, Ilya Krutov, Karen Lawrence, Julie O'Shea, Linda Robinson

Others from IBM around the world:
Kerry Anders, Bill Champion, Fabiano Matassa, Michael L. Nelson, Matt Slavin

Others from other companies:
Tom Boucher, Emulex
Brad Buland, Intel
Jeff Lin, Emulex
Chris Mojica, QLogic
Brent Mosbrook, Emulex
Jimmy Myers, Brocade
Haithuy Nguyen, Mellanox
Brian Sparks, Mellanox
Matt Wineberg, Brocade
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an email to:
  [email protected]
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes that were made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-7984-01 for IBM PureFlex System and IBM Flex System Products and Technology as created or updated on February 18, 2013, 3:13 pm.
New information
The following new products and options were added to the book:
- IBM SmartCloud Entry V2.4
- IBM Flex System Manager V1.2
- IBM Flex System Fabric EN4093R 10Gb Scalable Switch
- IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
- FoD license upgrades for the IBM Flex System FC5022 16Gb SAN Scalable Switch
- IBM PureFlex System 42U Rack
- 2100-W power supply option for the Enterprise Chassis
- New options and models of the IBM Flex System x220 Compute Node
- IBM Flex System x440 Compute Node
- Additional solid-state drive options for all x86 compute nodes
- IBM Flex System p260 Compute Node, model 23X with IBM POWER7+ processors
- New memory options for the IBM Power Systems compute nodes
- IBM Flex System Storage Expansion Node
- IBM Flex System PCIe Expansion Node
- IBM Flex System CN4058 8-port 10Gb Converged Adapter
- IBM Flex System EN4132 2-port 10Gb RoCE Adapter
- IBM Flex System V7000 Storage Node
Changed information
The following updates were made to existing product information:
- Updated the configurations of IBM PureFlex System Express, Standard, and Enterprise
- Switch stacking feature of Ethernet switches
- FCoE and iSCSI support
Chapter 1. Introduction
During the last 100 years, information technology moved from a specialized tool to a pervasive influence on nearly every aspect of life. From tabulating machines that counted with mechanical switches or vacuum tubes to the first programmable computers, IBM has been a part of this growth. The goal has always been to help customers to solve problems. IT is a constant part of business and of general life. The expertise of IBM in delivering IT solutions has helped the planet become more efficient. As organizational leaders seek to extract more real value from their data, business processes, and other key investments, IT is moving to the strategic center of business.

To meet these business demands, IBM has introduced a new category of systems. These systems combine the flexibility of general-purpose systems, the elasticity of cloud computing, and the simplicity of an appliance that is tuned to the workload. Expert integrated systems are essentially the building blocks of capability. This new category of systems represents the collective knowledge of thousands of deployments, established guidelines, innovative thinking, IT leadership, and distilled expertise.

The offerings are designed to deliver value in the following ways:
- Built-in expertise helps you to address complex business and operational tasks automatically.
- Integration by design helps you to tune systems for optimal performance and efficiency.
- Simplified experience, from design to purchase to maintenance, creates efficiencies quickly.

These offerings are optimized for performance and virtualized for efficiency. These systems offer a no-compromise design with system-level upgradeability. The capability is built for cloud, containing built-in flexibility and simplicity.

IBM PureFlex System is an expert integrated system. It is an infrastructure system with built-in expertise that deeply integrates with the complex IT elements of an infrastructure. This chapter describes the IBM PureFlex System and the components that make up this compelling offering.
Component (values shown for the Express, Standard, and Enterprise configurations):
IBM Flex System Fabric EN4093 10Gb Scalable Switch
IBM Flex System FC3171 8Gb SAN Switch (a)
IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch (a)
IBM Flex System Manager Node: 1; 1; 1
IBM Flex System Manager software license: IBM Flex System Manager with 1-year service and support; IBM Flex System Manager Advanced with 3-year service and support; IBM Flex System Manager Advanced with 3-year service and support
Chassis Management Module: 2; 2; 2
Chassis power supplies (std/max): 2/6; 4/6; 6/6
Chassis 80 mm fan modules (std/max): 4/8; 6/8; 8/8
IBM Flex System V7000 Storage Node (b): Yes (redundant controller); Yes (redundant controller); Yes (redundant controller)
IBM Storwize V7000 Disk System (b): Yes (redundant controller); Yes (redundant controller); Yes (redundant controller)
IBM Storwize V7000 Software: Base with 1-year software maintenance agreement, Optional Real Time Compression; Base with 3-year software maintenance agreement, Real Time Compression; Base with 3-year software maintenance agreement, Real Time Compression

a. Select either the IBM Flex System FC3171 8Gb SAN Switch or the IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch module.
b. Select either the IBM Flex System V7000 Storage Node installed inside the Enterprise Chassis or the external IBM Storwize V7000 Disk System.
The fundamental building blocks of the three IBM PureFlex System solutions are the compute nodes, storage nodes, and networking of the IBM Flex System Enterprise Chassis.
1.2.1 Management
IBM Flex System Manager is designed to optimize the physical and virtual resources of the IBM Flex System infrastructure while simplifying and automating repetitive tasks. It provides easy system setup procedures with wizards and built-in expertise, and consolidated monitoring for all of your resources, including compute, storage, networking, virtualization, and energy. IBM Flex System Manager provides core management functionality along with automation. It is an ideal solution that allows you to reduce administrative expense and focus your efforts on business innovation.

A single user interface controls these features:
- Intelligent automation
- Resource pooling
- Improved resource utilization
- Complete management integration
- Simplified setup
1.2.3 Storage
The storage capabilities of IBM Flex System give you advanced functionality with storage nodes in your system, and take advantage of your existing storage infrastructure through advanced virtualization. Storage is available either within the chassis using the IBM Flex System V7000 Storage Node that integrates with the Flex System Chassis, or externally with the IBM Storwize V7000. IBM Flex System simplifies storage administration with a single user interface for all your storage. The management console is integrated with the comprehensive management system. These management and storage capabilities allow you to virtualize third-party storage with nondisruptive migration of your current storage infrastructure. You can also take advantage of intelligent tiering so you can balance performance and cost for your storage needs. The solution also supports local and remote replication, and snapshots for flexible business continuity and disaster recovery capabilities.
1.2.4 Networking
The range of available adapters and switches to support key network protocols allows you to configure IBM Flex System to fit in your infrastructure, without sacrificing readiness for the future. The networking resources in IBM Flex System are standards-based, flexible, and fully integrated into the system. This combination gives you no-compromise networking for your solution. Network resources are virtualized and managed by workload. These capabilities are automated and optimized to make your network more reliable and simpler to manage.

IBM Flex System gives you these key networking capabilities:
- Supports the networking infrastructure that you have today, including Ethernet, Fibre Channel, FCoE, and InfiniBand
- Offers industry-leading performance with 1 Gb, 10 Gb, and 40 Gb Ethernet, 8 Gb and 16 Gb Fibre Channel, and FDR InfiniBand
- Provides pay-as-you-grow scalability so you can add ports and bandwidth when needed
1.2.5 Infrastructure
The IBM Flex System Enterprise Chassis is the foundation of the offering, supporting intelligent workload deployment and management for maximum business agility. The 14-node, 10U chassis delivers high-performance connectivity for your integrated compute, storage, networking, and management resources. The chassis is designed to support multiple generations of technology, and offers independently scalable resource pools for higher utilization and lower cost per workload.
Beyond the physical world of inventory, configuration, and monitoring, IBM Flex System Manager enables virtualization and workload optimization for a new class of computing:
- Resource utilization: Detects congestion, notification policies, and relocation of physical and virtual machines that include storage and network configurations within the network fabric
- Resource pooling: Pooled network switching, with placement advisors that consider VM compatibility, processor, availability, and energy
- Intelligent automation: Automated and dynamic VM placement that is based on utilization, energy, hardware predictive failure alerts, and host failures

Figure 1-1 shows the IBM Flex System Manager.
The nodes are complemented with leadership I/O capabilities of up to 16 channels of high-speed I/O lanes per half-wide node and 32 lanes per full-wide node. Various I/O adapters are available.
Figure 1-4 IBM Flex System Fabric EN4093 10Gb Scalable Switch
Chapter 2. IBM PureFlex System
2.2.1 Chassis
Table 2-2 lists the major components of the IBM Flex System Enterprise Chassis, including the switches and options. Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
Table 2-2 Components of the chassis and switches (AAS feature code / XCC feature code: Description)
7893-92X / 8721-HC1: IBM Flex System Enterprise Chassis
3593 / A0TB: IBM Flex System Fabric EN4093 10Gb Scalable Switch
3282 / 5053: 10 GbE 850 nm Fiber SFP+ Transceiver (SR)
EB29 / 3268: IBM BNT SFP RJ45 Transceiver
3771 / A2RQ: IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
5370 / A2B9: Brocade 8Gb SFP+ optical Transceiver
3595 / A0TD: IBM Flex System FC3171 8Gb SAN Switch
3286 / 5075: IBM 8 GB SFP+ Short-Wave Optical Transceiver
9059 / A0UC: PSU 2500 W
3590 / A0UD: Additional PSU 2500 W
4558 / 6252: 2.5 m, 16A/100-240V, C19 to IEC 320-C20 power cord
Minimum quantity column values: 1, 2, 5, 1, 2, 1, 2, 2, 0, 2
Description 4.3m 16A/208V C19 to NEMA L6-20P (US) power cord Base Chassis Management Module Additional Chassis Management Module Base Fan Modules (four) Additional Fan Modules (two)
Minimum quantity 0 1 1 1 0
EB25
A1PJ
IBM Flex System p260 Compute Node (IBM POWER7 and POWER7+ based) Or a minimum of one of the following compute nodes through the XCC route: IBM Flex System x240 Compute Node (Intel Xeon based) IBM Flex System x220 Compute Node (Intel Xeon Based - XCC only) IBM Flex System x440 Compute Node (Intel Xeon Based - XCC only) Table 2-5 lists the major components of the IBM Flex System p260 Compute Node.
Table 2-5 Components of IBM Flex System p260 Compute Node POWER7 AAS feature code 7895-22x 1764 1762 Description IBM Flex System p260 Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Minimum quantity 1 1 1
Base processor 1 Required, select only one, minimum 1, maximum 1 EPR1 EPR3 EPR5 8 Cores, 2 x 4 core, 3.3 GHz + 2-socket system board 16 Cores, 2 x 8 core, 3.2 GHz + 2-socket system board 16 Cores, 2 x 8 core, 3.55 GHz + 2-socket system board 1
Memory - 8 GB per core minimum with all DIMM slots filled with same memory type EEMF EEME EEMD 64 GB (2x 32 GB), 1066 MHz, LP RDIMMs (1.35 V) 32 GB (2x 16 GB), 1066 MHz, LP RDIMMs (1.35 V) 16 GB (2x 8 GB), 1066 MHz, VLP RDIMMs (1.35 V)
Table 2-6 lists the major components of the IBM Flex System p260 Compute Node, model 23X.
Table 2-6 Components of IBM Flex System p260 Compute Node POWER7+ AAS Feature code 7895-23X 1764 1762 Description IBM Flex System p260 Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter 1 1 Minimum Quantity
Base processor 1 Required, select only one, minimum 1, maximum 1 EPRA EPRB EPRD 16 Cores, 2 x 8core, 4.116 GHz + 2-socket system board 16 Cores, 2 x 8 core, 3.612 GHz + 2-socket system board 8 Cores, 2 x 4 core, 4.088 GHz + 2-socket system board 1
Memory - 8 GB per core minimum with all DIMM slots filled with same memory type EEMF EEME 64 GB (2x 32 GB), 1066 MHz, LP RDIMMs (1.35 V) 32 GB (2x 16 GB), 1066 MHz, LP RDIMMs (1.35 V)
Description 16 GB (2x 8 GB), 1066 MHz, VLP RDIMMs (1.35 V) 8 GB (2x 4 GB), 1066 MHz, VLP RDIMMs (1.35 V)
Minimum Quantity
Table 2-7 lists the major components of the IBM Flex System p24L Compute Node.
Table 2-7 Components of IBM Flex System p24L Compute Node AAS feature code 1457-7FL 1764 1762 Description IBM Flex System p24L Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Minimum quantity 1 1 1
Base processor 1 Required, select only one, minimum 1, maximum 1 EPR7 EPR8 EPR9 12 cores, 2x 6core, 3.7 GHz + 2-socket system board 16 cores, 2x 8 core, 3.2 GHz + 2-socket system board 16 cores, 2x 8 core, 3.55 GHz + 2-socket system board 1
Memory - 2 GB per core minimum with all DIMM slots filled with same memory type EEMF EEME EEMD 8196 EM04 64 GB (2x 32 GB), 1066 MHz, LP RDIMMs (1.35 V) 32 GB (2x 16 GB), 1066 MHz, LP RDIMMs (1.35 V) 16 GB (2x 8 GB), 1066 MHz, VLP RDIMMs (1.35 V) 8 GB (2x 4 GB), 1066 MHz, DDR3, VLP RDIMMS(1.35V) 4 GB (2 x2 GB), 1066 MHz, DDR3 DRAM, RDIMM (1Rx8)
Table 2-8 lists the major components of the IBM Flex System x240 Compute Node.
Table 2-8 Components of IBM Flex System x240 Compute Node AAS feature code 7863-10X EN20 EN21 1764 1759 XCC feature code 8737AC1 A1BC A1BD A2N5 A1R1 Description IBM Flex System x240 Compute Node x240 with embedded 10 Gb Virtual Fabric x240 without embedded 10 Gb Virtual Fabric (select one of these base features) IBM Flex System FC3052 2-port 8Gb FC Adapter IBM Flex System CN4054 10Gb Virtual Fabric Adapter (select if x240 without embedded 10 Gb Virtual Fabric is selected - EN21/A1BD) IBM Flex System x240 USB Enablement Kit 2 GB USB Hypervisor Key (VMware 5.0) 1 Minimum quantity
1 1
EBK2 EBK3
49Y8119 41Y8300
Table 2-9 lists the major components of the IBM Flex System x220 Compute Node.
Table 2-9 Components of IBM Flex System x220 Compute Node AAS feature code 7906-25X A1VM A1VN A1R1 A1BM A1BP A33Q A3VC XCC feature code 7906AC1 A1VM A1VN A1R1 A1BM A1BP A33Q A2VC Description Minimum quantity 1
IBM Flex System x220 Compute Node IBM Flex System Compute Node with embedded 1Gb Ethernet IBM Flex System Compute Node (LOM-Less) (select one of these base features) IBM Flex System CN4054 10Gb Virtual Fabric Adapter IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System FC5022 2-port 16Gb FC Adapter ServeRAID C105 for IBM Flex System IBM USB key for VMware ESXi 5.0
1 1 1 1 1
Table 2-10 lists the major components of the IBM Flex System x440 Compute Node.
Table 2-10 Major components of IBM Flex System x440 Compute Node AAS feature code 7917-45X A2BC A2BD 1759 A1BM A1BP A2VC XCC feature code 7917-AC1 A2BC A2BD A1R1 A1BM A1BP A2VC Description IBM Flex System x440 Compute Node IBM Flex System Compute Node with embedded (this has 2 x LOM for full-wide 10Gb Virtual Fabric) IBM Flex System Compute Node (LOMless) (select one of these base features) IBM Flex System CN4054 10Gb Virtual Fabric Adapter 2x required if LOMless ordered IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System FC5022 2-port 16Gb FC Adapter IBM USB Memory key for VMware ESXi 5.0 Minimum quantity 1 1
2 1 1 1
Description Intel Xeon E5-2650 8C 2.0 GHz 20 MB 1600 MHz 95W 200 GB, 1.8", SATA MLC SSD 1TB 2.5 SATA 7.2K RPM hot-swap 6 Gbps HDD FSM Embedded 10Gb Virtual Fabric
Minimum quantity 1 2 1 1
a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMS are otherwise identical.
1 2 1 2
a. The default is two 200 GB or two 400 GB drives. These drives may be deselected. b. The number of drives that is selected depends on the number and type of nodes that are selected, if SmartCloud Entry is selected, and the number of PureFlex configurations. See Table 2-14.
Table 2-13 shows the components of the V7000 Storage Node, which is the default selection within AAS. A V7000 is mandatory in both the HVEC and AAS ordering routes. The V7000 Storage Node can also be expanded by using the V7000 Storage Node Expansion.
Table 2-13 Components of the V7000 Storage Node AAS feature code 4939-A49 AD41 AD43 XCC feature code 4939-X49 AD41 AD43 Description IBM Flex System V7000 Storage Node 200 GB 2.5-inch SSD or 400 GB 2.5-inch SSD Minimum quantity 1 2a
Description 300 GB 2.5 10K 300 GB 2.5 15 K 600 GB 2.5 10K 900 GB 2.5 10K 8Gb FC 4 Port Daughter Card
a. The default is two 200 GB or two 400 GB drives. These drives may be deselected.
b. The number of drives that is selected depends on the number and type of nodes that are selected, if SmartCloud Entry is selected, and the number of PureFlex configurations. See Table 2-14.
Table 2-14 Drive quantity (by type of configuration: 300 GB / 600 GB / 900 GB drives)
Power only nodes: 16 / 8 / 8
Intel nodes without SmartCloud Entry: 0 / 0 / 0
Intel nodes with SmartCloud Entry: 8 / 8 / 8
Power nodes, Intel nodes, and no SmartCloud Entry: 16 / 8 / 8
Power nodes, System x nodes, with SmartCloud Entry: 16 / 16 / 16
Power nodes, System x nodes, with SmartCloud Entry, and more than one PureFlex System: 24 / 16 / 16
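Because the drive counts in Table 2-14 are fixed for each configuration type, they can be expressed as a simple lookup. The following Python sketch is purely illustrative (the configuration keys and the drives_for helper are invented for this example, not part of any IBM tool); it encodes the table so that a configuration type maps to its 300 GB, 600 GB, and 900 GB drive quantities.

```python
# Illustrative only: Table 2-14 (PureFlex Express drive quantities) as a lookup.
# The configuration keys and this helper are hypothetical, not an IBM-provided API.
DRIVE_QUANTITIES = {
    # configuration type: (300 GB drives, 600 GB drives, 900 GB drives)
    "power_only": (16, 8, 8),
    "intel_without_smartcloud": (0, 0, 0),
    "intel_with_smartcloud": (8, 8, 8),
    "power_and_intel_without_smartcloud": (16, 8, 8),
    "power_and_intel_with_smartcloud": (16, 16, 16),
    "power_and_intel_with_smartcloud_multi_pureflex": (24, 16, 16),
}

def drives_for(config: str) -> dict:
    """Return the Table 2-14 drive counts for a named configuration type."""
    gb300, gb600, gb900 = DRIVE_QUANTITIES[config]
    return {"300 GB": gb300, "600 GB": gb600, "900 GB": gb900}

if __name__ == "__main__":
    # Example: Power nodes plus System x nodes with SmartCloud Entry
    print(drives_for("power_and_intel_with_smartcloud"))
    # -> {'300 GB': 16, '600 GB': 16, '900 GB': 16}
```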
a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2, except for the 16A PDU, which is quantity = 4. The selection depends on the customer's country and utility power requirements.
2.2.8 Software
This section lists the software features of IBM PureFlex System Express.
5765-PVS PowerVM Standard 5771-PVS 1-year SWMA 5765-PSE PowerSC Standard 5660-PSE 1-year SWMA Not applicable Not applicable
Optional components - Express Expansion IBM Storwize V7000 Software IBM Flex System V7000 Software 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 RM (Remote Mirroring) 5639-CP1 V7000 Real Time Compression 5639-EX1 - Flex System V7000 External Virtualization software 5639-EXA - 1yr External Virtualization SWMA 5639-RE1 - Flex System V7000 Remote Mirroring 5639-REA - 1yr RM SWMA 5639-CM1 - Flex System V7000 RTC 5639-CMA - 1 yr - RTC SWMA 5765-FMS IBM Flex System Manager Advanced 5765-AEZ AIX V6 Enterprise 5765-G99 AIX V7 Enterprise IBM i V6.1 IBM i V7.1
AIX V6 Security (PowerSC) Cloud Software (optional) Not applicable 5765-SCP SmartCloud Entry 5660-SCP 1-year SWMA Requires upgrade to 5765-FMS IBM Flex System Manager Advanced
AIX V7 Not applicable 5765-SCP SmartCloud Entry 5660-SCP 1-year SWMA Requires upgrade to 5765-FMS IBM Flex System Manager Advanced
Optional components - Express Expansion IBM Storwize V7000 Software IBM Flex System V7000 Software 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring 5639-CP1 - V7000 RTC 5639-EX1 - Flex System V7000 EV 5639-EXA - 1yr EV SWMA 5639-RE1 - Flex System V7000 RM 5639-REA - 1yr RM SWMA 5639-CM1 - Flex System V7000 RTC 5639-CMA - 1 yr - RTC SWMA 5765-FMS IBM Flex System Manager Advanced 5765-PVE PowerVM Enterprise
Optional components - Express Expansion IBM Storwize V7000 Software IBM Flex System V7000 Software 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring 5639-EX1 - Flex System V7000 EV 5639-EXA - 1yr EV SWMA 5639-RE1 - Flex System V7000 RM 5639-REA - 1yr RM SWMA 5639-CM1 - Flex System V7000 RTC 5639-CMA - 1 yr - RTC SWMA 5765-FMS IBM Flex System Manager Advanced 5639-OSX RHEL for x86 5639-W28 Windows 2008 R2 5639-CAL Windows 2008 Client Access 00D7993 - Flex System V7000 EV 00D7993 - 1yr EV SWMA 00D7988 - Flex System V7000 RM 00D7988 - 1yr RM SWMA 00D7983 - Flex System V7000 RTC 00D7983 - 1 year - RTC SWMA 94Y9783 IBM Flex System Manager Advanced 5731RSI RHEL for x86 - L3 support only 5731RSR RHEL for x86 - L1-L3 support 5731W28 Windows 2008 R2 5731CAL Windows 2008 Client Access
VMware ESXi selectable in the hardware configuration 5765-SCP SmartCloud Entry 5660-SCP 1-year SWMA 5641-SC1 SmartCloud Entry with 1-year SWMA
2.2.9 Services
IBM PureFlex System Express includes the following services: Service and Support offerings: Software Maintenance: 1 year of 9x5 (9 hours per day, 5 days per week) Hardware Maintenance: 3 years of 9x5 Next Business Day service
Technical Support Services Essential minimum service level offering for every IBM PureFlex System Express configuration: Three years with one microcode analysis per year Optional TSS offerings for IBM PureFlex System Express: Three years of Warranty Service upgrade to 24x7x4 service Three years of SWMA on applicable products Three years of Software Support on Windows Server / Linux and VMware environments. Three years of Enhanced Technical Support
Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
2.3.1 Chassis
Table 2-20 lists the major components of the IBM Flex System Enterprise Chassis including the switches and options.
Table 2-20 Components of the chassis and switches AAS feature code 7893-92X XCC feature code 8721-HC1 Description IBM Flex System Enterprise Chassis Minimum quantity 1
AAS feature code 3593 3282 EB29 3771 5370 3595 3286 3590 4558 4560 9039 3592 9038 7805
XCC feature code A0TB 5053 3268 A2RQ A2B9 A0TD 5075 A0UD 6252 6275 A0TM A0UE None A0UA
Description IBM Flex System Fabric EN4093 10Gb Scalable Switch 10 GbE 850 nm Fiber SFP+ Transceiver (SR) IBM BNT SFP RJ45 Transceiver IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch Brocade 8Gb SFP+ Transceiver IBM Flex System FC3171 8Gb SAN Switch IBM 8 GB SFP+ Short-Wave Optical Transceiver Additional PSU 2500W 2.5 m, 16A/100-240V, C19 to IEC 320-C20 power cord 4.3m 16A/208V C19 to NEMA L6-20P (US) power cord Base Chassis Management Module Additional Chassis Management Module Base Fan Modules (four) Additional Fan Modules (two)
Minimum quantity 1 4 5 1 4 2 4 2 4 1 1 1 1 1
EB25
A1PJ
Base processor 1 Required, select only one, minimum 1, maximum 1 EPR2 EPR4 EPR6 16 Cores, (4 x 4 core), 3.3 GHz + 4-socket system board 32 Cores, (4 x 8 core), 3.2 GHz + 4-socket system board 32 Cores, (4 x 8 core), 3.55 GHz + 4-socket system board 1
Memory - 8 GB per core minimum with all DIMM slots filled with same memory type 8145 8199 32 GB (2 x 16 GB), 1066 MHz, LP RDIMMs (1.35V) 16 GB (2 x 8 GB), 1066 MHz, VLP RDIMMs (1.35V)
Table 2-24 lists the major components of the IBM Flex System p24L Compute Node.
Table 2-24 Components of the IBM Flex System p24L Compute Node AAS feature code 1457-7FL 1764 1762 EPR8 EPR9 EPR7 Description IBM Flex System p24L Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Processor 16 W 3.220 GHz, 8 core (2x) + 2S system board Processor 16 W 3.556 GHz, 8 core (2x) + 2S system board Processor 12 W 3.72 GHz, 6 core (2x) + 2S system board Minimum quantity 1 1 1
Memory - 2 GB per core minimum with all DIMM slots filled with the same memory type EEMF EEME EEMD 8196 EM04 64 GB (2 x 32 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 4Rx8 32 GB (2 x 16 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 2Rx8) 16 GB (2 x 8 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (VLP RDIMM, 2Rx8) 8 GB (2 x 4 GB), 1066 MHz, DDR3, VLP RDIMMS (1.35 V) 4 GB (2 x 2 GB), 1066 MHz, DDR3 DRAM, (RDIMM, 1Rx8)
Table 2-25 lists the major components of the IBM Flex System p260 Compute Node for POWER7 configurations.
Table 2-25 Components of IBM Flex System p260 Compute Node AAS feature code 7895-22X 1764 1762 EPR1 EPR3 EPR5 Description IBM Flex System p260 Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Base processor 8 W 3.304 GHz, 4 core (2x) + 2S system board Base processor 16 W 3.220 GHz, 8 core (2x) + 2S system board Base processor 16 W 3.556 GHz, 8 core (2x) + 2S system board Minimum quantity 1 1 1 1 1 1
Memory - 8 GB per core minimum with all DIMM slots filled with the same memory type EEMF EEME EEMD 64 GB (2 x 32 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 4Rx8 32 GB (2 x 16 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 2Rx8) 16 GB (2 x 8 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (VLP RDIMM, 2Rx8)
Table 2-26 lists the major components of the IBM Flex System p260 Compute Node POWER7+.
Table 2-26 Components of IBM Flex System p260 Compute Node POWER7+ AAS feature code 7895-23X 1764 1762 EPRA EPRB EPRD Description IBM Flex System p260 Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter 16 W 4.116 GHz 8 core (2x) + 2S system board 16 W 3.612 GHz 8 core (2x) + 2S system board 8 W 4.088 GHz 4 core (2x)+ 2S system board Minimum quantity 1 1 1 1 1 1
Memory - 8 GB per core minimum with all DIMM slots filled with the same memory type EEMF EEME EEMD 8196 64GB (2 x 32 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 4Rx8 32 GB (2 x 16 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 2Rx8) 16 GB (2 x 8 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (VLP RDIMM, 2Rx8 8 GB (2 x 4 GB), 1066 MHz, DDR3, VLP RDIMMS (1.35 V)
Table 2-27 lists the major components of the IBM Flex System x220 Compute Node.
Table 2-27 Components of IBM Flex System x220 Compute Node AAS feature code 7906-25X A1VM A1VN A1R1 A1BM A1BP A33Q A3VC XCC feature code 7906AC1 A1VM A1VN A1R1 A1BM A1BP A33Q A2VC Description Minimum quantity 1
IBM Flex System x220 Compute Node IBM Flex System Compute Node with embedded 1Gb Ethernet IBM Flex System Compute Node (LOM-Less) (select one of these base features) IBM Flex System CN4054 10Gb Virtual Fabric Adapter IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System FC5022 2-port 16Gb FC Adapter ServeRAID C105 for IBM Flex System IBM USB key for VMware ESXi 5.0
1 1 1 1 1
Table 2-28 lists the major components of the IBM Flex System x240 Compute Node.
Table 2-28 Components of IBM Flex System x240 Compute Node AAS feature code 7863-10X EN20 EN21 1764 1759 XCC feature code 8737AC1 A1BC A1BD A2N5 A1R1 Description IBM Flex System x240 Compute Node x240 with embedded 10 Gb Virtual Fabric x240 without embedded 10 Gb Virtual Fabric (select one of these base features) IBM Flex System FC3052 2-port 8Gb FC Adapter IBM Flex System CN4054 10Gb Virtual Fabric Adapter (select if x240 without embedded 10 Gb Virtual Fabric is selected: EN21/A1BD) IBM Flex System x240 USB Enablement Kit 2 GB USB Hypervisor Key (VMware 5.0) 1 Minimum quantity
1 1
EBK2 EBK3
49Y8119 41Y8300
Table 2-29 lists the major components of the IBM Flex System x440 Compute Node.
Table 2-29 Major components of IBM Flex System x440 Compute Node AAS feature code 7917-45X A2BC A2BD 1759 A1BM A1BP A2VC XCC feature code 7917-AC1 A2BC A2BD A1R1 A1BM A1BP A2VC Description IBM Flex System x440 Compute Node IBM Flex System Compute Node with embedded (This has 2 x LOM for full-wide 10Gb Virtual Fabric) IBM Flex System Compute Node (LOMless) (Select one of these base features) IBM Flex System CN4054 10Gb Virtual Fabric Adapter (two are required if LOMless is ordered) IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System FC5022 2-port 16Gb FC Adapter IBM USB Memory key for VMware ESXi 5.0 Minimum quantity 1 1
2 1 1 1
Description 200 GB, 1.8", SATA MLC SSD 1TB 2.5 SATA 7.2K RPM hot-swap 6 Gbps HDD
Minimum quantity 2 1
a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMS are otherwise identical.
2.3.6 IBM Flex System V7000 Storage Node and IBM Storwize V7000
IBM Flex System V7000 Storage Node is selected by default in the PureFlex Standard configuration. The node is installed within the Flex System chassis. It may be deselected only in AAS, in which case the IBM Storwize V7000 must be selected in its place. Table 2-31 lists the major components of the IBM Flex System V7000 Storage Node.
Table 2-31 Components of the V7000 Storage Node AAS feature code 4939-A49 AD41 AD43 AD21 AD32 AD23 AD24 ADB2 XCC feature code 4939-X49 Ad41 AD43 AD21 AD32 AD23 AD24 ADB2 Description IBM Flex System V7000 Storage Node 200 GB 2.5-inch SSD or 400 GB 2.5-inch SSD 300 GB 2.5 10K 300 GB 2.5 15 K 600 GB 2.5 10K 900 GB 2.5 10K 8Gb FC 4 Port Daughter Card Minimum quantity 1 2a 0 - 24b
a. The default is two 200 GB or two 400 GB drives. These drives may be deselected. b. The number of drives that are selected depends on the number and type of nodes that are selected, if SmartCloud Entry is selected, and the number of PureFlex configurations. See Table 2-14.
It is possible to add up to nine V7000 Expansion Enclosures per IBM Storwize V7000 controller for a maximum of 18. Table 2-32 lists the major components of the IBM Storwize V7000 Expansion Enclosure.
Table 2-32 V7000 Storage Expansion AAS feature code 2076-224 EFD7 5406 5401 9730 XCC feature code None None None None Description Minimum quantity
IBM Storwize V7000 Expansion Standard Expansion Indicator SAS Cable 6M SAS Cable 1M Power cord to PDU (quantity 2) 2 2 2 1
Table 2-33 lists the major components of the IBM Storwize V7000 storage server, which is externally connected to a chassis.
Table 2-33 Components of the IBM Storwize V7000 storage server AAS feature code 2076-124 5305 3512 3514 3543 3253 3546 3549 0010 6008 9730 9801 XCC feature code 2076-124 5305 None None None None None None 0010 6008 9730 9801 Description IBM Storwize V7000 Controller 5m Fiber-optic Cable 200 GB 2.5-inch SSD or 400 GB 2.5-inch SSD 300 GB 2.5 10K 300 GB 2.5 15 K 600 GB 2.5 10K 900 GB 2.5 10K Storwize V7000 Software Preload 8 GB Cache Power cord to PDU (includes two power cords) Power supplies Minimum quantity 1 2 2a 0-24b
1 2 1 2
a. A minimum quantity of two, either 3512 or 3514.
b. The number of drives that is selected depends on the number and type of nodes that are selected, if SmartCloud Entry is selected, and the number of PureFlex configurations. For more details, see Table 2-34.
Table 2-34 Drive configuration table (by type of configuration: 300 GB / 600 GB / 900 GB drives)
Power only nodes: 16 / 8 / 8
Intel nodes with no SmartCloud Entry: 0 / 0 / 0
Intel nodes with SmartCloud Entry: 8 / 8 / 8
Power nodes, Intel nodes, and no SmartCloud Entry: 16 / 8 / 8
Power nodes, System x nodes, SmartCloud Entry: 16 / 16 / 16
Power nodes, System x nodes, SmartCloud Entry, and more than one PureFlex System: 24 / 16 / 16
Table 2-35 lists the options that you use when you configure an IBM Flex System V7000 Storage Node Expansion Enclosure that is installed internally within a chassis. This enclosure connects to the existing controller through the SAS cables, which are connected from the front of the V7000 Storage Node.
Table 2-35 IBM Flex System V7000 Storage Node Expansion AAS feature code 4939-A29 EFD2 ADA1 5406 XCC feature code 4939-X29 EFD2 ADA1 None Description Minimum quantity 1 1 2 2
IBM Flex System V7000 Storage Node Expansion Enclosure Standard Expansion Indicator 0.3 m SAS Cable 6m SAS Cable
a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2 except for the 16A PDU, which is quantity = 4. The selection depends on your country and utility power requirements.
2.3.8 Software
This section lists the software features of IBM PureFlex System Standard.
5765-PVE PowerVM Enterprise 5773-PVE 3-year SWMA 5765-PSE PowerSC Standard 5662-PSE 3-year SWMA 5765-SCP SmartCloud Entry 5662-SCP 3-year SWMA 5765-SCP SmartCloud Entry 5662-SCP 3-year SWMA Not applicable Not applicable Not applicable Not applicable
Optional components - Standard Expansion IBM Storwize V7000 Software IBM Flex System Manager Operating system Virtualization Security (PowerSC) Cloud Software (optional) 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring Not applicable 5765-AEZ AIX V6 Enterprise 5765-G99 AIX V7 Enterprise
5765-PVE PowerVM Enterprise Not applicable Not applicable Not applicable Not applicable Not applicable Not applicable Not applicable Not applicable
Optional components - Standard Expansion IBM Storwize V7000 Software IBM Flex System Manager Virtualization 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring Not applicable Not applicable
VMware ESXi selectable in the hardware configuration 5765-SCP SmartCloud Entry 5662-SCP 3 yr SWMA 5641-SC3 SmartCloud Entry, 3 yr SWMA
Optional components - Standard Expansion IBM Storwize V7000 Software 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring
Intel Xeon based compute nodes (AAS) IBM Flex System Manager Operating system 5765-FMS IBM Flex System Manager Advanced 5639-OSX RHEL for x86 5639-W28 Windows 2008 R2 5639-CAL Windows 2008 Client Access
Intel Xeon based compute nodes (HVEC) 94Y9783 IBM Flex System Manager Advanced 5731RSI RHEL for x86 - L3 support only 5731RSR RHEL for x86 - L1-L3 support 5731W28 Windows 2008 R2 5731CAL Windows 2008 Client Access
VMware ESXi selectable in the hardware configuration Not applicable Not applicable
2.3.9 Services
IBM PureFlex System Standard includes the following services: Service & Support offerings: Software Maintenance: 1 year of 9x5 (9 hours per day, 5 days per week) Hardware Maintenance: 3 years of 9x5 Next Business Day service Technical Support Services Essential minimum service level offering for every IBM PureFlex System Standard configuration: Three years with one microcode analysis per year Three years of Warranty Service upgrade to 24x7x4 service Three years of Account Advocate or Enhanced Technical Support (9x5) and software support prerequisites.
Remember: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
2.4.1 Chassis
Table 2-41 lists the major components of the IBM Flex System Enterprise Chassis including the switches and options.
Table 2-41 Components of the chassis and switches AAS feature code 7893-92X 3593 3596 3597 3771 5370 3282 EB29 3595 3286 3590 4558 4560 9039 3592 9038 7805 XCC feature code 8721-HC1 A0TB A1EL A1EM A2RQ A2B9 5053 3268 A0TD 5075 A0UD 6252 6275 A0TM A0UE None A0UA Description IBM Flex System Enterprise Chassis IBM Flex System Fabric EN4093 10Gb Scalable Switch IBM Flex System Fabric EN4093 10Gb Scalable Switch Upgrade 1 IBM Flex System Fabric EN4093 10Gb Scalable Switch Upgrade 2 IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch 8Gb SFP+ Optical Transceiver 10 GbE 850 nm Fiber SFP+ Transceiver (SR) IBM BNT SFP RJ45 Transceiver IBM Flex System FC3171 8Gb SAN Switch IBM 8 GB SFP+ Short-Wave Optical Transceiver Additional PSU 2500 W 2.5 m, 16A/100-240V, C19 to IEC 320-C20 power cord 4.3m 16A/208V C19 to NEMA L6-20P (US) power cord Base Chassis Management Module Additional Chassis Management Module Base Fan Modules (four) Additional Fan Modules (two) Minimum quantity 1 2 2 2 2 8 4 6 2 8 4 6 1 1 1 1 2
3m IBM QSFP+ DAC Break Out Cable 1.5m CAT5E Blue Ethernet Cable 1m QSFP+ to QSFP+ Cable 3m Passive DAC SFP+ Cable 1.5m CAT5E Blue Ethernet Cable
a. For IBM Power Systems configurations, two are required. For System x configurations, two are required when two or more Enterprise Chassis are configured.
Table 2-44 lists the major components of the IBM Flex System p460 Compute Node.
Table 2-44 Components of IBM Flex System p460 Compute Node AAS feature code 7895-42x 1764 1762 Description IBM Flex System p460 Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Minimum quantity 1 2 2
Base processor 1 Required, select only one, minimum 1, maximum 1 EPR2 EPR4 EPR6 16 cores, (4 x 4 core), 3.3 GHz + 4-socket system board 32 cores, (4 x 8 core), 3.2 GHz + 4-socket system board 32 cores, (4 x 8 core), 3.55 GHz + 4-socket system board 1
Memory - 8 GB per core minimum with all DIMM slots filled with the same memory type 8145 8199 32 GB (2 x 16 GB), 1066 MHz, LP RDIMMs (1.35 V) 16 GB (2 x 8 GB), 1066 MHz, VLP RDIMMs (1.35 V)
Table 2-45 lists the major components of the IBM Flex System p24L Compute Node.
Table 2-45 Components of the IBM Flex System p24L Compute Node AAS feature code 1457-7FL 1764 1762 EPR8 EPR9 EPR7 Description IBM Flex System p24L Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Processor 16 W 3.220 GHz, 8 core (2X) + 2S system board Processor 16 W 3.556 GHz, 8 core (2X) + 2S system board Processor 12 W 3.72 GHz, 6 core (2X) + 2S system board Minimum quantity 1 1 1
Memory - 2 GB per core minimum with all DIMM slots filled with the same memory type EEMF EEME EEMD 8196 EM04 64 GB (2 x 32 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 4RX8 32 GB (2 x 16 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 2Rx8) 16 GB (2 x 8 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (VLP RDIMM, 2Rx8) 8 GB (2 x 4 GB), 1066 MHz, DDR3, VLP RDIMMS (1.35 V) 4 GB (2 x 2 GB), 1066 MHz, DDR3 DRAM, (RDIMM, 1Rx8)
Table 2-46 lists the major components of the IBM Flex System p260 Compute Node for both POWER7 configurations.
Table 2-46 Components of IBM Flex System p260 Compute Node AAS feature code 7895-22X 1764 1762 EPR1 EPR3 EPR5 Description IBM Flex System p260 Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Base processor 8 W 3.304 GHz, 4 core (2X) + 2S system board Base processor 16 W 3.220 GHz, 8 core (2X) + 2S system board Base processor 16 W 3.556 GHz, 8 core (2X) + 2S system board Minimum quantity 1 1 1 1 1 1
Memory - 8 GB per core minimum with all DIMM slots filled with the same memory type EEMF EEME EEMD 64 GB (2 x 32 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 4Rx8 32 GB (2 x 16 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 2Rx8) 16 GB (2 x 8GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (VLP RDIMM, 2Rx8)
Table 2-47 lists the major components of the IBM Flex System p260 Compute Node for POWER7+.
Table 2-47 Components of IBM Flex System p260 Compute Node for POWER7+ AAS feature code 7895-23X 1764 1762 EPRA EPRB EPRD Description IBM Flex System p260 Compute Node IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System EN4054 4-port 10Gb Ethernet Adapter 16 W 4.116 GHz 8 core (2X) + 2S system board 16 W 3.612 GHz 8 core (2X) + 2S system board 8 W 4.088 GHz 4 core (2X)+ 2S system board Minimum quantity 1 1 1 1 1 1
Memory - 8 GB per core minimum with all DIMM slots filled with the same memory type EEMF EEME EEMD 8196 64 GB (2 x 32 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 4Rx8 32 GB (2 x 16 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (LP RDIMM, 2Rx8) 16 GB (2 x 8 GB), DIMMS (1.35 V), 1066 MHz, 4 Gb DDR3 DRAM (VLP RDIMM, 2Rx8 8 GB (2 x 4 GB), 1066 MHz, DDR3, VLP RDIMMS (1.35 V)
Table 2-48 lists the major components of the IBM Flex System x220 Compute Node.
Table 2-48 Components of IBM Flex System x220 Compute Node AAS feature code 7906-25X A1VM A1VN A1R1 A1BM A1BP A33Q A3VC XCC feature code 7906AC1 A1VM A1VN A1R1 A1BM A1BP A33Q A2VC Description Minimum quantity 1
IBM Flex System x220 Compute Node IBM Flex System Compute Node with embedded 1Gb Ethernet IBM Flex System Compute Node (LOM-Less) (select one of these base features) IBM Flex System CN4054 10Gb Virtual Fabric Adapter IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System FC5022 2-port 16Gb FC Adapter ServeRAID C105 for IBM Flex System IBM USB key for VMware ESXi 5.0
1 1 1 1 1
Table 2-49 lists the major components of the IBM Flex System x240 Compute Node.
Table 2-49 Components of IBM Flex System x240 Compute Node AAS feature code 7863-10X EN20 EN21 1764 1759 XCC feature code 8737AC1 A1BC A1BD A2N5 A1R1 Description IBM Flex System x240 Compute Node x240 with embedded 10 Gb Virtual Fabric x240 without embedded 10 Gb Virtual Fabric (select one of these base features) IBM Flex System FC3052 2-port 8Gb FC Adapter IBM Flex System CN4054 10Gb Virtual Fabric Adapter (select if x240 without embedded 10 Gb Virtual Fabric is selected: EN21/A1BD) IBM Flex System x240 USB Enablement Kit 2 GB USB Hypervisor Key (VMware 5.0) 1 Minimum quantity
1 1
EBK2 EBK3
49Y8119 41Y8300
Table 2-50 lists the major components of the IBM Flex System x440 Compute Node.
Table 2-50 Major components of IBM Flex System x440 Compute Node AAS feature code 7917-45X A2BC A2BD 1759 XCC feature code 7917-AC1 A2BC A2BD A1R1 Description IBM Flex System x440 Compute Node IBM Flex System Compute Node with embedded (Has 2 x LOM for full-wide 10Gb Virtual Fabric) IBM Flex System Compute Node (LOMless) (select one of these base features) IBM Flex System CN4054 10Gb Virtual Fabric Adapter 2x required if LOMless is ordered Minimum quantity 1 1
Description IBM Flex System FC3172 2-port 8Gb FC Adapter IBM Flex System FC5022 2-port 16Gb FC Adapter IBM USB Memory key for VMware ESXi 5.0
Minimum quantity 1 1 1
a. In the AAS system, FC EM09 is pairs of DIMMs. In the XCC system, FC 8941 is single DIMMs. The DIMMS are otherwise identical.
2.4.6 IBM Flex System V7000 Storage Node and IBM Storwize V7000
IBM Flex System V7000 Storage Node is selected by default in the PureFlex Enterprise configuration, and the node is installed within the Flex System chassis. It may be deselected only in AAS, in which case, the IBM Storwize V7000 must be selected in its place.
a. The default is two 200 GB or two 400 GB Drives. These drives may be deselected. b. The number of drives that is selected depends on the number and type of nodes that are selected, if SmartCloud Entry is selected, and the number of PureFlex configurations. See Table 2-14.
It is possible to add up to nine V7000 Expansion Enclosures per IBM Storwize V7000 controller. Table 2-53 lists the major components of the IBM Storwize V7000 Expansion Enclosure.
Table 2-53 V7000 Storage Expansion AAS feature code 2076-224 EFD8 5406 5401 9730 XCC feature code None None None None Description Minimum quantity
IBM Storwize V7000 Expansion Enterprise Expansion Indicator SAS Cable 6M SAS Cable 1M Power cord to PDU (quantity 2) 2 2 2 1
Table 2-54 lists the major components of the IBM Storwize V7000 storage server, externally connected to a chassis. The server can be expanded by using V7000 Storage Expansion Trays.
Table 2-54 Components of the IBM Storwize V7000 storage server AAS feature code 2076-124 5305 3512 3514 XCC feature code 2076-124 5305 None None Description IBM Storwize V7000 Controller 5m Fiber-optic Cable 200 GB 2.5-inch SSD or 400 GB 2.5-inch SSD Minimum quantity 1 2 2a
AAS feature code 3543 3253 3546 3549 0010 6008 9730 9801
XCC feature code None None None None 0010 6008 9730 9801
Description 300 GB 2.5 10K 300 GB 2.5 15 K 600 GB 2.5 10K 900 GB 2.5 10K Storwize V7000 Software Preload 8 GB Cache Power cord to PDU (includes two power cords) Power supplies
1 2 1 2
a. You must have a minimum quantity of two, either 3512 or 3514.
b. The number of drives that is selected depends on the number and type of nodes that are selected, if SmartCloud Entry is selected, and the number of PureFlex configurations. For an explanation of the quantities, see Table 2-55.
Table 2-55 Drive configuration table (by type of configuration: 300 GB / 600 GB / 900 GB drives)
Power only nodes: 16 / 8 / 8
Intel nodes with no SmartCloud Entry: 0 / 0 / 0
Intel nodes with SmartCloud Entry: 8 / 8 / 8
Power nodes and Intel nodes with no SmartCloud Entry: 16 / 8 / 8
Power nodes and System x nodes with SmartCloud Entry: 16 / 16 / 16
Power nodes and System x nodes with SmartCloud Entry, and more than one PureFlex: 24 / 16 / 16
Table 2-56 lists the options when you configure an IBM Flex System V7000 Storage Node Expansion Enclosure to be installed internally within a chassis. This enclosure connects to the existing controller through the SAS cables that are connected to the front of the V7000 Storage Node.
Table 2-56 IBM Flex System V7000 Storage Node Expansion AAS feature code 4939-A29 EFD8 ADA1 5406 XCC feature code 4939-X29 EFD8 ADA1 None Description Minimum quantity 1 1 2 2
IBM Flex System V7000 Storage Node Expansion Enclosure Enterprise Expansion Indicator 0.3M SAS Cable 6M SAS Cable
a. Select one PDU line item from this list. They are mutually exclusive. Most are quantity = 2 except for the 16A PDU which is quantity = 4. The selection depends on your country and utility power requirements.
2.4.8 Software
This section lists the software features of IBM PureFlex System Enterprise.
Virtualization
AIX V7
5765-PSE PowerSC Standard 5662-PSE 3-year SWMA 5765-SCP SmartCloud Entry 5662-SCP 3-year SWMA 5765-SCP SmartCloud Entry 5662-SCP 3-year SWMA
Optional components - Enterprise Expansion IBM Storwize V7000 Software IBM Flex System V7000 Software 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring 5639-EX1 - Flex System V7000 EV 5639-EXC - 3yr EV SWMA 5639-RE1 - Flex System V7000 RM 5639-REC - 3yr RM SWMA Not applicable 5765-AEZ AIX V6 Enterprise 5765-G99 AIX V7 Enterprise
IBM Flex System Manager Operating system Virtualization Security (PowerSC) Cloud Software (optional)
5765-PVE PowerVM Enterprise Not applicable Not applicable Not applicable Not applicable Not applicable Not applicable Not applicable Not applicable
IBM Flex System Manager Operating system Virtualization Cloud Software (optional)
Red Hat Enterprise Linux (RHEL) Optional components - Enterprise Expansion IBM Storwize V7000 Software IBM Flex System V7000 Software
5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring 5639-EX1 - Flex System V7000 EV 5639-EXC - 3yr EV SWMA 5639-RE1 - Flex System V7000 RM 5639-REC - 3yr RM SWMA Not applicable Not applicable
IBM Flex System Manager Operating system Virtualization Cloud Software (optional)
VMware ESXi selectable in the hardware configuration 5765-SCP SmartCloud Entry 5662-SCP 3 yr SWMA 5641-SC3 SmartCloud Entry, 3 yr SWMA
Optional components - Enterprise Expansion IBM Storwize V7000 Software IBM Flex System V7000 Software 5639-EV1 V7000 External Virtualization software 5639-RM1 V7000 Remote Mirroring 5639-EX1 - Flex System V7000 EV 5639-EXC - 3yr EV SWMA 5639-RE1 - Flex System V7000 RM 5639-REC - 3yr RM SWMA 5765-FMS IBM Flex System Manager Advanced 5639-OSX RHEL for x86 5639-W28 Windows 2008 R2 5639-CAL Windows 2008 Client Access 00D7995 - Flex System V7000 EV 00D7995 - 3yr EV SWMA 00D7990 - Flex System V7000 RM 00D7990 - 3yr RM SWMA 94Y9783 IBM Flex System Manager Advanced 5731RSI RHEL for x86 - L3 support only 5731RSR RHEL for x86 - L1-L3 support 5731W28 Windows 2008 R2 5731CAL Windows 2008 Client Access
VMware ESXi selectable in the hardware configuration Not applicable Not applicable
2.4.9 Services
IBM PureFlex System Enterprise includes the following services: Service & Support offerings: Software Maintenance: 1 year of 9x5 (9 hours per day, 5 days per week) Hardware maintenance: 3 years of 9x5 Next Business Day service
Technical Support Services Essential minimum service level offering for every IBM PureFlex System Enterprise configuration: Three years with two microcode analyses per year Three years of Warranty Service upgrade to 24x7x4 service Three years of Account Advocate or Enhanced Technical Support (24x7) and software support prerequisites.
As shown in Table 2-61, the four main offerings are cumulative, for example, Enterprise takes seven days in total and includes the scope of the Virtualized and Intro services offerings. PureFlex Extra Chassis is per chassis.
Table 2-61 PureFlex Service offerings Function delivered PureFlex Intro 3 days Included PureFlex Virtualized 5 days Included PureFlex Enterprise 7 days Included PureFlex Cloud 10 days Included PureFlex Extra Chassis Add-on 5 days No add-on
One node and one switch configured FSM configuration Discovery, inventory, and ESA setup Review internal storage configuration Skills transfer Basic virtualization (VMware, KVM, and VMControl) Up to four nodes and two switches Advanced virtualization Server pools or VMware cluster configured (VMware or VMControl) Configure SmartCloud Entry Basic External network integration First chassis is configured with 13 nodes
Not included
Included
Included
Included
Configure up to 14 nodes within one chassis Up to two virtualization engines (ESXi, KVM, or PowerVM) Configure up to 14 nodes within one chassis Up to two virtualization engines (ESXi, KVM, or PowerVM) Configure up to 14 nodes within one chassis Up to two virtualization engines (ESXi, KVM, or PowerVM)
Not included
Not included
Included
Included
Not included
Not included
Not included
Included
These services offerings are included by default, but are not mandatory with PureFlex configurations. Any of these services offerings can be added to any of the PureFlex Systems (Express, Standard, or Enterprise) offerings. The five new offerings (PureFlex Intro, PureFlex Virtualized, PureFlex Enterprise, PureFlex Cloud, and PureFlex Extra-Chassis Add-on) replace the existing 3 Day Express, 5 Day Standard, and 7 Day Enterprise offerings. The offerings are optional and can be performed by qualified IBM Business Partners, in addition to the IBM Services Teams.
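To illustrate how the cumulative offerings and the per-chassis add-on combine, the short Python sketch below tallies total service days. The day counts come from Table 2-61; the offering names used as keys and the helper function are invented for this example and are not an IBM tool.

```python
# Illustrative only: rough tally of PureFlex services days, based on Table 2-61.
OFFERING_DAYS = {
    "PureFlex Intro": 3,
    "PureFlex Virtualized": 5,   # cumulative: includes the Intro scope
    "PureFlex Enterprise": 7,    # cumulative: includes Virtualized and Intro scope
    "PureFlex Cloud": 10,        # cumulative: includes the Enterprise scope
}
EXTRA_CHASSIS_DAYS = 5           # PureFlex Extra Chassis Add-on, per additional chassis

def total_service_days(offering: str, extra_chassis: int = 0) -> int:
    """Days for one cumulative offering plus any Extra Chassis add-ons."""
    return OFFERING_DAYS[offering] + extra_chassis * EXTRA_CHASSIS_DAYS

# Example: PureFlex Cloud with one additional chassis beyond the first: 10 + 5 = 15 days.
print(total_service_days("PureFlex Cloud", extra_chassis=1))
```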
IBM SmartCloud Entry is an easy to deploy and simple to use private cloud software offering that delivers improved service levels and the fastest time to value. It transforms a virtualized platform into a private cloud by adding a self-service user portal and basic metering. Virtualization engine support includes PowerVM on Power Systems and VMware vSphere 5 on IBM x86, with hypervisor options (KVM and Hyper-V) planned for 2013. IBM SmartCloud Entry includes platform management (IBM Systems Director) as a competitive differentiator. With SmartCloud Entry, you can build on your current virtualization strategies to continue to gain IT efficiency, flexibility, and control.
Using a cloud in IT environments has the following advantages:
Reduces the data center footprint and management cost
Provides an automated server request/provisioning solution
Improves utilization, workload management, and your capability to deliver new services
Provides rapid service deployment. You see an improvement in days or hours instead of weeks.
Has a built-in metering system
Improves IT governance and risk management
IBM simplifies your journey from server consolidation to cloud management. IBM provides complete cloud solutions. These solutions include hardware, software technologies, and services for implementing a private cloud. These services add value on top of virtualized infrastructure with IBM SmartCloud Entry for Cloud offerings. The product provides a comprehensive cloud software stack with capabilities that you can get only with multiple products from other providers, such as VMware. You can use the product to quickly deploy your cloud environment. IBM also offers an advanced cloud if it is needed.
You can take advantage of existing IBM server investments and virtualized environments to deploy IBM SmartCloud Entry with the following essential cloud infrastructure capabilities:
Create images: Simplify the storage of thousands of images.
Easily create new golden master images and software appliances by using corporate standard operating systems
Convert images from physical systems or between various x86 hypervisors
Reliably track images to ensure compliance and minimize security risks
Optimize resources, reducing the number of virtualized images and the storage that is required for them
Deploy VMs: Reduce time to value for new workloads from months to a few days.
Deploy application images across compute and storage resources
User self-service for improved responsiveness
Ensure security through VM isolation, and project-level user access controls
Easy to use: You do not need to know all the details of the infrastructure
Investment protection from full support of existing virtualized environments
Optimize performance on IBM systems with dynamic scaling, expansive capacity, and continuous operation
Operate a private cloud: Cut costs with efficient operations.
Delegate provisioning to authorized users to improve productivity
Maintain full oversight to ensure an optimally running and safe cloud through automated approval/rejection
Standardize deployment and configuration to improve compliance and reduce errors by setting policies, defaults, and templates
Simplify administration with an intuitive interface for managing projects, users, workloads, resources, billing, approvals, and metering
New features with IBM SmartCloud Entry V2.4:
Heterogeneous support: IBM SmartCloud Entry now provides heterogeneous cloud management across System x, Power Systems, PureFlex, and Flex Systems environments. You can use this unified code set to control various platform architectures from a single SmartCloud Entry GUI.
Multi-cloud and multi-hypervisor support: Enables multi-cloud management across geographies, time zones, and by grouping tiered hardware environments (that is, production clouds, test clouds, and so on). It achieves higher scaling by combining multiple cloud instances, and includes expanded hypervisor options for greater choice and flexibility.
Enhanced project level customization: Empowers you to configure IBM SmartCloud Entry to align with the client's operational structure. For example, users can set up tiered project level Resource Pools, giving one team project sandbox test hardware, where another team project receives full, production-ready hardware. Administrators can also implement VM expiration dates to better manage image proliferation.
IBM Cloud and virtualization solutions offer flexible approaches to cloud. Where you start your journey depends on your business needs. For more information about IBM SmartCloud Entry, go to:
http://ibm.com/systems/cloud
Chapter 3. Systems management
IBM Flex System Manager, the management component of the IBM Flex System Enterprise Chassis, and the compute nodes are designed to help you get the most out of your IBM Flex System installation. They also allow you to automate repetitive tasks. These management interfaces can significantly reduce the number of manual navigational steps for typical management tasks. They offer simplified system setup procedures that use wizards and built-in expertise, and consolidated monitoring for physical and virtual resources.
This chapter contains the following sections:
3.1, Management network on page 52
3.2, Chassis Management Module on page 53
3.3, Security on page 56
3.4, Compute node management on page 57
3.5, IBM Flex System Manager on page 60
(Figure: management network in the Enterprise Chassis, showing a management workstation connected to the CMM, and the CMM connected to the Flex System Manager, the IMM on each System x compute node, and each Power Systems compute node. On the FSM, Eth0 is the special GbE management network adapter and Eth1 is the embedded 2-port 10 GbE controller with Virtual Fabric Connector, connected through chassis I/O bays 1 and 2.)
Tip: If you want, the management node console can be connected to the data network for convenient access. One of the key functions that the data network supports is discovery of operating systems on the various network endpoints. Discovery of operating systems by the FSM is required to support software updates on an endpoint such as a compute node. The FSM Checking and Updating Compute Nodes wizard assists you in discovering operating systems as part of the initial setup.
3.2.1 Overview
The CMM is a hot-swap module that provides basic system management functions for all devices that are installed in the Enterprise Chassis. An Enterprise Chassis comes with at least one CMM, and supports CMM redundancy. The CMM is shown in Figure 3-2.
Through an embedded firmware stack, the CMM implements functions to monitor, control, and provide external user interfaces to manage all chassis resources. You can use the CMM to perform these functions, among others:
Define login IDs and passwords
Configure security settings such as data encryption and user account security
Select recipients for alert notification of specific events
Monitor the status of the compute nodes and other components
Find chassis component information
Discover other chassis in the network and enable access to them
Control the chassis, compute nodes, and other components
Access the I/O modules to configure them
Change the startup sequence in a compute node
Set the date and time
Use a remote console for the compute nodes
Enable multi-chassis monitoring
Set power policies and view power consumption history for chassis components
3.2.2 Interfaces
The CMM supports a web-based graphical user interface that provides a way to perform chassis management functions within a supported web browser. You can also perform management functions through the CMM command-line interface (CLI). Both the web-based and CLI interfaces are accessible through the single RJ45 Ethernet connector on the CMM, or from any system that is connected to the same network.
The CMM has the following default IPv4 settings:
IP address: 192.168.70.100
Subnet: 255.255.255.0
User ID: USERID (all capital letters)
Password: PASSW0RD (all capital letters, with a zero instead of the letter O)
The CMM does not have a fixed static IPv6 IP address by default. Initial access to the CMM in an IPv6 environment can be done by either using the IPv4 IP address or the IPv6 link-local address. The IPv6 link-local address is automatically generated based on the MAC address of the CMM. By default, the CMM is configured to respond to DHCP first before it uses its static IPv4 address. If you do not want this operation to take place, connect locally to the CMM and change the default IP settings. You can connect locally, for example, by using a notebook. The web-based GUI brings together all the functionality that is needed to manage the chassis elements in an easy-to-use fashion consistently across all System x IMM2 based platforms.
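As a minimal sketch of first-time access, the following Python snippet checks whether something answers at the CMM factory-default address before you attempt to log in with the default credentials listed above. The address, user ID, and password are the documented defaults; the helper itself is hypothetical and not an IBM-provided tool, and the CMM may instead be reachable at a DHCP-assigned address.

```python
# Illustrative only: probe the CMM factory-default IPv4 address before initial setup.
import socket

CMM_DEFAULT_IP = "192.168.70.100"    # factory-default static address (DHCP is tried first)
CMM_DEFAULT_USER = "USERID"          # default login, all capital letters
CMM_DEFAULT_PASSWORD = "PASSW0RD"    # zero instead of the letter O

def cmm_https_reachable(host: str = CMM_DEFAULT_IP, port: int = 443,
                        timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the HTTPS port of the address succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if cmm_https_reachable():
        print(f"Web interface appears reachable at https://{CMM_DEFAULT_IP}/")
        print(f"Log in as {CMM_DEFAULT_USER} / {CMM_DEFAULT_PASSWORD}, then change the defaults.")
    else:
        print("No answer at the default address; the CMM may have taken a DHCP lease instead.")
```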
Figure 3-4 shows an example of the Chassis Management Module front page after login.
3.3 Security
The focus of IBM on smarter computing is evident in the improved security measures that are implemented in IBM Flex System Enterprise Chassis. Today's world of computing demands tighter security standards and native integration with computing platforms. For example, the push towards virtualization has increased the need for more security. This increase comes as more mission-critical workloads are consolidated on to fewer and more powerful servers. The IBM Flex System Enterprise Chassis takes a new approach to security with a ground-up chassis management design to meet new security standards.
These security enhancements and features are provided in the chassis:
Single sign-on (central user management)
End-to-end audit logs
Secure boot: TPM and CRTM
Intel TXT technology (Intel Xeon-based compute nodes)
Signed firmware updates to ensure authenticity
Secure communications
Certificate authority and management
Chassis and compute node detection and provisioning
Role-based access control
Security policy management
Same management protocols that are supported on BladeCenter AMM for compatibility with earlier versions
Insecure protocols are disabled by default in the CMM, with Locks settings to prevent users from inadvertently or maliciously enabling them
Supports up to 84 local CMM user accounts
Supports up to 32 simultaneous sessions
Planned support for DRTM
The Enterprise Chassis ships Secure, and supports two security policy settings:
Secure: Default setting to ensure a secure chassis infrastructure
Strong password policies with automatic validation and verification checks
Updated passwords that replace the manufacturing default passwords after the initial setup
Only secure communication protocols such as Secure Shell (SSH) and Secure Sockets Layer (SSL)
Certificates to establish secure, trusted connections for applications that run on the management processors
Legacy: Flexibility in chassis security
Weak password policies with minimal controls
Manufacturing default passwords that do not have to be changed
Unencrypted communication protocols such as Telnet, SNMPv1, TCP Command Mode, CIM-XML, FTP Server, and TFTP Server
The centralized security policy makes Enterprise Chassis easy to configure. In essence, all components run with the same security policy provided by the CMM. This consistency ensures that all I/O modules run with a hardened attack surface.
The management controllers for the various Enterprise Chassis components have the following default IPv4 addresses:
CMM: 192.168.70.100
Compute nodes: 192.168.70.101-114 (corresponding to the slots 1-14 in the chassis)
I/O modules: 192.168.70.120-123 (sequentially corresponding to chassis bay numbering)
In addition to the IPv4 address, all I/O modules also support link-local IPv6 addresses and configurable external IPv6 addresses.
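The default addresses follow a simple pattern, sketched below in Python. The helper function is illustrative only (not an IBM tool); it simply maps a component and bay number to the factory-default address listed above.

```python
# Illustrative only: default management IPv4 addresses for Enterprise Chassis components.
def default_management_ip(component: str, bay: int = 1) -> str:
    """Return the factory-default IPv4 address for a chassis management controller."""
    if component == "cmm":
        return "192.168.70.100"
    if component == "node":                    # compute node management controller, bays 1-14
        if not 1 <= bay <= 14:
            raise ValueError("compute node bays are numbered 1-14")
        return f"192.168.70.{100 + bay}"       # .101 through .114
    if component == "io_module":               # I/O modules, bays 1-4
        if not 1 <= bay <= 4:
            raise ValueError("I/O module bays are numbered 1-4")
        return f"192.168.70.{119 + bay}"       # .120 through .123
    raise ValueError(f"unknown component: {component}")

# Examples: the node in bay 7 and the switch in I/O bay 2.
print(default_management_ip("node", 7))        # 192.168.70.107
print(default_management_ip("io_module", 2))   # 192.168.70.121
```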
No IMM2 reset is required on configuration changes because they become effective immediately without a reboot
Hardware management of non-volatile storage
Faster Ethernet over USB
1 Gb Ethernet management capability
Improved system power-on and boot time
More detailed information for UEFI detected events enables easier problem determination and fault isolation
User interface meets accessibility standards (CI-162 compliant)
Separate audit and event logs
Trusted IMM with significant security enhancements (CRTM/TPM, signed updates, authentication policies, and so on)
Simplified update/flashing mechanism
Addition of a Syslog alerting mechanism provides you with an alternative to email and SNMP traps
Support for Features On Demand (FoD) enablement of server functions, option card features, and System x solutions and applications
First Failure Data Capture: one-button web press initiates data collection and download
For more information about IMM2, see Chapter 5, Compute nodes on page 177. For more information, see:
Integrated Management Module II User's Guide:
http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346
IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849:
http://www.redbooks.ibm.com/abstracts/tips0849.html
The SOL feature redirects server serial-connection data over a LAN without requiring special cabling by routing the data through the CMM network interface. The SOL connection enables Power Systems compute nodes to be managed from any remote location with network access to the CMM. SOL offers the following functions:
Remote administration without KVM
Reduced cabling and no requirement for a serial concentrator
Standard Telnet/SSH interface, eliminating the requirement for special client software
The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. This configuration allows the Power Systems compute nodes to be managed from a remote location.
Port mirroring capabilities: Port mirroring of CMM ports to both internal and external ports. For security reasons, the ability to mirror the CMM traffic is hidden and is available only to development and service personnel.
Management virtual local area network (VLAN) for Ethernet switches: A configurable management 802.1q tagged VLAN in the standard VLAN range of 1 - 4094. It includes the CMM's internal management ports and the I/O modules' internal ports that are connected to the nodes.
The part number to order the management node is shown in Table 3-2.
Table 3-2 Ordering information for IBM Flex System Manager node Part number 8731A1xa Description IBM Flex System Manager node
a. x in the Part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and the US part number is 8731A1U). Ask your local IBM representative for specifics.
The part numbers to order FoD software entitlement licenses are shown in the following tables. The part numbers for the same features are different in different countries. Ask your local IBM representative for specifics. Table 3-3 shows the information for the United States, Canada, Asia Pacific, and Japan.
Table 3-3 Ordering information for FoD licenses (United States, Canada, Asia Pacific, and Japan) Part number Base feature set 90Y4217 90Y4222 IBM Flex System Manager per managed chassis with 1-Year SW S&S IBM Flex System Manager per managed chassis with 3-Year SW S&S Description
Advanced feature set upgradea 90Y4249 00D7554 IBM Flex System Manager, Advanced Upgrade, per managed chassis with 1-Year SW S&S IBM Flex System Manager, Advanced Upgrade, per managed chassis with 3-Year SW S&S
Fabric Provisioning feature upgradea 90Y4221 90Y4226 IBM Flex System Manager Service Fabric Provisioning with 1-Year S&S IBM Flex System Manager Service Fabric Provisioning with 3-Year S&S
a. The Advanced Upgrade and Fabric Provisioning licenses are applied on top of the IBM FSM base license.
Table 3-4 shows the ordering information for Latin America and Europe/Middle East/Africa.
Table 3-4 Ordering information for FoD licenses (Latin America and Europe/Middle East/Africa) Part number Base feature set 95Y1174 95Y1179 IBM Flex System Manager Per Managed Chassis with 1-Year SW S&S IBM Flex System Manager Per Managed Chassis with 3-Year SW S&S Description
Advanced feature set upgradea 94Y9219 94Y9220 IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with 1-Year SW S&S IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with 3-Year SW S&S
Part number
Description
Fabric Provisioning feature upgradea 95Y1178 95Y1183 IBM Flex System Manager Service Fabric Provisioning with 1-Year S&S IBM Flex System Manager Service Fabric Provisioning with 3-Year S&S
a. The Advanced Upgrade and Fabric Provisioning licenses are applied on top of the IBM FSM base license.
IBM Flex System Manager base feature set offers the following functions:
Support for up to four managed chassis
Support for up to 5,000 managed elements
Auto-discovery of managed elements
Overall health status
Monitoring and availability
Hardware management
Security management
Administration
Network management (Network Control)
Storage management (Storage Control)
Virtual machine lifecycle management (VMControl Express)
The IBM Flex System Manager advanced feature set offers all the capabilities of the base feature set plus:
Image management (VMControl Standard)
Pool management (VMControl Enterprise)
The IBM Flex System Manager advanced feature set upgrade offers the following advanced features:
Image management (VMControl Standard)
Pool management (VMControl Enterprise)
Advanced network monitoring and quality of service (QoS) configuration (Service Fabric Provisioning)
The Fabric Provisioning upgrade offers advanced network monitoring and quality of service (QoS) configuration (Service Fabric Provisioning). Fabric provisioning functionality is included in the advanced feature set. It is also available as a separate Fabric Provisioning feature upgrade for the base feature set. The Advanced Upgrade and the Fabric Provisioning feature upgrade are mutually exclusive, that is, either the Advanced Upgrade or the Fabric Provisioning feature upgrade can be applied on top of the base feature set license, but not both.
Important: The Advanced Upgrade and Fabric Provisioning licenses are applied on top of the IBM Flex System Manager base license.
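The licensing rule in the preceding paragraph can be expressed as a small validation check. The following Python sketch is illustrative only (the function and flag names are invented for this example); it rejects the combinations that the text describes as not allowed.

```python
# Illustrative only: the FSM license combination rule described above.
def validate_fsm_licenses(base: bool, advanced_upgrade: bool,
                          fabric_provisioning: bool) -> None:
    """Raise ValueError for a Flex System Manager license combination that is not allowed."""
    if (advanced_upgrade or fabric_provisioning) and not base:
        raise ValueError("Upgrades are applied on top of the FSM base license.")
    if advanced_upgrade and fabric_provisioning:
        # The advanced feature set already includes fabric provisioning, so the two
        # upgrades are mutually exclusive on top of a single base license.
        raise ValueError("Apply either the Advanced Upgrade or the Fabric Provisioning "
                         "upgrade, but not both.")

# Allowed: base only, base + Advanced Upgrade, or base + Fabric Provisioning upgrade.
validate_fsm_licenses(base=True, advanced_upgrade=True, fabric_provisioning=False)
```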
Figure 3-6 shows the internal layout and major components of the FSM.
The labeled components include the cover, DIMMs, heat sink, microprocessor, microprocessor heat sink filler, SSD and HDD backplane, hot-swap storage cage, SSD interposer, SSD drives, I/O expansion adapter, and ETE adapter.
Figure 3-6 Exploded view of the IBM Flex System Manager node, showing major components
Additionally, the FSM comes preconfigured with the components described in Table 3-5.
Table 3-5 Features of the IBM Flex System Manager node (8731)

Feature             Description
Processor           1x Intel Xeon Processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W
Memory              8 x 4 GB (1x4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
SAS Controller      One LSI 2004 SAS Controller
Disk                1 x IBM 1TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD; 2 x IBM 200GB SATA 1.8" MLC SSD (configured in a RAID-1 pair)
Integrated NIC      Embedded dual-port 10 Gb Virtual Fabric Ethernet controller (Emulex BE3); dual-port 1 GbE Ethernet controller on a management adapter (Broadcom 5718)
Systems Management  Integrated Management Module II (IMM2); management network adapter
Figure 3-7 shows the internal layout of the FSM.
Figure 3-7 Internal view that shows the major components of IBM Flex System Manager
Front controls
The FSM has similar controls and LEDs as the IBM Flex System x240 Compute Node. Figure 3-8 shows the front of an FSM with the locations of the controls and LEDs: the solid-state drive LEDs, the power button/LED, the identify LED, the fault LED, the hard disk drive status LED, and the check log LED.
Storage
The FSM ships with 2 x IBM 200 GB SATA 1.8" MLC SSDs and 1 x IBM 1 TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD. The 200 GB SSDs are configured as a RAID-1 pair that provides roughly 200 GB of usable space. The 1 TB SATA drive is not part of a RAID group.
- Chassis and hardware component views
- Hardware properties
- Component names and hardware identification numbers
- Firmware levels
- Utilization rates
Network management
- Management of network switches from various vendors
- Discovery, inventory, and status monitoring of switches
- Graphical network topology views
- Support for KVM, pHyp, VMware virtual switches, and physical switches
- VLAN configuration of switches
- Integration with server management
- Per-virtual machine network usage and performance statistics that are provided to VMControl
- Logical views of servers and network devices that are grouped by subnet and VLAN

Network management (advanced feature set or fabric provisioning feature)
- Defines QoS settings for logical networks
- Configures QoS parameters on network devices
- Provides advanced network monitors for network system pools, logical networks, and virtual systems

Storage management
- Discovery of physical and virtual storage devices
- Physical and logical topology views
- Support for virtual images on local storage across multiple chassis
- Inventory of physical storage configuration
- Health status and alerts
- Storage pool configuration
- Disk sparing and redundancy management
- Virtual volume management
- Support for virtual volume discovery, inventory, creation, modification, and deletion

Virtualization management (base feature set)
- Support for VMware, Hyper-V, KVM, and IBM PowerVM
- Create virtual servers
- Edit virtual servers
- Manage virtual servers
- Relocate virtual servers
- Discover virtual server, storage, and network resources, and visualize the physical-to-virtual relationships
Virtualization management (advanced feature set)
- Create new image repositories for storing virtual appliances and discover existing image repositories in your environment
- Import external, standards-based virtual appliance packages into your image repositories as virtual appliances
- Capture a running virtual server that is configured just the way you want, complete with guest operating system, running applications, and virtual server definition
- Import virtual appliance packages that exist in the Open Virtual Machine Format (OVF) from the Internet or other external sources
- Deploy virtual appliances quickly to create new virtual servers that meet the demands of your ever-changing business needs
- Create, capture, and manage workloads
- Create server system pools, which enable you to consolidate your resources and workloads into distinct and manageable groups
- Deploy virtual appliances into server system pools
- Manage server system pools, including adding hosts or more storage space, and monitoring the health of the resources and the status of the workloads in them
- Group storage systems together by using storage system pools to increase resource utilization and automation
- Manage storage system pools by adding storage, editing the storage system pool policy, and monitoring the health of the storage resources

I/O address management
- Manages assignments of Ethernet MAC and Fibre Channel WWN addresses
- Monitors the health of compute nodes, and automatically, without user intervention, replaces a failed compute node from a designated pool of spare compute nodes by reassigning MAC and WWN addresses
- Preassigns MAC addresses, WWN addresses, and storage boot targets for the compute nodes
- Creates addresses for compute nodes, saves the address profiles, and deploys the addresses to the slots in the same or different chassis

Additional features
- Resource-oriented chassis map provides an instant graphical view of chassis resources, including nodes and I/O modules
- Fly-over provides an instant view of individual server (node) status and inventory
- Chassis map provides an inventory view of chassis components, a view of active statuses that require administrative attention, and a compliance view of server (node) firmware
- Actions can be taken on nodes, such as working with server-related resources, showing and installing updates, submitting service requests, and starting the remote access tools
- Resources can be monitored remotely from mobile devices, including Apple iOS-based devices, Google Android-based devices, and RIM BlackBerry-based devices. Flex System Manager Mobile applications are separately available under their own terms and conditions as outlined by the respective mobile markets.
Remote console
- Ability to open video sessions and mount media, such as DVDs with software updates, on servers from the local workstation
- Remote KVM connections
- Remote Virtual Media connections (mount CD/DVD/ISO/USB media)
- Power operations against servers (power on/off/restart)
- Hardware detection and inventory creation
- Firmware compliance and updates
- Health status (such as processor utilization) on all hardware devices from a single chassis view
- Automatic detection of hardware failures
- Provides alerts
- Takes corrective action
- Notifies IBM of problems to escalate problem determination
- Administrative capabilities, such as setting up users within profile groups, assigning security levels, and security governance
- Bare metal deployment of hypervisors (VMware ESXi, KVM) through centralized images
Table 3-7 lists the agent tier support for the IBM Flex System managed compute nodes. Managed nodes include the x240 compute node, which supports Windows, Linux, and VMware, and the p260 and p460 compute nodes, which support IBM AIX, IBM i, and Linux.
Table 3-7 Agent tier support by management system type

Managed system type                                                          Agentless   Agentless     Platform   Common
                                                                             in-band     out-of-band   Agent      Agent
Compute nodes that run AIX                                                   Yes         Yes           No         Yes
Compute nodes that run IBM i                                                 Yes         Yes           Yes        Yes
Compute nodes that run Linux                                                 No          Yes           Yes        Yes
Compute nodes that run Linux and supporting SSH                              Yes         Yes           Yes        Yes
Compute nodes that run Windows                                               No          Yes           Yes        Yes
Compute nodes that run Windows and supporting SSH or distributed component
object model (DCOM)                                                          Yes         Yes           Yes        Yes
Compute nodes that run VMware                                                Yes         Yes           Yes        Yes
Other managed resources that support SSH or SNMP                             Yes         Yes           No         No
Table 3-8 summarizes the management tasks that are supported by the compute nodes, depending on the agent tier.
Table 3-8 Compute node management tasks that are supported by the agent tier

Management task               Agentless   Agentless     Platform   Common
                              in-band     out-of-band   Agent      Agent
Command automation            No          No            No         Yes
Hardware alerts               No          Yes           Yes        Yes
Platform alerts               No          No            Yes        Yes
Health and status monitoring  No          No            Yes        Yes
File transfer                 No          No            No         Yes
Inventory (hardware)          No          Yes           Yes        Yes
Inventory (software)          Yes         No            Yes        Yes
Problems (hardware status)    No          Yes           Yes        Yes
Process management            No          No            No         Yes
Power management              No          Yes           No         Yes
Remote control                No          Yes           No         No
Remote command line           Yes         No            Yes        Yes
Resource monitors             No          No            Yes        Yes
Update manager                No          No            Yes        Yes
Table 3-9 shows the supported virtualization environments and their management tasks.
Table 3-9 Supported virtualization environments and management tasks

Management task                     AIX and   IBM i   VMware    Microsoft   Linux
                                    Linux             vSphere   Hyper-V     KVM
Deploy virtual servers              Yes       Yes     Yes       Yes         Yes
Deploy virtual farms                No        No      Yes       No          Yes
Relocate virtual servers            Yes       No      Yes       No          Yes
Import virtual appliance packages   Yes       Yes     No        No          Yes
Capture virtual servers             Yes       Yes     No        No          Yes
Capture workloads                   Yes       Yes     No        No          Yes
Deploy virtual appliances           Yes       Yes     No        No          Yes
Deploy workloads                    Yes       Yes     No        No          Yes
Deploy server system pools          Yes       No      No        No          Yes
Deploy storage system pools         Yes       No      No        No          No
Table 3-10 shows the supported I/O switches and their management tasks.
Table 3-10 Supported I/O switches and management tasks

Management task                                  EN2092     EN4093 and       CN4093       FC3171    FC5022
                                                 1 Gb       EN4093R          10 Gb        8 Gb FC   16 Gb FC
                                                 Ethernet   10 Gb Ethernet   Converged
Discovery                                        Yes        Yes              Yes          Yes       Yes
Inventory                                        Yes        Yes              Yes          Yes       Yes
Monitoring                                       Yes        Yes              Yes          Yes       Yes
Alerts                                           Yes        Yes              Yes          Yes       Yes
Configuration management                         Yes        Yes              Yes          Yes       No
Automated logical network provisioning (ALNP)    Yes        Yes              Yes          Yes       No
Stacked switch                                   No         Yes              No           No        No
Table 3-11 shows the supported virtual switches and their management tasks.
Table 3-11 Supported virtual switches and management tasks

Virtualization environment   Linux KVM        VMware vSphere   VMware vSphere   PowerVM   Hyper-V
Virtual switch               Platform Agent   VMware           IBM 5000V        PowerVM   Hyper-V
Management task
Discovery                    Yes              Yes              Yes              Yes       No
Inventory                    Yes              Yes              Yes              Yes       No
Configuration management     Yes              Yes              Yes              Yes       No

The table also covers automated logical network provisioning (ALNP); ALNP is not supported on the Hyper-V virtual switch.
Table 3-12 shows the supported storage systems and their management tasks.
Table 3-12 Supported storage systems and management tasks

Management task                                            V7000 Storage Node   IBM Storwize V7000
Storage device discovery                                   Yes                  Yes
Inventory collection                                       Yes                  Yes
Monitoring (alerts and status)                             Yes                  Yes
Integrated physical and logical topology views             Yes                  No
Show relationships between storage and server resources   Yes                  Yes
Perform logical and physical configuration                 Yes                  Yes
View and manage attached devices                           Yes                  No
VMControl provisioning                                     Yes                  Yes
- Use the Chassis Map to edit compute node details, view server properties, and manage compute node actions.
- Work with resource views, such as All Systems, Chassis and Members, Hosts, Virtual Servers, Network, Storage, and Favorites.
- Perform visual monitoring of status and events.
- View event history and active status.
- View inventory.
- Perform visual monitoring of job status.

For other tasks, IBM FSM Explorer starts IBM Flex System Manager in a separate browser window or tab. You can return to the IBM FSM Explorer tab when you complete those tasks.
Command-line interface
The command-line interface (CLI) is an important interface for the IBM Flex System Manager management software. You can use it to accomplish simple tasks directly or as a scriptable framework for automating functions that are not easily accomplished from a GUI. The IBM Flex System Manager management software includes a library of commands that you can use to configure the management software or perform many of the systems management operations that can be accomplished from the management software web interface.

For more information, see the IBM Flex System Manager product publications available from the IBM Flex System Information Center at:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp
Search for these publications:
- Installation and User's Guide
- Systems Management Guide
- Commands Reference Guide
- Management Software Troubleshooting Guide
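As a brief illustration of the scripting use case, the following commands are typical of the smcli command set that the management software CLI exposes when you log in to the FSM over SSH. They are shown for orientation only and are not taken from this book; verify the exact command names and options in the Commands Reference Guide.

smcli lssys       # list the discovered systems and their states
smcli lsbundle    # list the available command bundles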
Chapter 4. Chassis and infrastructure configuration
4.1 Overview
Figure 4-1 shows the Enterprise Chassis as seen from the front. The front of the chassis has 14 horizontal bays with removable dividers that allow nodes and future elements to be installed within the chassis. The nodes can be installed when the chassis is powered. The chassis employs a die-cast mechanical bezel for rigidity. This chassis construction allows for tight tolerances between nodes, shelves, and the chassis bezel. These tolerances ensure accurate location and mating of connectors to the midplane.
The major components of the Enterprise Chassis are:
- Fourteen 1-bay compute node bays (can also support seven 2-bay or three 4-bay compute nodes with the shelves removed)
- Six 2500W power modules that provide N+N or N+1 redundant power. Optionally, the chassis can be ordered through the configure-to-order (CTO) process with six 2100W power supplies for N+1 redundant power.
- Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules)
- Four physical I/O modules
- An I/O architectural design capable of providing:
  - Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps
  - A maximum of 16 lanes of I/O to a half-wide node with two adapters
  - A wide variety of networking solutions that include Ethernet, Fibre Channel, FCoE, and InfiniBand
- Two IBM Flex System Manager (FSM) management appliances for redundancy. The FSM provides multiple-chassis management support for up to four chassis.
- Two IBM Chassis Management Modules (CMMs). The CMM provides single-chassis management support.
Figure 4-2 shows the component parts of the chassis, with the shuttle removed. The shuttle forms the rear of the chassis where the I/O Modules, power supplies, fan modules, and Chassis Management Modules are installed. The Shuttle would be removed only to gain access to the midplane or fan distribution cards, in the rare event of a service action.
Within the chassis, a personality card holds vital product data (VPD) and other information relevant to the particular chassis. This card can be replaced only under service action, and is not normally accessible. The personality card is attached to the midplane as shown in Figure 4-4 on page 79.
The 14 node bays on the front of the chassis are numbered in pairs from the bottom (bays 1 and 2) to the top (bays 13 and 14).
The chassis has the following features on the front:
- The front information panel on the lower left of the chassis
- Bays 1 - 14 that support nodes and the FSM
- Lower airflow inlet apertures that provide air cooling for switches, CMMs, and power supplies
- Upper airflow inlet apertures that provide cooling for power supplies

For efficient cooling, each bay in the front or rear of the chassis must contain either a device or a filler.

The Enterprise Chassis provides several LEDs on the front information panel that can be used to obtain the status of the chassis. The Identify, Check log, and Fault LEDs are also on the rear of the chassis for ease of use.
4.1.2 Midplane
The midplane is the circuit board that connects to the compute nodes from the front of the chassis. It also connects to the I/O modules, fan modules, and power supplies from the rear of the chassis. The midplane is located within the chassis and can be accessed by removing the shuttle assembly; removing the midplane is necessary only in case of a service action. The midplane is passive; that is, there are no electronic components on it. The midplane has apertures to allow air to pass through, and it has connectors on both sides for power supplies, fan distribution cards, switches, I/O adapters, and nodes. Figure 4-4 shows the connectors on the midplane (front and rear views): node power connectors, management connectors, I/O module connectors, power supply connectors, and CMM connectors.
The following components can be installed into the rear of the chassis:
- Up to two CMMs
- Up to six 2500W or 2100W power supply modules
- Up to six fan modules that consist of four 80 mm fan modules and two 40 mm fan modules. Additional fan modules can be installed, for a total of 10 modules.
- Up to four I/O modules
4.1.4 Specifications
Table 4-2 shows the specifications of the Enterprise Chassis 8721-A1x.
Table 4-2 Enterprise Chassis specifications

Feature                                      Specifications
Machine type-model                           System x ordering sales channel: 8721-A1x. Power Systems sales channel: 7893-92X (a)
Form factor                                  10U rack-mounted unit
Maximum number of compute nodes supported    14 half-wide (single bay), 7 full-wide (two bays), or 3 double-height full-wide (four bays). Mixing is supported.
Chassis per 42U rack                         4
Nodes per 42U rack                           56 half-wide, or 28 full-wide
Feature               Specifications
Management            One or two Chassis Management Modules for basic chassis management. Two CMMs form a redundant pair. One CMM is standard in 8721-A1x. The CMM interfaces with the integrated management module (IMM) or flexible service processor (FSP) integrated in each compute node in the chassis. An optional IBM Flex System Manager (a) management appliance provides comprehensive management that includes virtualization, networking, and storage management.
I/O architecture      Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps bandwidth. Up to 16 lanes of I/O to a half-wide node with two adapters. A wide variety of networking solutions that include Ethernet, Fibre Channel, FCoE, and InfiniBand.
Power supplies (b)    Six 2500W power modules that provide N+N or N+1 redundant power. Two are standard in model 8721-A1x. Power supplies are 80 PLUS Platinum certified and provide over 94% efficiency at both 50% load and 20% load. Power capacity of 2500 watts output rated at 200 VAC. Each power supply contains two independently powered 40 mm cooling fan modules.
Fan modules           Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules). Four 80 mm and two 40 mm fan modules are standard in model 8721-A1x.
Dimensions            Height: 440 mm (17.3 in.). Width: 447 mm (17.6 in.). Depth, measured from front bezel to rear of chassis: 800 mm (31.5 in.). Depth, measured from node latch handle to the power supply handle: 840 mm (33.1 in.).
Weight                Minimum configuration: 96.62 kg (213 lb). Maximum configuration: 220.45 kg (486 lb).
Declared sound level  6.3 to 6.8 bels
Temperature           Operating air temperature 5°C to 40°C
Electrical power      Input power: 200 - 240 V ac (nominal), 50 or 60 Hz. Minimum configuration: 0.51 kVA (two power supplies). Maximum configuration: 13 kVA (six power supplies).
Power consumption     12,900 watts maximum
a. When you order the IBM Flex System Enterprise Chassis through the Power Systems sales channel, select one of the IBM PureFlex System offerings. These offers are described in Chapter 2, IBM PureFlex System on page 11. In such offerings, the IBM Flex System Manager is a standard component and therefore is not optional. b. 2100W Power Modules are available to be ordered as CTO.
For data center planning, the chassis is rated to a maximum operating temperature of 40°C. For comparison, the IBM BladeCenter H (BC-H) is rated to 35°C. 110 V operation is not supported: the AC operating range is 200 - 240 VAC.
The filter is attached to and removed from the chassis as shown in Figure 4-6.
Note: A node must be powered off (in standby) before removal. An I/O module might require reconfiguration, and its removal is disruptive to any communications that are taking place.
The power supplies also contain two independently powered 40 mm cooling fan modules that are powered not from the power supply itself, but from the chassis midplane. The fan modules are variable speed and are controlled by the chassis fan logic.

The 2100W power supplies are rated at 2100 watts output at 200 - 240 VAC. Similar to the 2500W unit, this power supply also supports oversubscription; the 2100W unit can run up to 2895 W for a short duration. As with the 2500W units, the 2100W supplies include, within the power supply assembly, two independently powered 40 mm cooling fans that pick up power from the midplane.

Table 4-5 shows the ordering information for the Enterprise Chassis power supplies. Power supplies cannot be mixed in the same chassis.
Table 4-5 Power supply module option part numbers

Part number   Feature codes (a)   Description                                              Chassis models where standard
43W9049       A0UC / 3590         IBM Flex System Enterprise Chassis 2500W Power Module    8721-A1x (x-config), 7893-92X (e-config)
47C7633       A3JH / None         IBM Flex System Enterprise Chassis 2100W Power Module    None
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
A chassis that is powered by the 2100W power supplies cannot provide N+N redundant power unless all the compute nodes are configured with 95 W or lower Intel processors. N+1 redundancy is possible with any processors. Table 4-6 shows the nodes that are supported in chassis when powered by either the 2100W or 2500W modules.
Table 4-6 Compute nodes that are supported by the power supplies

Node                                                                    2100W power supply   2500W power supply
IBM Flex System Manager management node                                 Yes                  Yes
x220 (with or without Storage Expansion Node or PCIe Expansion Node)   Yes                  Yes
x240 (with or without Storage Expansion Node or PCIe Expansion Node)   Yes (a)              Yes (a)
x440                                                                    Yes (a)              Yes (a)
p24L                                                                    No                   Yes (a)
p260                                                                    No                   Yes (a)
p460                                                                    No                   Yes (a)
V7000 Storage Node (either primary or expansion node)                   Yes                  Yes
a. There are some limitations that are based on the TDP power of the processors that are installed or the power policy enabled. See Table 4-7 on page 85.
Table 4-7 lists details of the number of compute nodes supported, based on the type and number of power supplies that are installed in the chassis and the power policy that is enabled (N+N or N+1). A cell that shows the full bay count (14 half-wide or 7 full-wide nodes) indicates support with no limitation on the number of compute nodes that can be installed; a smaller number indicates a limit on the number of compute nodes that can be installed.
Table 4-7 Specific number of compute nodes supported based on installed power supplies
2100W power supplies

Compute node   CPU TDP rating   N+1, N=5    N+1, N=4    N+1, N=3    N+N, N=3
                                (6 total)   (5 total)   (4 total)   (6 total)
x240           60 W             14          14          14          14
x240           70 W             14          14          13          14
x240           80 W             14          14          13          14
x240           95 W             14          14          12          13
x240           115 W            14          14          11          12
x240           130 W            14          14          11          11
x240           135 W            14          14          11          11
x440           95 W             7           7           6           6
x440           115 W            7           7           5           6
x440           130 W            7           7           5           5
p24L           All              Not supported
p260           All              Not supported
p460           All              Not supported
x220           50 W             14          14          14          14
x220           60 W             14          14          14          14
x220           70 W             14          14          14          14
x220           80 W             14          14          14          14
x220           95 W             14          14          14          14
FSM            95 W             2           2           2           2
V7000          N/A              3           3           3           3

2500W power supplies

Compute node   CPU TDP rating   N+1, N=5    N+1, N=4    N+1, N=3    N+N, N=3
                                (6 total)   (5 total)   (4 total)   (6 total)
x240           60 W             14          14          14          14
x240           70 W             14          14          14          14
x240           80 W             14          14          14          14
x240           95 W             14          14          14          14
x240           115 W            14          14          14          14
x240           130 W            14          14          14          14
x240           135 W            14          14          13          14
x440           95 W             7           7           7           7
x440           115 W            7           7           7           7
x440           130 W            7           7           6           7
p24L           All              14          14          12          13
p260           All              14          14          12          13
p460           All              7           7           6           6
x220           50 W             14          14          14          14
x220           60 W             14          14          14          14
x220           70 W             14          14          14          14
x220           80 W             14          14          14          14
x220           95 W             14          14          14          14
FSM            95 W             2           2           2           2
V7000          N/A              3           3           3           3
Assumptions: All compute nodes are fully configured, and throttling and oversubscription are enabled.
Tip: Consult the Power Configurator for exact configuration support at:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
Both the 2500W and 2100W power supplies are 80 PLUS Platinum certified. 80 PLUS is a performance specification for power supplies that are used within servers and computers. The standard has several ratings, such as Bronze, Silver, Gold, and Platinum. To meet the 80 PLUS Platinum standard, the power supply must have a power factor (PF) of 0.95 or greater at 50% rated load and efficiency equal to or greater than the following values:
- 90% at 20% of rated load
- 94% at 50% of rated load
- 91% at 100% of rated load

Further information about 80 PLUS can be found at:
http://www.plugloadsolutions.com

Table 4-8 lists the efficiency of the 2500W Enterprise Chassis power supplies at various percentage loads at different input voltages.
Table 4-8 2500W power supply efficiency at different loads for 200 - 208 VAC and 220 - 240 VAC
Load                   10% load             20% load             50% load             100% load
Input voltage (VAC)    200-208   220-240    200-208   220-240    200-208   220-240    200-208   220-240
Output power           250 W     275 W      500 W     550 W      1250 W    1375 W     2500 W    2750 W
Efficiency             93.2%     93.5%      94.2%     94.4%      94.5%     92.2%      91.8%     91.4%
Table 4-9 lists the efficiency of the 2100W Enterprise Chassis power supplies at various percentage loads at 230 VAC nominal voltage.
Table 4-9 2100W power supply efficiency at different loads for 230 VAC

Load @ 230 VAC    10% load    20% load    50% load    100% load
Output power      210 W       420 W       1050 W      2100 W
Efficiency        92.8%       94.1%       94.2%       91.8%
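The efficiency figures in Tables 4-8 and 4-9 translate directly into input power and heat load. The following sketch is illustrative arithmetic only; the function and its name are not from this book, and only the efficiency values are taken from the tables above.

# Illustrative sketch: derive input power and heat dissipated from the
# rated output and the efficiency figures in Tables 4-8 and 4-9.
def power_budget(rated_output_w, load_fraction, efficiency):
    """Return (output_w, input_w, dissipated_w) for one power supply."""
    output_w = rated_output_w * load_fraction
    input_w = output_w / efficiency        # AC power drawn from the line
    dissipated_w = input_w - output_w      # lost as heat inside the supply
    return output_w, input_w, dissipated_w

# 2500W module at 50% load, 200-208 VAC column of Table 4-8 (94.5% efficiency)
print(power_budget(2500, 0.50, 0.945))     # ~(1250.0, 1322.8, 72.8)

# 2100W module at 20% load, Table 4-9 (94.1% efficiency)
print(power_budget(2100, 0.20, 0.941))     # ~(420.0, 446.3, 26.3)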
Figure 4-8 shows the location of the power supplies within the Enterprise Chassis. Two power supplies are installed, in bay 4 and bay 1. The other four power supply bays are shown with fillers, which must be removed before power supplies can be installed in those bays. Similar to the fan bay fillers, the fillers have easy-to-operate blue touch points, with circular finger-hold apertures below them, to make removal simple and intuitive. Population information for the 2100W and 2500W power supplies can be found in 4.7.2, Power supply population on page 99, which describes the planning considerations in more detail, specifically the node type restrictions of the 2100W units.
With 2500W power modules, the chassis allows configurations to have N+N or N+1 redundancy. A fully configured chassis operates on just three 2500W power supplies with no redundancy, but N+1 or N+N is a better configuration to provide higher redundancy and availability. Installing three 2500W power supplies (or six with N+N redundancy) facilitates a balanced 3-phase configuration.

All power supply modules are combined into a single power domain within the chassis. This combination distributes power to each of the compute nodes, I/O modules, and ancillary components through the Enterprise Chassis midplane. The midplane is a highly reliable design with no active components. Each power supply is designed to provide fault isolation and is hot swappable. In the case of the 2500W modules, power monitoring of both the DC and AC signals allows the Chassis Management Module to accurately monitor the power supplies. The integral power supply fans are not dependent upon the power supply being functional; they operate and are powered independently from the chassis midplane.

Power supplies are added as required to meet the load requirements of the Enterprise Chassis configuration. There is no need to over-provision a chassis. For more information about power-supply unit (PSU) planning, see 4.11, Infrastructure planning on page 154.
Figure 4-9 shows the power supply rear view and highlights the LEDs. There is a handle for removal and insertion of the power supply.
The rear of the power supply has a C20 inlet socket for connection to power cables. You can use a C19-C20 power cable, which can connect to a suitable IBM DPI rack power distribution unit (PDU). The rear LEDs are:
- AC Power: When lit green, this LED indicates that AC power is being supplied to the PSU inlet.
- DC Power: When lit green, this LED indicates that DC power is being supplied to the chassis midplane.
- Fault: When lit amber, this LED indicates a fault with the PSU.

Table 4-10 shows the ordering information for the Enterprise Chassis power supplies.
Table 4-10 Power Supply Module option part numbers

Part number   Feature codes (a)   Description
43W9049       A0UC / 3590         IBM Flex System Enterprise Chassis 2500W Power Module
47C7633       A3JH / None         IBM Flex System Enterprise Chassis 2100W Power Module
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
Before you remove any power supplies, ensure that the remaining power supplies have sufficient capacity to power the Enterprise Chassis. Power usage information can be found in the Chassis Management Module web interface. For more information about oversubscription, see 4.7.2, Power supply population on page 99.
For more information about how to populate the fan modules, see 4.6, Cooling on page 93.
The two 40 mm fan modules in fan bays 5 and 10 distribute airflow to the I/O modules and chassis management modules. These modules ship preinstalled in the chassis. Each 40 mm fan module contains two 40 mm fans internally, side by side.

The 80 mm fan modules distribute airflow to the compute nodes through the chassis from front to rear. Each 80 mm fan module contains two 80 mm fans, back to back at each end of the module, which are counter-rotating.

Both fan module types have an electromagnetic compatibility (EMC) mesh screen on the rear internal face of the module. This design provides a laminar flow through the screen. Laminar flow is a smooth flow of air, sometimes called streamline flow. This flow reduces turbulence of the exhaust air and improves the efficiency of the overall fan assembly. The following factors combine to form a highly efficient fan design that provides the best cooling for the lowest energy input:
- Design of the whole fan assembly
- The fan blade design
- The distance between and size of the fan modules
- The EMC mesh screen

Figure 4-12 shows an 80 mm fan module.
The minimum number of 80 mm fan modules is four. The maximum number of 80 mm fan modules that can be installed is eight. Both fan modules have two LED indicators, consisting of a green power-on indicator and an amber fault indicator. The power indicator lights when the fan module has power, and flashes when the module is in the power save state. Table 4-11 lists the specifications of the 80 mm Fan Module Pair option. Pairs and singles: When the modules are ordered as an option, they are supplied as a pair. When the modules are configured using feature codes, they are single fans.
Table 4-11 80 mm Fan Module Pair option part number

Part number          Feature code (a)        Description
43W9078 (two fans)   A0UA / 7805 (one fan)   IBM Flex System Enterprise Chassis 80 mm Fan Module
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
For more information about airflow and cooling, see 4.6, Cooling on page 93.
Fan logic modules are multiplexers for the internal I2C bus, which is used for communication between hardware components within the chassis. Each fan pack is accessed through a dedicated I2C bus, switched by the fan mux card, from each CMM. The fan logic module switches the I2C bus to each individual fan pack. This module can be used by the Chassis Management Module to determine multiple parameters, such as fan RPM. There is a fan logic module for each side of the chassis: the left fan logic module accesses the left fan modules, and the right fan logic module accesses the right fan modules. Fan presence indication for each fan pack is read by the fan logic module. Power and fault LEDs are also controlled by the fan logic module. Figure 4-14 shows a fan logic module and its LEDs.
As shown in Figure 4-14, there are two LEDs on the fan logic module. The power-on LED is green when the fan logic module is powered. The amber fault LED flashes to indicate a faulty fan logic module. Fan logic modules are hot swappable. For more information about airflow and cooling, see 4.6, Cooling on page 93.
The following items are displayed on the front information panel:
- White backlit IBM logo: When lit, this logo indicates that the chassis is powered.
- Locate LED: When lit (blue) solid, this LED indicates the location of the chassis. When the LED is flashing, this LED indicates that a condition occurred that caused the CMM to indicate that the chassis needs attention.
- Check error log LED: When lit (amber), this LED indicates that a noncritical event occurred. This event might be a wrong I/O module that is inserted into a bay, or a power requirement that exceeds the capacity of the installed power modules.
- Fault LED: When lit (amber), this LED indicates that a critical system error occurred. This error can be an error in a power module or a system error in a node.

Figure 4-16 shows the LEDs on the rear of the chassis.
Figure 4-16 Chassis LEDs on the rear of the unit (lower right)
4.6 Cooling
This section addresses Enterprise Chassis cooling. The flow of air within the Enterprise Chassis follows a front to back cooling path. Cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. Air is drawn in both through the front node bays and the front airflow inlet apertures at the top and bottom of the chassis. There are two cooling zones for the nodes: A left zone and a right zone. The cooling can be scaled up as required, based on which node bays are populated. The number of fan modules that are required for some nodes is described further in this section.
When a node is not inserted in a bay, an airflow damper closes in the midplane. Therefore, no air is drawn in through an unpopulated bay. When a node is inserted into a bay, the damper is opened mechanically by the node insertion. This action allows for cooling of the node in that bay. Figure 4-17 shows the upper and lower cooling apertures.
Various fan modules are present in the chassis to assist with efficient cooling. Fan modules consist of both 40 mm and 80 mm types, and are contained within hot pluggable fan modules. The power supplies also have two integrated, independently powered 40 mm fan modules. The cooling path for the nodes begins when air is drawn in from the front of the chassis. The airflow intensity is controlled by the 80 mm fan modules in the rear. Air passes from the front of the chassis, through the node, through openings in the Midplane and then into a plenum chamber. Each plenum is isolated from the other, providing separate left and right cooling zones. The 80 mm fan packs on each zone then move the warm air from the plenum to the rear of the chassis. In a 2-bay wide node, the air flow within the node is not segregated because it spans both airflow zones.
Figure 4-18 shows a chassis with the outer casing removed for clarity, to show the airflow path through the chassis. There is no airflow through the chassis midplane where a node is not installed. The air damper is opened only when a node is inserted in that bay.
Figure 4-18 Airflow into chassis through the nodes and exhaust through the 80 mm fan packs (chassis casing is removed for clarity)
Figure 4-19 shows the path of air from the upper and lower airflow inlet apertures to the power supplies.
Figure 4-20 shows the airflow from the lower inlet aperture to the 40 mm fan modules. This airflow provides cooling for the switch modules and CMM installed in the rear of the chassis.
Figure 4-20 40 mm fan module airflow (chassis casing is removed for clarity)
The right side 40 mm fan module cools the right switches; the left 40 mm fan module cools the left pair of switches. Each 40 mm fan module has a pair of fans for redundancy.

Cool air flows in from the lower inlet aperture at the front of the chassis. It is drawn into the lower openings in the CMM and I/O modules, where it provides cooling for these components. It passes through and is drawn out the top of the CMM and I/O modules. The warm air is expelled to the rear of the chassis by the 40 mm fan assembly. This expulsion is shown by the red airflow arrows in Figure 4-20.

The removal of a 40 mm fan pack exposes an opening in the bay to the 80 mm fan packs located below. A backflow damper within the fan bay then closes. The backflow damper prevents hot air from reentering the system from the rear of the chassis. The 80 mm fan packs cool the switch modules and the CMM while the fan pack is being replaced.

Chassis cooling is implemented as a function of:
- Node configurations
- Power monitor circuits
- Component temperatures
- Ambient temperature

This results in lower airflow volume (measured in cubic feet per minute, or CFM) and lower cooling energy spent at the chassis level. This system also maximizes the temperature difference across the chassis (generally known as the Delta T) for more efficient room integration. Chassis-level airflow usage is monitored and displayed to enable airflow planning and monitoring for hot air recirculation.
Five Acoustic Optimization states can be selected. Use the one that best balances performance requirements with the noise level of the fans. Chassis level CFM usage is available to you for planning purposes. In addition, ambient health awareness can detect potential hot air recirculation to the chassis.
Figure 4-21 Four 80 mm fan modules allow a maximum of four nodes installed
Installing six 80 mm fan modules allows a further four nodes to be supported within the chassis. The maximum therefore is eight as shown in Figure 4-22.
Figure 4-22 Six 80 mm fan modules allow for a maximum of eight nodes
To cool more than eight nodes, all fan modules must be installed as shown in Figure 4-23.
If there are insufficient fan modules for the number of nodes that are installed, the nodes might be throttled.
Power policies
There are five power management policies that can be selected to dictate how the chassis is protected in the case of potential power module or supply failures. These policies are configured by using the Chassis Management Module graphical interface.
- AC Power source redundancy: Power is allocated under the assumption that no throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.
- AC Power source redundancy with compute node throttling allowed: Power is allocated under the assumption that throttling of the nodes is allowed if a power supply fault occurs. This is an N+N configuration.
- Power module redundancy: Maximum input power is limited to one less than the number of power modules when more than one power module is present. One power module can fail without affecting compute node operation. Multiple power module failures can cause the chassis to power off. Some compute nodes might not be able to power on if doing so would exceed the power policy limit.
- Power module redundancy with compute node throttling allowed: This can be described as oversubscription mode. Operation in this mode assumes that a node's load can be reduced, or throttled, to the continuous load rating within a specified time. This process occurs following the loss of one or more power supplies. The power supplies can exceed their continuous rating of 2500 W for short periods. This is an N+1 configuration.
- Basic power management: This allows the total output power of all power supplies to be used. When operating in this mode, there is no power redundancy. If a power supply fails, or an AC feed to one or more supplies is lost, the entire chassis might shut down. There is no power throttling.

The chassis is run using one of these power capping policies:
- No power capping: Maximum input power is determined by the active power redundancy policy.
- Static capping: This sets an overall chassis limit on the maximum input power. In a situation where powering on a component would cause the limit to be exceeded, the component is prevented from powering on.
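The redundancy policies reduce to a simple capacity calculation before throttling and oversubscription are considered. The following sketch is illustrative only; the function is not part of the CMM and simply restates the policies above using the 2500 W module rating from this chapter.

# Illustrative sketch of the usable power behind each redundancy policy.
def available_power_w(psu_rating_w, installed, policy):
    if policy == "ac_source_redundancy":          # N+N: half the supplies back up the other half
        usable = installed // 2
    elif policy == "power_module_redundancy":     # N+1: survive the loss of one supply
        usable = max(installed - 1, 0)
    elif policy == "basic":                       # no redundancy: all supplies usable
        usable = installed
    else:
        raise ValueError("unknown policy: " + policy)
    return usable * psu_rating_w

# Six 2500 W supplies installed
print(available_power_w(2500, 6, "ac_source_redundancy"))     # 7500 W
print(available_power_w(2500, 6, "power_module_redundancy"))  # 12500 W
print(available_power_w(2500, 6, "basic"))                    # 15000 W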
For up to eight nodes with N+N configuration, install a further pair of power supplies in bays 2 and 5 as shown in Figure 4-25.
Figure 4-25 N+N power supply requirements with up to eight nodes installed
To support more than eight nodes with N+N, install the remaining pair of power supplies (3 and 6) as shown in Figure 4-26.
With configurations of between five and eight nodes, a total of three power supplies is required for N+1 (Figure 4-28).
Figure 4-28 N+1 - up to eight nodes are supported by three power supplies
For configurations of nine or more nodes, a total of four power supplies is required, as shown in Figure 4-29.
Figure 4-29 N+1 fully configured chassis requires four power supplies
A fully populated chassis can function on three power supplies. However, avoid this configuration because it has no power redundancy in the event of a power source or power supply failure.
Table 4-13 lists the ordering information for the second CMM.
Table 4-13 Chassis Management Module ordering information

Part number   Feature code (a)   Description
68Y7030       A0UE / 3592        IBM Flex System Chassis Management Module
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
Figure 4-30 shows the location of the CMM bays on the back of the Enterprise Chassis.
The CMM provides these functions:
- Power control
- Fan management
- Chassis and compute node initialization
- Switch management
- Diagnostics
- Resource discovery and inventory management
- Resource alerts and monitoring management
- Chassis and compute node power management
- Network management

The CMM has the following connectors:
- USB connection: Can be used for insertion of a USB media key for tasks such as firmware updates.
- 10/100/1000 Mbps RJ45 Ethernet connection: For connection to a management network. The CMM can be managed through this Ethernet port.
Serial port (mini-USB): For local serial (command-line interface (CLI)) access to the CMM. Use the cable kit that is listed in Table 4-14 for connectivity.
Table 4-14 Serial cable specifications

Part number   Feature code (a)   Description
90Y9338       A2RR / None        IBM Flex System Management Serial Access Cable. Contains two cables:
                                 - Mini-USB-to-RJ45 serial cable
                                 - Mini-USB-to-DB9 serial cable
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
The CMM has the following LEDs that provide status information:
- Power-on LED
- Activity LED
- Error LED
- Ethernet port link and port activity LEDs

Figure 4-31 shows the CMM connectors and LEDs.
The CMM also incorporates a reset button, which has two functions, depending on how long the button is held in:
- When pressed for less than 5 seconds, the CMM restarts.
- When pressed for more than 5 seconds (for example, 10 - 15 seconds), the CMM configuration is reset to manufacturing defaults, and the CMM then restarts.

For more information about how the CMM integrates into the Systems Management architecture, see 3.2, Chassis Management Module on page 53.
Figure 4-32 Rear view that shows the I/O Module bays 1 - 4
If a node has a two-port integrated LAN on Motherboard (LOM) as standard, I/O modules 1 and 2 are connected to this LOM. If an I/O adapter is installed in the node's I/O expansion slot 1, I/O modules 1 and 2 are connected to this adapter. I/O modules 3 and 4 connect to the I/O adapter that is installed within I/O expansion bay 2 on the node. These I/O modules provide external connectivity, and connect internally to each of the nodes within the chassis. They can be either switch or pass-through modules, with the potential to support other types in the future.
Figure 4-33 shows the connections from the nodes to the switch modules.
The node in bay 1 in Figure 4-33 shows that, when a node is shipped with a LOM, the LOM connector provides the link from the node system board to the midplane. Some nodes do not ship with a LOM. If required, this LOM connector can be removed and an I/O expansion adapter can be installed in its place. This configuration is shown on the node in bay 2 in Figure 4-33.
Figure 4-34 shows the electrical connections from the LOM and I/O adapters to the I/O modules, which all take place across the chassis midplane.
In the figure, each of the 14 nodes has two adapter positions, labeled M1 and M2, which connect through the midplane to switch bays 1 - 4.
A total of two I/O expansion adapters (designated M1 and M2 in Figure 4-34) can be plugged into a half-wide node. Up to four I/O adapters can be plugged into a full-wide node. Each I/O adapter has two connectors. One connects to the compute node's system board (a PCI Express connection). The second is a high-speed interface that mates with the midplane when the node is installed into a bay within the chassis. As shown in Figure 4-34, each of the links from the I/O adapter to the midplane (shown in red) is in fact four links wide. Exactly how many links are employed on each I/O adapter depends on the design of the adapter and the number of ports that are wired. Therefore, a half-wide node can have a maximum of 16 I/O links, and a full-wide node 32.
Each of these individual I/O links, or lanes, can be wired for 1 Gb or 10 Gb Ethernet, or for 8 or 16 Gbps Fibre Channel. You can enable any number of these links. The application-specific integrated circuit (ASIC) type on the I/O expansion adapter dictates the number of links that can be enabled. Some ASICs are two-port and some are four-port. For a two-port ASIC, one port can go to one switch and one port to the other. This configuration is shown in Figure 4-36 on page 111. In the future, other combinations can be implemented.

In an Ethernet I/O adapter, the wiring of the links follows the IEEE 802.3ap standard, which is also known as the Backplane Ethernet standard. The Backplane Ethernet standard has different implementations at 10 Gbps: 10GBASE-KX4 and 10GBASE-KR. The I/O architecture of the Enterprise Chassis supports both KX4 and KR.

10GBASE-KX4 uses the same physical layer coding (IEEE 802.3 clause 48) as 10GBASE-CX4, where each individual lane (SERDES = Serializer/Deserializer) carries 3.125 Gbaud of signaling bandwidth. 10GBASE-KR uses the same coding (IEEE 802.3 clause 49) as 10GBASE-LR/ER/SR, where the SERDES lane operates at 10.3125 Gbps. Each of the links between an I/O expansion adapter and an I/O module can therefore be either 4x 3.125 Gbaud lanes per port (KX4) or 4x 10 Gbps lanes (KR). This choice depends on the expansion adapter and I/O module implementation.
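Both backplane variants end up at the same 10 Gbps of Ethernet payload. The following Python sketch shows the arithmetic, assuming the standard 8b/10b line code for KX4 (clause 48) and 64b/66b for KR (clause 49):

def payload_gbps(lanes: int, gbaud_per_lane: float, coding_efficiency: float) -> float:
    """Usable Ethernet data rate for a backplane link."""
    return lanes * gbaud_per_lane * coding_efficiency

kx4 = payload_gbps(lanes=4, gbaud_per_lane=3.125, coding_efficiency=8 / 10)     # 8b/10b
kr = payload_gbps(lanes=1, gbaud_per_lane=10.3125, coding_efficiency=64 / 66)   # 64b/66b
print(f"10GBASE-KX4: {kx4:.2f} Gbps")   # 10.00
print(f"10GBASE-KR : {kr:.2f} Gbps")    # 10.00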
Figure 4-36 shows how the integrated 2-port 10 Gb LOM connects through a LOM connector to the midplane on a compute node. This implementation provides a pair of 10 Gb lanes. Each lane connects to a 10 Gb switch or 10 Gb pass-through module that is installed in I/O module bays in the rear of the chassis.
Figure 4-36 LOM implementation - Emulex 10 Gb Virtual Fabric onboard LOM to I/O Module
A half-wide compute node with two standard I/O adapter sockets and an I/O adapter with two ports is shown in Figure 4-37. Port 1 connects to one switch in the chassis, and Port 2 connects to another switch in the chassis. With 14 compute nodes installed in the chassis, each switch therefore has 14 internal ports for connectivity to the compute nodes.
Figure 4-37 Two-port I/O adapter connections to the I/O modules
Another implementation of the I/O adapter is the four-port adapter. Figure 4-38 shows the interconnection to the I/O module bays for I/O adapters that use a four-port ASIC.
Figure 4-38 Four-port I/O adapter connections to the I/O modules
In this case, with each node having a four-port I/O adapter in I/O slot 1, each I/O module requires 28 internal ports enabled. This configuration highlights another key feature of the I/O architecture: switch partitioning. Switch partitioning is where sets of ports are enabled by Features on Demand (FoD) to allow a greater number of connections between nodes and a switch. With two lanes per node to each switch and 14 nodes requiring four ports that are connected, each switch therefore must have 28 internal ports enabled. You also need sufficient uplink ports.
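A quick back-of-the-envelope check of this partitioning math, as a Python sketch (assumptions: 14 node bays, and the adapter ports split evenly across the two I/O modules that serve a slot):

NODES = 14

def internal_ports_per_module(adapter_ports: int, modules_per_slot: int = 2) -> int:
    """Internal ports each I/O module must have enabled for a fully populated chassis."""
    return NODES * (adapter_ports // modules_per_slot)

print("2-port adapters:", internal_ports_per_module(2), "internal ports per module")   # 14
print("4-port adapters:", internal_ports_per_module(4), "internal ports per module")   # 28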
The architecture allows for a total of eight lanes per I/O adapter, as shown in Figure 4-39. Therefore, a total of 16 I/O lanes per half wide node is possible. Each I/O module requires the matching number of internal ports to be enabled.
Figure 4-39 Full chassis connectivity - 8 ports per adapter (16 ports per standard-wide compute node)
For more information about switch partitioning and port enablement using FoD, see 4.10, I/O modules on page 114. For more information about I/O expansion adapters that install on the nodes, see 5.6.1, Overview on page 286.
4.10.11, IBM Flex System FC3171 8Gb SAN Pass-thru on page 151
4.10.12, IBM Flex System IB6131 InfiniBand Switch on page 153

There are four I/O module bays at the rear of the chassis. To insert an I/O module into a bay, first remove the I/O filler. Figure 4-40 shows how to remove an I/O filler and insert an I/O module into the chassis by using the two handles.
Figure 4-41 I/O module status LEDs (OK, Identify, Switch error)
The LEDs are as follows:
- OK (power): When this LED is lit, it indicates that the switch is on. When it is not lit and the amber switch error LED is lit, it indicates a critical alert. If the amber LED is also not lit, it indicates that the switch is off.
- Identify: You can physically identify a switch by making this blue LED light up by using the management software.
- Switch error: When this LED is lit, it indicates a POST failure or critical alert. When this LED is lit, the system-error LED on the chassis is also lit. When this LED is not lit and the green LED is lit, it indicates that the switch is working correctly. If the green LED is also not lit, it indicates that the switch is off.
Figure 4-42 shows the I/O module naming scheme. As time progresses, this scheme might be expanded to support future technology.
Switch and adapter compatibility (part numbers and feature codes a):

                                                EN4054 4-port 10Gb    CN4054 10Gb Virtual    CN4058 8-port 10Gb    EN4132 2-port 10Gb
                                                Ethernet Adapter      Fabric Adapter         Converged Adapter     RoCE Adapter
Adapter part number / feature codes a           None, None / 1762     90Y3554, A1R1 / 1759   None, None / EC24     None, None / EC26
EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7)      Yes                   Yes                    Yes(d)                Yes
EN4093 10Gb Switch (49Y4270, A0TB / 3593)       Yes                   Yes                    Yes(d)                Yes
EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700)    Yes(c)                Yes(c)                 Yes(c)                Yes
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (e-config).
b. 1 Gb is supported on the CN4093's two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support 1 GbE speeds.
c. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
d. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093, and EN4093R switches.
e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
FC3172 2-port 8Gb FC Adapter FC3052 2-port 8Gb FC Adapter FC5022 2-port 16Gb FC Adapter
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (e-config).
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (e-config). b. To operate at FDR speeds, the IB6131 switch needs the FDR upgrade, as described in 4.10.12, IBM Flex System IB6131 InfiniBand Switch on page 153.
4.10.5 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch provides unmatched scalability, performance, convergence, and network virtualization, while also delivering innovations to help address a number of networking concerns and providing capabilities that help you prepare for the future. The switch offers full Layer 2/3 switching and FCoE Full Fabric and Fibre Channel NPV Gateway operations to deliver a converged and integrated solution, and it is installed within the I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help you migrate to a 10 Gb or 40 Gb converged Ethernet infrastructure and offers virtualization features such as Virtual Fabric and IBM VMready, plus the ability to work with IBM Distributed Virtual Switch 5000V. Figure 4-43 shows the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch.
Figure 4-43 IBM Flex System Fabric CN4093 10 Gb Converged Scalable Switch
The CN4093 switch is initially licensed for fourteen 10 GbE internal ports, two external 10 GbE SFP+ ports, and six external Omni Ports enabled.
Further ports can be enabled:
- Fourteen more internal ports and two external 40 GbE QSFP+ uplink ports with Upgrade 1
- Fourteen more internal ports and six more external Omni Ports with Upgrade 2

Upgrade 1 and Upgrade 2 can be applied on the switch independently from each other or in combination for full feature capability. Table 4-19 shows the part numbers for ordering the switches and the upgrades.
Table 4-19 Part numbers and feature codes for ordering

Description                                                            Part number   Feature code (x-config / e-config)
Switch module
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch           00D5823       A3HH / ESW2
Features on Demand upgrades
IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1)    00D5845       A3HL / ESU1
IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 2)    00D5847       A3HM / ESU2
Management cable
IBM Flex System Management Serial Access Cable                         90Y9338       A2RR / None
Neither QSFP+ nor SFP+ transceivers or cables are included with the switch; they must be ordered separately (see Table 4-21 on page 122). The switch also does not include a serial management cable. However, the IBM Flex System Management Serial Access Cable, 90Y9338, is supported. It contains two cables, a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable, either of which can be used to connect to the switch locally for configuration tasks and firmware updates.

The base switch and upgrades are as follows:
- 00D5823 is the part number for the physical device, which comes with 14 internal 10 GbE ports enabled (one to each node bay), two external 10 GbE SFP+ ports enabled to connect to a top-of-rack switch or other devices, and six Omni Ports enabled to connect to either Ethernet or Fibre Channel networking infrastructure, depending on the SFP+ cable or transceiver that is used.
- 00D5845 (Upgrade 1) can be applied on the base switch when you need more uplink bandwidth with two 40 GbE QSFP+ ports that can be converted into 4x 10 GbE SFP+ DAC links with the optional break-out cables. This upgrade also enables 14 more internal ports, for a total of 28 ports, to provide more bandwidth to the compute nodes that use 4-port expansion cards.
- 00D5847 (Upgrade 2) can be applied on the base switch when you need more external Omni Ports on the switch or if you want more internal bandwidth to the node bays. The upgrade enables the remaining six external Omni Ports, plus 14 more internal 10 Gb ports, for a total of 28 internal ports, to provide more bandwidth to the compute nodes that use 4-port expansion cards.

Both 00D5845 (Upgrade 1) and 00D5847 (Upgrade 2) can be applied on the switch at the same time so that you can use six ports on an eight-port expansion card and all the external ports on the switch.
Table 4-20 shows the switch upgrades and the ports they enable.
Table 4-20 CN4093 10Gb Converged Scalable Switch part numbers and port upgrades

                                                         Total ports that are enabled
Description (part number, feature code a)                Internal 10Gb   External 10Gb SFP+   External 10Gb Omni   External 40Gb QSFP+
Base switch, no upgrades (00D5823, A3HH / ESW2)          14              2                    6                    0
Add Upgrade 1 (00D5845, A3HL / ESU1)                     28              2                    6                    2
Add Upgrade 2 (00D5847, A3HM / ESU2)                     28              2                    12                   0
Add both Upgrade 1 and Upgrade 2                         42              2                    12                   2
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
Each upgrade license enables more internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed (the small sketch after this list tallies the combinations):
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all the internal ports.
- Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all the internal ports.
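To make the port arithmetic concrete, here is a small Python sketch (illustrative only, not IBM software; the dictionary names are hypothetical) that tallies the enabled CN4093 ports for each upgrade combination from Table 4-20 and derives how many adapter ports per node are needed to use every internal port:

# Enabled ports on the CN4093, per Table 4-20 (illustrative model)
BASE = {"internal_10g": 14, "external_sfp_10g": 2, "omni_10g": 6, "qsfp_40g": 0}
UPGRADE_1 = {"internal_10g": 14, "qsfp_40g": 2}    # 00D5845
UPGRADE_2 = {"internal_10g": 14, "omni_10g": 6}    # 00D5847

def enabled_ports(*upgrades: dict) -> dict:
    """Total enabled ports after applying the given upgrades to the base switch."""
    totals = dict(BASE)
    for upgrade in upgrades:
        for port_type, count in upgrade.items():
            totals[port_type] += count
    return totals

NODES = 14
for label, upgrades in [("Base", ()), ("Upgrade 1", (UPGRADE_1,)),
                        ("Upgrade 2", (UPGRADE_2,)), ("Both", (UPGRADE_1, UPGRADE_2))]:
    ports = enabled_ports(*upgrades)
    per_node = ports["internal_10g"] // NODES          # internal ports per node, per switch
    print(f"{label:10s}: {ports} -> {per_node * 2}-port adapter needed per node")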
Front panel
Figure 4-44 shows the main components of the CN4093 switch.
Figure 4-44 IBM Flex System Fabric CN4093 10 Gb Converged Scalable Switch
The front panel contains the following components:
- LEDs that display the status of the switch module and the network:
  - The OK LED indicates that the switch module passed the power-on self-test (POST) with no critical faults and is operational.
  - Identify: You can use this blue LED to identify the switch physically by illuminating it through the management software.
  - The error LED (switch module error) indicates that the switch module failed the POST or detected an operational fault.
- One mini-USB RS-232 console port that provides an additional means to configure the switch module. This mini-USB-style connector enables connection of a special serial cable. (The cable is optional, and it is not included with the switch. For more information, see Table 4-21.)
- Two external SFP+ ports for 1 Gb or 10 Gb connections to external Ethernet devices.
- Twelve external SFP+ Omni Ports for 10 Gb connections to external Ethernet devices or 4/8 Gb FC connections to external SAN devices. (1 Gb is not supported on Omni Ports.)
- Two external QSFP+ port connectors to attach QSFP+ modules or cables for a single 40 Gb uplink per port, or splitting of a single port into 4x 10 Gb connections to external Ethernet devices.
- A link OK LED and a Tx/Rx LED for each external port on the switch module.
- A mode LED for each pair of Omni Ports that indicates the operating mode. (OFF indicates that the port pair is configured for Ethernet operation, and ON indicates that the port pair is configured for Fibre Channel operation.)
Part number   Feature code a   Description
90Y3521       A1MN / EC2K      30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884)

QSFP+ breakout cables - 40 GbE to 4 x 10 GbE (supported on QSFP+ ports)
49Y7886       A1DL / EB24      1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable
49Y7887       A1DM / EB25      3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable
49Y7888       A1DN / EB26      5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable

QSFP+ direct-attach cables - 40 GbE (supported on QSFP+ ports)
49Y7890       A1DP / EB2B      1m QSFP+ to QSFP+ DAC
49Y7891       A1DQ / EB2H      3m QSFP+ to QSFP+ DAC

SFP+ transceivers - 8 Gb FC (supported on Omni Ports)
44X1964       5075 / 3286      IBM 8Gb SFP+ SW Optical Transceiver
Scalability and performance 40 Gb Ethernet ports for extreme uplink bandwidth and performance. Fixed-speed external 10 Gb Ethernet ports to use the 10 Gb core infrastructure. Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps on Ethernet ports. Media access control (MAC) address learning: Automatic update, and support for up to 128,000 MAC addresses. Up to 128 IP interfaces per switch. Static and LACP (IEEE 802.3ad) link aggregation, up to 220 Gb of total uplink bandwidth per switch, up to 64 trunk groups, and up to 16 ports per group. Support for jumbo frames (up to 9,216 bytes). Broadcast/multicast storm control. IGMP snooping to limit flooding of IP multicast traffic. IGMP filtering to control multicast traffic for hosts that participate in multicast groups. Configurable traffic distribution schemes over trunk links that are based on source/destination IP or MAC addresses or both. Fast port forwarding and fast uplink convergence for rapid STP convergence. Availability and redundancy Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy. IEEE 802.1D STP for providing L2 redundancy. IEEE 802.1s Multiple STP (MSTP) for topology optimization. Up to 32 STP instances are supported by a single switch. IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical delay-sensitive traffic, such as voice or video. Per-VLAN Rapid STP (PVRST) enhancements. Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes. Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off. VLAN support Up to 1024 VLANs supported per switch, with VLAN numbers from 1 - 4095. (4095 is used for management modules connection only.) 802.1Q VLAN tagging support on all ports. Private VLANs. Security VLAN-based, MAC-based, and IP-based access control lists (ACLs). 802.1x port-based authentication. Multiple user IDs and passwords. User access control. Radius, TACACS+, and LDAP authentication and authorization.
Quality of Service (QoS) Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing. Traffic shaping and re-marking based on defined policies. Eight Weighted Round Robin (WRR) priority queues per port for processing qualified traffic. IP v4 Layer 3 functions Host management. IP forwarding. IP filtering with ACLs, with up to 896 ACLs supported. VRRP for router redundancy. Support for up to 128 static routes. Routing protocol support (RIP v1, RIP v2, OSPF v2, and BGP-4), for up to 2048 entries in a routing table. Support for DHCP Relay. Support for IGMP snooping and IGMP relay. Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM). IP v6 Layer 3 functions IPv6 host management (except for a default switch management IP address). IPv6 forwarding. Up to 128 static routes. Support for OSPF v3 routing protocol. IPv6 filtering with ACLs. Virtualization Virtual NICs (vNICs): Ethernet, iSCSI, or FCoE traffic is supported on vNICs. 802.1Qbg Edge Virtual Bridging (EVB) is an emerging IEEE standard for allowing networks to become virtual machine (VM)-aware: Virtual Ethernet Bridging (VEB) and Virtual Ethernet Port Aggregator (VEPA) are mechanisms for switching between VMs on the same hypervisor. Edge Control Protocol (ECP) is a transport protocol that operates between two peers over an IEEE 802 LAN providing reliable and in-order delivery of upper layer protocol data units. Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP) allows centralized configuration of network policies that persists with the VM, independent of its location. EVB Type-Length-Value (TLV) is used to discover and configure VEPA, ECP, and VDP.
VMready.
Converged Enhanced Ethernet Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow control to allow the switch to pause traffic that is based on the 802.1p priority value in each packets VLAN tag. Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth that is based on the 802.1p priority value in each packets VLAN tag. Data center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities. Fibre Channel over Ethernet (FCoE) FC-BB5 FCoE specification compliant. Native FC Forwarder switch operations. End-to-end FCoE support (initiator to target). FCoE Initialization Protocol (FIP) support. Fibre Channel Omni Ports support 4/8 Gb FC when FC SFPs+ are installed in these ports. Full Fabric mode for end-to-end FCoE or NPV Gateway mode for external FC SAN attachments (support for IBM B-type, Brocade, and Cisco MDS external SANs). Fabric services in Full Fabric mode: Name Server. Registered State Change Notification (RSCN). Login services. Zoning
Manageability Simple Network Management Protocol (SNMP V1, V2, and V3). HTTP browser GUI. Telnet interface for CLI. SSH. Secure FTP (sFTP). Service Location Protocol (SLP). Serial interface for CLI. Scriptable CLI. Firmware image update (TFTP and FTP). Network Time Protocol (NTP) for switch clock synchronization. Monitoring Switch LEDs for external port status and switch module status indication. Remote Monitoring (RMON) agent to collect statistics and proactively monitor switch performance. Port mirroring for analyzing network traffic that passes through a switch. Change tracking and remote logging with syslog feature.
Support for sFLOW agent for monitoring traffic in data networks (a separate sFLOW analyzer is required elsewhere). POST diagnostic tests.

The following features are not supported by IPv6:
- Default switch management IP address
- SNMP trap host destination IP address
- Bootstrap Protocol (BOOTP) and DHCP
- RADIUS, TACACS+, and LDAP
- QoS metering and re-marking ACLs for out-profile traffic
- VMware Virtual Center (vCenter) for VMready
- Routing Information Protocol (RIP)
- Internet Group Management Protocol (IGMP)
- Border Gateway Protocol (BGP)
- Virtual Router Redundancy Protocol (VRRP)
- sFLOW
Standards supported
The switches support the following standards: IEEE 802.1AB data center Bridging Capability Exchange Protocol (DCBX) IEEE 802.1D Spanning Tree Protocol (STP) IEEE 802.1p Class of Service (CoS) prioritization IEEE 802.1s Multiple STP (MSTP) IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled) IEEE 802.1Qbg Edge Virtual Bridging IEEE 802.1Qbb Priority-Based Flow Control (PFC) IEEE 802.1Qaz Enhanced Transmission Selection (ETS) IEEE 802.1x port-based authentication IEEE 802.1w Rapid STP (RSTP) IEEE 802.2 Logical Link Control IEEE 802.3 10BASE-T Ethernet IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet IEEE 802.3ad Link Aggregation Control Protocol IEEE 802.3ae 10GBASE-SR short range fiber optics 10 Gb Ethernet IEEE 802.3ae 10GBASE-LR long range fiber optics 10 Gb Ethernet IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet IEEE 802.3u 100BASE-TX Fast Ethernet IEEE 802.3x Full-duplex Flow Control IEEE 802.3z 1000BASE-SX short range fiber optics Gigabit Ethernet IEEE 802.3z 1000BASE-LX long range fiber optics Gigabit Ethernet SFF-8431 10GSFP+Cu SFP+ Direct Attach Cable FC-BB-5 FCoE For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch, TIPS0910, found at: http://www.redbooks.ibm.com/abstracts/tips0910.html?Open
4.10.6 IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switch
The IBM Flex System EN4093 and IBM Flex System EN4093R 10Gb Scalable Switches are 10 Gb 64-port upgradeable midrange to high-end switch modules. They offer Layer 2/3 switching and are designed for installation within the I/O module bays of the Enterprise Chassis.
The newer EN4093R switch adds more capabilities to the EN4093, namely Virtual NIC (stacking), Unified Fabric Port (stacking), Edge Virtual Bridging (stacking), and CEE/FCoE (stacking), so it is ideal for clients that are looking to implement a converged infrastructure with NAS, iSCSI, or FCoE. For FCoE implementations, the EN4093R acts as a transit switch that forwards FCoE traffic upstream to other devices, such as the Brocade VDX or Cisco Nexus 5548/5596, where the FC traffic is broken out. For a detailed function comparison, see Table 4-24 on page 134.

Each switch contains the following ports:
- Up to 42 internal 10 Gb ports
- Up to 14 external 10 Gb uplink ports (enhanced small form-factor pluggable (SFP+) connectors)
- Up to two external 40 Gb uplink ports (quad small form-factor pluggable (QSFP+) connectors)

These switches are considered suitable for clients with these requirements:
- Building a 10 Gb infrastructure
- Implementing a virtualized environment
- Requiring investment protection for 40 Gb uplinks
- Wanting to reduce total cost of ownership (TCO) and improve performance, while maintaining high levels of availability and security
- Wanting to avoid oversubscription (traffic from multiple internal ports that attempt to pass through a lower quantity of external ports, leading to congestion and performance impact)

The EN4093/4093R 10Gb Scalable Switch is shown in Figure 4-45.
As listed in Table 4-22, the switch is initially licensed with fourteen 10 Gb internal ports that are enabled and ten 10 Gb external uplink ports enabled. Further ports can be enabled, including the two 40 Gb external uplink ports with the Upgrade 1 and four more SFP+ 10Gb ports with Upgrade 2 license options. Upgrade 1 must be applied before Upgrade 2 can be applied.
Table 4-22 IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades

                                                                                        Total ports that are enabled
Product description (part number, feature code a)                                       Internal 10Gb   External 10Gb uplinks   External 40Gb uplinks
IBM Flex System Fabric EN4093 10Gb Scalable Switch (49Y4270, A0TB / 3593)
  10x external 10 Gb uplinks, 14x internal 10 Gb ports                                  14              10                      0
IBM Flex System Fabric EN4093R 10Gb Scalable Switch (95Y3309, A3J6 / ESW7)
  10x external 10 Gb uplinks, 14x internal 10 Gb ports                                  14              10                      0
IBM Flex System Fabric EN4093 10Gb Scalable Switch, Upgrade 1 (49Y4798, A1EL / 3596)
  Adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports                           28              10                      2
IBM Flex System Fabric EN4093 10Gb Scalable Switch, Upgrade 2 (88Y6037, A1EM / 3597)
  Requires Upgrade 1; adds 4x external 10 Gb uplinks and 14x internal 10 Gb ports       42              14                      2
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
The key components on the front of the switch are shown in Figure 4-46.
Figure 4-46 IBM Flex System Fabric EN4093 10Gb Scalable Switch: 14x 10 Gb SFP+ uplink ports (10 standard, 4 more with Upgrade 2), 2x 40 Gb QSFP+ uplink ports (enabled with Upgrade 1), management ports, and switch LEDs
Each upgrade license enables additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch).
- Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch).

The short sketch that follows adds up the external uplink bandwidth that each upgrade stage enables.
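This is only illustrative arithmetic, not IBM tooling: the following Python sketch totals the external uplink bandwidth of the base switch, with Upgrade 1, and with both upgrades applied; the 220 Gb result matches the link aggregation figure quoted in the switch specifications later in this section.

def uplink_gbps(sfp_10g: int, qsfp_40g: int) -> int:
    """Total external uplink bandwidth in Gbps for a given port mix."""
    return sfp_10g * 10 + qsfp_40g * 40

# Base switch: 10x 10 GbE SFP+ uplinks enabled
print("Base switch       :", uplink_gbps(10, 0), "Gb")   # 100 Gb
# Upgrade 1 adds 2x 40 GbE QSFP+ uplinks
print("+ Upgrade 1       :", uplink_gbps(10, 2), "Gb")   # 180 Gb
# Upgrade 2 (requires Upgrade 1) adds 4 more 10 GbE SFP+ uplinks
print("+ Upgrades 1 and 2:", uplink_gbps(14, 2), "Gb")   # 220 Gb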
Consideration: Adding Upgrade 2 enables an additional 14 internal ports, for a total of 42 internal ports, with three ports that are connected to each of the 14 compute nodes in the chassis. To take full advantage of all 42 internal ports, a six-port adapter is required, such as the CN4058 adapter. Upgrade 2 still provides a benefit with a four-port adapter because this upgrade enables an extra four external 10 Gb uplinks as well.

The rear of the switch has 14 SFP+ module ports and two QSFP+ module ports. The QSFP+ ports can be used to provide either two 40 Gb uplinks or eight 10 Gb ports. Use one of the supported QSFP+ to 4x 10 Gb SFP+ cables that are listed in Table 4-23. This cable splits a single 40 Gb QSFP+ port into four SFP+ 10 Gb ports. The switch is designed to function with nodes that contain a 1Gb LOM, such as the IBM Flex System x220 Compute Node.

To manage the switch, a mini-USB port and an Ethernet management port are provided. The supported SFP+ and QSFP+ modules and cables for the switch are listed in Table 4-23.
Table 4-23 Supported SFP+ modules and cables

Part number   Feature code a   Description

Serial console cables
90Y9338       A2RR / None      IBM Flex System Management Serial Access Cable Kit

Small form-factor pluggable (SFP) transceivers - 1 GbE
81Y1618       3268 / EB29      IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps)
81Y1622       3269 / EB2A      IBM SFP SX Transceiver
90Y9424       A1PN / ECB8      IBM SFP LX Transceiver

SFP+ transceivers - 10 GbE
46C3447       5053 / None      IBM SFP+ SR Transceiver
90Y9412       A1PM / ECB9      IBM SFP+ LR Transceiver
44W4408       4942 / 3382      10GBase-SR SFP+ (MMFiber) transceiver

SFP+ Direct Attach Copper (DAC) cables - 10 GbE
90Y9427       A1PH / ECB4      1m IBM Passive DAC SFP+
90Y9430       A1PJ / ECB5      3m IBM Passive DAC SFP+
90Y9433       A1PK / ECB6      5m IBM Passive DAC SFP+

QSFP+ transceiver and cables - 40 GbE
49Y7884       A1DR / EB27      IBM QSFP+ 40GBASE-SR Transceiver (requires either cable 90Y3519 or cable 90Y3521)
90Y3519       A1MM / EB2J      10m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)
90Y3521       A1MN / EC2K      30m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)
QSFP+ breakout cables - 40 GbE to 4x10 GbE
49Y7886       A1DL / EB24      1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
49Y7887       A1DM / EB25      3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable
49Y7888       A1DN / EB26      5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable

QSFP+ Direct Attach Copper (DAC) cables - 40 GbE
49Y7890       A1DP / EB2B      1m QSFP+ to QSFP+ DAC
49Y7891       A1DQ / EB2H      3m QSFP+ to QSFP+ DAC
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
The EN4093/4093R 10Gb Scalable Switch has the following features and specifications:

Internal ports
- Forty-two internal full-duplex 10 Gigabit ports. Fourteen ports are enabled by default. Optional FoD licenses are required to activate the remaining 28 ports.
- Two internal full-duplex 1 GbE ports that are connected to the chassis management module.

External ports
- Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC cables. Ten ports are enabled by default. An optional FoD license is required to activate the remaining four ports. SFP+ modules and DAC cables are not included and must be purchased separately.
- Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs. (These ports are disabled by default. An optional FoD license is required to activate them.) QSFP+ modules and DAC cables are not included and must be purchased separately.
- One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.

Scalability and performance
- 40 Gb Ethernet ports for extreme uplink bandwidth and performance
- Fixed-speed external 10 Gb Ethernet ports to take advantage of 10 Gb core infrastructure
- Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization
- Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps
- Media Access Control (MAC) address learning: Automatic update, support of up to 128,000 MAC addresses
- Up to 128 IP interfaces per switch
- Static and Link Aggregation Control Protocol (LACP) (IEEE 802.3ad) link aggregation: Up to 220 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group
- Support for jumbo frames (up to 9,216 bytes)
- Broadcast/multicast storm control
- Internet Group Management Protocol (IGMP) snooping to limit flooding of IP multicast traffic
- IGMP filtering to control multicast traffic for hosts that participate in multicast groups
- Configurable traffic distribution schemes over trunk links that are based on source/destination IP or MAC addresses, or both
- Fast port forwarding and fast uplink convergence for rapid STP convergence

Availability and redundancy
- Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy
- IEEE 802.1D Spanning Tree Protocol (STP) for providing L2 redundancy
- IEEE 802.1s Multiple STP (MSTP) for topology optimization; up to 32 STP instances are supported by a single switch
- IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical delay-sensitive traffic like voice or video
- Rapid Per-VLAN STP (RPVST) enhancements
- Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes
- Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off

Virtual local area network (VLAN) support
- Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module's connection only)
- 802.1Q VLAN tagging support on all ports
- Private VLANs

Security
- VLAN-based, MAC-based, and IP-based access control lists (ACLs)
- 802.1x port-based authentication
- Multiple user IDs and passwords
- User access control
- RADIUS, TACACS+, and LDAP authentication and authorization

Quality of Service (QoS)
- Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing
- Traffic shaping and re-marking based on defined policies
- Eight weighted round robin (WRR) priority queues per port for processing qualified traffic

IP v4 Layer 3 functions
- Host management
- IP forwarding
- IP filtering with ACLs, up to 896 ACLs supported
- VRRP for router redundancy
- Support for up to 128 static routes
- Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a routing table
- Support for Dynamic Host Configuration Protocol (DHCP) Relay
- Support for IGMP snooping and IGMP relay
- Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM)
- 802.1Qbg support

IP v6 Layer 3 functions
- IPv6 host management (except default switch management IP address)
- IPv6 forwarding
- Up to 128 static routes
- Support for OSPF v3 routing protocol
- IPv6 filtering with ACLs

Virtualization
- Virtual Fabric with virtual network interface card (vNIC)
- 802.1Qbg Edge Virtual Bridging (EVB)
- IBM VMready

Converged Enhanced Ethernet
- Priority-based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow control to allow the switch to pause traffic. This function is based on the 802.1p priority value in each packet's VLAN tag.
- Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth that is based on the 802.1p priority value in each packet's VLAN tag.
- Data center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.

Manageability
- Simple Network Management Protocol (SNMP V1, V2, and V3)
- HTTP browser GUI
- Telnet interface for CLI
- Secure Shell (SSH)
- Serial interface for CLI
- Scriptable CLI
- Firmware image update: Trivial File Transfer Protocol (TFTP) and File Transfer Protocol (FTP)
- Network Time Protocol (NTP) for switch clock synchronization

Monitoring
- Switch LEDs for external port status and switch module status indication
- Remote monitoring (RMON) agent to collect statistics and proactively monitor switch performance
- Port mirroring for analyzing network traffic that passes through the switch
- Change tracking and remote logging with syslog feature
- Support for sFLOW agent for monitoring traffic in data networks (a separate sFLOW analyzer is required elsewhere)
- POST diagnostic procedures
- Stacking: Up to eight switches in a stack
- FCoE support (EN4093R only)
- vNIC support (support for FCoE on vNICs)

Table 4-24 compares the EN4093 to the EN4093R.
Table 4-24 EN4093 and EN4093R supported features Feature Layer 2 switching Layer 3 switching Switch Stacking Virtual NIC (stand-alone) Virtual NIC (stacking) Unified Fabric Port (stand-alone) Unified Fabric Port (stacking) Edge virtual bridging (stand-alone) Edge virtual bridging (stacking) CEE/FCoE (stand-alone) CEE/FCoE (stacking) EN4093 Yes Yes Yes Yes Yes Yes No Yes Yes Yes No EN4093R Yes Yes Yes Yes Yes Yes No Yes Yes Yes Yes
Both the EN4093 and EN4093R support vNIC + FCoE and 802.1Qbg + FCoE stand-alone (without stacking). The EN4093R supports vNIC + FCoE with stacking or 802.1Qbg + FCoE with stacking. For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switches, TIPS0864, found at: http://www.redbooks.ibm.com/abstracts/tips0864.html?Open
The IBM Flex System EN4091 10Gb Ethernet Pass-thru Module is shown in Figure 4-47.
Figure 4-47 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module
The ordering part number and feature codes are listed in Table 4-25.
Table 4-25 EN4091 10Gb Ethernet Pass-thru Module part number and feature codes

Part number   Feature code a   Product name
88Y6043       A1QV / 3700      IBM Flex System EN4091 10Gb Ethernet Pass-thru
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
The EN4091 10Gb Ethernet Pass-thru Module has the following specifications:
- Internal ports: 14 internal full-duplex Ethernet ports that can operate at 1 Gb or 10 Gb speeds.
- External ports: Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC. SFP+ modules and DAC cables are not included, and must be purchased separately.
- Unmanaged device that has no internal Ethernet management port. However, it is able to provide its VPD to the secure management network in the Chassis Management Module.
- Supports 10 Gb Ethernet signaling for CEE, FCoE, and other Ethernet-based transport protocols.
- Allows direct connection from the 10 Gb Ethernet adapters that are installed in compute nodes in a chassis to an externally located top-of-rack switch or other external device.

Consideration: The EN4091 10Gb Ethernet Pass-thru Module has only 14 internal ports. As a result, only two ports on each compute node are enabled, one for each of two pass-through modules that are installed in the chassis. If four-port adapters are installed in the compute nodes, ports 3 and 4 on those adapters are not enabled.

There are three standard I/O module status LEDs, as shown in Figure 4-41 on page 115. Each port has link and activity LEDs.
SFP+ transceivers - 10 GbE
44W4408       4942 / 3282      10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
46C3447       5053 / None      IBM SFP+ SR Transceiver
90Y9412       A1PM / None      IBM SFP+ LR Transceiver

SFP transceivers - 1 GbE
81Y1622       3269 / EB2A      IBM SFP SX Transceiver
81Y1618       3268 / EB29      IBM SFP RJ45 Transceiver
90Y9424       A1PN / None      IBM SFP LX Transceiver

Direct-attach copper (DAC) cables
81Y8295       A18M / EN01      1m 10GE Twinax Act Copper SFP+ DAC (active)
81Y8296       A18N / EN02      3m 10GE Twinax Act Copper SFP+ DAC (active)
81Y8297       A18P / EN03      5m 10GE Twinax Act Copper SFP+ DAC (active)
95Y0323       A25A / None      1m IBM Active DAC SFP+ Cable
95Y0326       A25B / None      3m IBM Active DAC SFP+ Cable
95Y0329       A25C / None      5m IBM Active DAC SFP+ Cable
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
For more information, see the IBM Redbooks Product Guide IBM Flex System EN4091 10Gb Ethernet Pass-thru Module, TIPS0865, found at: http://www.redbooks.ibm.com/abstracts/tips0865.html?Open
Figure 4-48 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
As listed in Table 4-27, the switch comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb uplink ports. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order.
Table 4-27 IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades

Part number   Feature code a   Product description
49Y4294       A0TF / 3598      IBM Flex System EN2092 1Gb Ethernet Scalable Switch: 14 internal 1 Gb ports, 10 external 1 Gb ports
90Y3562       A1QW / 3594      IBM Flex System EN2092 1Gb Ethernet Scalable Switch (Upgrade 1): adds 14 internal 1 Gb ports and 10 external 1 Gb ports
49Y4298       A1EN / 3599      IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb Uplinks): adds 4 external 10 Gb uplinks
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
The key components on the front of the switch are shown in Figure 4-49.
Figure 4-49 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 more internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter that is installed in each compute node (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter that is installed in each compute node (two ports of the adapter to each switch).

The standard switch has 10 external ports enabled. Additional external ports are enabled with license upgrades:
- Upgrade 1 enables 10 more ports, for a total of 20 ports.
- The Uplinks Upgrade enables the four 10 Gb SFP+ ports.

These two upgrades can be installed in either order.

This switch is considered ideal for clients with these characteristics:
- Still use 1 Gb as their networking infrastructure
- Are deploying virtualization and require multiple 1 Gb ports
- Want investment protection for 10 Gb uplinks
- Are looking to reduce TCO and improve performance, while maintaining high levels of availability and security
- Are looking to avoid oversubscription (multiple internal ports that attempt to pass through a lower quantity of external ports, leading to congestion and performance impact)

The switch has three switch status LEDs (see Figure 4-41 on page 115) and one mini-USB serial port connector for console management. Uplink ports 1 - 20 are RJ45, and the 4 x 10 Gb uplink ports are SFP+. The switch supports either SFP+ modules or DAC cables. The supported SFP+ modules and DAC cables for the switch are listed in Table 4-28.
Table 4-28 IBM Flex System EN2092 1Gb Ethernet Scalable Switch SFP+ and DAC cables

Part number   Feature code a   Description

SFP transceivers
81Y1622       3269 / EB2A      IBM SFP SX Transceiver
81Y1618       3268 / EB29      IBM SFP RJ45 Transceiver
90Y9424       A1PN / None      IBM SFP LX Transceiver

SFP+ transceivers
44W4408       4942 / 3282      10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)
46C3447       5053 / None      IBM SFP+ SR Transceiver
90Y9412       A1PM / None      IBM SFP+ LR Transceiver

DAC cables
90Y9427       A1PH / None      1m IBM Passive DAC SFP+
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
The EN2092 1 Gb Ethernet Scalable Switch has the following features and specifications:

Internal ports
- Twenty-eight internal full-duplex Gigabit ports. Fourteen ports are enabled by default. An optional FoD license is required to activate another 14 ports.
- Two internal full-duplex 1 GbE ports that are connected to the chassis management module.

External ports
- Four ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC. These ports are disabled by default. An optional FoD license is required to activate them. SFP+ modules are not included and must be purchased separately.
- Twenty external 10/100/1000 1000BASE-T Gigabit Ethernet ports with RJ-45 connectors. Ten ports are enabled by default. An optional FoD license is required to activate another 10 ports.
- One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.

Scalability and performance
- Fixed-speed external 10 Gb Ethernet ports for maximum uplink bandwidth
- Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization
- Non-blocking architecture with wire-speed forwarding of traffic
- MAC address learning: Automatic update, support of up to 32,000 MAC addresses
- Up to 128 IP interfaces per switch
- Static and LACP (IEEE 802.3ad) link aggregation, up to 60 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports per group
- Support for jumbo frames (up to 9,216 bytes)
- Broadcast/multicast storm control
- IGMP snooping to limit flooding of IP multicast traffic
- IGMP filtering to control multicast traffic for hosts that participate in multicast groups
- Configurable traffic distribution schemes over trunk links that are based on source/destination IP or MAC addresses, or both
- Fast port forwarding and fast uplink convergence for rapid STP convergence

Availability and redundancy
- VRRP for Layer 3 router redundancy
- IEEE 802.1D STP for providing L2 redundancy
- IEEE 802.1s MSTP for topology optimization; up to 32 STP instances are supported by a single switch
- IEEE 802.1w RSTP (provides rapid STP convergence for critical delay-sensitive traffic like voice or video)
- RPVST enhancements
- Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes
- Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off

VLAN support
- Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to 4095 (4095 is used for the management module's connection only)
- 802.1Q VLAN tagging support on all ports
- Private VLANs

Security
- VLAN-based, MAC-based, and IP-based ACLs
- 802.1x port-based authentication
- Multiple user IDs and passwords
- User access control
- RADIUS, TACACS+, and Lightweight Directory Access Protocol (LDAP) authentication and authorization

QoS
- Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing
- Traffic shaping and re-marking based on defined policies
- Eight WRR priority queues per port for processing qualified traffic

IP v4 Layer 3 functions
- Host management
- IP forwarding
- IP filtering with ACLs, up to 896 ACLs supported
- VRRP for router redundancy
- Support for up to 128 static routes
- Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a routing table
- Support for DHCP Relay
- Support for IGMP snooping and IGMP relay
- Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM)

IP v6 Layer 3 functions
- IPv6 host management (except default switch management IP address)
- IPv6 forwarding
- Up to 128 static routes
- Support for OSPF v3 routing protocol
- IPv6 filtering with ACLs

Virtualization
- VMready

Manageability
- Simple Network Management Protocol (SNMP V1, V2, and V3)
- HTTP browser GUI
- Telnet interface for CLI
- SSH
- Serial interface for CLI
- Scriptable CLI
- Firmware image update (TFTP and FTP)
- NTP for switch clock synchronization

Monitoring
- Switch LEDs for external port status and switch module status indication
- RMON agent to collect statistics and proactively monitor switch performance
- Port mirroring for analyzing network traffic that passes through the switch
- Change tracking and remote logging with the syslog feature
- Support for the sFLOW agent for monitoring traffic in data networks (a separate sFLOW analyzer is required elsewhere)
- POST diagnostic functions

For more information, see the IBM Redbooks Product Guide IBM Flex System EN2092 1Gb Ethernet Scalable Switch, TIPS0861, found at: http://www.redbooks.ibm.com/abstracts/tips0861.html?Open
Figure 4-50 shows the IBM Flex System FC5022 16Gb SAN Scalable Switch.
Figure 4-50 IBM Flex System FC5022 16Gb SAN Scalable Switch
Three versions are available, as listed in Table 4-29: 12-port and 24-port switch modules, and a 24-port switch with the Enterprise Switch Bundle (ESB) software. The port count can be applied to internal or external ports by using a feature that is called Dynamic Ports on Demand (DPOD). Port counts can be increased with license upgrades, as described in Port and feature upgrades on page 143.
Table 4-29 IBM Flex System FC5022 16Gb SAN Scalable Switch part numbers

Part number   Feature codes a   Description                                                     Ports enabled by default
88Y6374       A1EH / 3770       IBM Flex System FC5022 16Gb SAN Scalable Switch                 12
00Y3324       A3DP / ESW5       IBM Flex System FC5022 24-port 16Gb SAN Scalable Switch         24
90Y9356       A1EJ / 3771       IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch     24
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
Table 4-30 provides a feature comparison between the FC5022 switch models.
Table 4-30 Feature comparison by model

Feature                            FC5022 16Gb 24-port       FC5022 24-port 16Gb SAN       FC5022 16Gb SAN
                                   ESB Switch (90Y9356)      Scalable Switch (00Y3324)     Scalable Switch (88Y6374)
Number of active ports             24                        24                            12
Number of SFP+ included            None                      2x 16 Gb SFP+                 None
Full fabric                        Included                  Included                      Included
Access Gateway                     Included                  Included                      Included
Advanced zoning                    Included                  Included                      Included
Enhanced Group Management          Included                  Included                      Included
ISL Trunking                       Included                  Optional                      Not available
Adaptive Networking                Included                  Not available                 Not available
Advanced Performance Monitoring    Included                  Not available                 Not available
Fabric Watch                       Included                  Optional                      Not available
Two additional software features in this comparison are Included on the FC5022 16Gb 24-port ESB Switch (90Y9356) and Not available on the FC5022 24-port 16Gb SAN Scalable Switch (00Y3324) and the FC5022 16Gb SAN Scalable Switch (88Y6374).
The part number for the switch includes the following items:
- One IBM Flex System FC5022 16Gb SAN Scalable Switch or IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
- Important Notices Flyer
- Warranty Flyer
- Documentation CD-ROM

The switch does not include a serial management cable. However, the IBM Flex System Management Serial Access Cable, 90Y9338, is supported and contains two cables: a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable. Either cable can be used to connect to the switch locally for configuration tasks and firmware updates.
Feature codesa A1EP / 3772 A1EQ / 3773 A3HN / ESW3 A3HP / ESW4
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
With DPOD, ports are licensed as they come online. With the FC5022 16Gb SAN Scalable Switch, the first 12 ports that report (on a first-come, first-served basis) on boot are assigned licenses. These 12 ports can be any combination of external or internal Fibre Channel ports. After all the licenses are assigned, you can manually move those licenses from one port to another port. Because this process is dynamic, no defined ports are reserved except ports 0 and 29. The FC5022 16Gb ESB Switch has the same behavior. The only difference is the number of ports.
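The first-come, first-served behavior can be modeled with a short Python sketch. This is purely illustrative, not Brocade firmware, and it ignores the two always-reserved ports (0 and 29) that are described above; the port numbers in the example are hypothetical.

def assign_dpod_licenses(ports_in_boot_order: list[int], licenses: int) -> dict[int, bool]:
    """Map each port number to whether it received one of the available licenses."""
    licensed = set(ports_in_boot_order[:licenses])     # first-come, first-served
    return {port: port in licensed for port in sorted(ports_in_boot_order)}

# Hypothetical example: a mix of internal and external ports reporting in on boot,
# checked against the 12 licenses of the base FC5022 switch.
boot_order = [3, 17, 5, 42, 1, 28, 9, 33, 12, 7, 21, 40, 15, 2]
print(assign_dpod_licenses(boot_order, licenses=12))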
Table 4-32 shows the total number of active ports on the switch after you apply compatible port upgrades.
Table 4-32 Total port counts after you apply upgrades

                                          Total number of active ports
Ports on Demand upgrade                   24-port 16 Gb ESB SAN switch (90Y9356)   24-port 16 Gb SAN switch (00Y3324)   16 Gb SAN switch (88Y6374)
Included with base switch                 24                                       24                                   12
Upgrade 1, 88Y6382 (adds 12 ports)        Not supported                            Not supported                        24
Upgrade 2, 88Y6386 (adds 24 ports)        48                                       48                                   48
Transceivers
The FC5022 12-port and 24-port ESB SAN switches come without SFP+, which must be ordered separately to provide outside connectivity. The FC5022 24-port SAN switch comes standard with two Brocade 16 Gb SFP+ transceivers; more SFP+ can be ordered if required. Table 4-33 lists the supported SFP+ options.
Table 4-33 Supported SFP+ transceivers

Part number   Feature code a   Description
88Y6416       5084 / 5370      Brocade 8 Gb SFP+ SW Optical Transceiver
88Y6393       A22R / 5371      Brocade 16 Gb SFP+ Optical Transceiver
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
Benefits
The switches offer the following key benefits: Exceptional price/performance for growing SAN workloads The FC5022 16Gb SAN Scalable Switch delivers exceptional price/performance for growing SAN workloads. It achieves this through a combination of market-leading 1,600 MBps throughput per port and an affordable high-density form factor. The 48 FC ports produce an aggregate 768 Gbps full-duplex throughput, plus any external eight ports can be trunked for 128 Gbps inter-switch links (ISLs). Because 16 Gbps port technology dramatically reduces the number of ports and associated optics/cabling required through 8/4 Gbps consolidation, the cost savings and simplification benefits are substantial. Accelerating fabric deployment and serviceability with diagnostic ports Diagnostic Ports (D_Ports) are a new port type that is supported by the FC5022 16Gb SAN Scalable Switch. They enable administrators to quickly identify and isolate 16 Gbps optics, port, and cable problems, reducing fabric deployment and diagnostic times. If the optical media is found to be the source of the problem, it can be transparently replaced because 16 Gbps optics are hot-pluggable.
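The headline throughput figures follow directly from the port count and the per-port speed. A trivial check in Python (illustrative arithmetic only, not a benchmark):

PORT_SPEED_GBPS = 16        # 16 Gbps Fibre Channel ports (1,600 MBps per port)

print("Aggregate throughput, 48 ports:", 48 * PORT_SPEED_GBPS, "Gbps")   # 768 Gbps
print("8-port ISL trunk              :", 8 * PORT_SPEED_GBPS, "Gbps")    # 128 Gbps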
A building block for virtualized, private cloud storage
The FC5022 16Gb SAN Scalable Switch supports multi-tenancy in cloud environments through VM-aware end-to-end visibility and monitoring, QoS, and fabric-based advanced zoning features. The switch enables secure distance extension to virtual private or hybrid clouds with dark fiber support, and it also enables in-flight encryption and data compression. Internal fault-tolerant and enterprise-class reliability, availability, and serviceability (RAS) features help minimize downtime to support mission-critical cloud environments.

Simplified and optimized interconnect with Brocade Access Gateway
The FC5022 16Gb SAN Scalable Switch can be deployed as a full-fabric switch or as a Brocade Access Gateway, which simplifies fabric topologies and heterogeneous fabric connectivity. Access Gateway mode uses N_Port ID Virtualization (NPIV) switch standards to present physical and virtual servers directly to the core of SAN fabrics. This configuration makes the module transparent to the SAN fabric, greatly reducing management of the network edge.

Maximizing investments
To help optimize technology investments, IBM offers a single point of serviceability that is backed by industry-renowned education, support, and training. In addition, the IBM 16/8 Gbps SAN Scalable Switch is in the IBM ServerProven program, enabling compatibility among various IBM and partner products. IBM recognizes that customers deserve the most innovative, expert integrated systems solutions.
Brocade Fabric OS delivers distributed intelligence throughout the network and enables a wide range of value-added applications. These applications include Brocade Advanced Web Tools and Brocade Advanced Fabric Services (on certain models). Supports up to 768 Gbps I/O bandwidth 420 million frames switches per second, 0.7 microseconds latency 8,192 buffers for up to 3,750 km extended distance at 4 Gbps FC (Extended Fabrics license required) In-flight 64 Gbps Fibre Channel compression and decompression support on up to two external ports (no license required) In-flight 32 Gbps encryption and decryption on up to two external ports (no license required) 48 Virtual Channels per port Port mirroring to monitor ingress or egress traffic from any port within the switch Two I2C connections able to interface with redundant management modules Hot pluggable, up to four hot pluggable switches per chassis Single fuse circuit Four temperature sensors Managed with Brocade Web Tools Supports a minimum of 128 domains in Native mode and Interoperability mode Nondisruptive code load in Native mode and Access Gateway mode 255 N_port logins per physical port D_port support on external ports Class 2 and Class 3 frames SNMP v1 and v3 support SSH v2 support Secure Sockets Layer (SSL) support NTP client support (NTP V3) FTP support for firmware upgrades SNMP/Management Information Base (MIB) monitoring functionality that is contained within the Ethernet Control MIB-II (RFC1213-MIB) End-to-end optics and link validation Sends switch events and syslogs to the CMM Traps identify cold start, warm start, link up/link down and authentication failure events Support for IPv4 and IPv6 on the management ports The FC5022 16Gb SAN Scalable Switches come standard with the following software features: Brocade Full Fabric mode: Enables high performance 16 Gb or 8 Gb fabric switching Brocade Access Gateway mode: Uses NPIV to connect to any fabric without adding switch domains to reduce management complexity Dynamic Path Selection: Enables exchange-based load balancing across multiple Inter-Switch Links for superior performance
Brocade Advanced Zoning: Segments a SAN into virtual private SANs to increase security and availability Brocade Enhanced Group Management: Enables centralized and simplified management of Brocade fabrics through IBM Network Advisor
The switch supports the following Fibre Channel standards:
- FC-VI INCITS 357: 2002
- FC-TAPE INCITS TR-24: 1999
- FC-DA INCITS TR-36: 2004, which includes FC-FLA INCITS TR-20: 1998 and FC-PLDA INCITS TR-19: 1998
- FC-MI-2 ANSI/INCITS TR-39-2005
- FC-PI INCITS 352: 2002
- FC-PI-2 INCITS 404: 2005
- FC-PI-4 INCITS 1647-D, revision 7.1 (under development)
- FC-PI-5 INCITS 479: 2011
- FC-FS-2 ANSI/INCITS 424:2006, which includes FC-FS INCITS 373: 2003
- FC-LS INCITS 433: 2007
- FC-BB-3 INCITS 414: 2006, which includes FC-BB-2 INCITS 372: 2003
- FC-SB-3 INCITS 374: 2003 (replaces FC-SB ANSI X3.271: 1996 and FC-SB-2 INCITS 374: 2001)
- RFC 2625 IP and ARP Over FC
- RFC 2837 Fabric Element MIB
- MIB-FA INCITS TR-32: 2003
- FCP-2 INCITS 350: 2003 (replaces FCP ANSI X3.269: 1996)
- SNIA Storage Management Initiative Specification (SMI-S) Version 1.2, which includes SMI-S Version 1.03 ISO standard IS24775-2006 (replaces ANSI INCITS 388: 2004), SMI-S Version 1.1.0, and SMI-S Version 1.2.0
For more information, see the IBM Redbooks Product Guide IBM Flex System FC5022 16Gb SAN Scalable Switches, TIPS0870, found at:
http://www.redbooks.ibm.com/abstracts/tips0870.html?Open
The I/O module has 14 internal ports and 6 external ports. All ports are enabled because the switch has no port licensing requirements. Ordering information is listed in Table 4-34.
Table 4-34 FC3171 8Gb SAN Switch ordering information
Part number   Feature code (a)   Product name
69Y1930       A0TD / 3595        IBM Flex System FC3171 8Gb SAN Switch
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
No SFP modules or cables are supplied as standard. The modules and cables that are listed in Table 4-35 are supported.
Table 4-35 FC3171 8Gb SAN Switch supported SFP modules and cables
Part number   Feature codes (a)   Description
44X1964       5075 / 3286         IBM 8 Gb SFP+ SW Optical Transceiver
39R6475       4804 / 3238         4 Gb SFP Transceiver Option
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
You can reconfigure the FC3171 8Gb SAN Switch to become a pass-through module by using the switch GUI or CLI. The module can then be converted back to a full-function SAN switch at a future date. The switch requires a reset when you turn transparent mode on or off. The switch can be configured by using either the command line or QuickTools:
Command Line: Access the switch by using the console port through the Chassis Management Module or through the Ethernet port. This method requires a basic understanding of the CLI commands.
QuickTools: Requires a current version of the Java runtime environment (JRE) on your workstation before you point a web browser to the switch's IP address. The IP address of the switch must be configured. QuickTools does not require a license, and the code is included.
When the switch is in Full Fabric mode, access to all of the Fibre Channel security features is provided. Security includes the additional services of SSL and SSH, and RADIUS servers can be used for device and user authentication. After SSL/SSH is enabled, the security features can be configured, which allows the SAN administrator to control which devices are allowed to log on to the Full Fabric Switch module. This process is done by creating security sets with security groups, and these sets are configured on a per-switch basis. The security features are not available in pass-through mode.
Here are the FC3171 8Gb SAN Switch specifications and standards:
Fibre Channel standards: FC-PH version 4.3, FC-PH-2, FC-PH-3, FC-AL version 4.5, FC-AL-2 Rev 7.0, FC-FLA, FC-GS-3, FC-FG, FC-PLDA, FC-Tape, FC-VI, FC-SW-2, Fibre Channel Element MIB RFC 2837, Fibre Alliance MIB version 4.0
Fibre Channel protocols: Fibre Channel service classes Class 2 and Class 3; operation modes Fibre Channel Class 2 and Class 3, connectionless
External port type: Full fabric mode: generic loop port; Transparent mode: transparent fabric port
Internal port type: Full fabric mode: F_port; Transparent mode: transparent host port/NPIV mode, with support for up to 44 host NPIV logins
Port characteristics: External ports are automatically detected and self-configuring; port LEDs illuminate at startup
Number of Fibre Channel ports: 6 external ports and 14 internal ports
Scalability: Up to 239 switches maximum, depending on your configuration
Buffer credits: 16 buffer credits per port
Maximum frame size: 2148 bytes (2112-byte payload)
Standards-based FC, FC-SW2 interoperability
Support for up to a 255 to 1 port-mapping ratio
Media type: SFP+ module
2 Gb specifications 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second) 2 Gb fabric latency: Less than 0.4 msec 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex
4 Gb specifications: 4 Gb switch speed: 4.250 Gbps; 4 Gb switch fabric point-to-point: 4 Gbps at full duplex; 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex
8 Gb specifications: 8 Gb switch speed: 8.5 Gbps; 8 Gb switch fabric point-to-point: 8 Gbps at full duplex; 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex
Nonblocking architecture to prevent latency
System processor: IBM PowerPC
For more information, see the IBM Redbooks Product Guide IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866, found at:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open
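The aggregate bandwidth figures follow directly from the port count and per-port speed. The following sketch is illustrative arithmetic only, assuming the 20 ports (14 internal plus 6 external) listed above and counting both directions for full duplex:

```python
# Aggregate fabric bandwidth ~= ports x per-port speed x 2 (full duplex).
# The port count and speeds are taken from the specifications above.
PORTS = 14 + 6

for speed_gbps in (2, 4, 8):
    aggregate = PORTS * speed_gbps * 2   # x2 because full duplex counts both directions
    print(f"{speed_gbps} Gb fabric: {aggregate} Gbps aggregate at full duplex")
# Prints 80, 160, and 320 Gbps, matching the 2 Gb, 4 Gb, and 8 Gb figures above.
```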
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
Exception: If you must enable full fabric capability later, do not purchase this switch. Instead, purchase the FC3171 8Gb SAN Switch.
No SFPs are supplied with the switch; they must be ordered separately. Supported transceivers and fiber optic cables are listed in Table 4-37.
Table 4-37 FC3171 8Gb SAN Pass-thru supported modules and cables
Part number   Feature code   Description
44X1964       5075 / 3286    IBM 8 Gb SFP+ SW Optical Transceiver
39R6475       4804 / 3238    4 Gb SFP Transceiver Option
The FC3171 8Gb SAN Pass-thru can be configured by using either the command line or QuickTools:
Command Line: Access the module by using the console port through the Chassis Management Module or through the Ethernet port. This method requires a basic understanding of the CLI commands.
QuickTools: Requires a current version of the JRE on your workstation before you point a web browser to the module's IP address. The IP address of the module must be configured. QuickTools does not require a license, and the code is included.
The pass-through module supports the following standards:
Fibre Channel standards: FC-PH version 4.3, FC-PH-2, FC-PH-3, FC-AL version 4.5, FC-AL-2 Rev 7.0, FC-FLA, FC-GS-3, FC-FG, FC-PLDA, FC-Tape, FC-VI, FC-SW-2, Fibre Channel Element MIB RFC 2837, Fibre Alliance MIB version 4.0
Fibre Channel protocols: Fibre Channel service classes Class 2 and Class 3; operation modes Fibre Channel Class 2 and Class 3, connectionless
External port type: Transparent fabric port
Internal port type: Transparent host port/NPIV mode, with support for up to 44 host NPIV logins
Port characteristics: External ports are automatically detected and self-configuring; port LEDs illuminate at startup
Number of Fibre Channel ports: 6 external ports and 14 internal ports
Scalability: Up to 239 switches maximum, depending on your configuration
Buffer credits: 16 buffer credits per port
Maximum frame size: 2148 bytes (2112-byte payload)
Standards-based FC, FC-SW2 interoperability
Support for up to a 255 to 1 port-mapping ratio
Media type: SFP+ module
Fabric point-to-point bandwidth: 2 Gbps or 8 Gbps at full duplex
2 Gb specifications: 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second); 2 Gb fabric latency: less than 0.4 msec; 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex
4 Gb specifications: 4 Gb switch speed: 4.250 Gbps; 4 Gb switch fabric point-to-point: 4 Gbps at full duplex; 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex
8 Gb specifications: 8 Gb switch speed: 8.5 Gbps; 8 Gb switch fabric point-to-point: 8 Gbps at full duplex; 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex
System processor: PowerPC
Maximum frame size: 2148 bytes (2112-byte payload)
Nonblocking architecture to prevent latency
For more information, see the IBM Redbooks Product Guide IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866, found at:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open
Table 4-38 IB6131 InfiniBand Switch ordering information (partial): part numbers 90Y3450 and 90Y3462 (the FDR upgrade, see below); feature code A1QX / ESW1
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
Running MLNX-OS, this switch has one external 1 Gb management port and a mini-USB serial port for software updates and debug use, in addition to the internal and external InfiniBand ports. The switch has 14 internal QDR links and 18 external QSFP uplink ports, and all ports are enabled. The switch can be upgraded to FDR speed (56 Gbps) by using the FoD process with part number 90Y3462, as listed in Table 4-38 on page 153. No InfiniBand cables are shipped as standard with this switch; they must be purchased separately. Supported cables are listed in Table 4-39.
Table 4-39 IB6131 InfiniBand Switch supported InfiniBand cables
Part number   Feature codes (a)   Description
49Y9980       3866 / 3249         IB QDR 3m QSFP Cable Option (passive)
90Y3470       A227 / ECB1         3m FDR InfiniBand Cable (passive)
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config. The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using e-config.
The switch has the following specifications:
- IBTA 1.3 and 1.21 compliance
- Congestion control
- Adaptive routing
- Port mirroring
- Auto-negotiation of 10 Gbps, 20 Gbps, 40 Gbps, or 56 Gbps
- Measured node-to-node latency of less than 170 nanoseconds
- Mellanox QoS: 9 InfiniBand virtual lanes for all ports, eight data transport lanes, and one management lane
- High switching performance: simultaneous wire-speed any port to any port
- Addressing: 48K unicast addresses maximum per subnet, 16K multicast addresses per subnet
- Switch throughput capability of 1.8 Tb/s
For more information, see the IBM Redbooks Product Guide IBM Flex System IB6131 InfiniBand Switch, TIPS0871, found at:
http://www.redbooks.ibm.com/abstracts/tips0871.html?Open
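The quoted 1.8 Tb/s throughput is consistent with simple port arithmetic. A small illustrative sketch, assuming all 32 ports (14 internal plus 18 external) run at the 56 Gbps FDR rate mentioned above:

```python
# Switch throughput ~= number of ports x per-port data rate.
# Port counts and the 56 Gbps FDR rate are taken from the text above;
# the arithmetic itself is only an illustration.
internal_ports = 14
external_ports = 18
fdr_gbps = 56

throughput_gbps = (internal_ports + external_ports) * fdr_gbps
print(f"{throughput_gbps} Gbps ~= {throughput_gbps / 1000:.1f} Tb/s")  # 1792 Gbps ~= 1.8 Tb/s
```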
4.11.4, UPS planning on page 160 4.11.5, Console planning on page 161 4.11.6, Cooling planning on page 162 4.11.7, Chassis-rack cabinet compatibility on page 163 For more information about planning your IBM Flex System power infrastructure, see the IBM Flex System Enterprise Chassis Power Requirements Guide at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
PDU options (partial table): IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU+ (NA); IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU (NA)
Figure 4-54 shows a typical configuration given a 32A 3-phase wye supply at 380-415VAC (often termed WW or International) N+N.
(Figure 4-54: N+N wiring diagram showing L1, L2, L3, N, and G connections and the power cables)
The maximum number of Enterprise Chassis that can be installed within a 42U rack is four. Therefore, the rack requires a total of four 32A 3-phase wye feeds to provide a fully redundant N+N configuration.
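For planning purposes, the capacity of each feed can be estimated with the standard three-phase power formula. The following sketch is illustrative only and assumes a nominal 400 VAC line-to-line supply within the 380-415 VAC range mentioned above:

```python
import math

# Approximate power available from one 32 A, 3-phase wye feed: P = sqrt(3) x V(line-to-line) x I.
# 400 VAC is an assumed nominal value within the 380-415 VAC range given above.
line_to_line_v = 400
feed_current_a = 32

feed_kva = math.sqrt(3) * line_to_line_v * feed_current_a / 1000
print(f"~{feed_kva:.1f} kVA per 32 A 3-phase wye feed")  # roughly 22 kVA per feed
```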
(Figure: single-phase 60 A supply configuration using the 40K9615 IBM DPI 60a Cord (IEC 309 2P+G); building power = 200 VAC, 60 Amp, 1 phase, with 48 A supplied by the PDU after UL derating)
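The 48 A figure reflects the usual North American practice of derating a branch circuit to 80% of its rating for continuous loads. A minimal sketch of that arithmetic, assuming the 80% derating factor and the 200 VAC, 60 A supply described above:

```python
# Continuous-load derating: the PDU may draw at most 80% of the circuit rating.
supply_voltage_v = 200
circuit_rating_a = 60
derating_factor = 0.8            # assumed 80% continuous-load derating

usable_current_a = circuit_rating_a * derating_factor        # 48 A, as stated above
usable_kva = supply_voltage_v * usable_current_a / 1000      # single phase: P = V x I
print(f"{usable_current_a:.0f} A usable, ~{usable_kva:.1f} kVA per PDU feed")
```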
For more information about planning your IBM Flex System power infrastructure, see the IBM Flex System Enterprise Chassis Power Requirements Guide at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
At international voltages, the 11000VA UPS is ideal for powering a fully loaded chassis. Figure 4-58 shows how each power feed can be connected to one of the four 20A outlets on the rear of the UPS. This UPS requires hard wiring to a suitable supply by a qualified electrician.
In North America the available UPS at 200-208VAC is the UPS6000. This UPS has two outlets that can be used to power two of the power supplies within the chassis. In a fully loaded chassis, the third pair of power supplies must be connected to another UPS. Figure 4-59 shows this UPS configuration.
Figure 4-59 Two UPS 6000 North American (200 - 208 VAC)
For more information, see the at-a-glance guide IBM 11000VA LCD 5U Rack Uninterruptible Power Supply, TIPS0814, found at: http://www.redbooks.ibm.com/abstracts/tips0814.html?Open
Connecting to the FSM management interface with a browser provides remote presence to each node within the chassis. Connecting remotely to the Ethernet management port of the CMM with a browser also provides remote presence to each node within the chassis. You can also connect directly to the IMM2 on each node and start a remote console session to that node through the IMM2.
If the chassis is installed within a non-IBM rack, the vertical rails must have clearances to EIA-310-D. There must be sufficient room in front of the vertical front rack-mounted rail to provide a minimum bezel clearance of 70 mm (2.76 inches) depth. The rack must be strong enough to support the weight of the chassis, cables, power supplies, and other items that are installed within it. There must be sufficient room behind the rear rack rails to provide for cable management and routing. Ensure the stability of any non-IBM rack by using stabilization feet or baying kits so that it does not become unstable when it is fully populated. Finally, ensure that sufficient airflow is available to the Enterprise Chassis. Racks with glass fronts do not normally allow sufficient airflow into the chassis, unless they are specialized racks that are specifically designed for forced air cooling. Airflow information in CFM is available from the IBM Power Configurator tool.
Table 4-43 Rack cabinet compatibility (partial)
Rack cabinet                                       Feature code
IBM 47U 1200 mm Deep Static Expansion Rack         7654
IBM eServer Cluster 25U Rack                       1047
IBM Linux Cluster 42U Rack                         1048
IBM Netfinity Rack                                 None
IBM Netfinity Rack                                 None
IBM Netfinity Enterprise Rack                      None
IBM Netfinity Enterprise Rack Expansion Cabinet    None
IBM Netfinity NetBAY 22                            None
a. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated front to back cable raceways. For more information, including images, see 4.12, IBM 42U 1100mm Enterprise V2 Dynamic Rack on page 164. b. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated front to back cable raceways, and includes a unique PureFlex door. For more information, including images of the door, see 4.13, IBM PureFlex System 42U Rack and 42U Expansion Rack on page 169. c. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated front to back cable raceways, and includes the original square blue design of unique PureFlex Logod Door, shipped between Q2 and Q4 2012.
Racks that have glass-fronted doors do not allow sufficient airflow for the Enterprise Chassis, such as the Netfinity racks shown in Table 4-43 on page 163. In some cases with the older Netfinity racks, the chassis depth is such that the Enterprise Chassis cannot be accommodated within the dimensions of the rack.
This 42U rack conforms to the EIA(TM)-310-D industry standard for a 19-inch, type A rack cabinet. The dimensions are listed in Table 4-45.
Table 4-45 Dimensions of IBM 42U 1100mm Enterprise V2 Dynamic Rack, 9363-4PX
Dimension   Value
Height      2009 mm (79.1 in.)
Width       600 mm (23.6 in.)
Depth       1100 mm (43.3 in.)
Weight      174 kg (384 lb), including outriggers
The rack features outriggers (stabilizers) allowing for movement while populated. Figure 4-60 shows the 9363-4PX rack.
Here are the features of the IBM 42U 1100mm Enterprise V2 Dynamic Rack:
- A perforated front door allows for improved air flow.
- Square EIA rail mount points.
- Six side-wall compartments support 1U-high PDUs and switches without taking up valuable rack space.
- Cable management rings are included to help cable management.
- Easy to install and remove side panels are a standard feature.
- The front door can be hinged on either side, providing flexibility to open in either direction.
- Front and rear doors and side panels include locks and keys to help secure servers.
- Heavy-duty casters with outriggers (stabilizers) come with the 42U Dynamic racks for added stability, allowing movement of the rack while loaded.
- Tool-less 0U PDU rear channel mounting reduces installation time and increases accessibility.
- A 1U PDU can be mounted to present power outlets to the rear of the chassis in side pocket openings.
- Removable top and bottom cable access panels in both front and rear.
IBM is a leading vendor with specific ship-loadable designs. These kinds of racks are called dynamic racks, and the IBM 42U 1100mm Enterprise V2 Dynamic Rack and IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack are dynamic racks. A dynamic rack has extra heavy-duty construction and sturdy packaging that can be reused for shipping a fully loaded rack, and outrigger casters for secure movement and tilt stability. Dynamic racks also include a heavy-duty shipping pallet with a ramp for easy on and off maneuvering, and undergo additional shock and vibration testing. All IBM racks are of welded rather than the more flimsy bolted construction. Figure 4-61 shows the rear view of the 42U 1100mm Flex System Dynamic Rack.
Figure 4-61 42U 1100mm Flex System Dynamic Rack rear view, with doors and side panels removed
The IBM 42U 1100mm Enterprise V2 Dynamic Rack also provides more space for front cable management and the use of front-to-back cable raceways. There are four cable raceways on each rack, two on each side. The raceways allow cables to be routed from the front of the rack, through the raceway, and out to the rear of the rack. The raceways also have openings into the side bays of the rack to allow connection into those bays. Figure 4-62 shows the cable raceways.
Figure 4-62 Cable raceway (as viewed from rear of rack)
The 1U rack PDUs can also be accommodated in the side bays. In these bays, the PDU is mounted vertically in the rear of the side bay and presents its outlets to the rear of the rack. Four 0U PDUs can also be vertically mounted in the rear of the rack.
The rack width is 600 mm, which is a standard width of a floor tile in many locations, to complement current raised floor data center designs. Dimensions of the rack base are shown in Figure 4-63.
Figure 4-63 Rack dimensions (600 mm wide x 1100 mm deep, viewed from the front of the rack)
The rack has square mounting holes common in the industry, onto which the Enterprise Chassis and other server and storage products can be mounted. For implementations where the front anti-tip plate is not required, an air baffle/air recirculation prevention plate is supplied with the rack. You might not want to use the plate when an airflow tile must be positioned directly in front of the rack.
This air baffle, which is shown in Figure 4-64, can be installed to the lower front of the rack. It helps prevent warm air from the rear of the rack from circulating underneath the rack to the front, improving the cooling efficiency of the entire rack solution.
4.13 IBM PureFlex System 42U Rack and 42U Expansion Rack
The IBM PureFlex System 42U Rack and IBM PureFlex System 42U Expansion Rack are optimized for use with IBM Flex System components, IBM System x servers, and BladeCenter systems. Their robust design allows them to be shipped with equipment already installed. The rack footprint is 600 mm x 1100 mm.
These racks are usually shipped as standard with a PureFlex system, but they are available for ordering by clients who want to deploy rack solutions with a similar design across their data center. The door design may also be fitted to existing deployed PureFlex System racks that have the original solid blue door design that shipped from Q2 2012 onwards. Table 4-46 shows the available options and associated part numbers for the two PureFlex racks and the PureFlex door.
Table 4-46 PureFlex System racks and rack door
Model / Feature    Description                              Details
9363-4CX / A3GR    IBM PureFlex System 42U Rack             Primary rack. Ships with side doors.
9363-4DX / A3GS    IBM PureFlex System 42U Expansion Rack   Expansion rack. Ships with no side doors, but with a baying kit to join onto a primary rack.
44X3132 / EU21     IBM PureFlex System Rack Door            Front door for the rack that is embellished with the PureFlex design.
These racks share the rack frame design of the IBM 42U 1100mm Enterprise V2 Dynamic Rack, but ship with a PureFlex-branded door. The door may be ordered separately. These IBM PureFlex System 42U racks are industry-standard 19-inch racks that support IBM PureFlex System and Flex System chassis, IBM System x servers, and BladeCenter chassis. The racks conform to the EIA(TM)-310-D industry standard for 19-inch, type A rack cabinets, and have outriggers (stabilizers) that allow for movement of large loads. The optional IBM Rear Door Heat eXchanger can be installed into this rack to provide a superior cooling solution, and the entire cabinet still fits on a standard data center floor tile (width). For more information, see 4.14, IBM Rear Door Heat eXchanger V2 Type 1756 on page 172.
The front door is hinged on one side only. The rear door can be hinged on either side and may be removed for ease of access when cabling or servicing systems within the rack. The front door is a unique PureFlex-branded front door that allows for excellent airflow into the rack. Here are the rack's features:
- Six side-wall compartments support 1U-high power distribution units (PDUs) and switches without taking up valuable rack space.
- Cable management slots are provided to route hook-and-loop fasteners around cables.
- Side panels are a standard feature and are easy to install and remove.
- Front and rear doors and side panels include locks and keys to help secure servers.
- Horizontal and vertical cable channels are built into the frame.
- Heavy-duty casters with outriggers (stabilizers) come with the 42U rack for added stability, allowing movement of large loads.
- Tool-less 0U PDU rear channel mounting is provided.
- 600 mm standard width to complement current raised-floor data center designs.
- Increased depth, from 1,000 mm to 1,100 mm, to improve cable management.
- Increased door perforation to maximize airflow.
- Support for tool-less mounting of 0U PDUs and easy installation of 1U PDUs.
- Front-to-back cable raceways for easy routing of cables such as Fibre Channel or SAS.
- Support for shipping of fully integrated solutions.
- Front stabilizer plate.
- The door may be ordered as a separate part number for attaching to existing PureFlex racks.
Rack specifications for the two IBM PureFlex System racks and the PureFlex rack door are shown in Table 4-47.
Table 4-47 IBM PureFlex System Rack specifications
Model      Description                            Dimension   Value
9363-4CX   PureFlex System 42U Rack               Height      2009 mm (79.1 in.)
                                                  Width       604 mm (23.8 in.)
                                                  Depth       1100 mm (43.3 in.)
                                                  Weight      179 kg (394 lb), including outriggers
9363-4DX   PureFlex System 42U Expansion Rack     Height      2009 mm (79.1 in.)
                                                  Width       604 mm (23.8 in.)
                                                  Depth       1100 mm (43.3 in.)
                                                  Weight      142 kg (314 lb), including outriggers
44X3132    PureFlex System Rack Door              Height      1924 mm (75.8 in.)
                                                  Width       597 mm (23.5 in.)
                                                  Depth       90 mm (3.6 in.)
                                                  Weight      19.5 kg (43 lb)
Attaching a rear door heat exchanger to the rear of a rack allows up to 100,000 BTU/hr, or about 30 kW, of heat to be removed at the rack level. As the warm air passes through the heat exchanger, it is cooled with water and exits the rear of the rack cabinet into the data center. The door is designed to provide an overall air temperature drop of up to 25°C measured between the air that enters the exchanger and the air that exits the rear.
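The two heat-removal figures are the same quantity expressed in different units. A small conversion sketch, using the standard factor of roughly 3412 BTU/hr per kW:

```python
# Convert the rated heat removal between kW and BTU/hr.
BTU_PER_HR_PER_KW = 3412.14   # standard conversion factor

rated_kw = 30
print(f"{rated_kw} kW ~= {rated_kw * BTU_PER_HR_PER_KW:,.0f} BTU/hr")  # ~102,000 BTU/hr
```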
Figure 4-67 shows the internal workings of the IBM Rear Door Heat eXchanger V2.
The supply inlet hose provides an inlet for chilled, conditioned water. A return hose delivers warmed water back to the water pump or chiller in the cooling loop. The water supply must meet the water supply requirements for secondary loops.
Figure 4-68 shows the percentage of heat that is removed from a 30 kW heat load as a function of water temperature and water flow rate. With 18°C water at 10 gallons per minute (gpm), 90% of the 30 kW heat load is removed by the door.
(Figure 4-68: percentage heat removal as a function of water temperature (12°C to 24°C) and flow rate (4 to 14 gpm), for a rack power of 30,000 W, rack inlet air temperature of 27°C, and airflow of 2500 CFM)
For efficient cooling, water pressure and water temperature must be delivered in accordance with the specifications listed in Table 4-48. The temperature must be maintained above the dew point to prevent condensation from forming.
Table 4-48 1756 RDHX specifications
Rear Door Heat eXchanger V2   Specification
Depth                         129 mm (5.0 in)
Width                         600 mm (23.6 in)
Height                        1950 mm (76.8 in)
Empty weight                  39 kg (85 lb)
Filled weight                 48 kg (105 lb)
Temperature drop              Up to 25°C (45°F) between air exiting and entering the RDHX
Water temperature             Above dew point: 18°C ± 1°C (64.4°F ± 1.8°F) for ASHRAE Class 1 environment; 22°C ± 1°C (71.6°F ± 1.8°F) for ASHRAE Class 2 environment
Required water flow rate      Minimum: 22.7 liters (6 gallons) per minute; maximum: 56.8 liters (15 gallons) per minute (as measured at the supply entrance to the heat exchanger)
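The metric and US flow-rate figures in Table 4-48 are straight unit conversions. A small illustrative sketch, using the standard factor of about 3.785 liters per US gallon:

```python
# Convert the required water flow rate between US gallons per minute and liters per minute.
LITERS_PER_US_GALLON = 3.785

for gpm in (6, 15):   # minimum and maximum flow rates from Table 4-48
    print(f"{gpm} gpm ~= {gpm * LITERS_PER_US_GALLON:.1f} liters per minute")
# Prints ~22.7 and ~56.8 liters per minute, matching the table.
```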
The installation and planning guide provides lists of suppliers that can provide coolant distribution unit solutions, flexible hose assemblies, and water treatment that meet the suggested water quality requirements. It takes three people to install the rear door heat exchanger. The exchanger requires a non-conductive step ladder to be used for attachment of the upper hinge assembly. Consult the planning and implementation guides before proceeding. The installation and planning guides can be found at: http://www.ibm.com/support/entry/portal/
Chapter 5. Compute nodes
This chapter describes the IBM Flex System servers, or compute nodes. The applications that are installed on the compute nodes can run natively on a dedicated physical server, or they can be virtualized in a virtual machine that is managed by a hypervisor layer. The IBM Flex System portfolio of compute nodes includes Intel Xeon processors and IBM POWER7 processors. Depending on the compute node design, nodes come in one of these form factors:
- Half-wide node: Occupies one chassis bay, half the width of the chassis (approximately 215 mm or 8.5 in.). An example is the IBM Flex System x240 Compute Node.
- Full-wide node: Occupies two chassis bays side-by-side, the full width of the chassis (approximately 435 mm or 17 in.). An example is the IBM Flex System p460 Compute Node.
This chapter includes the following sections:
- 5.1, IBM Flex System Manager on page 178
- 5.2, IBM Flex System x220 Compute Node on page 178
- 5.3, IBM Flex System x240 Compute Node on page 207
- 5.4, IBM Flex System x440 Compute Node on page 245
- 5.5, IBM Flex System p260 and p24L Compute Nodes on page 266
- 5.6, IBM Flex System p460 Compute Node on page 286
- 5.7, IBM Flex System PCIe Expansion Node on page 304
- 5.8, IBM Flex System Storage Expansion Node on page 311
- 5.9, I/O adapters on page 318
5.2.1 Introduction
The IBM Flex System x220 Compute Node is a high-availability, scalable compute node that is optimized to support the next-generation microprocessor technology. With a balance of cost and system features, the x220 is an ideal platform for general business workloads. This section describes the key features of the server.
Figure 5-1 shows the front of the compute node and the locations of the controls, LEDs, and connectors.
(Figure 5-1: front view of the x220, showing the two 2.5-inch hot-swap drive bays, USB port, power button, and LED panel)
Figure 5-2 shows the internal layout and major components of the x220.
(Figure callouts include the cover, hard disk drive cage, hot-swap hard disk drive, right air baffle, and DIMMs.)
Figure 5-2 Exploded view of the x220, showing the major components
Expansion slots: Two connectors for I/O adapters; each connector has PCIe x8+x4 interfaces. Includes an Expansion Connector (PCIe 3.0 x16) for future use to connect a compute node expansion unit. Dedicated PCIe 2.0 x4 interface for the ServeRAID H1135 adapter only.
Ports: USB ports: one external and two internal ports for an embedded hypervisor. A console breakout cable port on the front of the server provides local KVM and serial ports (cable standard with chassis; additional cables are optional).
Systems management: UEFI, IBM IMM2 with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director and Active Energy Manager, and IBM ServerGuide.
Security features: Power-on password, administrator's password, and Trusted Platform Module V1.2.
Video: Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16 M colors.
Limited warranty: Three-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported: Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more information, see 5.2.13, Operating system support on page 206.
Service and support: Optional service upgrades are available through IBM ServicePac offerings: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width: 217 mm (8.6 in.), height: 56 mm (2.2 in.), depth: 492 mm (19.4 in.)
Weight: Maximum configuration: 6.4 kg (14.11 lb).
Figure 5-3 shows the components on the system board of the x220.
(Figure callouts: hot-swap drive bay backplane, processor 2 and six memory DIMMs, USB port 2, Broadcom Ethernet, I/O connector 1, Fabric Connector, USB port 1, and Expansion Connector)
Figure 5-3 Layout of the IBM Flex System x220 Compute Node system board
5.2.2 Models
The current x220 models are shown in Table 5-2. All models include 4 GB of memory (one 4 GB DIMM) running at either 1333 MHz or 1066 MHz (depending on model).
Table 5-2 Models of the IBM Flex System x220 Compute Node, type 7906
Model      Intel processor (E5-2400: two maximum; Pentium 1400: one maximum)   Memory                         RAID adapter     Disk bays (a)          Disks   Embedded 1 GbE (b)   I/O slots (used/max)
7906-A2x   1x Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W                 1x 4 GB UDIMM (1066 MHz) (c)   ServeRAID C105   2x 2.5-inch hot-swap   Open    Standard             1 / 2 (b)
7906-B2x   1x Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W               1x 4 GB UDIMM 1333 MHz         ServeRAID C105   2x 2.5-inch hot-swap   Open    Standard             1 / 2 (b)
7906-C2x   1x Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W                1x 4 GB RDIMM (1066 MHz) (c)   ServeRAID C105   2x 2.5-inch hot-swap   Open    Standard             1 / 2 (b)
7906-D2x   1x Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W                1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5-inch hot-swap   Open    Standard             1 / 2 (b)
7906-F2x   1x Intel Xeon E5-2418L 4C 2.0 GHz 10 MB 1333 MHz 50 W               1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5-inch hot-swap   Open    Standard             1 / 2 (b)
7906-G2x   1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W                1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5-inch hot-swap   Open    No                   0 / 2
7906-G4x   1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W                1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5-inch hot-swap   Open    Standard             1 / 2 (b)
7906-H2x   1x Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W                1x 4 GB RDIMM 1333 MHz         ServeRAID C105   2x 2.5-inch hot-swap   Open    Standard             1 / 2 (b)
7906-J2x   1x Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W                1x 4 GB RDIMM 1333 MHz (c)     ServeRAID C105   2x 2.5-inch hot-swap   Open    No                   0 / 2
7906-L2x   1x Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W                1x 4 GB RDIMM 1333 MHz (c)     ServeRAID C105   2x 2.5-inch hot-swap   Open    No                   0 / 2
a. The 2.5-inch drive bays can be replaced and expanded with 1.8-inch bays and a ServeRAID M5115 RAID controller. This configuration supports up to eight 1.8-inch SSDs. b. These models include an embedded 1 Gb Ethernet controller. Connections are routed to the chassis midplane by using a Fabric Connector. Precludes the use of I/O connector 1 (except the ServeRAID M5115). c. For A2x and C2x, the memory operates at 1066 MHz, the memory speed of the processor. For J2x and L2x, memory operates at 1333 MHz to match the installed DIMM, rather than 1600 MHz.
The x220 is a half-wide compute node and requires that the chassis shelf is installed in the IBM Flex System Enterprise Chassis. Figure 5-4 shows the chassis shelf in the chassis.
Figure 5-4 The IBM Flex System Enterprise Chassis showing the chassis shelf
The shelf is required for half-wide compute nodes. To allow for installation of full-wide or larger nodes, shelves must be removed from within the chassis. Remove the shelves by sliding the two latches on the shelf towards the center, and then sliding the shelf out of the chassis.
(Figure 5-5 block diagram callouts: front KVM port, USB, DDR3 DIMMs with 3 memory channels and 2 DIMMs per channel, QPI link (up to 8 GT/s) between the two Intel Xeon processors, IMM2 with USB, video and serial, and management to the midplane, 1 GbE LOM, PCIe 2.0 x4 and x2 interfaces, PCIe 3.0 x8+x4 links to I/O connectors 1 and 2, and a PCIe 3.0 x16 sidecar connector)
Figure 5-5 IBM Flex System x220 Compute Node system board block diagram
The IBM Flex System x220 Compute Node has the following system architecture features as standard:
- Two 2011-pin type R (LGA-2011) processor sockets
- An Intel C600 PCH
- Three memory channels per socket
- Up to two DIMMs per memory channel
- 12 DDR3 DIMM sockets
- Support for UDIMMs and RDIMMs
- One integrated 1 Gb Ethernet controller (1 GbE LOM in the diagram)
- One LSI 2004 SAS controller
- Integrated software RAID 0 and 1 with support for the H1135 LSI-based RAID controller
- One IMM2
- Two PCIe 3.0 I/O adapter connectors with one x8 and one x4 host connection each (12 lanes total)
- One internal and one external USB connector
Processor options for the x220:

Intel Pentium processors
Part number   Feature code (a)   Description                                           Models where used
-             -                  Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W      A2x
-             -                  Intel Pentium 1407 2C 2.8 GHz 5 MB 1066 MHz 80 W      -

Intel Xeon processors
Part number   Feature code (a)   Description                                           Models where used
None (b)      A3C4 / None        Intel Xeon E5-1410 4C 2.8 GHz 10 MB 1333 MHz 80 W     -
90Y4801       A1VY / A1WC        Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W     C2x
90Y4800       A1VX / A1WB        Intel Xeon E5-2407 4C 2.2 GHz 10 MB 1066 MHz 80 W     -
90Y4799       A1VW / A1WA        Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W     D2x
90Y4797       A1VU / A1W8        Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W     G2x, G4x
90Y4796       A1VT / A1W7        Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W     H2x
90Y4795       A1VS / A1W6        Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W     J2x
90Y4793       A1VQ / A1W4        Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W     L2x

Intel Xeon processors - Low power
Part number   Feature code (a)   Description                                           Models where used
00D9528       A3C7 / A3CA        Intel Xeon E5-2418L 4C 2.0 GHz 10 MB 1333 MHz 50 W    F2x
00D9527       A3C6 / A3C9        Intel Xeon E5-2428L 6C 1.8 GHz 15 MB 1333 MHz 60 W    -
90Y4805       A1W2 / A1WE        Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W    B2x
00D9526       A3C5 / A3C8        Intel Xeon E5-2448L 8C 1.8 GHz 20 MB 1600 MHz 70 W    -
90Y4804       A1W1 / A1WD        Intel Xeon E5-2450L 8C 1.8 GHz 20 MB 1600 MHz 70 W    -
a. The first feature code is for processor 1 and second feature code is for processor 2. b. The Intel Pentium 1407 and Intel Xeon E5-1410 are available through CTO or special bid only.
The x220 supports LP DDR3 memory LRDIMMs, RDIMMs, and UDIMMs. The server supports up to six DIMMs when one processor is installed, and up to 12 DIMMs when two processors are installed. Each processor has three memory channels, with two DIMMs per channel. The following rules apply when you select the memory configuration:
- Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.
- The maximum number of ranks that are supported per channel is eight.
- The maximum quantity of DIMMs that can be installed in the server depends on the number of processors. For more information, see the Maximum quantity row in Table 5-5 and Table 5-6.
- All DIMMs in all processor memory channels operate at the same speed, which is determined as the lowest value of the memory speed that is supported by the specific processor and the lowest maximum operating speed for the selected memory configuration, which depends on the rated speed. For more information, see the maximum operating speed rows in Table 5-5 and Table 5-6. (A short sketch after Table 5-6 illustrates this rule.)
The maximum operating speed rows in Table 5-5 and Table 5-6 show the highest speed that each DIMM type supports with one or two DIMMs per channel, and the maximum memory at rated speed rows show whether the DIMMs can still operate at their rated speed.
Table 5-5 Maximum memory speeds (Part 1 - UDIMMs and LRDIMMs)
                                  UDIMMs, single rank   UDIMMs, dual rank   LRDIMMs, quad rank
Part number                       49Y1403 (2 GB)        49Y1404 (4 GB)      90Y3105 (32 GB)
Rated speed                       1333 MHz              1333 MHz            1333 MHz
Rated voltage                     1.35 V                1.35 V              1.35 V
Operating voltage                 1.35 V / 1.5 V        1.35 V / 1.5 V      1.35 V / 1.5 V
Maximum quantity (a)              12                    12                  12
Largest DIMM                      2 GB                  4 GB                32 GB
Maximum memory capacity           24 GB                 48 GB               384 GB
Maximum memory at rated speed     12 GB                 24 GB               N/A (1.35 V) / 192 GB (1.5 V)
Maximum speed, 1 DIMM per channel 1333 MHz              1333 MHz            1066 MHz (1.35 V) / 1333 MHz (1.5 V)
Maximum speed, 2 DIMMs per channel 1066 MHz             1066 MHz            1066 MHz
a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed, the maximum quantity that is supported is half of that shown.
Table 5-6 Maximum memory speeds (Part 2 - RDIMMs)
                                  Single rank        Dual rank (1333 MHz)             Dual rank (1600 MHz)   Quad rank
Part numbers                      49Y1406 (4 GB)     49Y1407 (4 GB), 49Y1397 (8 GB)   90Y3109 (4 GB)         49Y1400 (16 GB)
Rated speed                       1333 MHz           1333 MHz                         1600 MHz               1066 MHz
Rated voltage                     1.35 V             1.35 V                           1.5 V                  1.35 V
Operating voltage                 1.35 V / 1.5 V     1.35 V / 1.5 V                   1.5 V                  1.35 V / 1.5 V
Maximum quantity (a)              12                 12                               12                     12
Largest DIMM                      4 GB               8 GB                             4 GB                   16 GB
Maximum memory capacity           48 GB              96 GB                            48 GB                  192 GB
Maximum memory at rated speed     48 GB              96 GB                            48 GB                  N/A
Maximum speed, 1 DIMM per channel 1333 MHz           1333 MHz                         1600 MHz               800 MHz
Maximum speed, 2 DIMMs per channel 1333 MHz          1333 MHz                         1600 MHz               800 MHz
a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed, the maximum quantity that is supported is half of that shown.
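As referenced above, the "lowest value wins" rule can be expressed as a small helper. This sketch is illustrative only: the function name is hypothetical and the limits dictionary encodes a simplified subset of the maximum operating speeds from Table 5-5 and Table 5-6 (voltage differences and LRDIMMs are omitted for brevity).

```python
# Effective DIMM speed = min(processor memory speed, DIMM rated speed,
#                            limit for this DIMM type and DIMMs-per-channel).
MAX_OPERATING_MHZ = {
    # (dimm_type, dimms_per_channel): MHz, simplified from Tables 5-5 and 5-6
    ("UDIMM", 1): 1333, ("UDIMM", 2): 1066,
    ("RDIMM-1333", 1): 1333, ("RDIMM-1333", 2): 1333,
    ("RDIMM-1600", 1): 1600, ("RDIMM-1600", 2): 1600,
    ("RDIMM-QR", 1): 800, ("RDIMM-QR", 2): 800,
}

def effective_speed(processor_mhz: int, dimm_rated_mhz: int,
                    dimm_type: str, dimms_per_channel: int) -> int:
    """Return the speed at which all DIMMs in the server operate."""
    limit = MAX_OPERATING_MHZ[(dimm_type, dimms_per_channel)]
    return min(processor_mhz, dimm_rated_mhz, limit)

# Example: a processor that supports 1333 MHz memory with 1600 MHz RDIMMs
# runs all memory at 1333 MHz, consistent with the rule above.
print(effective_speed(1333, 1600, "RDIMM-1600", 2))   # 1333
```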
The following memory protection technologies are supported:
- ECC
- Chipkill (for x4-based memory DIMMs; look for x4 in the DIMM description)
- Memory mirroring
- Memory sparing
If memory mirroring is used, DIMMs must be installed in pairs (minimum of one pair per processor), and both DIMMs in a pair must be identical in type and size. If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel. These DIMMs do not need to be identical. In rank-sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory; the size of a rank varies depending on the DIMMs installed. (A small sketch after Table 5-7 illustrates the capacity arithmetic.) Table 5-7 lists the memory options available for the x220 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of three (one for each of the three memory channels) if possible.
Table 5-7 Supported memory DIMMs
Part number   Feature code (a)   Description

Unbuffered DIMM (UDIMM) modules
49Y1403       A0QS               2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM
49Y1404       8648               4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM

Registered DIMMs (RDIMMs) - 1333 MHz and 1066 MHz
49Y1406       8941               4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
-             -                  4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
-             -                  8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
-             -                  16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
-             -                  16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM

Registered DIMMs (RDIMMs) - 1600 MHz
49Y1559       A28Z               4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM
90Y3178       A24L               4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM
90Y3109       A292               8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM
00D4968       A2U5               16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM

Load-reduced DIMMs (LRDIMMs)
90Y3105       A291               32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM
a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.
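As a rough illustration of how the protection modes described before Table 5-7 affect usable capacity, the following sketch applies those rules. The function name and the example values are illustrative, not from IBM documentation; actual rank sizes depend on the DIMMs installed.

```python
def usable_memory_gb(total_gb: float, mode: str,
                     rank_size_gb: float = 0, populated_channels: int = 0) -> float:
    """Estimate usable memory under the protection modes described above.

    Mirroring keeps a second copy of the data, so half of the installed
    capacity is usable. Rank sparing reserves one rank per populated channel
    as spare memory. All values here are illustrative.
    """
    if mode == "mirroring":
        return total_gb / 2
    if mode == "rank-sparing":
        return total_gb - rank_size_gb * populated_channels
    return total_gb   # independent channel mode

# Example: 48 GB installed as six 8 GB dual-rank DIMMs (4 GB per rank), six channels populated.
print(usable_memory_gb(48, "mirroring"))                                            # 24.0
print(usable_memory_gb(48, "rank-sparing", rank_size_gb=4, populated_channels=6))   # 24.0
```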
Table 5-8 shows DIMM installation if you have one processor installed.
(Table 5-8 Suggested DIMM installation with one processor installed, independent channel mode: for one through six DIMMs, the table indicates which of DIMM slots 1-12, organized by the three memory channels of each processor, to populate.)
Table 5-9 shows DIMM installation if you have two processors installed.
(Table 5-9 Suggested DIMM installation with two processors installed, independent channel mode: for 2 through 12 DIMMs, the table indicates which of DIMM slots 1-12, organized by the three memory channels of each processor, to populate.)
(Table 5-10 Suggested DIMM installation with one processor installed, rank-sparing mode: the table indicates which DIMM slots to populate for two, four, or six DIMMs.)
Table 5-11 shows DIMM installation if you have two processors that are installed with rank sparing, using either dual or single ranked DIMMs.
(Table 5-11 Suggested DIMM installation with two processors installed, rank-sparing mode, single- or dual-ranked DIMMs: the table indicates which DIMM slots to populate for 4, 6, 8, 10, or 12 DIMMs. Each pair of DIMMs must be identical in capacity, type, and rank count.)
Table 5-13 and Table 5-14 show the suggested DIMM installation in mirrored channel mode for one or two processors.
(Table 5-13 Suggested DIMM installation with one processor, mirrored channel mode: the table indicates which DIMM slots to populate for four or six DIMMs. For optimal memory performance, populate all memory channels equally; each pair of DIMMs must be identical in capacity, type, and rank count.)
(Table 5-14 Suggested DIMM installation with two processors, mirrored channel mode: the table indicates which DIMM slots to populate for the supported DIMM counts. The same guidelines apply.)
Memory installation considerations for IBM Flex System x220 Compute Node
Use the following general guidelines when you determine the memory configuration of your IBM Flex System x220 Compute Node:
- All memory installation considerations apply equally to one- and two-processor systems.
- All DIMMs must be DDR3 DIMMs.
- Memory of different types (RDIMMs and UDIMMs) cannot be mixed in the system.
- If you mix DIMMs with 1.35 V and 1.5 V, the system runs all of them at 1.5 V and you lose the energy advantage.
- If you mix DIMMs with different memory speeds, all DIMMs in the system run at the lowest speed.
- You cannot mix non-mirrored channel and mirrored channel modes.
- Install memory DIMMs in order of their size, with the largest DIMM first. The correct installation order is the DIMM slot farthest from the processor first (DIMM slots 5, 8, 3, 10, 1, and 12).
- Install memory DIMMs in order of their rank, with the largest DIMM in the DIMM slot farthest from the processor. Start with DIMM slots 5 and 8 and work inwards.
- Memory DIMMs can be installed one DIMM at a time. However, avoid this configuration because it can affect performance.
- For maximum memory bandwidth, install one DIMM in each of the three memory channels, that is, three DIMMs at a time.
- Populate equivalent ranks per channel.
- Physically, DIMM slots 2, 4, 6, 7, 9, and 11 must be populated (actual DIMM or DIMM filler). DIMM slots 1, 3, 5, 8, 10, and 12 do not require a DIMM filler.
- Different memory modes require a different population order (see Table 5-12 on page 191, Table 5-13 on page 192, and Table 5-14 on page 192).
A short sketch of the population order for one processor follows.
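The slot sequence below is taken from the guideline above (farthest slot from the processor first); the helper function itself is purely illustrative and applies to processor 1 only.

```python
# DIMM slots for processor 1, ordered farthest-from-processor first,
# as described in the installation guidelines above.
POPULATION_ORDER_CPU1 = [5, 8, 3, 10, 1, 12]

def slots_to_populate(dimm_count: int) -> list[int]:
    """Return which processor 1 DIMM slots to fill for the given DIMM count (1-6)."""
    if not 1 <= dimm_count <= len(POPULATION_ORDER_CPU1):
        raise ValueError("one processor supports 1 to 6 DIMMs")
    return sorted(POPULATION_ORDER_CPU1[:dimm_count])

print(slots_to_populate(3))   # [3, 5, 8] - the first three slots in the population order
```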
Consideration: There is no native (in-box) driver for Windows and Linux. The drivers must be downloaded separately. In addition, there is no support for VMware, Hyper-V, Xen, or SSDs.
ServeRAID H1135
The x220 also supports an entry-level hardware RAID solution with the addition of the ServeRAID H1135 Controller for IBM Flex System and BladeCenter. The H1135 is installed in a dedicated slot (Figure 5-3 on page 181). When the H1135 adapter is installed, the C105 controller is disabled. The H1135 has the following features:
- Based on the LSI SAS2004 6 Gbps SAS 4-port controller
- PCIe 2.0 x4 host interface
- CIOv form factor (supported in the x220 and BladeCenter HS23E)
- Support for SAS, SATA, and SSD drives
- Support for RAID 0, RAID 1, and non-RAID
- 6 Gbps throughput per port
- Support for up to two volumes
- Fixed stripe size of 64 KB
- Native driver support in Windows, Linux, and VMware
- S.M.A.R.T. support
- Support for MegaRAID Storage Manager management software
ServeRAID M5115
The ServeRAID M5115 SAS/SATA Controller (90Y4390) is an advanced RAID controller that supports RAID 0, 1, 10, 5, 50, and, optionally, 6 and 60. It includes 1 GB of cache, which can be backed up to flash memory when attached to an optional supercapacitor. The M5115 attaches to the I/O adapter 1 connector. It can be attached even if the Fabric Connector is installed (used to route the embedded Gb Ethernet to chassis bays 1 and 2). However, the ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1. When the M5115 adapter is installed, the C105 controller is disabled.
The ServeRAID M5115 supports combinations of 2.5-inch drives and 1.8-inch solid-state drives:
- Up to two 2.5-inch drives only
- Up to four 1.8-inch drives only
- Up to two 2.5-inch drives, plus up to four 1.8-inch SSDs
- Up to eight 1.8-inch SSDs
For more information about these configurations, see ServeRAID M5115 configurations and options on page 195.
The ServeRAID M5115 controller has the following specifications:
- Eight internal 6 Gbps SAS/SATA ports.
- PCI Express 3.0 x8 host interface.
- 6 Gbps throughput per port.
- 800 MHz dual-core IBM PowerPC processor with an LSI SAS2208 6 Gbps ROC controller.
- Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with an optional upgrade using 90Y4411.
- Optional onboard 1 GB data cache (DDR3, running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342.
- Support for SAS and SATA HDDs and SSDs.
- Support for intermixing SAS and SATA HDDs and SSDs. Mixing different types of drives in the same array (drive group) is not recommended.
- Support for SEDs with MegaRAID SafeStore.
- Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447).
- Support for up to 64 virtual drives, up to 128 drive groups, and up to 16 virtual drives per drive group. Also supports up to 32 physical drives per drive group.
- Support for LUN sizes up to 64 TB.
- Configurable stripe size up to 1 MB.
- Compliant with DDF CoD.
- S.M.A.R.T. support.
- MegaRAID Storage Manager management software.
At least one hardware kit is required with the ServeRAID M5115 controller. These hardware kits enable specific drive support: ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 (90Y4424) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane that is attached through the system board to an onboard controller. The new backplane attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit. MegaRAID CacheVault flash cache protection uses NAND flash memory that is powered by a supercapacitor to protect data that is stored in the controller cache. This module eliminates the need for the lithium-ion battery that is commonly used to protect DRAM cache memory on PCI RAID controllers.
To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash. This process uses power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache. The DRAM cache can then be flushed to disk.
Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. If you plan to install four or eight 1.8-inch SSDs only, then this kit is not required.
ServeRAID M5100 Series IBM Flex System Flash Kit for x220 (90Y4425) enables support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, and so this kit does not have a supercapacitor.
ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 (90Y4426) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles, left and right, that can attach two 1.8-inch SSD attachment locations. It also contains flex cables for attachment to up to four 1.8-inch SSDs.
Table 5-17 shows the kits that are required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash kit, and the SSD Expansion kit.
Table 5-17 ServeRAID M5115 hardware kits
Drive support that is required            Components required
Max 2.5-inch drives   Max 1.8-inch SSDs   ServeRAID M5115 (90Y4390)   Enablement Kit (90Y4424)   Flash Kit (90Y4425)   SSD Expansion Kit (90Y4426)
2                     0                   Required                    Required                   -                     -
0                     4 (front)           Required                    -                          Required              -
2                     4 (internal)        Required                    Required                   -                     Required
0                     8 (both)            Required                    -                          Required              Required
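The kit requirements in Table 5-17 follow a simple rule: 2.5-inch drives need the Enablement Kit, front 1.8-inch SSDs need the Flash Kit, and internal 1.8-inch SSDs need the SSD Expansion Kit. A minimal sketch of that mapping follows; the helper function is illustrative, while the part-number constants are taken from the table.

```python
# Part numbers from Table 5-17.
M5115_CONTROLLER = "90Y4390"
ENABLEMENT_KIT   = "90Y4424"   # required for 2.5-inch drives (adds CacheVault protection)
FLASH_KIT        = "90Y4425"   # required for front 1.8-inch SSDs
SSD_EXPANSION    = "90Y4426"   # required for internal 1.8-inch SSDs

def required_kits(front_25in_drives: int, front_18in_ssds: int, internal_18in_ssds: int) -> list[str]:
    """Return the part numbers needed for a given drive mix, per Table 5-17."""
    kits = [M5115_CONTROLLER]
    if front_25in_drives:
        kits.append(ENABLEMENT_KIT)
    if front_18in_ssds:
        kits.append(FLASH_KIT)
    if internal_18in_ssds:
        kits.append(SSD_EXPANSION)
    return kits

# Example: eight 1.8-inch SSDs (four front, four internal), as in the text above.
print(required_kits(0, 4, 4))   # ['90Y4390', '90Y4425', '90Y4426']
```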
Figure 5-6 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of Table 5-17 on page 196).
(Figure callouts: ServeRAID M5115 controller (90Y4390) with ServeRAID M5100 Series Enablement Kit for x220 (90Y4424))
Figure 5-6 The ServeRAID M5115 and the Enablement Kit installed
Figure 5-7 shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of Table 5-17 on page 196).
(Figure callouts: ServeRAID M5115 controller (90Y4390) with ServeRAID M5100 Series Flash Kit for x220 (90Y4425) and ServeRAID M5100 Series SSD Expansion Kit for x220 (90Y4426); the SSD Expansion Kit mounts four SSDs on special air baffles above the DIMMs, with no CacheVault flash protection)
Figure 5-7 ServeRAID M5115 with Flash and SSD Expansion Kits installed
The eight SSDs are installed in the following locations:
- Four in the front of the system in place of the two 2.5-inch drive bays
- Two in a tray above the memory banks for processor 1
- Two in a tray above the memory banks for processor 2
Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance accelerator, and SSD caching enabler. The FoD license upgrades are listed in Table 5-18.
Table 5-18 Supported upgrade features
Part number   Feature code   Description                                                                                     Maximum supported
90Y4410       A2Y1           ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System                                       1
90Y4412       A2Y2           ServeRAID M5100 Series Performance Accelerator for IBM Flex System (MegaRAID FastPath)          1
90Y4447       A36G           ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0)     1
These features are described as follows:
- RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This license is an FoD license.
- Performance Accelerator (90Y4412): The Performance Accelerator for IBM Flex System, implemented by using the LSI MegaRAID FastPath software, provides high-performance I/O acceleration for SSD-based virtual drives. It uses an extremely low-latency I/O path to increase the maximum IOPS capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is an FoD license.
- SSD Caching Enabler for traditional hard disk drives (90Y4447): The SSD Caching Enabler for IBM Flex System, implemented by using LSI MegaRAID CacheCade Pro 2.0, is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is an FoD license. This feature requires that at least one SSD drive is installed.
IBM 512 GB SATA 1.8-inch MLC Enterprise Value SSD (maximum supported: 8)
IBM 64 GB SATA 1.8-inch MLC Enterprise Value SSD (maximum supported: 8)
Part number / Feature code / Description / ServeRAID C105 / ServeRAID H1135 / ServeRAID M5115
10 K SAS hard disk drives
42D0637 / 5599 / IBM 300 GB 10 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD / No / Supported / Supported
49Y2003 / 5433 / IBM 600 GB 10 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD / No / Supported / Supported
81Y9650 / A282 / IBM 900 GB 10 K 6 Gbps SAS 2.5-inch SFF HS HDD / No / Supported / Supported
15 K SAS hard disk drives
42D0677 / 5536 / IBM 146 GB 15 K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD / No / Supported / Supported
81Y9670 / A283 / IBM 300 GB 15 K 6 Gbps SAS 2.5-inch SFF HS HDD / No / Supported / Supported
NL SATA hard disk drives
81Y9722 / A1NX / IBM 250 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD / Supported / Supported / Supported
81Y9726 / A1NZ / IBM 500 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD / Supported / Supported / Supported
81Y9730 / A1AV / IBM 1 TB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD / Supported / Supported / Supported
NL SAS hard disk drives
42D0707 / 5409 / IBM 500 GB 7200 6 Gbps NL SAS 2.5-inch SFF Slim-HS HDD / No / Supported / Supported
81Y9690 / A1P3 / IBM 1 TB 7.2 K 6 Gbps NL SAS 2.5-inch SFF HS HDD / No / Supported / Supported
Solid-state drives
43W7718 / A2FN / IBM 200 GB SATA 2.5-inch MLC HS SSD / No / Supported / Supported
90Y8643 / A2U3 / IBM 256 GB SATA 2.5-inch MLC HS Entry SSD / No / Supported / Supported
90Y8648 / A2U4 / IBM 128 GB SATA 2.5-inch MLC HS Entry SSD / No / Supported / Supported
49Y5844 / A3AU / IBM 512 GB SATA 2.5-inch MLC HS Enterprise Value SSD / No / Supported / Supported
49Y5839 / A3AS / IBM 64 GB SATA 2.5-inch MLC HS Enterprise Value SSD / No / Supported / Supported
Figure 5-8 shows the rear of the x220 compute node and the locations of the I/O connectors.
Figure 5-8 Rear of the x220 compute node showing the locations of I/O connectors 1 and 2
Table 5-21 lists the I/O adapters that are supported in the x220.
Table 5-21 Supported I/O adapters for the x220 compute node
Part number / Feature code / Ports / Description
Ethernet adapters
49Y7900 / A1BR / 4 / IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
90Y3466 / A1QY / 2 / IBM Flex System EN4132 2-port 10Gb Ethernet Adapter
90Y3554 / A1R1 / 4 / IBM Flex System CN4054 10Gb Virtual Fabric Adapter
Fibre Channel adapters
69Y1938 / A1BM / 2 / IBM Flex System FC3172 2-port 8Gb FC Adapter
95Y2375 / A2N5 / 2 / IBM Flex System FC3052 2-port 8Gb FC Adapter
88Y6370 / A1BP / 2 / IBM Flex System FC5022 2-port 16Gb FC Adapter
InfiniBand adapters
90Y3454 / A1QZ / 2 / IBM Flex System IB6132 2-port FDR InfiniBand Adapter
Consideration: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent not only across chassis but across all compute nodes.
Figure 5-9 shows the location of the LEDs and controls on the front of the x220.
Figure 5-9 The front of the x220 with the front panel LEDs and controls shown (USB port, hard disk drive activity and status LEDs, identify LED, fault LED, and NMI control)
Power LED
The status of the power LED of the x220 shows the power status of the compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-25.
Table 5-25 The power LED states of the x220 compute node
Off: No power to the compute node.
On; fast flash mode: The compute node has power; the Chassis Management Module is in discovery mode (handshake).
On; slow flash mode: The compute node has power; power is in stand-by mode.
On; solid: The compute node has power; the compute node is operational.
Exception: The power button does not operate when the power LED is in fast flash mode.
The x220 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node as shown in Figure 5-10.
To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meaning of each LED in the light path diagnostics panel is listed in Table 5-26.
Table 5-26 Light path panel LED definitions
LP (green): The light path diagnostics panel is operational.
S BRD (yellow): A system board error is detected.
MIS (yellow): A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration as reported by POST.
NMI (yellow): An NMI has occurred.
TEMP (yellow): An over-temperature condition has occurred that was critical enough to shut down the server.
MEM (yellow): A memory fault has occurred. The corresponding DIMM error LEDs on the system board should also be lit.
ADJ (yellow): A fault is detected in the adjacent expansion unit (if installed).
Remote access to system fan, voltage, and temperature values
Remote IMM and UEFI update
UEFI update when the server is powered off
Remote console by way of serial over LAN
Remote access to the system event log
Predictive failure analysis and integrated alerting features (for example, by using SNMP)
Remote presence, including remote control of the server by using a Java or ActiveX client
Operating system failure window (blue screen) capture and display through the web interface
Virtual media that allows the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server
Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. You can use this address to remotely manage the x220 by connecting directly to the IMM independent of the IBM Flex System Manager or Chassis Management Module. For more information about the IMM, see 3.4.1, Integrated Management Module II on page 57.
5.3.1 Introduction
The x240 supports the following equipment:
Up to two Intel Xeon E5-2600 series multi-core processors
Twenty-four memory DIMMs
Two hot-swap drives
Two PCI Express I/O adapters
Two optional internal USB connectors
Figure 5-11 shows the x240.
Figure 5-12 shows the location of the controls, LEDs, and connectors on the front of the x240.
Figure 5-12 The front of the x240 showing the location of the controls, LEDs, and connectors (USB port, hard disk drive activity and status LEDs, NMI control, and LED panel)
Figure 5-13 shows the internal layout and major components of the x240.
Figure 5-13 Exploded view of the x240 showing the major components (cover, heat sink, microprocessor heat sink filler, I/O expansion adapter, air baffles, microprocessor, hot-swap storage backplane, hot-swap storage cage, and hot-swap storage drive)
Component / Specification
Video: Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16 M colors.
Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported: Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more information, see 5.3.13, Operating system support on page 244.
Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width 215 mm (8.5 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.)
Weight: Maximum configuration: 6.98 kg (15.4 lb)
Figure 5-14 shows the components on the system board of the x240.
Figure 5-14 Layout of the x240 system board (hot-swap drive bay backplane, processor 2 and 12 memory DIMMs, I/O connectors 1 and 2, Fabric Connector, and Expansion Connector)
5.3.2 Models
The current x240 models are shown in Table 5-28. All models include 8 GB of memory (2x 4 GB DIMMs) running at either 1600 MHz or 1333 MHz (depending on the model).
Table 5-28 Models of the x240 type 8737
Model (a) / Intel Xeon processor (model, cores, core speed, L3 cache, memory speed, TDP power; two maximum) / Standard memory (b) / Available drive bays / Available I/O slots (c) / 10 GbE embedded (d)
8737-A1x / 1x Xeon E5-2630L 6C 2.0 GHz 15 MB 1333 MHz 60 W / 2x 4 GB / Two (open) / 2 / No
8737-D2x / 1x Xeon E5-2609 4C 2.40 GHz 10 MB 1066 MHz 80 W / 2x 4 GB / Two (open) / 1 / Yes
8737-F2x / 1x Xeon E5-2620 6C 2.0 GHz 15 MB 1333 MHz 95 W / 2x 4 GB / Two (open) / 1 / Yes
8737-G2x / 1x Xeon E5-2630 6C 2.3 GHz 15 MB 1333 MHz 95 W / 2x 4 GB / Two (open) / 1 / Yes
8737-H1x / 1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W / 2x 4 GB / Two (open) / 2 / No
8737-H2x / 1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W / 2x 4 GB / Two (open) / 1 / Yes
8737-J1x / 1x Xeon E5-2670 8C 2.6 GHz 20 MB 1600 MHz 115 W / 2x 4 GB / Two (open) / 2 / No
8737-L2x / 1x Xeon E5-2660 8C 2.2 GHz 20 MB 1600 MHz 95 W / 2x 4 GB / Two (open) / 1 / Yes
8737-M1x / 1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W / 2x 4 GB / Two (open) / 2 / No
8737-M2x / 1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W / 2x 4 GB / Two (open) / 1 / Yes
8737-N2x / 1x Xeon E5-2643 4C 3.3 GHz 10 MB 1600 MHz 130 W / 2x 4 GB / Two (open) / 1 / Yes
8737-Q2x / 1x Xeon E5-2667 6C 2.9 GHz 15 MB 1600 MHz 130 W / 2x 4 GB / Two (open) / 1 / Yes
8737-R2x / 1x Xeon E5-2690 8C 2.9 GHz 20 MB 1600 MHz 135 W / 2x 4 GB / Two (open) / 1 / Yes
a. The model numbers that are provided are worldwide generally available variant (GAV) model numbers that are not orderable as listed. They must be modified by country. The US GAV model numbers use the following nomenclature: xxU. For example, the US orderable part number for 8737-A2x is 8737-A2U. See the product-specific official IBM announcement letter for other country-specific GAV model numbers.
b. The maximum system memory capacity is 768 GB when you use 24x 32 GB DIMMs.
c. Some models include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. This embedded controller precludes the use of an I/O adapter in I/O connector 1, as shown in Figure 5-14 on page 210. For more information, see 5.3.10, Embedded 10 Gb Virtual Fabric Adapter on page 238.
d. Model numbers in the form x2x (for example, 8737-L2x) include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. Model numbers in the form x1x (for example, 8737-A1x) do not include this embedded controller.
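Footnote a describes the model-number nomenclature: a worldwide GAV model number ends in x, and the US orderable form replaces that suffix with U. The following minimal Python sketch illustrates only that US rule (the function name is ours, and other countries use their own suffixes, so this is not a general converter):

def us_orderable_model(gav_model: str) -> str:
    # Worldwide GAV model numbers end in 'x'; the US orderable form ends in 'U'
    # (for example, 8737-A2x becomes 8737-A2U), per footnote a above.
    if not gav_model.endswith("x"):
        raise ValueError("expected a worldwide GAV model number ending in 'x'")
    return gav_model[:-1] + "U"

print(us_orderable_model("8737-A2x"))   # prints 8737-A2U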
The x240 is a half-wide compute node and requires that a chassis shelf is installed in the IBM Flex System Enterprise Chassis. Figure 5-15 shows the chassis shelf in the chassis.
Figure 5-15 The IBM Flex System Enterprise Chassis showing the chassis shelf
The shelf is required for half-wide compute nodes. To install a full-wide or larger compute node, the shelves must be removed from within the chassis: slide the two latches on the shelf towards the center, and then slide the shelf out of the chassis.
Figure 5-16 IBM Flex System x240 Compute Node system board block diagram (two Intel Xeon processors connected by QPI links at 8 GT/s, four DDR3 memory channels per processor with three DIMMs per channel, PCIe 3.0 connections to I/O connectors 1 and 2 and the sidecar connector, and the IMM v2)
The IBM Flex System x240 Compute Node has the following system architecture features as standard:
Two 2011-pin type R (LGA-2011) processor sockets
An Intel C600 PCH
Four memory channels per socket
Up to three DIMMs per memory channel
Twenty-four DDR3 DIMM sockets
Support for UDIMMs, RDIMMs, and new LRDIMMs
One integrated 10 Gb Virtual Fabric Ethernet controller (10 GbE LOM in diagram)
One LSI 2004 SAS controller
Integrated HW RAID 0 and 1
One Integrated Management Module II
Two PCIe x16 Gen3 I/O adapter connectors
Two Trusted Platform Module (TPM) 1.2 controllers
One internal USB connector
The new architecture allows the sharing of data on-chip through a high-speed ring interconnect between all processor cores, the last level cache (LLC), and the system agent. The system agent houses the memory controller and a PCI Express root complex that provides 40 PCIe 3.0 lanes. This ring interconnect and LLC architecture is shown in Figure 5-17.
Figure 5-17 The Intel Xeon E5-2600 series ring interconnect between the processor cores (each with L1/L2 caches), the last level cache (LLC), and the system agent, which contains the memory controller and the 40-lane PCIe 3.0 root complex, with QPI and chipset links
The two Xeon E5-2600 series processors in the x240 are connected through two QuickPath Interconnect (QPI) links. Each QPI link is capable of up to eight giga-transfers per second (GTps) depending on the processor model installed. Table 5-30 shows the QPI bandwidth of the Intel Xeon E5-2600 series processors.
Table 5-30 QuickPath Interconnect bandwidth
Intel Xeon E5-2600 series processor / QuickPath Interconnect speed (GTps) / QuickPath Interconnect bandwidth (GBps) in each direction
Advanced / 8.0 GTps / 32.0 GBps
Standard / 7.25 GTps / 29.0 GBps
Basic / 6.4 GTps / 25.6 GBps
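The bandwidth column in Table 5-30 follows directly from the transfer rate. A short Python sketch of the arithmetic commonly used to derive such figures (assuming 2 bytes of data per transfer in each direction, counted over both directions of the link) reproduces the table values:

def qpi_bandwidth_gbps(transfer_rate_gtps: float) -> float:
    # 2 bytes per transfer per direction x 2 directions per QPI link
    bytes_per_transfer_per_direction = 2
    directions = 2
    return transfer_rate_gtps * bytes_per_transfer_per_direction * directions

for tier, gtps in [("Advanced", 8.0), ("Standard", 7.25), ("Basic", 6.4)]:
    print(tier, gtps, "GTps ->", qpi_bandwidth_gbps(gtps), "GBps")
# Advanced 8.0 GTps -> 32.0 GBps
# Standard 7.25 GTps -> 29.0 GBps
# Basic 6.4 GTps -> 25.6 GBps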
5.3.5 Processor
The Intel Xeon E5-2600 series is available with up to eight cores and 20 MB of last-level cache. It features an enhanced instruction set called Intel Advanced Vector Extensions (AVX). This set doubles the operand size for vector instructions (such as floating-point) to 256 bits and boosts selected applications by up to a factor of two. The new architecture also introduces Intel Turbo Boost Technology 2.0 and improved power management capabilities. Turbo Boost automatically turns off unused processor cores and increases the clock speed of the cores in use if thermal requirements are still met. Turbo Boost Technology 2.0 takes advantage of the new integrated design. It also implements a more granular overclocking in 100 MHz steps instead of 133 MHz steps on former Nehalem-based and Westmere-based microprocessors.
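The effect of AVX doubling the vector width can be illustrated with a rough peak-throughput estimate. The sketch below assumes 8 double-precision floating-point operations per core per cycle with 256-bit AVX (4 without it) and ignores Turbo Boost; it is a back-of-the-envelope calculation, not an IBM-published specification:

def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int = 8) -> float:
    # flops_per_cycle = 8 assumes 256-bit AVX (one add plus one multiply per cycle,
    # four double-precision values each); use 4 for pre-AVX 128-bit vector code.
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(8, 2.9))                      # Xeon E5-2690 with AVX: 185.6 GFLOPS peak
print(peak_gflops(8, 2.9, flops_per_cycle=4))   # same processor without AVX: 92.8 GFLOPS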
As listed in Table 5-28 on page 211, standard models come with one processor that is installed in processor socket 1. In a two processor system, both processors communicate with each other through two QPI links. I/O is served through 40 PCIe Gen 3 lanes and through a x4 Direct Media Interface (DMI) link to the Intel C600 PCH. Processor 1 has direct access to 12 DIMM slots. By adding the second processor, you enable access to the remaining 12 DIMM slots. The second processor also enables access to the sidecar connector, which enables the use of mezzanine expansion units. Table 5-31 shows a comparison between the features of the Intel Xeon 5600 series processor and the new Intel Xeon E5-2600 series processor that is installed in the x240.
Table 5-31 Comparison of Xeon 5600 series and Xeon E5-2600 series processor features
Specification / Xeon 5600 / Xeon E5-2600
Cores / Up to six cores, 12 threads / Up to eight cores, 16 threads
Physical addressing / 40-bit (Uncore (a) limited) / 46-bit (Core and Uncore (a))
Cache size / 12 MB / Up to 20 MB
Memory channels per socket / 3 / 4
Max memory speed / 1333 MHz / 1600 MHz
Virtualization technology / Real Mode support and transition latency reduction / Adds Large VT pages
New instructions / AES-NI / Adds AVX
QPI frequency / 6.4 GTps / 8.0 GTps
Inter-socket QPI links / 1 / 2
PCI Express / 36 lanes of PCIe on the chipset / 40 lanes per socket, integrated PCIe
a. Uncore is the Intel term for the parts of a processor that are not the core.
Table 5-32 lists the features for the different Intel Xeon E5-2600 series processor types.
Table 5-32 Intel Xeon E5-2600 series processor features
Advanced
Xeon E5-2650: 2.0 GHz, Turbo: Yes, HT: Yes, 20 MB L3, 8 cores, 95 W, 8 GT/s QPI, 1600 MHz max DDR3
Xeon E5-2658: 2.1 GHz, Turbo: Yes, HT: Yes, 20 MB L3, 8 cores, 95 W, 8 GT/s QPI, 1600 MHz max DDR3
Xeon E5-2660: 2.2 GHz, Turbo: Yes, HT: Yes, 20 MB L3, 8 cores, 95 W, 8 GT/s QPI, 1600 MHz max DDR3
Xeon E5-2665: 2.4 GHz, Turbo: Yes, HT: Yes, 20 MB L3, 8 cores, 115 W, 8 GT/s QPI, 1600 MHz max DDR3
Xeon E5-2670: 2.6 GHz, Turbo: Yes, HT: Yes, 20 MB L3, 8 cores, 115 W, 8 GT/s QPI, 1600 MHz max DDR3
Xeon E5-2680: 2.7 GHz, Turbo: Yes, HT: Yes, 20 MB L3, 8 cores, 130 W, 8 GT/s QPI, 1600 MHz max DDR3
Xeon E5-2690: 2.9 GHz, Turbo: Yes, HT: Yes, 20 MB L3, 8 cores, 135 W, 8 GT/s QPI, 1600 MHz max DDR3
Standard
Xeon E5-2620: 15 MB L3, 6 cores, 95 W
Xeon E5-2630: 15 MB L3, 6 cores, 95 W
Xeon E5-2640: 15 MB L3, 6 cores, 95 W
Basic
Xeon E5-2603: Turbo: No, HT: No, 10 MB L3, 4 cores, 80 W
Xeon E5-2609: Turbo: No, HT: No, 10 MB L3, 4 cores, 80 W
Low power
Xeon E5-2650L: 20 MB L3, 8 cores, 70 W
Xeon E5-2648L: 20 MB L3, 8 cores, 70 W
Xeon E5-2630L: 15 MB L3, 6 cores, 60 W
Special purpose
Xeon E5-2667: Turbo: Yes, HT: Yes, 15 MB L3, 6 cores, 130 W
Xeon E5-2643: Turbo: No, HT: No, 10 MB L3, 4 cores, 130 W
Xeon E5-2637: Turbo: No, HT: No, 5 MB L3, 2 cores, 80 W
Intel Xeon Processor E5-2667 6C 2.9 GHz 15 MB Cache 1600 MHz 130 W
Intel Xeon Processor E5-2670 8C 2.6 GHz 20 MB Cache 1600 MHz 115 W
Intel Xeon Processor E5-2680 8C 2.7 GHz 20 MB Cache 1600 MHz 130 W
Intel Xeon Processor E5-2690 8C 2.9 GHz 20 MB Cache 1600 MHz 135 W
For more information about the Intel Xeon E5-2600 series processors, see:
http://www.intel.com/content/www/us/en/processors/xeon/xeon-processor-5000-sequence.html
5.3.6 Memory
This section has the following topics:
Memory subsystem overview
Memory types on page 220
Memory options on page 222
Memory channel performance considerations on page 222
Memory modes on page 224
DIMM installation order on page 225
Memory installation considerations on page 228
The x240 has 12 DIMM sockets per processor (24 DIMMs in total) running at either 800, 1066, 1333, or 1600 MHz. It supports 2 GB, 4 GB, 8 GB, 16 GB, and 32 GB memory modules, as listed in Table 5-36 on page 222. The x240 with the Intel Xeon E5-2600 series processors can support up to 768 GB of memory in total when you use 32 GB LRDIMMs with both processors installed.
The x240 uses double data rate type 3 (DDR3) LP DIMMs. You can use registered DIMMs (RDIMMs), unbuffered DIMMs (UDIMMs), or load-reduced DIMMs (LRDIMMs). However, the mixing of the different memory DIMM types is not supported.
The E5-2600 series processor has four memory channels, and each memory channel can have up to three DIMMs. Figure 5-18 shows the E5-2600 series and the four memory channels.
Figure 5-18 The Intel Xeon E5-2600 series processor and the four memory channels, each with up to three DIMMs
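The socket count and capacity limits quoted above follow directly from the channel layout in Figure 5-18. A small worked check in Python, using only the figures already given in this section:

def dimm_sockets(processors: int, channels_per_processor: int = 4,
                 dimms_per_channel: int = 3) -> int:
    # Each E5-2600 processor has four memory channels with up to three DIMMs each.
    return processors * channels_per_processor * dimms_per_channel

def max_capacity_gb(processors: int, dimm_size_gb: int) -> int:
    return dimm_sockets(processors) * dimm_size_gb

print(dimm_sockets(2))           # 24 DIMM sockets with two processors
print(max_capacity_gb(2, 32))    # 768 GB with 32 GB LRDIMMs and two processors
print(max_capacity_gb(1, 32))    # 384 GB with only one processor installed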
Unbuffered DIMM (UDIMM) modules
Supported memory sizes: 4 GB
Supported memory speeds: 1333 MHz
Maximum system capacity: 64 GB (16 x 4 GB)
Maximum memory speed: 1.35V @ 2DPC: 1333 MHz; 1.5V @ 2DPC: 1333 MHz; 1.35V or 1.5V @ 3DPC: Not supported
Maximum ranks per channel (any memory voltage): 8
Maximum number of DIMMs: One processor: 8; Two processors: 16
Load-reduced (LRDIMM) modules
Supported sizes: 32 GB and 16 GB
Maximum capacity: 768 GB (24 x 32 GB)
Supported speeds: 1333 MHz and 1066 MHz
Maximum memory speed: 1.35V @ 2DPC: 1066 MHz; 1.5V @ 2DPC: 1333 MHz; 1.35V or 1.5V @ 3DPC: 1066 MHz
Maximum ranks per channel (any memory voltage): 8 (a)
Maximum number of DIMMs: One processor: 12; Two processors: 24
a. Because of reduced electrical loading, a 4R (four-rank) LRDIMM has the equivalent load of a two-rank RDIMM. This reduced load allows the x240 to support three 4R LRDIMMs per channel (instead of two as with UDIMMs and RDIMMs). For more information, see Memory types on page 220.
Tip: When an unsupported memory configuration is detected, the IMM illuminates the DIMM mismatch light path error LED and the system does not boot. Examples of a DIMM mismatch error are:
Mixing of RDIMMs, UDIMMs, or LRDIMMs in the system
Not adhering to the DIMM population rules
In some cases, the error log points to the DIMM slots that are mismatched.
Figure 5-19 shows the location of the 24 memory DIMM sockets on the x240 system board and other components.
Figure 5-19 Location of the 24 memory DIMM sockets on the x240 system board (DIMMs 1-6, 7-12, 13-18, and 19-24; microprocessors 1 and 2; I/O expansion connectors 1 and 2; and the LOM connector on some models only)
Table 5-35 lists which DIMM connectors belong to which processor memory channel.
Table 5-35 The DIMM connectors for each processor memory channel Processor Memory channel Channel 0 Channel 1 Processor 1 Channel 2 Channel 3 Channel 0 Channel 1 Processor 2 Channel 2 Channel 3 13, 14, and 15 16, 17, and 18 7, 8, and 9 10, 11, and 12 22, 23, and 24 19, 20, and 21 DIMM connector 4, 5, and 6 1, 2, and 3
Memory types
The x240 supports three types of DIMM memory:
RDIMM modules: Registered DIMMs are the mainstream module solution for servers or any applications that demand heavy data throughput, high density, and high reliability. RDIMMs use registers to isolate the memory controller address, command, and clock signals from the dynamic random-access memory (DRAM). This process results in a lighter electrical load, so more DIMMs can be interconnected and larger memory capacity is possible. The register does, however, typically impose a clock or more of delay, meaning that registered DIMMs often have slightly longer access times than their unbuffered counterparts. In general, RDIMMs have the best balance of capacity, reliability, and workload performance with a maximum performance of 1600 MHz (at 2 DPC). For more information about supported x240 RDIMM memory options, see Table 5-36 on page 222.
UDIMM modules: In contrast to RDIMMs, which use registers to isolate the memory controller from the DRAMs, UDIMMs attach directly to the memory controller. Therefore, they do not introduce a delay, which creates better performance. The disadvantage is limited drive capability, which means that the number of DIMMs that can be connected together on the same memory channel remains small because of electrical loading. This leads to less DIMM support, fewer DIMMs per channel (DPC), and overall lower total system memory capacity than RDIMM systems. UDIMMs have the lowest latency and lowest power usage, but also the lowest overall capacity. For more information about supported x240 UDIMM memory options, see Table 5-36 on page 222.
LRDIMM modules: Load-reduced DIMMs are similar to RDIMMs. They also use memory buffers to isolate the memory controller address, command, and clock signals from the individual DRAMs on the DIMM. Load-reduced DIMMs take the buffering a step further by also buffering the memory controller data lines from the DRAMs.
Figure 5-20 Comparison of an RDIMM and a load-reduced DIMM: on the RDIMM only the command, address, and clock signals are buffered by the register, while on the LRDIMM the memory buffer also isolates the data lines between the memory controller and the DRAMs
In essence, all signaling between the memory controller and the LRDIMM is now intercepted by the memory buffers on the LRDIMM module. This system allows additional ranks to be added to each LRDIMM module without sacrificing signal integrity. It also means that fewer actual ranks are seen by the memory controller (for example, a 4R LRDIMM has the same look as a 2R RDIMM). The additional buffering that the LRDIMMs support greatly reduces the electrical load on the system. This reduction allows the system to operate at a higher overall memory speed for a certain capacity. Conversely, it can operate at a higher overall memory capacity at a certain memory speed. LRDIMMs allow maximum system memory capacity and the highest performance for system memory capacities above 384 GB. They are suited for system workloads that require maximum memory such as virtualization and databases. For more information about supported x240 LRDIMM memory options, see Table 5-36 on page 222. The memory type that is installed in the x240 combines with other factors to determine the ultimate performance of the x240 memory subsystem. For a list of rules when populating the memory subsystem, see Memory installation considerations on page 228.
Memory options
Table 5-36 lists the memory DIMM options for the x240.
Table 5-36 Memory DIMMs for the x240 type 8737
Part number / Feature code / Description / Where used
Registered DIMM (RDIMM) modules - 1066 MHz and 1333 MHz
49Y1405 / 8940 / 2 GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
49Y1406 / 8941 / 4 GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM / H1x, H2x, G2x, F2x, D2x, A1x
4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
8 GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
16 GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM
16 GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066 MHz LP RDIMM
Registered DIMM (RDIMM) modules - 1600 MHz
49Y1559 / A28Z / 4 GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM / R2x, Q2x, N2x, M2x, M1x, L2x, J1x
4 GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
8 GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
16 GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM
Unbuffered DIMM (UDIMM) modules
49Y1404 / 8648 / 4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM
Load-reduced (LRDIMM) modules
49Y1567 / A290 / 16 GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM
90Y3105 / A291 / 32 GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM
Speed of DDR3 DIMMs installed: For maximum performance, the speed rating of each DIMM module must match the maximum memory clock speed of the Xeon E5-2600 processor. Remember these rules when you match processors and DIMM modules:
The processor never over-clocks the memory in any configuration.
The processor clocks all the installed memory at either the rated speed of the processor or the speed of the slowest DIMM installed in the system.
For example, an Intel Xeon E5-2640 series processor clocks all installed memory at a maximum speed of 1333 MHz. If any 1600 MHz DIMM modules are installed, they are clocked at 1333 MHz. However, if any 1066 MHz or 800 MHz DIMM modules are installed, all installed DIMM modules are clocked at the slowest speed (800 MHz).
Number of DIMMs per channel (DPC): Generally, the Xeon E5-2600 processor series clocks up to 2DPC at the maximum rated speed of the processor. However, if any channel is fully populated (3DPC), the processor slows all the installed memory down. For example, an Intel Xeon E5-2690 series processor clocks all installed memory at a maximum speed of 1600 MHz up to 2DPC. However, if any one channel is populated with 3DPC, all memory channels are clocked at 1066 MHz.
DIMM voltage rating: The Xeon E5-2600 processor series supports both low voltage (1.35 V) and standard voltage (1.5 V) DIMMs. Table 5-36 on page 222 shows that the maximum clock speed for supported low voltage DIMMs is 1333 MHz. The maximum clock speed for supported standard voltage DIMMs is 1600 MHz.
Table 5-37 lists the memory DIMM options for the x240, including the memory channel speed, which is based on the number of DIMMs per channel, ranks per DIMM, and DIMM voltage rating.
Table 5-37 x240 memory DIMM and memory channel speed support
Part number / Capacity / Ranks and data width / DRAM density / 1DPC 1.35V / 1DPC 1.5V / 2DPC 1.35V / 2DPC 1.5V / 3DPC 1.35V / 3DPC 1.5V (memory channel speeds in MHz; NS = Not Supported)
RDIMM
49Y1405 / 2 GB / 1Rx8 / 2 Gb / 1333 / 1333 / 1333 / 1333 / NS / 1066
49Y1406 / 4 GB / 1Rx4 / 2 Gb / 1333 / 1333 / 1333 / 1333 / NS / 1066
49Y1407 / 4 GB / 2Rx8 / 2 Gb / 1333 / 1333 / 1333 / 1333 / NS / 1066
49Y1559 / 4 GB / 1Rx4 / 2 Gb / NS / 1600 / NS / 1600 / NS / 1066
90Y3178 / 4 GB / 2Rx8 / 2 Gb / NS / 1600 / NS / 1600 / NS / 1066
90Y3109 / 8 GB / 2Rx4 / 2 Gb / NS / 1600 / NS / 1600 / NS / 1066
49Y1397 / 8 GB / 2Rx4 / 2 Gb / 1333 / 1333 / 1333 / 1333 / NS / 1066
49Y1563 / 16 GB / 2Rx4 / 4 Gb / 1333 / 1333 / 1333 / 1333 / NS / 1066
49Y1400 / 16 GB / 4Rx4 / 2 Gb / 800 / 1066 / NS / 800 / NS / NS
16 GB / 2Rx4 / 4 Gb / NS / 1600 / NS / 1600 / NS / 1066
UDIMM
49Y1404 / 4 GB / 2Rx8 / 2 Gb / 1333 / 1333 / 1333 / 1333 / NS / NS
LRDIMM
49Y1567 / 16 GB / 4Rx4 / 2 Gb / 1066 / 1333 / 1066 / 1333 / 1066 / 1066
90Y3105 / 32 GB / 4Rx4 / 4 Gb / 1066 / 1333 / 1066 / 1333 / 1066 / 1066
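The clocking rules above can be summarized as: the memory runs at the lower of the processor rating and the slowest installed DIMM, and it drops further if any channel is populated three deep. The following simplified Python sketch captures only those two rules, with the 1066 MHz 3DPC figure taken from the E5-2690 example; Table 5-37 remains the authoritative matrix because DIMM type and voltage also matter:

def effective_memory_speed_mhz(processor_max_mhz, dimm_speeds_mhz, max_dimms_per_channel):
    # Rule 1: never faster than the processor or the slowest installed DIMM.
    speed = min(processor_max_mhz, min(dimm_speeds_mhz))
    # Rule 2: any channel at 3DPC slows all memory (1066 MHz in the example above).
    if max_dimms_per_channel >= 3:
        speed = min(speed, 1066)
    return speed

# Xeon E5-2640 (1333 MHz) with 1600 MHz DIMMs at 2DPC -> 1333
print(effective_memory_speed_mhz(1333, [1600, 1600], 2))
# Xeon E5-2690 (1600 MHz) with 1600 MHz DIMMs but one channel at 3DPC -> 1066
print(effective_memory_speed_mhz(1600, [1600, 1600, 1600], 3))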
Memory modes
The x240 type 8737 supports three memory modes:
Independent channel mode
Rank-sparing mode
Mirrored-channel mode
These modes can be selected in the Unified Extensible Firmware Interface (UEFI) setup. For more information, see 5.3.12, Systems management on page 240.
Rank-sparing mode
In rank-sparing mode, one memory DIMM rank serves as a spare of the other ranks on the same channel. The spare rank is held in reserve and is not used as active memory. The spare rank must have an identical or larger memory capacity than all the other active memory ranks on the same channel. After an error threshold is surpassed, the contents of that rank are copied to the spare rank. The failed rank of memory is taken offline, and the spare rank is put online and used as active memory in place of the failed rank. The memory DIMM installation sequence when using rank-sparing mode is identical to independent channel mode, as described in Memory DIMM installation: Independent channel and rank-sparing modes on page 225.
Mirrored-channel mode
In mirrored-channel mode, memory is installed in pairs. Each DIMM in a pair must be identical in capacity, type, and rank count. The channels are grouped in pairs. Each channel in the group receives the same data. One channel is used as a backup of the other, which provides redundancy. The memory contents on channel 0 are duplicated in channel 1, and the memory contents of channel 2 are duplicated in channel 3. The DIMMs in channel 0 and channel 1 must be the same size and type. The DIMMs in channel 2 and channel 3 must be the same size and type. The effective memory that is available to the system is only half of what is installed. Because memory mirroring is handled in hardware, it is operating system-independent.
Consideration: In a two processor configuration, memory must be identical across the two processors to enable the memory mirroring feature.
Figure 5-21 shows the E5-2600 series processor with the four memory channels and which channels are mirrored when operating in mirrored-channel mode.
Figure 5-21 The mirrored channels and DIMM pairs when in mirrored-channel mode
For more information about the memory DIMM installation sequence when using mirrored-channel mode, see Memory DIMM installation: Mirrored-channel on page 228.
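The three memory modes trade capacity for protection in a predictable way: independent channel mode exposes all installed memory, mirrored-channel mode halves it, and rank-sparing reserves one rank per channel. The sketch below illustrates only that capacity effect, under the simplifying assumption that every rank on a channel has the same size (real configurations only require the spare rank to be at least as large as the active ranks):

def usable_memory_gb(installed_gb, mode, ranks_per_channel=0):
    if mode == "independent":
        return installed_gb                      # all installed memory is usable
    if mode == "mirrored":
        return installed_gb / 2                  # one channel of each pair is a copy
    if mode == "rank-sparing":
        if ranks_per_channel < 2:
            raise ValueError("rank sparing needs at least two ranks per channel")
        # one rank per channel is held in reserve (equal-size ranks assumed)
        return installed_gb * (ranks_per_channel - 1) / ranks_per_channel
    raise ValueError("unknown memory mode: " + mode)

print(usable_memory_gb(128, "independent"))                        # 128.0
print(usable_memory_gb(128, "mirrored"))                           # 64.0
print(usable_memory_gb(128, "rank-sparing", ranks_per_channel=4))  # 96.0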
This sequence spreads the DIMMs across as many memory channels as possible. For best performance and to ensure a working memory configuration, install the DIMMs in the sockets as shown in the following tables. Table 5-38 shows DIMM installation if you have one processor installed.
Table 5-38 Suggested DIMM installation for the x240 with one processor installed (configurations of 1 to 12 DIMMs, spread across the four memory channels of processor 1)
a. For optimal memory performance, populate all the memory channels equally.
Table 5-39 shows DIMM installation if you have two processors installed.
Table 5-39 Suggested DIMM installation for the x240 with two processors installed (configurations of 1 to 24 DIMMs, spread across the memory channels of both processors)
a. For optimal memory performance, populate all the memory channels equally.
USB ports
The x240 has one external USB port on the front of the compute node. Figure 5-22 shows the location of the external USB connector on the x240.
Figure 5-22 The front USB connector on the x240 Compute Node
The x240 also supports an option that provides two internal USB ports (the x240 USB Enablement Kit), which are primarily used for attaching USB hypervisor keys. For more information, see 5.3.9, Integrated virtualization on page 236.
Table 5-41 lists the ordering part number and feature code of the console breakout cable. One console breakout cable ships with the IBM Flex System Enterprise Chassis.
Table 5-41 Ordering part number and feature code Part number 81Y5286 Feature code A1NF Description IBM Flex System Console Breakout Cable
Figure 5-24 The LSI2004 SAS controller connections to the HDD interface
Figure 5-25 shows the front of the x240, including the two hot-swap drive bays.
Figure 5-25 The x240 showing the front hot-swap disk drive bays
10 K SAS hard disk drives
42D0637 / 5599 / IBM 300 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
49Y2003 / 5433 / IBM 600 GB 10K 6 Gbps SAS 2.5" SFF Slim-HS HDD
81Y9650 / A282 / IBM 900 GB 10K 6 Gbps SAS 2.5" SFF HS HDD
15 K SAS hard disk drives
42D0677 / 5536 / IBM 146 GB 15K 6 Gbps SAS 2.5" SFF Slim-HS HDD
81Y9670 / A283 / IBM 300 GB 15K 6 Gbps SAS 2.5" SFF HS HDD
NL SATA hard disk drives
81Y9722 / A1NX / IBM 250 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9726 / A1NZ / IBM 500 GB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
81Y9730 / A1AV / IBM 1TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD
NL SAS hard disk drives
42D0707 / 5409 / IBM 500 GB 7200 6 Gbps NL SAS 2.5" SFF Slim-HS HDD
81Y9690 / A1P3 / IBM 1TB 7.2K 6 Gbps NL SAS 2.5" SFF HS HDD
Solid-state drives
43W7718 / A2FN / IBM 200 GB SATA 2.5" MLC HS SSD
90Y8643 / A2U3 / IBM 256 GB SATA 2.5" MLC HS Entry SSD
90Y8648 / A2U4 / IBM 128 GB SATA 2.5" MLC HS Entry SSD
49Y5844 / A3AU / IBM 512GB SATA 2.5" MLC HS Enterprise Value SSD
49Y5839 / A3AS / IBM 64GB SATA 2.5" MLC HS Enterprise Value SSD
The ServeRAID M5115 supports combinations of 2.5-inch drives and 1.8-inch solid-state drives:
Up to two 2.5-inch drives only
Up to four 1.8-inch drives only
Up to two 2.5-inch drives, plus up to four 1.8-inch solid-state drives
Up to eight 1.8-inch solid-state drives
The ServeRAID M5115 SAS/SATA Controller (90Y4390) provides an advanced RAID controller that supports RAID 0, 1, 10, 5, 50, and, optionally, 6 and 60. It includes 1 GB of cache. This cache can be backed up to a flash cache when attached to the supercapacitor included with the optional ServeRAID M5100 Series Enablement Kit (90Y4342).
At least one hardware kit is required with the ServeRAID M5115 controller to enable specific drive support:
ServeRAID M5100 Series Enablement Kit for IBM Flex System x240 (90Y4342) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the standard two-bay backplane (which is attached through the system board to an onboard controller) with a new backplane. The new backplane attaches with an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.
MegaRAID CacheVault flash cache protection uses NAND flash memory that is powered by a supercapacitor to protect data that is stored in the controller cache. This module eliminates the need for the lithium-ion battery that is commonly used to protect DRAM cache memory on Peripheral Component Interconnect (PCI) RAID controllers. To avoid data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash. This process uses power from the supercapacitor. After the power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache. The DRAM cache can then be flushed to disk.
Tip: The Enablement Kit is only required if 2.5-inch drives are used. If you plan to install four or eight 1.8-inch SSDs, this kit is not required.
ServeRAID M5100 Series IBM Flex System Flash Kit for x240 (90Y4341) enables support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches with an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, so this kit does not include a supercapacitor.
ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240 (90Y4391) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles that replace the existing baffles, and each baffle has mounts for two SSDs. Included flexible cables connect the drives to the controller.
Table 5-45 shows the kits that are required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash Kit, and the SSD Expansion Kit.
Tip: If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit (49Y8119, described in 5.2.11, Integrated virtualization on page 202) cannot also be installed. Both kits include special air baffles that cannot be installed at the same time.
Table 5-45 ServeRAID M5115 hardware kits (components required for each drive combination)
Two 2.5-inch drives, no 1.8-inch SSDs: ServeRAID M5115 controller (90Y4390) and Enablement Kit (90Y4342)
No 2.5-inch drives, four 1.8-inch SSDs (front): ServeRAID M5115 controller (90Y4390) and Flash Kit (90Y4341)
Two 2.5-inch drives, four 1.8-inch SSDs (internal): ServeRAID M5115 controller (90Y4390), Enablement Kit (90Y4342), and SSD Expansion Kit (90Y4391) (a)
No 2.5-inch drives, eight 1.8-inch SSDs (front and internal): ServeRAID M5115 controller (90Y4390), Flash Kit (90Y4341), and SSD Expansion Kit (90Y4391) (a)
a. If you install the SSD Expansion Kit, you cannot also install the x240 USB Enablement Kit (49Y8119).
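The kit requirements in Table 5-45 reduce to a simple rule: the controller is always needed, 2.5-inch front drives need the Enablement Kit, 1.8-inch front SSDs need the Flash Kit (the two front options are mutually exclusive), and internal 1.8-inch SSDs need the SSD Expansion Kit. A small illustrative Python helper (the function name is ours; the part numbers are the x240 kits listed above):

def m5115_kits(front_25in_drives, front_18in_ssds, internal_18in_ssds):
    if front_25in_drives and front_18in_ssds:
        raise ValueError("the front bays hold either 2.5-inch drives or 1.8-inch SSDs, not both")
    kits = ["ServeRAID M5115 controller (90Y4390)"]
    if front_25in_drives:
        kits.append("Enablement Kit (90Y4342)")      # two 2.5-inch hot-swap bays plus CacheVault
    if front_18in_ssds:
        kits.append("Flash Kit (90Y4341)")           # up to four 1.8-inch SSDs in front
    if internal_18in_ssds:
        kits.append("SSD Expansion Kit (90Y4391)")   # up to four internal 1.8-inch SSDs
    return kits

print(m5115_kits(2, 0, 0))   # row 1: two 2.5-inch drives only
print(m5115_kits(0, 4, 4))   # row 4: eight 1.8-inch SSDs (front and internal)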
Figure 5-26 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of Table 5-45 on page 233).
Figure 5-26 The ServeRAID M5115 controller (90Y4390) and the ServeRAID M5100 Series Enablement Kit (90Y4342) installed
Figure 5-27 shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of Table 5-45 on page 233).
Figure 5-27 The ServeRAID M5115 controller (90Y4390) with the ServeRAID M5100 Series Flash Kit (90Y4341) and SSD Expansion Kit (90Y4391) installed; the four internal SSDs sit on special air baffles above the DIMMs (no CacheVault flash protection)
The eight SSDs are installed in the following locations:
Four in the front of the system in place of the two 2.5-inch drive bays
Two in a tray above the memory banks for CPU 1
Two in a tray above the memory banks for CPU 2
The ServeRAID M5115 controller has the following specifications:
Eight internal 6 Gbps SAS/SATA ports
PCI Express 3.0 x8 host interface
6 Gbps throughput per port
800 MHz dual-core IBM PowerPC processor with LSI SAS2208 6 Gbps RAID-on-Chip (ROC) controller
Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with optional upgrade using 90Y4411
Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342
Support for SAS and SATA HDDs and SSDs
Support for intermixing SAS and SATA HDDs and SSDs; mixing different types of drives in the same array (drive group) is not recommended
Support for self-encrypting drives (SEDs) with MegaRAID SafeStore
Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447)
Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per drive group, and up to 32 physical drives per drive group
Support for logical unit number (LUN) sizes up to 64 TB
Configurable stripe size up to 1 MB
Compliant with Disk Data Format (DDF) configuration on disk (CoD)
S.M.A.R.T. support
MegaRAID Storage Manager management software
Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance accelerator, and SSD caching enabler. Table 5-46 lists all Feature on Demand (FoD) license upgrades.
Table 5-46 Supported upgrade features
Part number / Feature code / Description / Maximum supported
90Y4410 / A2Y1 / ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System / 1
90Y4412 / A2Y2 / ServeRAID M5100 Series Performance Accelerator for IBM Flex System (MegaRAID FastPath) / 1
90Y4447 / A36G / ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0) / 1
These features have the following characteristics:
RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This license is a Feature on Demand license.
Performance Accelerator (90Y4412): The Performance Accelerator for IBM Flex System is implemented by using the LSI MegaRAID FastPath software. It provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum input/output operations per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.
SSD Caching Enabler for traditional hard disk drives (90Y4447): The SSD Caching Enabler for IBM Flex System is implemented by using the LSI MegaRAID CacheCade Pro 2.0. It is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache. This configuration helps maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license. This feature requires at least one SSD drive be installed.
The 1.8-inch solid-state drives that are supported by the ServeRAID M5115 controller are listed in Table 5-47.
Table 5-47 Supported 1.8-inch solid-state drives
Part number / Feature code / Description / Maximum supported
43W7746 / 5420 / IBM 200 GB SATA 1.8" MLC SSD / 8
43W7726 / 5428 / IBM 50 GB SATA 1.8" MLC SSD / 8
49Y5993 / A3AR / IBM 512GB SATA 1.8" MLC Enterprise Value SSD / 8
49Y5834 / A3AQ / IBM 64GB SATA 1.8" MLC Enterprise Value SSD / 8
The USB memory keys connect to the internal x240 USB Enablement Kit. Table 5-49 lists the ordering information for the internal x240 USB Enablement Kit.
Table 5-49 Internal USB port option Part number 49Y8119 Feature code A33M Description x240 USB Enablement Kit
The x240 USB Enablement Kit connects to the system board of the server, as shown in Figure 5-28. The kit offers two ports, and enables you to install two memory keys. If you do, both devices are listed in the boot menu. With this setup, you can boot from either device, or set one as a backup in case the first one becomes corrupted.
Figure 5-28 The x240 compute node showing the location of the internal x240 USB Enablement Kit (USB two-port assembly with a USB flash key installed)
Consideration: The x240 USB Enablement Kit and USB memory keys are not supported if the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is already installed because these kits occupy the same location in the server.
For a complete description of the features and capabilities of VMware ESX Server, go to:
http://www.vmware.com/products/vi/esx/
The Compute Node Fabric Connector enables port 1 on the Embedded 10 Gb Virtual Fabric Adapter to be routed to I/O module bay 1. Similarly, port 2 can be routed to I/O module bay 2. The Compute Node Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.
Consideration: If I/O connector 1 has the Embedded 10 Gb Virtual Fabric Adapter installed, only I/O connector 2 is available for the installation of additional I/O adapters. (An exception is that the ServeRAID controller can coexist in slot 1 with an Embedded adapter.)
The Embedded 10 Gb Virtual Fabric Adapter is based on the Emulex BladeEngine 3, which is a single-chip, dual-port 10 Gigabit Ethernet (10 GbE) controller. The Embedded 10 Gb Virtual Fabric Adapter includes these features:
PCI Express Gen2 x8 host bus interface
Support for multiple Virtual Network Interface Card (vNIC) functions
TCP/IP Offload Engine (TOE enabled)
SR-IOV capable
RDMA over TCP/IP capable
iSCSI and FCoE upgrade offering using FoD
Table 5-50 lists the ordering information for the IBM Flex System Embedded 10 Gb Virtual Fabric Upgrade. This upgrade enables the iSCSI and FCoE support on the Embedded 10 Gb Virtual Fabric Adapter.
Table 5-50 Feature on Demand upgrade for FCoE and iSCSI support Part number 90Y9310 Feature code A2TD Description IBM Flex System Embedded 10 Gb Virtual Fabric Upgrade
Figure 5-30 shows the x240 and the location of the Compute Node Fabric Connector on the system board.
Figure 5-30 The x240 showing the location of the Compute Node Fabric Connector (the LOM connector, secured with captive screws)
Figure 5-31 shows the rear of the x240 compute node and the locations of the I/O connectors.
Figure 5-31 Rear of the x240 compute node showing the locations of I/O connectors 1 and 2
Table 5-51 lists the I/O adapters that are supported in the x240.
Table 5-51 Supported I/O adapters for the x240 compute node
Part number / Feature code / Ports / Description
Ethernet adapters
49Y7900 / A1BR / 4 / IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
90Y3466 / A1QY / 2 / IBM Flex System EN4132 2-port 10Gb Ethernet Adapter
90Y3554 / A1R1 / 4 / IBM Flex System CN4054 10Gb Virtual Fabric Adapter
Fibre Channel adapters
69Y1938 / A1BM / 2 / IBM Flex System FC3172 2-port 8Gb FC Adapter
95Y2375 / A2N5 / 2 / IBM Flex System FC3052 2-port 8Gb FC Adapter
88Y6370 / A1BP / 2 / IBM Flex System FC5022 2-port 16Gb FC Adapter
InfiniBand adapters
90Y3454 / A1QZ / 2 / IBM Flex System IB6132 2-port FDR InfiniBand Adapter
Requirement: Any supported I/O adapter can be installed in either I/O connector. However, you must be consistent not only across chassis, but across all compute nodes.
Figure 5-32 The front of the x240 with the front panel LEDs and controls shown (USB port, identify LED, fault LED, check error log LED, hard disk drive activity and status LEDs, and NMI control)
Power LED
The status of the power LED of the x240 shows the power status of the x240 compute node. It also indicates the discovery status of the node by the Chassis Management Module. The power LED states are listed in Table 5-54.
Table 5-54 The power LED states of the x240 compute node
Off: No power to the compute node.
On; fast flash mode: The compute node has power; the Chassis Management Module is in discovery mode (handshake).
On; slow flash mode: The compute node has power; power is in stand-by mode.
On; solid: The compute node has power; the compute node is operational.
Consideration: The power button does not operate when the power LED is in fast flash mode.
The x240 light path diagnostics panel is visible when you remove the server from the chassis. The panel is on the upper right of the compute node, as shown in Figure 5-33.
To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meaning of each LED in the light path diagnostics panel is listed in Table 5-55.
Table 5-55 Light path panel LED definitions
LP (green): The light path diagnostics panel is operational.
S BRD (yellow): A system board error is detected.
MIS (yellow): A mismatch occurred between the processors, DIMMs, or HDDs within the configuration as reported by POST.
NMI (yellow): A non-maskable interrupt (NMI) occurred.
TEMP (yellow): An over-temperature condition occurred that was critical enough to shut down the server.
MEM (yellow): A memory fault occurred. The corresponding DIMM error LEDs on the system board are also lit.
ADJ (yellow): A fault is detected in the adjacent expansion unit (if installed).
Remote access to system fan, voltage, and temperature values
Remote IMM and UEFI update
UEFI update when the server is powered off
Remote console by way of serial over LAN
Remote access to the system event log
Predictive failure analysis and integrated alerting features (for example, by using Simple Network Management Protocol (SNMP))
Remote presence, including remote control of the server by using a Java or ActiveX client
Operating system failure window (blue screen) capture and display through the web interface
Virtual media that allow the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server
Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available on the local network. You can use this address to remotely manage the x240 by connecting directly to the IMM independent of the FSM or CMM. For more information about the IMM, see 3.4.1, Integrated Management Module II on page 57.
5.4.1 Introduction
The IBM Flex System x440 Compute Node is a double-wide compute node that provides scalability to support up to four Intel Xeon E5-4600 processors. The node's width allows for significant I/O capability. The server is ideal for virtualization, database, and memory-intensive high performance computing environments. Figure 5-34 shows the front of the compute node and the location of the controls, LEDs, and connectors. The light path diagnostics panel is on the upper edge of the front panel bezel, in the same place as on the x220 and x240.
Figure 5-34 The front of the x440 (two 2.5-inch hot-swap drive bays, light path diagnostics panel, and LED panel)
Figure 5-35 shows the internal layout and major components of the x440.
Figure 5-35 Exploded view of the x440 showing the major components (cover, air baffles, heat sink, microprocessor heat sink filler, backplane assembly, I/O expansion adapter, and microprocessor)
Component / Specification
Memory maximums: With LRDIMMs: Up to 1.5 TB with 48x 32 GB LRDIMMs and four processors. With RDIMMs: Up to 768 GB with 48x 16 GB RDIMMs and four processors.
Memory protection: ECC, Chipkill (for x4-based memory DIMMs), memory mirroring, and memory rank sparing.
Disk drive bays: Two 2.5-inch hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD drives. Optional Flash Kit support for up to eight 1.8-inch SSDs.
Maximum internal storage: With two 2.5-inch hot-swap drives: Up to 2 TB with 1 TB 2.5" NL SAS HDDs, or up to 1.8 TB with 900 GB 2.5" SAS HDDs, or up to 2 TB with 1 TB 2.5" SATA HDDs, or up to 512 GB with 256 GB 2.5" SATA SSDs. Intermix of SAS and SATA HDDs and SSDs is supported. With 1.8-inch SSDs and ServeRAID M5115 RAID adapter: Up to 1.6 TB with eight 200 GB 1.8-inch SSDs.
RAID support: RAID 0 and 1 with integrated LSI SAS2004 controller. Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, and 50 support and 1 GB cache. Supports up to eight 1.8-inch SSDs with expansion kits. Optional flash-backup for cache, RAID 6/60, and SSD performance enabler.
Network interfaces: x4x models: Four 10 Gb Ethernet ports with two dual-port Embedded 10Gb Virtual Fabric Ethernet LAN-on-motherboard (LOM) controllers; Emulex BE3 based. Upgradeable to FCoE and iSCSI using IBM Feature on Demand license activation. x2x models: None standard; optional 1 Gb or 10 Gb Ethernet adapters.
I/O expansion: Four I/O connectors for adapters. PCI Express 3.0 x16 interface.
Ports: USB ports: One external; two internal for embedded hypervisor. Console breakout cable port that provides local KVM and serial ports (cable standard with chassis; additional cables are optional).
Systems management: UEFI, IBM Integrated Management Module 2 (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director and Active Energy Manager, and IBM ServerGuide.
Security features: Power-on password, administrator's password, and Trusted Platform Module V1.2.
Video: Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16 M colors.
Limited warranty: Three-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported: Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, VMware ESX 4, and vSphere 5. For details, see 5.4.14, Operating systems support on page 266.
Service and support: Optional service upgrades are available through IBM ServicePac offerings: Four-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width 437 mm (17.2 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.)
Weight: Maximum weight: 12.25 kg (27 lbs).
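The maximums quoted in the specification table are straightforward products of the DIMM-socket and drive-bay counts. A short worked check, using only figures from the table above:

dimm_sockets = 4 * 4 * 3          # 4 processors x 4 channels x 3 DIMMs per channel = 48
print(dimm_sockets * 32 / 1024)   # 1.5 TB of memory with 48x 32 GB LRDIMMs
print(dimm_sockets * 16 / 1024)   # 0.75 TB (768 GB) with 48x 16 GB RDIMMs
print(2 * 1.0)                    # 2 TB with two 1 TB 2.5-inch drives
print(8 * 200 / 1000)             # 1.6 TB with eight 200 GB 1.8-inch SSDs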
Figure 5-36 shows the components on the system board of the x440.
Figure 5-36 Layout of the IBM Flex System x440 Compute Node system board (I/O adapters 1 (top) to 4 (bottom) and the USB ports)
5.4.2 Models
The current x440 models, with processor, memory, and other embedded options that are shipped as standard with each model type, are shown in Table 5-57.
Table 5-57 Standard models of the IBM Flex System x440 Compute Node, type 7917
Model / Intel Xeon E5-4600 processor (four maximum) (a) / Memory / RAID adapter / Disk bays (used/max) (b) / Disks / Embedded 10GbE Virtual Fabric / I/O slots (used/max)
7917-A2x / Xeon E5-4603 4C 2.0 GHz 10 MB 1066 MHz 95 W / 1x 8 GB 1066 MHz (c) / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / No / 0/4
7917-A4x / Xeon E5-4603 4C 2.0 GHz 10 MB 1066 MHz 95 W / 1x 8 GB 1066 MHz (c) / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / Standard / 2/4 (d)
7917-B2x / Xeon E5-4607 6C 2.2 GHz 12 MB 1066 MHz 95 W / 1x 8 GB 1066 MHz (c) / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / No / 0/4
7917-B4x / Xeon E5-4607 6C 2.2 GHz 12 MB 1066 MHz 95 W / 1x 8 GB 1066 MHz (c) / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / Standard / 2/4 (d)
7917-C2x / Xeon E5-4610 6C 2.4 GHz 15 MB 1333 MHz 95 W / 1x 8 GB 1333 MHz / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / No / 0/4
7917-C4x / Xeon E5-4610 6C 2.4 GHz 15 MB 1333 MHz 95 W / 1x 8 GB 1333 MHz / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / Standard / 2/4 (d)
7917-D2x / Xeon E5-4620 8C 2.2 GHz 16 MB 1333 MHz 95 W / 1x 8 GB 1333 MHz / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / No / 0/4
7917-D4x / Xeon E5-4620 8C 2.2 GHz 16 MB 1333 MHz 95 W / 1x 8 GB / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / Standard / 2/4 (d)
7917-F2x / Xeon E5-4650 8C 2.7 GHz 20 MB 1600 MHz 130 W / 1x 8 GB / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / No / 0/4
7917-F4x / Xeon E5-4650 8C 2.7 GHz 20 MB 1600 MHz 130 W / 1x 8 GB / SAS/SATA RAID / 2.5-inch hot-swap (0/2) / Open / Standard / 2/4 (d)
a. Processor detail: Processor quantity and model, cores, core speed, L3 cache, memory speed, and power consumption.
b. The 2.5-inch drive bays can be replaced and expanded with additional internal bays to support up to eight 1.8-inch solid-state drives (SSDs). See 5.4.7, Internal disk storage on page 254.
c. For models Axx and Bxx, the standard DIMM is rated at 1333 MHz, but operates at up to 1066 MHz to match the processor memory speed.
d. The x4x models include two Embedded 10Gb Virtual Fabric Ethernet controllers. Connections are routed using a Fabric Connector. The Fabric Connectors preclude the use of an I/O adapter in I/O connectors 1 and 3, except the ServeRAID M5115 controller, which can be installed in slot 1.
The system board block diagram (labels recoverable from the figure: Intel Xeon CPU 1 and CPU 2 connected by QPI links at 8 GT/s, PCIe 3.0 x16 and x8 links, a PCIe 2.0 x4 link, a PCIe multiplexer, and USB).
The IBM Flex System x440 Compute Node has the following system architecture features as standard:
Four 2011-pin type R (LGA-2011) processor sockets
An Intel C600 PCIe Controller Hub (PCH)
Four memory channels per socket, with up to three DIMMs per memory channel
48 DDR3 DIMM sockets (12 per processor)
Support for LRDIMMs and RDIMMs
Two dual-port integrated 10Gb Virtual Fabric Ethernet controllers that are based on Emulex BE3, upgradeable to FCoE and iSCSI through IBM Features on Demand (FoD)
One LSI 2004 SAS controller with integrated RAID 0 and 1 for the two internal drive bays
Support for the ServeRAID M5115 controller, which adds RAID 5 and other RAID levels and supports up to eight 1.8-inch SSD bays
Integrated Management Module II (IMMv2) for systems management
Four PCIe 3.0 x16 I/O adapter connectors
Two internal and one external USB connectors
The following processor options are available for the x440 (part numbers and processor):
90Y9060 and 88Y6263: Xeon E5-4603 4C 2.0GHz 10MB 1066MHz 95W
90Y9062 and 69Y3100: Xeon E5-4607 6C 2.2GHz 12MB 1066MHz 95W
90Y9064 and 69Y3106: Xeon E5-4610 6C 2.4GHz 15MB 1333MHz 95W
90Y9066 and 90Y9049: Xeon E5-4617 6C 2.9GHz 15MB 1600MHz 130W
90Y9070 and 69Y3112: Xeon E5-4620 8C 2.2GHz 16MB 1333MHz 95W
90Y9068 and 90Y9055: Xeon E5-4640 8C 2.4GHz 20MB 1600MHz 95W
90Y9072 and 69Y3118: Xeon E5-4650 8C 2.7GHz 20MB 1600MHz 130W
90Y9186 and 90Y9185: Xeon E5-4650L 8C 2.6GHz 20MB 1600MHz 115W
a. When two feature codes are specified, the first feature code is for CPU 1 and the second feature code is for CPU 2. When only one feature code is specified, this is the feature code that is used for CPU 3 and CPU 4.
The x440 supports two types of low profile DDR3 memory: RDIMMs and LRDIMMs. The server supports up to 12 DIMMs when one processor is installed, and up to 48 DIMMs when four processors are installed. Each processor has four memory channels, with three DIMMs per channel. The following rules apply when you select the memory configuration:
The x440 supports RDIMMs and LRDIMMs; UDIMMs are not supported.
Mixing RDIMMs and LRDIMMs is not supported.
Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all DIMMs operate at 1.5 V.
The maximum number of ranks that is supported per channel is eight. Load Reduced DIMMs are an exception: more than eight ranks are supported, because one quad-rank LRDIMM places the same electrical load on the memory bus as a single-rank RDIMM.
The maximum quantity of DIMMs that can be installed in the server depends on the number of processors. For more information, see the maximum quantity entry in Table 5-60.
All DIMMs in all processor memory channels operate at the same speed, which is determined as the lowest of the memory speed that is supported by the specific processor and the lowest maximum operating speed for the selected memory configuration.
Table 5-60 summarizes the maximum memory speeds that are achievable based on the installed DIMM types and the number of DIMMs per channel, together with the related capacity limits.
Table 5-60 Maximum memory speeds (summary)
RDIMMs, single rank: 49Y1406 (4 GB, 1.35 V, rated 1333 MHz) and 49Y1559 (4 GB, 1.5 V, rated 1600 MHz).
RDIMMs, dual rank: 49Y1407 (4 GB), 49Y1397 (8 GB), and 49Y1563 (16 GB), all 1.35 V and rated at 1333 MHz; 90Y3109 (8 GB) and 00D4968 (16 GB), both 1.5 V and rated at 1600 MHz.
LRDIMMs, quad rank: 49Y1567 (16 GB) and 90Y3105 (32 GB), both 1.35 V and rated at 1333 MHz.
Maximum quantity: 48 DIMMs of any one of these types.a
Maximum memory capacity: up to 768 GB with RDIMMs (48x 16 GB) and up to 1.5 TB with LRDIMMs (48x 32 GB); the capacity that is achievable at the rated DIMM speed is lower for configurations that must slow down with two or three DIMMs per channel.
Maximum operating speed with one DIMM per channel: the rated speed of the DIMM (1333 MHz for the 1.35 V DIMMs and 1600 MHz for the 1.5 V RDIMMs; the LRDIMMs run at 1333 MHz at 1.5 V).
Maximum operating speed with two and three DIMMs per channel: the speed can drop below the rated speed, down to 1066 MHz for several combinations with three DIMMs per channel.
a. The maximum quantity that is supported is shown for four processors installed. When two processors are installed, the maximum quantity that is supported is a half of the quantity that is shown. When one processor is installed, the quantity is one quarter of that shown.
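The speed and population rules that precede Table 5-60 lend themselves to a simple configuration check. The following Python sketch is illustrative only: it encodes a simplified subset of the rules (supported DIMM types, no type mixing, per-processor DIMM limits, and the common-speed rule). The data model and function name are assumptions made for this example and are not part of any IBM tool.

```python
# Minimal sketch of the x440 memory population rules described above.
# Hypothetical data model; the values reflect the rules in the text, not an IBM API.

DIMMS_PER_PROCESSOR = 12                 # 4 channels x 3 DIMMs per channel
SUPPORTED_TYPES = {"RDIMM", "LRDIMM"}    # UDIMMs are not supported


def check_memory_config(dimms, processors_installed, processor_speed_mhz):
    """dimms: list of dicts such as
    {"type": "RDIMM", "rated_mhz": 1333, "max_mhz_at_population": 1066}."""
    types = {d["type"] for d in dimms}
    if not types <= SUPPORTED_TYPES:
        raise ValueError("Only RDIMMs and LRDIMMs are supported")
    if len(types) > 1:
        raise ValueError("RDIMMs and LRDIMMs cannot be mixed")
    if len(dimms) > processors_installed * DIMMS_PER_PROCESSOR:
        raise ValueError("Too many DIMMs for the installed processors")
    # All DIMMs run at the lowest of the processor memory speed and the lowest
    # maximum operating speed of the selected DIMMs at their population level.
    return min([processor_speed_mhz] +
               [d["max_mhz_at_population"] for d in dimms])


if __name__ == "__main__":
    config = [{"type": "RDIMM", "rated_mhz": 1333, "max_mhz_at_population": 1066}] * 48
    print(check_memory_config(config, processors_installed=4, processor_speed_mhz=1333))  # 1066
```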
The x440 supports the following memory protection technologies:
ECC
Chipkill (for x4-based memory DIMMs; look for x4 in the DIMM description)
Memory mirroring
Memory rank sparing
If memory mirroring is used, DIMMs must be installed in pairs (minimum of one pair per processor). Both DIMMs in a pair must be identical in type and size. If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or dual-rank DIMMs must be installed per populated channel. These DIMMs do not need to be identical. In rank sparing mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size of a rank varies, depending on the DIMMs that are installed.
Table 5-61 lists the memory options that are available for the x440 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of four (one for each memory channel). A maximum of 48 DIMMs is supported.
Table 5-61 Memory options for the x440 (part number, feature code, description, models where used)
Registered DIMM (RDIMM) modules:
49Y1406 | 8941 | 4 GB (1x 4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | -
49Y1407 | 8947 | 4 GB (1x 4 GB, 2Rx8, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | -
49Y1559 | A28Z | 4 GB (1x 4 GB, 1Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM | -
90Y3109 | A292 | 8 GB (1x 8 GB, 2Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM | F2x and F4x
49Y1397 | 8923 | 8 GB (1x 8 GB, 2Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | All other models
49Y1563 | A1QT | 16 GB (1x 16 GB, 2Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM | -
00D4968 | A2U5 | 16 GB (1x 16 GB, 2Rx4, 1.5 V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM | -
Load Reduced DIMM (LRDIMM) modules:
49Y1567 | A290 | 16 GB (1x 16 GB, 4Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM | -
90Y3105 | A291 | 32 GB (1x 32 GB, 4Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM | -
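The memory mirroring and rank sparing installation rules described before Table 5-61 can also be expressed as a small check. The following sketch is only an illustration under assumed data shapes (the pair and channel representations are invented for the example); it is not an IBM configuration tool.

```python
# Sketch of the memory mirroring and rank sparing rules described earlier.

def check_mirroring_pairs(pairs):
    """pairs: list of (dimm_a, dimm_b) tuples; each DIMM is a dict with 'type' and 'size_gb'.
    Mirroring requires identical DIMMs within each pair and at least one pair per processor."""
    if not pairs:
        return False
    return all((a["type"], a["size_gb"]) == (b["type"], b["size_gb"]) for a, b in pairs)


def check_rank_sparing_channel(dimms_in_channel):
    """Rank sparing needs one quad-rank DIMM, or at least two single- or dual-rank DIMMs,
    in each populated channel (unpopulated channels are not checked)."""
    if not dimms_in_channel:
        return True
    if any(d["ranks"] == 4 for d in dimms_in_channel):
        return True
    return len([d for d in dimms_in_channel if d["ranks"] in (1, 2)]) >= 2


print(check_mirroring_pairs([({"type": "RDIMM", "size_gb": 8}, {"type": "RDIMM", "size_gb": 8})]))  # True
print(check_rank_sparing_channel([{"ranks": 2}]))  # False: a second DIMM is needed in this channel
```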
The following 2.5-inch drives are supported (part number, feature code, description, maximum supported):
10 K SAS hard disk drives:
44W2264 | 5413 | IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS SEDa | 2
90Y8877 | A2XC | IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD | 2
90Y8872 | A2XD | IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD | 2
81Y9650 | A282 | IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD | 2
15 K SAS hard disk drives:
90Y8926 | A2XB | IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD | 2
81Y9670 | A283 | IBM 300GB 15K 6Gbps SAS 2.5" SFF HS HDD | 2
NL SATA hard disk drives:
81Y9722 | A1NX | IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD | 2
81Y9726 | A1NZ | IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD | 2
81Y9730 | A1AV | IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD | 2
NL SAS hard disk drives:
90Y8953 | A2XE | IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD | 2
81Y9690 | A1P3 | IBM 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD | 2
SSDs:
43W7718 | A2FN | IBM 200GB SATA 2.5" MLC HS SSD | 2
90Y8643 | A2U3 | IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD | 2
90Y8648 | A2U4 | IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD | 2
a. Supports self-encrypting drive (SED) technology. For more information, see Self-Encrypting Drives for IBM System x at http://www.redbooks.ibm.com/abstracts/tips0761.html?Open.
The hardware kits are as follows:
ServeRAID M5100 Series Enablement Kit for IBM Flex System x440 (46C9030) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection. This enablement kit replaces the two standard 1-bay backplanes (which are attached through the system board to an onboard controller) with new 1-bay backplanes that attach through an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment for the CacheVault unit.
MegaRAID CacheVault flash cache protection uses NAND flash memory that is powered by a supercapacitor to protect data that is stored in the controller cache. This module eliminates the need for the lithium-ion battery that is commonly used to protect DRAM cache memory on PCI RAID controllers. To avoid the possibility of data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash memory by using power from the supercapacitor. After the power is restored to the RAID controller, the saved data is transferred from the NAND flash memory back to the DRAM cache, which can then be flushed to disk.
Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. If you plan to install four or eight 1.8-inch SSDs only, this kit is not required.
ServeRAID M5100 Series IBM Flex System Flash Kit for x440 (46C9031) enables support for up to four 1.8-inch SSDs. This kit replaces the two standard 1-bay backplanes with two 2-bay backplanes that attach through an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, and therefore this kit does not have a supercapacitor.
ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440 (46C9032) enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles, each with two 1.8-inch SSD attachment locations, and flex cables for the attachment of up to four 1.8-inch SSDs.
Product-specific kits: These kits are specific to the x440 and cannot be used with the x240 or x220.
Table 5-64 shows the kits that are required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash Kit, and the SSD Expansion Kit.
Table 5-64 ServeRAID M5115 hardware kits (wanted drive support and required components)
Two 2.5-inch drives, no 1.8-inch SSDs: ServeRAID M5115 (90Y4390) and Enablement Kit (46C9030)
No 2.5-inch drives, four 1.8-inch SSDs (front): ServeRAID M5115 (90Y4390) and Flash Kit (46C9031)
Two 2.5-inch drives, four 1.8-inch SSDs (internal): ServeRAID M5115 (90Y4390), Enablement Kit (46C9030), and SSD Expansion Kit (46C9032)
No 2.5-inch drives, eight 1.8-inch SSDs (front and internal): ServeRAID M5115 (90Y4390), Flash Kit (46C9031), and SSD Expansion Kit (46C9032)
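The kit combinations in Table 5-64 are effectively a lookup from the wanted drive configuration to the required parts. The following Python sketch captures that mapping for illustration only; the dictionary mirrors the table, but the function itself is an invented example, not an IBM ordering tool.

```python
# Sketch of the ServeRAID M5115 kit selection in Table 5-64.
# Keys: (number of 2.5-inch drives, number of 1.8-inch SSDs).

M5115_CONTROLLER = "90Y4390"
KITS = {
    (2, 0): ["46C9030"],             # Enablement Kit only
    (0, 4): ["46C9031"],             # Flash Kit (four front SSDs)
    (2, 4): ["46C9030", "46C9032"],  # Enablement Kit + SSD Expansion Kit
    (0, 8): ["46C9031", "46C9032"],  # Flash Kit + SSD Expansion Kit
}


def required_parts(drives_25in, ssds_18in):
    kits = KITS.get((drives_25in, ssds_18in))
    if kits is None:
        raise ValueError("Combination not listed in Table 5-64")
    return [M5115_CONTROLLER] + kits


print(required_parts(0, 8))   # ['90Y4390', '46C9031', '46C9032']
```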
Figure 5-38 shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of Table 5-64).
Figure 5-38 The ServeRAID M5115 and the Enablement Kit installed
Figure 5-39 shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 in Table 5-64 on page 256).
ServeRAID M5115 controller (90Y4390) with Flash Kit for x440 (46C9031) and SSD Expansion Kit for x440 (46C9032)
SSD Expansion Kit: Four SSDs connectors on special air baffles above DIMMs (no CacheVault flash protection)
Figure 5-39 ServeRAID M5115 with Flash and SSD Expansion Kits installed
The eight SSDs are installed in the following locations: four in the front of the system in place of the two 2.5-inch drive bays, and four on trays above the memory banks.
The ServeRAID M5115 controller, 90Y4390, has the following specifications:
Eight internal 6 Gbps SAS/SATA ports.
PCI Express 3.0 x8 host interface.
6 Gbps throughput per port.
800 MHz dual-core IBM PowerPC processor with LSI SAS2208 6 Gbps RAID on Chip (ROC) controller.
Support for RAID levels 0, 1, 10, 5, and 50 standard; support for RAID 6 and 60 with the optional upgrade (90Y4410).
Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash backup (MegaRAID CacheVault technology) as part of the Enablement Kit (46C9030).
Support for SAS and SATA HDDs and SSDs.
Support for intermixing SAS and SATA HDDs and SSDs. Mixing different types of drives in the same array (drive group) is not recommended.
Support for self-encrypting drives (SEDs) with MegaRAID SafeStore.
Optional support for SSD performance acceleration with MegaRAID FastPath and SSD caching with MegaRAID CacheCade Pro 2.0 (90Y4447).
Support for up to 64 virtual drives, up to 128 drive groups, up to 16 virtual drives per drive group, and up to 32 physical drives per drive group.
Support for logical unit number (LUN) sizes up to 64 TB.
Configurable stripe size up to 1 MB.
Compliant with Disk Data Format (DDF) configuration on disk (COD).
S.M.A.R.T. support.
MegaRAID Storage Manager management software.
Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance upgrade, and SSD caching enabler. The feature upgrades are listed in Table 5-65. These upgrades are all Feature on Demand (FoD) license upgrades.
Table 5-65 Supported ServeRAID M5115 upgrade features (part number, feature code, description, maximum supported)
90Y4410 | A2Y1 | ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System | 1
90Y4412 | A2Y2 | ServeRAID M5100 Series Performance Upgrade for IBM Flex System (MegaRAID FastPath) | 1
90Y4447 | A36G | ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0) | 1
Here are the descriptions for these features:
RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This is a Feature on Demand license.
Performance Upgrade (90Y4412): The Performance Upgrade for IBM Flex System (implemented using the LSI MegaRAID FastPath software) provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum I/O per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.
SSD Caching Enabler for traditional hard disk drives (90Y4447): The SSD Caching Enabler for IBM Flex System (implemented using the LSI MegaRAID CacheCade Pro 2.0) is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license. This feature requires that at least one SSD drive be installed.
Each x440 model that includes the embedded 10 Gb also has the Compute Node Fabric Connector installed in each of I/O connectors 1 and 3 (and physically screwed onto the system board) to provide connectivity to the Enterprise Chassis midplane. Figure 5-40 shows the Compute Node Fabric Connector.
The Fabric Connector enables port 1 of each embedded 10 Gb controller to be routed to I/O module bay 1 and port 2 of each controller to be routed to I/O module bay 2. The Fabric Connectors can be unscrewed and removed, if required, to allow the installation of an I/O adapter in I/O connectors 1 and 3.
The Embedded 10Gb controllers are based on the Emulex BladeEngine 3 (BE3), which is a single-chip, dual-port 10 Gigabit Ethernet (10GbE) controller. Here are some of the features of the Embedded 10Gb controller:
PCI Express Gen2 x8 host bus interface
Support for multiple virtual NIC (vNIC) functions
TCP/IP Offload Engine (TOE enabled)
SR-IOV capable
RDMA over TCP/IP capable
iSCSI and FCoE upgrade offering through FoD
Table 5-66 lists the ordering information for the IBM Flex System Embedded 10Gb Virtual Fabric Upgrade, which enables the iSCSI and FCoE support on the Embedded 10Gb Virtual Fabric controller. To upgrade both controllers, you need two FoD licenses.
Table 5-66 Feature on Demand upgrade for FCoE and iSCSI support
Part number: 90Y9310; Feature code: A2TD; Description: IBM Flex System Embedded 10Gb Virtual Fabric Upgrade; Maximum supported: 2
Figure 5-41 Location of the I/O adapters in the IBM Flex System x440 Compute Node
All I/O adapters are the same shape and can be used in any available slot. A compatible switch or pass-through module must be installed in the corresponding I/O bays in the chassis, as indicated in Table 5-67. Installing two switches means that all ports of the adapter are enabled, which improves performance and network availability.
Table 5-67 Adapter to I/O bay correspondence
Slot 1: Port 1 to module bay 1; Port 2 to module bay 2; Port 3 (4-port cards) to module bay 1; Port 4 (4-port cards) to module bay 2
Slot 2: Port 1 to module bay 3; Port 2 to module bay 4; Port 3 (4-port cards) to module bay 3; Port 4 (4-port cards) to module bay 4
Slot 3: Port 1 to module bay 1; Port 2 to module bay 2; Port 3 (4-port cards) to module bay 1; Port 4 (4-port cards) to module bay 2
Slot 4: Port 1 to module bay 3; Port 2 to module bay 4; Port 3 (4-port cards) to module bay 3; Port 4 (4-port cards) to module bay 4
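Because the pattern in Table 5-67 is regular (slots 1 and 3 route to module bays 1 and 2, slots 2 and 4 to bays 3 and 4, and odd ports go to the first bay of the pair), it can be expressed compactly. The sketch below is illustrative only; the function name and argument conventions are assumptions for the example.

```python
# Sketch of the adapter-port-to-I/O-module-bay routing in Table 5-67.

def io_module_bay(adapter_slot, port):
    """adapter_slot: 1-4; port: 1-4 (ports 3 and 4 apply to 4-port cards only)."""
    if adapter_slot not in (1, 2, 3, 4) or port not in (1, 2, 3, 4):
        raise ValueError("x440 adapter slots and ports are numbered 1 to 4")
    base_bay = 1 if adapter_slot in (1, 3) else 3   # slots 1/3 -> bays 1-2, slots 2/4 -> bays 3-4
    return base_bay + (port - 1) % 2                # odd ports to the first bay of the pair


for slot in range(1, 5):
    print(slot, [io_module_bay(slot, p) for p in range(1, 5)])
# 1 [1, 2, 1, 2]; 2 [3, 4, 3, 4]; 3 [1, 2, 1, 2]; 4 [3, 4, 3, 4]
```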
Figure 5-42 shows the location of the switch bays in the rear of the Enterprise Chassis.
Figure 5-43 shows how the two port adapters are connected to switches installed in the I/O Module bays in an Enterprise Chassis.
Figure 5-43 Logical layout of the interconnects between I/O adapters and I/O module
The supported I/O adapters for the x440 are as follows (part number, feature code, description, number of ports, maximum supported):
10Gb Ethernet:
90Y3554 | A1R1 | IBM Flex System CN4054 10Gb Virtual Fabric Adapter | 4 ports | 4
90Y3558 | A1R0 | IBM Flex System CN4054 Virtual Fabric Adapter (SW Upgrade) (Feature on Demand to provide FCoE and iSCSI support) | License | 4
90Y3466 | A1QY | IBM Flex System EN4132 2-port 10Gb Ethernet Adapter | 2 ports | 4
1Gb Ethernet:
49Y7900 | A10Y | IBM Flex System EN2024 4-port 1Gb Ethernet Adapter | 4 ports | 4
InfiniBand:
90Y3454 | A1QZ | IBM Flex System IB6132 2-port FDR InfiniBand Adapter | 2 ports | 4
a. For x4x models with two Embedded 10Gb Virtual Fabric controllers standard, the Compute Node Fabric Connectors occupy the same space as the I/O adapters in I/O slots 1 and 3, so you must remove the Fabric Connectors if you plan to install adapters in those I/O slots.
To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis. The meanings of the LEDs in the light path diagnostics panel are listed in Table 5-71.
Table 5-71 Light path diagnostic panel LEDs
LP: The light path diagnostics panel is operational.
S BRD: A system board error is detected.
MIS: A mismatch occurred between the processors, DIMMs, or HDDs within the configuration reported by POST.
NMI: A non-maskable interrupt (NMI) occurred.
TEMP: An over-temperature condition occurred that was critical enough to shut down the server.
MEM: A memory fault occurred. The corresponding DIMM error LEDs on the system board are also lit.
ADJ: A fault is detected in the adjacent expansion unit (if installed).
Remote management
The server contains an IBM Integrated Management Module II (IMMv2), which interfaces with the advanced management module in the chassis. The combination of these two components provides advanced service-processor control, monitoring, and an alerting function. If an environmental condition exceeds a threshold or if a system component fails, LEDs on the system board are lit to help you diagnose the problem, the error is recorded in the event log, and you are alerted to the problem.
A virtual presence capability comes standard for remote server management. Remote server management is provided through the following industry-standard interfaces:
Intelligent Platform Management Interface (IPMI) Version 2.0
Simple Network Management Protocol (SNMP) Version 3
Common Information Model (CIM)
Web browser
The server also supports virtual media and remote control features, which provide the following functions:
Remotely viewing video with graphics resolutions up to 1600 x 1200 at 75 Hz with up to 23 bits per pixel, regardless of the system state
Remotely accessing the server by using the keyboard and mouse from a remote client
Mapping the CD or DVD drive, diskette drive, and USB flash drive on a remote client, and mapping ISO and diskette image files as virtual drives that are available for use by the server
Uploading a diskette image to the IMM2 memory and mapping it to the server as a virtual drive
Capturing blue-screen errors
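Because the IMM2 exposes the industry-standard IPMI 2.0 interface over the LAN, generic tooling can query it. The snippet below is a minimal, hedged example that shells out to the open-source ipmitool utility; it assumes that IPMI over LAN is enabled and reachable on the service processor, and the host name and credentials are placeholders. It is not an IBM-provided interface.

```python
# Hedged example: querying chassis power status from a service processor that
# speaks IPMI 2.0 over LAN, by calling the open-source ipmitool utility.
import subprocess


def chassis_status(host, user, password):
    # "-I lanplus" selects the IPMI 2.0 (RMCP+) interface.
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
           "chassis", "status"]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


if __name__ == "__main__":
    # Placeholder host and credentials for illustration only.
    print(chassis_status("imm2.example.com", "USERID", "PASSW0RD"))
```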
5.5.1 Specifications
The IBM Flex System p260 Compute Node is a half-wide, Power Systems compute node with these characteristics:
Two POWER7 or POWER7+ processor sockets
Sixteen memory slots
Two I/O adapter slots
An option for up to two internal drives for local storage
The IBM Flex System p260 Compute Node has the specifications that are shown in Table 5-72.
Table 5-72 IBM Flex System p260 Compute Node specifications
Model numbers: IBM Flex System p24L Compute Node: 1457-7FL. IBM Flex System p260 Compute Node: 7895-22X and 7895-23X.
Form factor: Half-wide compute node.
Chassis support: IBM Flex System Enterprise Chassis.
Processor: p24L: Two IBM POWER7 processors. p260: Two IBM POWER7 (model 22X) or POWER7+ (model 23X) processors. POWER7 processors: Each processor contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each processor has 4 MB L3 cache per core. Integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. There is one GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 45 nm fabrication technology. POWER7+ processors: Each processor contains either eight cores (up to 4.1 GHz or 3.6 GHz and 80 MB L3 cache) or four cores (4.0 GHz and 40 MB L3 cache). Each processor has 10 MB L3 cache per core, so 8-core processors have 80 MB of L3 cache total. There is an integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. There is one GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 32 nm fabrication technology.
Chipset: IBM P7IOC I/O hub.
Memory: 16 DIMM sockets. RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports IBM Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP (low profile) and VLP (very low profile) DIMMs supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs.
Memory maximums: 512 GB using 16x 32 GB DIMMs.
Memory protection: ECC, Chipkill.
Disk drive bays: Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDD or 1.8-inch SATA SSD drives. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
Maximum internal storage: 1.8 TB using two 900 GB SAS HDD drives, or 354 GB using two 177 GB SSD drives.
RAID support: RAID support by using the operating system.
Network interfaces: None standard. Optional 1 Gb or 10 Gb Ethernet adapters.
PCI Expansion slots: Two I/O connectors for adapters. PCI Express 2.0 x16 interface.
Ports: One external USB port.
Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager.
Security features: Power-on password, selectable boot sequence.
Video: None. Remote management by using Serial over LAN and IBM Flex System Manager.
Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Operating systems supported: IBM AIX, IBM i, and Linux.
Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width: 215 mm (8.5 in.), height: 51 mm (2.0 in.), depth: 493 mm (19.4 in.).
Weight: Maximum configuration: 7.0 kg (15.4 lb).
16 DIMM slots
(HDDs are mounted on the cover, located over the memory DIMMs)
Figure 5-45 Layout of the IBM Flex System p260 Compute Node
Figure 5-46 Front panel of the IBM Flex System p260 Compute Node
The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises. Tip: There is no optical drive in the IBM Flex System Enterprise Chassis.
The power-control button on the front of the server (Figure 5-46 on page 270) has two functions: When the system is fully installed in the chassis: Use this button to power the system on and off. When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-47.
The LEDs on the light path panel indicate the status of the following devices:
LP: Light Path panel power indicator
S BRD: System board LED (might indicate trouble with processor or MEM, too)
MGMT: Flexible Support Processor (or management card) LED
D BRD: Drive (or direct access storage device (DASD)) board LED
DRV 1: Drive 1 LED (SSD 1 or HDD 1)
DRV 2: Drive 2 LED (SSD 2 or HDD 2)
If problems occur, the light path diagnostics LEDs help with identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing this button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts. Typically, you can obtain this information from the IBM Flex System Manager or Chassis Management Module before you remove the node. However, having the LEDs helps with repairs and troubleshooting if onsite assistance is needed. For more information about the front panel and LEDs, see the IBM Flex System p260 and p460 Compute Node Installation and Service Guide available at:
http://www.ibm.com/support
Figure 5-48 IBM Flex System p260 Compute Node and IBM Flex System p24L Compute Node block diagram
This diagram shows the two CPU slots, with eight memory slots for each processor. Each processor is connected to a P7IOC I/O hub, which connects to the I/O subsystem (I/O adapters, local storage). At the bottom, you can see a representation of the service processor (FSP) architecture.
5.5.7 Processor
The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and reliability, availability, and serviceability (RAS). Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. As with previous generations, the design philosophy for POWER7 processor-based systems is system-wide balance. The POWER7 processor plays an important role in this balancing.
The processor options are as follows (feature code: processors, total cores, core frequency, and L3 cache):
IBM Flex System p260 Compute Node - 7895-23X:
EPRD: 2x POWER7+ 4-core 4.0 GHz, 8 cores total, 40 MB L3 cache (10 MB per core)
EPRB: 2x POWER7+ 8-core 3.6 GHz, 16 cores total, 80 MB L3 cache (10 MB per core)
EPRA: 2x POWER7+ 8-core 4.1 GHz, 16 cores total, 80 MB L3 cache (10 MB per core)
IBM Flex System p260 Compute Node - 7895-22X:
EPR1: 2x POWER7 4-core 3.3 GHz, 8 cores total, 16 MB L3 cache (4 MB per core)
EPR3: 2x POWER7 8-core 3.2 GHz, 16 cores total, 32 MB L3 cache (4 MB per core)
EPR5: 2x POWER7 8-core 3.55 GHz, 16 cores total, 32 MB L3 cache (4 MB per core)
IBM Flex System p24L Compute Node:
EPR8: 2x POWER7 8-core 3.2 GHz, 16 cores total, 32 MB L3 cache (4 MB per core)
EPR9: 2x POWER7 8-core 3.55 GHz, 16 cores total, 32 MB L3 cache (4 MB per core)
EPR7: 2x POWER7 6-core 3.7 GHz, 12 cores total, 24 MB L3 cache (4 MB per core)
To optimize software licensing, you can deconfigure or disable one or more cores. The feature is listed in Table 5-74.
Table 5-74 Unconfiguration of cores for p260 and p24L Feature code 2319 Description Factory Deconfiguration of 1-core Minimum 0 Maximum 1 less than the total number of cores (For EPR5, the maximum is 7)
Architecture
IBM uses innovative methods to achieve the required levels of throughput and bandwidth. Areas of innovation for the POWER7 processor and POWER7 processor-based systems include (but are not limited to) the following elements: On-chip L3 cache that is implemented in embedded dynamic random-access memory (eDRAM) Cache hierarchy and component innovation Advances in memory subsystem Advances in off-chip signaling The superscalar POWER7 processor design also provides other capabilities: Binary compatibility with the prior generation of POWER processors Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility to and from IBM POWER6 and IBM POWER6+ processor-based systems Figure 5-49 shows the POWER7 processor die layout with major areas identified: Eight POWER7 processor cores, L2 cache, L3 cache and chip power bus interconnect, SMP links, GX++ interface, and integrated memory controller.
Figure 5-49 POWER7 processor architecture
5.5.8 Memory
Each POWER7 processor has an integrated memory controller. Industry standard DDR3 RDIMM technology is used to increase the reliability, speed, and density of the memory subsystems.
Generally, use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x2 GB). However, this configuration is not sufficient for reasonable production use of the system.
a. If 2.5-inch HDDs are installed, low-profile DIMM features cannot be used (EM04, 8145, EEME and EEMF cannot be used).
Requirement: Because of the design of the on-cover storage connections, if you want to use 2.5-inch HDDs, you must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if LP DIMMs and SAS HDDs are configured in the same system. This mixture physically obstructs the cover. Solid-state drives (SSDs) and LP DIMMs can be used together, however. For more information, see 5.5.10, Storage on page 280.
There are 16 buffered DIMM slots on the p260 and the p24L, as shown in Figure 5-50.
Figure 5-50 Memory DIMM topology (IBM Flex System p260 Compute Node)
The memory-placement rules are as follows:
Install DIMM fillers in unused DIMM slots to ensure effective cooling.
Install DIMMs in pairs. Both DIMMs in a pair must be the same size, speed, type, and technology. Otherwise, you can mix compatible DIMMs from multiple manufacturers.
Install only supported DIMMs, as described on the IBM ServerProven website: http://www.ibm.com/servers/eserver/serverproven/compat/us/
Table 5-77 shows the required placement of memory DIMMs for the p260 and the p24L, depending on the number of DIMMs installed.
Table 5-77 DIMM placement for the p260 and p24L (for each supported quantity of DIMMs, from 2 to 16, the table identifies the specific DIMM slots to populate across processor 0 and processor 1)
Figure 5-51 represents the percentage of processor that is used to compress memory for two partitions with various profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition constrained in processing power.
Figure 5-51 (chart of the percentage of CPU utilization that is used for memory expansion): curve 1 represents a partition with plenty of spare CPU resource available; curve 2 represents a partition whose CPU resource is constrained and already running at significant utilization.
Both cases show a knee of the curve relationship for processor resources that are required for memory expansion: Busy processor cores do not have resources to spare for expansion. The more memory expansion that is done, the more processor resources are required. The knee varies, depending on how compressible the memory contents are. This variability demonstrates the need for a case by case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. You can use the tool to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. Any Power System model runs the planning tool.
Figure 5-52 shows an example of the output that is returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the wanted effective memory and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.
Active Memory Expansion Modeled Statistics:
Modeled Expanded Memory Size: 8.00 GB
Expansion Factor | True Memory Modeled Size | Modeled Memory Gain | CPU Usage Estimate
1.21 | 6.75 GB | 1.25 GB [19%] | 0.00
1.31 | 6.25 GB | 1.75 GB [28%] | 0.20
1.41 | 5.75 GB | 2.25 GB [39%] | 0.35
1.51 | 5.50 GB | 2.50 GB [45%] | 0.58
1.61 | 5.00 GB | 3.00 GB [60%] | 1.46
Active Memory Expansion Recommendation:
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.
Figure 5-52 Output from the AIX Active Memory Expansion planning tool
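The memory figures that the planning tool reports can be checked with simple arithmetic: for a fixed modeled expanded memory size, the modeled gain is the expanded size minus the true memory size, and a smaller true memory size implies a larger expansion factor. The short Python sketch below reproduces that arithmetic for the rows above; it is only an illustration of the numbers, not the planning tool itself, and the expansion-factor column in the tool output uses the tool's own rounding, so only the gain figures are expected to match exactly.

```python
# Reproducing the arithmetic behind the planning-tool output above.
def ame_row(expanded_gb, true_gb):
    factor = expanded_gb / true_gb            # approximate expansion factor
    gain = expanded_gb - true_gb              # modeled memory gain
    return round(factor, 2), gain, round(gain / true_gb * 100)


for true_gb in (6.75, 6.25, 5.75, 5.50, 5.00):
    print(true_gb, ame_row(8.00, true_gb))
# The 5.50 GB row gives a 2.50 GB (45%) gain, matching the 45% expansion the
# recommendation above describes for this workload.
```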
For more information about this topic, see the white paper, Active Memory Expansion: Overview and Usage Guide, available at: http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html
5.5.10 Storage
The p260 and p24L have an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. Both 2.5-inch HDDs and 1.8-inch SSDs are supported. The drives attach to the cover of the server, as shown in Figure 5-53.
Figure 5-53 The IBM Flex System p260 Compute Node showing hard disk drive location on top cover
The local storage options are as follows (feature code, part number, description):
2.5-inch SAS HDDs:
7069 | None | Top cover with HDD connectors for the p260 and p24L
8274 | 42D0627 | 300 GB 10K RPM non-hot-swap 6 Gbps SAS
8276 | 49Y2022 | 600 GB 10K RPM non-hot-swap 6 Gbps SAS
1.8-inch SSDs:
7068 | None | Top cover with SSD connectors for the p260 and p24L
8207 | 74Y9114 | 177 GB SATA non-hot-swap SSD
No drives:
7067 | None | Top cover for no drives on the p260 and p24L
As shown in Figure 5-53 on page 280, the local drives (HDD or SSD) are mounted to the top cover of the system. When you order your p260 or p24L, select the cover that is appropriate for your system (SSD, HDD, or no drives).
The connection for the cover's drive interposer on the system board is shown in Figure 5-55.
Figure 5-55 Connection for drive interposer card mounted to the system cover
RAID capabilities
Disk drives and solid-state drives in the p260 and p24L can be used to implement and manage various types of RAID arrays. They can do so in operating systems that are on the ServerProven list. For the compute node, you must configure the RAID array through the smit sasdam command, which is the SAS RAID Disk Array Manager for AIX. The AIX Disk Array Manager is packaged with the Diagnostics utilities on the Diagnostics CD. Use smit sasdam to configure the disk drives for use with the SAS controller. The diagnostics CD can be downloaded in ISO file format from:
http://www14.software.ibm.com/webapp/set2/sas/f/diags/download/
For more information, see Using the Disk Array Manager in the Systems Hardware Information Center at:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/p7ebj/sasusingthesasdiskarraymanager.htm
Tip: Depending on your RAID configuration, you might have to create the array before you install the operating system in the compute node. Before you can create a RAID array, reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes. If you later decide to remove the drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the drives, you might need to reformat the drives and change the sector size of the drives from 528 bytes back to 512 bytes.
Consideration: There is no onboard network capability in the Power Systems compute nodes other than the FSP NIC interface. All p260, p24L, and p460 configurations must include a 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node. A typical I/O adapter card is shown in Figure 5-56.
Figure 5-56 The underside of the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
Note the large connector, which plugs into one of the I/O adapter slots on the system board. Also, notice that it has its own connection to the midplane of the Enterprise Chassis. Several of the expansion cards connect directly to the midplane such as the CFFh and HSSF form factors. Others, such as the CIOv, CFFv, SFF, and StFF form factors, do not.
PCI hubs
The I/O is controlled by two P7-IOC I/O controller hub chips. These chips provide additional flexibility when assigning resources within Virtual I/O Server (VIOS) to specific Virtual Machine/logical partitions (LPARs).
Available adapters
Table 5-79 shows the available I/O adapter cards for the p260 and p24L. All p260, p24L, and p460 configurations must include a 10 Gb (#1762 or #EC24) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.
Table 5-79 Supported I/O adapters for the p260 and p24L (feature code, description, number of ports)
1762a | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | 4
1763a | IBM Flex System EN2024 4-port 1Gb Ethernet Adapter | 4
EC24a | IBM Flex System CN4058 8-port 10Gb Converged Adapter | 8
EC26 | IBM Flex System EN4132 2-port 10Gb RoCE Adapter | 2
1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter | 2
1761 | IBM Flex System IB6132 2-port QDR InfiniBand Adapter | 2
a. At least one 10 Gb (#1762) or 1 Gb (#1763) Ethernet adapter must be configured in each server.
Anchor card
The anchor card, which is shown in Figure 5-57, contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferable from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information. The vital product data chip includes information such as system type, model, and serial number.
IBM i 6.1 with i 6.1.1 machine code, or later
IBM i 7.1, or later
Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality
Red Hat Enterprise Linux 5.7, for POWER, or later
Red Hat Enterprise Linux 6.2, for POWER, or later
VIOS 2.2.1.4, or later
The IBM Flex System p260 Compute Node (model 23X) supports the following configurations:
IBM i 6.1 with i 6.1.1 machine code or later
IBM i 7.1 or later
VIOS 2.2.2.0 or later
AIX V7.1 with the 7100-02 Technology Level or later
AIX V6.1 with the 6100-08 Technology Level or later
Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality
Red Hat Enterprise Linux 5.7, for POWER, or later
Red Hat Enterprise Linux 6.2, for POWER, or later
5.6.1 Overview
The IBM Flex System p460 Compute Node is a full-wide, Power Systems compute node. It has four POWER7 processor sockets, 32 memory slots, four I/O adapter slots, and an option for up to two internal drives for local storage.
The IBM Flex System p460 Compute Node has the specifications that are shown in Table 5-80.
Table 5-80 IBM Flex System p460 Compute Node specifications
Model numbers: 7895-42X.
Form factor: Full-wide compute node.
Chassis support: IBM Flex System Enterprise Chassis.
Processor: Four IBM POWER7 processors. Each processor contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each processor has 4 MB L3 cache per core. Integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core.
Chipset: IBM P7IOC I/O hub.
Memory: 32 DIMM sockets. RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP and VLP DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs.
Memory maximums: 512 GB using 32x 16 GB DIMMs.
Memory protection: ECC, Chipkill.
Disk drive bays: Two 2.5-inch non-hot-swap drive bays that support 2.5-inch SAS HDD or 1.8-inch SATA SSD drives. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
Maximum internal storage: 1.8 TB using two 900 GB SAS HDD drives, or 354 GB using two 177 GB SSD drives.
RAID support: RAID support by using the operating system.
Network interfaces: None standard. Optional 1 Gb or 10 Gb Ethernet adapters.
PCI Expansion slots: Four I/O connectors for adapters. PCI Express 2.0 x16 interface.
Ports: One external USB port.
Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager.
Security features: Power-on password, selectable boot sequence.
Video: None. Remote management by using Serial over LAN and IBM Flex System Manager.
Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Service and support: Optional service upgrades are available through IBM ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width: 437 mm (17.2 in.), height: 51 mm (2.0 in.), depth: 493 mm (19.4 in.).
Weight: Maximum configuration: 14.0 kg (30.6 lb).
32 DIMM slots
Figure 5-58 Layout of the IBM Flex System p460 Compute Node
The USB port on the front of the Power Systems compute nodes is useful for various tasks. These tasks include out-of-band diagnostic procedures, hardware RAID setup, operating system access to data on removable media, and local OS installation. It might be helpful to obtain a USB optical (CD or DVD) drive for these purposes, in case the need arises. Tip: There is no optical drive in the IBM Flex System Enterprise Chassis. The power-control button on the front of the server (Figure 5-46 on page 270) has these functions: When the system is fully installed in the chassis: Use this button to power the system on and off.
When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 5-60.
The LEDs on the light path panel indicate the status of the following devices:
LP: Light Path panel power indicator
S BRD: System board LED (might indicate trouble with processor or MEM)
MGMT: Flexible Support Processor (or management card) LED
D BRD: Drive (or DASD) board LED
DRV 1: Drive 1 LED (SSD 1 or HDD 1)
DRV 2: Drive 2 LED (SSD 2 or HDD 2)
ETE: Sidecar connector LED (not present on the IBM Flex System p460 Compute Node)
If problems occur, the light path diagnostics LEDs help with identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. Pressing the button temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts. You usually obtain this information from the IBM Flex System Manager or Chassis Management Module before you remove the node. However, having the LEDs helps with repairs and troubleshooting if onsite assistance is needed. For more information about the front panel and LEDs, see the IBM Flex System p260 and p460 Compute Node Installation and Service Guide available at:
http://www.ibm.com/support
Figure 5-61 IBM Flex System p460 Compute Node block diagram
The four processors in the IBM Flex System p460 Compute Node are connected in a cross-bar formation as shown in Figure 5-62.
Figure 5-62 IBM Flex System p460 Compute Node processor connectivity
5.6.6 Processor
The IBM POWER7 processor represents a leap forward in technology and associated computing capability. The multi-core architecture of the POWER7 processor is matched with a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS. Although the processor is an important component in servers, many elements and facilities must be balanced across a server to deliver maximum throughput. The design philosophy for POWER7 processor-based systems is system-wide balance, in which the POWER7 processor plays an important role. Table 5-81 defines the processor options for the p460.
Table 5-81 Processor options for the p460 (feature code, cores per POWER7 processor, number of POWER7 processors, total cores, core frequency, L3 cache size per POWER7 processor)
EPR2 | 4 | 4 | 16 | 3.3 GHz | 16 MB
EPR4 | 8 | 4 | 32 | 3.2 GHz | 32 MB
EPR6 | 8 | 4 | 32 | 3.55 GHz | 32 MB
To optimize software licensing, you can unconfigure or disable one or more cores. The feature is listed in Table 5-82.
Table 5-82 Unconfiguration of cores Feature code 2319 Description Factory Deconfiguration of 1-core Minimum 0 Maximum 1 less than the total number of cores (For EPR5, the maximum is 7)
5.6.7 Memory
Each POWER7 processor has two integrated memory controllers in the chip. Industry standard DDR3 RDIMM technology is used to increase reliability, speed, and density of memory subsystems.
Use a minimum of 2 GB of RAM per core. The functional minimum memory configuration for the system is 4 GB (2x2 GB) but that is not sufficient for reasonable production use of the system.
a. If 2.5-inch HDDs are installed, low-profile DIMM features cannot be used (EM04, 8145, EEME, and EEMF cannot be used).
Requirement: Because of the design of the on-cover storage connections, if you use SAS HDDs, you must use VLP DIMMs (4 GB or 8 GB). The cover cannot close properly if LP DIMMs and SAS hard disk drives are configured in the same system. Combining the two physically obstructs the cover from closing. For more information, see 5.5.10, Storage on page 280. There are 16 buffered DIMM slots on the p260 and the p24L, as shown in Figure 5-63. The IBM Flex System p460 Compute Node adds two more processors and 16 additional DIMM slots, which are divided evenly (eight memory slots) per processor.
The memory-placement rules are as follows:
Install DIMM fillers in unused DIMM slots to ensure efficient cooling.
Install DIMMs in pairs. Both DIMMs in a pair must be the same size, speed, type, and technology. You can mix compatible DIMMs from multiple manufacturers.
Install only supported DIMMs, as described on the IBM ServerProven website: http://www.ibm.com/servers/eserver/serverproven/compat/us/
For the IBM Flex System p460 Compute Node, Table 5-85 shows the required placement of memory DIMMs, depending on the number of DIMMs installed.
Table 5-85 DIMM placement on the IBM Flex System p460 Compute Node (for each supported quantity of DIMMs, from 2 to 32, the table identifies the specific DIMM slots to populate across processors 0 through 3)
Both cases show a knee of the curve relationship for processor resources that are required for memory expansion: Busy processor cores do not have resources to spare for expansion. The more memory expansion that is done, the more processor resources are required.
The knee varies, depending on how compressible the memory contents are. This variation demonstrates the need for a case by case study to determine whether memory expansion can provide a positive return on investment. To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4 or later. You can use this tool to sample actual workloads and estimate both how expandable the partition memory is and how much processor resource is needed. Any Power System model runs the planning tool. Figure 5-65 shows an example of the output that is returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the required effective memory, and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.
Active Memory Expansion Modeled Statistics:
Modeled Expanded Memory Size: 8.00 GB
Expansion Factor | True Memory Modeled Size | Modeled Memory Gain | CPU Usage Estimate
1.21 | 6.75 GB | 1.25 GB [19%] | 0.00
1.31 | 6.25 GB | 1.75 GB [28%] | 0.20
1.41 | 5.75 GB | 2.25 GB [39%] | 0.35
1.51 | 5.50 GB | 2.50 GB [45%] | 0.58
1.61 | 5.00 GB | 3.00 GB [60%] | 1.46
Active Memory Expansion Recommendation:
The recommended AME configuration for this workload is to configure the LPAR with a memory size of 5.50 GB and to configure a memory expansion factor of 1.51. This will result in a memory expansion of 45% from the LPAR's current memory size. With this configuration, the estimated CPU usage due to Active Memory Expansion is approximately 0.58 physical processors, and the estimated overall peak CPU resource required for the LPAR is 3.72 physical processors.
Figure 5-65 Output from the AIX Active Memory Expansion planning tool
For more information about this topic, see the white paper, Active Memory Expansion: Overview and Usage Guide, available at: http://www.ibm.com/systems/power/hardware/whitepapers/am_exp.html
5.6.9 Storage
The p460 has an onboard SAS controller that can manage up to two non-hot-pluggable internal drives. The drives attach to the cover of the server, as shown in Figure 5-66. Even though the p460 is a full-wide server, it has the same storage options as the p260 and the p24L. The type of local drives that are used impacts the form factor of your memory DIMMs. If HDDs are chosen, then only VLP DIMMs can be used because of internal spacing. There is not enough room for the 2.5-inch drives to be used with LP DIMMs (currently the 2 GB and 16 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration. The use of SSDs does not have the same limitation, so LP DIMMs can be used with SSDs.
Figure 5-66 The IBM Flex System p260 Compute Node showing hard disk drive location
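The interplay between local drive choice and DIMM form factor described above is a simple compatibility rule. The following sketch encodes it for illustration only; the function name and data shapes are invented for the example, and it is not an IBM configuration tool.

```python
# Sketch of the drive/DIMM form-factor rule for the p260, p24L, and p460:
# 2.5-inch HDDs on the cover require VLP DIMMs; 1.8-inch SSDs work with LP or VLP.

def storage_memory_compatible(drive_type, dimm_form_factor):
    """drive_type: 'HDD', 'SSD', or None; dimm_form_factor: 'LP' or 'VLP'."""
    if drive_type == "HDD":
        return dimm_form_factor == "VLP"   # LP DIMMs plus 2.5-inch HDDs obstruct the cover
    return True                            # SSDs or no drives: both form factors fit


print(storage_memory_compatible("HDD", "LP"))   # False
print(storage_memory_compatible("SSD", "LP"))   # True
```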
As shown in Figure 5-66 on page 298, the local drives (HDD or SSD) are mounted to the top cover of the system. When you order your p460, select the cover that is appropriate for your system (SSD, HDD, or no drives) as shown in Table 5-86.
Table 5-86 Local storage options Feature code Part number Description
2.5-inch SAS HDDs:
7066 | None | Top cover with HDD connectors for the IBM Flex System p460 Compute Node (full-wide)
8274 | 42D0627 | 300 GB 10K RPM non-hot-swap 6 Gbps SAS
8276 | 49Y2022 | 600 GB 10K RPM non-hot-swap 6 Gbps SAS
8311 | 81Y9654 | 900 GB 10K RPM non-hot-swap 6 Gbps SAS
1.8-inch SSDs:
7065 | None | Top cover with SSD connectors for the IBM Flex System p460 Compute Node (full-wide)
8207 | 74Y9114 | 177 GB SATA non-hot-swap SSD
No drives:
7005 | None | Top cover for no drives on the IBM Flex System p460 Compute Node (full-wide)
On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in Figure 5-67.
The connection for the cover's drive interposer on the system board is shown in Figure 5-68.
Figure 5-68 Connection for drive interposer card mounted to the system cover
Figure 5-69 The underside of the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
Note the large connector, which plugs into one of the I/O adapter slots on the system board. Also, notice that the adapter has its own connection to the midplane of the Enterprise Chassis. Several of the expansion cards, such as the CFFh and HSSF form factors, connect directly to the midplane. Others, such as the CIOv, CFFv, SFF, and StFF form factors, do not.
PCI hubs
The I/O is controlled by four P7-IOC I/O controller hub chips. This configuration provides additional flexibility when assigning resources within the VIOS to specific virtual machines (LPARs).
Available adapters
Table 5-87 shows the available I/O adapter cards for the p460. All p260, p24L, and p460 configurations must include a 10 Gb (#1762 or #EC24) or 1 Gb (#1763) Ethernet adapter in slot 1 of the compute node.
Table 5-87 Supported I/O adapters for the p460

Feature code | Description | Number of ports
1762 (a) | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | 4
1763 (a) | IBM Flex System EN2024 4-port 1Gb Ethernet Adapter | 4
EC24 (a) | IBM Flex System CN4058 8-port 10Gb Converged Adapter | 8
EC26 | IBM Flex System EN4132 2-port 10Gb RoCE Adapter | 2
1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter | 2
1761 | IBM Flex System IB6132 2-port QDR InfiniBand Adapter | 2
a. At least one 10 Gb (#1762 or #EC24) or 1 Gb (#1763) Ethernet adapter must be configured in each server.
SOL offers the following advantages:
- Remote administration without KVM (headless servers)
- Reduced cabling and no requirement for a serial concentrator
- Standard Telnet/SSH interface, eliminating the requirement for special client software

The Chassis Management Module CLI provides access to the text-console command prompt on each server through a SOL connection. You can use this configuration to manage the Power Systems compute nodes from a remote location.
Anchor card
The anchor card, which is shown in Figure 5-70, contains the vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferred from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the vital product data chip to obtain system information. The vital product data chip includes information such as system type, model, and serial number.
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later
- AIX V6.1 with the 6100-06 Technology Level, with Service Pack 8, or later
- AIX V5.3 with the 5300-12 Technology Level, with Service Pack 6, or later (Remember: AIX 5L V5.3 Service Extension is required.)
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1, or later
- Novell SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality
- Red Hat Enterprise Linux 5.7, for POWER, or later
- Red Hat Enterprise Linux 6.2, for POWER, or later
- VIOS 2.2.1.4, or later
Figure 5-71 IBM Flex System PCIe Expansion Node attached to a compute node
The ordering information for the PCIe Expansion Node is listed in Table 5-88.
Table 5-88 PCIe Expansion Node ordering number and feature code

Part number | Feature code (a) | Description
81Y8983 | A1BV | IBM Flex System PCIe Expansion Node
a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems sales channel (AAS) using e-config.
The part number includes the following items:
- IBM Flex System PCIe Expansion Node
- Two riser assemblies
- Interposer cable assembly
- Double-wide shelf
- Two auxiliary power cables (for adapters that require additional +12 V power)
- Four removable PCIe slot air flow baffles
- Documentation CD that contains the Installation and Service Guide
- Warranty information and Safety flyer and Important Notices document

The PCIe Expansion Node is supported when it is attached to the compute nodes that are listed in Table 5-89.
Table 5-89 Supported compute nodes

Part number | Description | x220 | x240 | x440 | p24L | p260 | p460
81Y8983 | PCIe Expansion Node | Y (a) | Y (a) | N | N | N | N
5.7.1 Features
The PCIe Expansion Node has the following features:
- Support for up to four standard PCIe 2.0 adapters:
  - Two PCIe 2.0 x16 slots that support full-length, full-height adapters (1x, 2x, 4x, 8x, and 16x adapters supported)
  - Two PCIe 2.0 x8 slots that support low-profile adapters (1x, 2x, 4x, and 8x adapters supported)
- Support for PCIe 3.0 adapters by operating them in PCIe 2.0 mode
- Support for one full-length, full-height double-wide adapter (consuming the space of the two full-length, full-height adapter slots)
- Support for PCIe cards with higher power requirements. The Expansion Node provides two auxiliary power connections, up to 75 W each, for a total of 150 W of additional power, using standard 2x3, +12 V six-pin power connectors. These connectors are placed on the base system board so that they can both provide power to a single adapter (up to 225 W), or to two adapters (up to 150 W each). Power cables connect from these connectors to the PCIe adapters and are included with the PCIe Expansion Node (see the power-budget sketch after Figure 5-72).
- Two Flex System I/O expansion connectors. These connectors, labeled I/O expansion 3 connector and I/O expansion 4 connector in Figure 5-75 on page 308, expand the I/O capability of the attached compute node.

Figure 5-72 shows the locations of the PCIe slots.
Figure 5-72 PCIe Expansion Node attached to a node showing the four PCIe slots
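As a rough illustration of the auxiliary power budget described in the feature list, the following Python sketch adds the standard 75 W PCIe slot allowance to the auxiliary connectors that are assigned to an adapter. It is a simple planning aid under those stated assumptions, not an IBM configuration tool.

# Rough power-budget sketch for the PCIe Expansion Node auxiliary power connectors.
# Assumes a 75 W PCIe slot allowance plus 75 W per auxiliary +12 V connector.
SLOT_POWER_W = 75          # power available from a full-height PCIe slot
AUX_CONNECTOR_W = 75       # per auxiliary +12 V six-pin connector
AUX_CONNECTORS_TOTAL = 2   # provided by the Expansion Node

def max_adapter_power(aux_connectors_used: int) -> int:
    if not 0 <= aux_connectors_used <= AUX_CONNECTORS_TOTAL:
        raise ValueError("the Expansion Node provides only two auxiliary connectors")
    return SLOT_POWER_W + aux_connectors_used * AUX_CONNECTOR_W

print(max_adapter_power(2))   # 225 W: both connectors feeding one high-power adapter
print(max_adapter_power(1))   # 150 W: one connector each when two adapters need auxiliary power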
A double wide shelf is included with the PCIe Expansion Node. The compute node and the expansion node must both be attached to the shelf, and then the interposer cable is attached, linking the two electronically. Figure 5-73 shows installation of the compute node and the PCIe Expansion Node on the shelf.
Figure 5-73 Installation of a compute node and PCIe Expansion Node on to the tray
After the compute node and PCIe Expansion Node are installed onto the shelf, an interposer cable is connected between them. This cable provides the link for the PCIe bus between the two components. This cable is shown in Figure 5-74. The cable consists of a ribbon cable with a circuit board at each end.
5.7.2 Architecture
The architecture diagram is shown in Figure 5-75.

PCIe version: All PCIe bays on the expansion node operate at PCIe 2.0.

The interposer link is a PCIe 2.0 x16 link, which is connected to the switch on the main board of the PCIe Expansion Node. This PCIe switch provides two PCIe connections for bays 1 and 2 (the full-length, full-height adapter slots) and two PCIe connections for bays 3 and 4 (the low-profile adapter slots). There are also two additional I/O adapter bays (x16) that connect into the midplane of the Enterprise Chassis. You can use these bays so that a single-wide node can take advantage of a double-wide node's I/O bandwidth to the midplane.
Figure 5-75 PCIe Expansion Node architecture (the compute node expansion connector, routed from processor 2, links through the interposer cable (PCIe 2.0 x16) to a PCIe switch in the Expansion Node, which feeds two x16 full-length, full-height slots, two x8 low-profile slots, and the I/O expansion 3 and 4 connectors)
Number of installed processors: Two processors must be installed in the compute node because the expansion connector is routed from processor 2.
a. Ports 3 and 4 require that a four-port card be installed in the expansion slot. b. Might require one or more port upgrades to be installed in the I/O module.
Table 5-91 lists the PCIe adapters that are supported in the Expansion Node. Some adapters must be installed in one of the full-height slots as noted. If the NVIDIA Tesla M2090 is installed in the Expansion Node, then an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used.
Table 5-91 Supported adapters

Part number | Feature code | Description | Maximum supported
46C9078 | A3J3 | IBM 365GB High IOPS MLC Mono Adapter (low-profile adapter) | 4
46C9081 | A3J4 | IBM 785GB High IOPS MLC Mono Adapter (low-profile adapter) | 4
81Y4519 | 5985 | 640GB High IOPS MLC Duo Adapter (full-height adapter) | 2
81Y4527 | A1NB | 1.28TB High IOPS MLC Duo Adapter (full-height adapter) | 2
90Y4377 | A3DY | IBM 1.2TB High IOPS MLC Mono Adapter (low-profile adapter) | 4
90Y4397 | A3DZ | IBM 2.4TB High IOPS MLC Duo Adapter (full-height adapter) | 2
94Y5960 | A1R4 | NVIDIA Tesla M2090 (full-height adapter) | 1 (a)
a. If the NVIDIA Tesla M2090 is installed in the Expansion Node, then an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used.
For the current list of adapters that are supported in the Expansion Node, see the IBM ServerProven site at:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

For information about the IBM High IOPS adapters, see the IBM Redbooks Product Guide IBM High IOPS SSD PCIe Adapters, TIPS0729, found at:
http://www.redbooks.ibm.com/abstracts/tips0729.html?Open

Although the design of the Expansion Node accommodates a much greater set of standard PCIe adapters, Table 5-91 lists the adapters that are supported. If the PCI Express adapter that you require is not on the ServerProven website, use the IBM ServerProven Opportunity Request for Evaluation (SPORE) process to confirm compatibility with the configuration.
Fibre Channel adapters:
- IBM Flex System FC5022 2-port 16Gb FC Adapter
- IBM Flex System FC3172 2-port 8Gb FC Adapter
- IBM Flex System FC3052 2-port 8Gb FC Adapter
For the current list of adapters that are supported in the Expansion Node, see the IBM ServerProven site at: http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html For information about these adapters, see the IBM Redbooks Product Guides for Flex System in the Adapters category: http://www.redbooks.ibm.com/portals/puresystems?Open&page=pg&cat=adapters
Figure 5-76 IBM Flex System Storage Expansion Node (right) connected to the IBM Flex System x240 Compute Node (left)
The part number includes the following items:
- The IBM Flex System Storage Expansion Node
- Expansion shelf, onto which you install the compute node and Storage Expansion Node
- IBM Warranty information booklet
- Product documentation CD that includes an installation and service guide

The following features are included:
- Sliding tray that allows access to up to 12 SAS/SATA HDDs or SSDs
- Hot-swappable drives
- Support for RAID 0, 1, 5, 6, 10, 50, and 60
- 512 MB or 1 GB cache, with cache-to-flash super capacitor offload
- An expansion shelf to physically support the Storage Expansion Node and its compute node
- Light path diagnostic lights to aid in problem determination
- Feature on Demand upgrades to add advanced features
The ordering part number for the IBM Flex System Storage Expansion Node is 68Y8588.
Two processors: Two processors must be installed in the x220 or x240 compute node because the expansion connector used to connect to the Storage Expansion Node is routed from processor 2. Figure 5-77 shows the Storage Expansion Node front view when it is attached to an x240 compute node.
Figure 5-77 Storage Expansion Node front view - attached to an x240 compute node
The Storage Expansion Node is a PCIe Generation 3 and SAS 2.1 compliant enclosure that supports up to twelve 2.5-inch drives. The drives can be HDD or SSD, and either SAS or SATA. The supported drive modes are JBOD or RAID-0, 1, 5, 6, 10, 50, and 60. The drives are accessed by opening the handle on the front of the Storage Expansion Node and sliding out the drive tray, which can be done while the unit is operational (hence the terra-cotta touch point on the front of the unit). The drive tray extended part way out, while connected to an x240 compute node, is shown in Figure 5-78. With the drive tray extended, all 12 hot-swap drives can be accessed on the left side of the tray.

Do not keep the drawer open: Depending on your operating environment, the expansion node might power off if the drawer is open for too long, and chassis fans might increase in speed. The drawer should be closed fully for proper cooling and to protect system data integrity. An LED indicates that the drawer is not closed, that it has been open too long, or that thermal thresholds have been reached.
Figure 5-78 Storage Expansion Node with drive tray part way extended
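For capacity planning across the 12 drive bays, the usable space depends on the RAID level chosen. The following Python sketch applies the standard capacity formulas for the supported RAID levels; it is an illustration only, it assumes equal-size drives in a single array, and the actual virtual-drive layout is configured on the RAID controller.

# Usable-capacity sketch for a 12-bay array at the RAID levels the Storage Expansion
# Node supports (JBOD, 0, 1/10, 5, 6, 50, 60). Standard textbook formulas; assumes
# equal-size drives in one array. Illustrative only.

def usable_capacity_gb(level: str, drives: int, drive_gb: float, span_size: int = 6) -> float:
    if level in ("JBOD", "0"):
        return drives * drive_gb
    if level in ("1", "10"):                       # mirrored pairs
        return (drives // 2) * drive_gb
    if level == "5":                               # one drive of parity
        return (drives - 1) * drive_gb
    if level == "6":                               # two drives of parity
        return (drives - 2) * drive_gb
    if level in ("50", "60"):                      # striped RAID 5/6 spans
        spans = drives // span_size
        parity_per_span = 1 if level == "50" else 2
        return spans * (span_size - parity_per_span) * drive_gb
    raise ValueError(f"unsupported RAID level: {level}")

for level in ("JBOD", "0", "10", "5", "6", "50", "60"):
    print(level, usable_capacity_gb(level, drives=12, drive_gb=900.0))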
The Storage Expansion Node is connected to the compute node through its expansion connector. Management and PCIe connections are provided by this expansion connector, as shown in the architecture diagram in Figure 5-79. Power is obtained from the enterprise chassis midplane directly, not through the compute node.
Figure 5-79 Storage Expansion Node architecture (processor 2 of the compute node connects over the expansion connector and a PCIe 3.0 x8 link to the RAID controller and its cache in the expansion node; a SAS expander fans out to the 12-drive tray and the external drive LEDs)
The LSI SAS controller in the expansion node is connected directly to the PCIe bus of Processor 2 of the compute node. The result is that the compute node sees the disks in the expansion node as locally attached. Management of the Storage Expansion Node is through the IMM2 on the compute node.
FoD upgrades are system-wide: The FoD upgrades are the same ones that are used with the ServeRAID M5115 available for use internally in the x220 and x240 compute nodes. If you have an M5115 installed in the attached compute node and installed any of these upgrades, those upgrades are automatically activated on the LSI controller in the expansion node. You do not need to purchase the FoD upgrades separately for the expansion node.

- RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This is a Feature on Demand license.
- Performance Upgrade (90Y4412): The Performance Upgrade for IBM Flex System (implemented by using the LSI MegaRAID FastPath software) provides high-performance I/O acceleration for SSD-based virtual drives by using a low-latency I/O path to increase the maximum I/O per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases. Part number 90Y4412 is a Feature on Demand license.
- SSD Caching Enabler for traditional hard disk drives (90Y4447): The SSD Caching Enabler for IBM Flex System (implemented by using LSI MegaRAID CacheCade Pro 2.0) accelerates the performance of hard disk drive (HDD) arrays with only an incremental investment in solid-state drive (SSD) technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize the I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license. This feature requires that at least one SSD drive be installed.
No support for expansion cards: Unlike the PCIe Expansion Node, the Storage Expansion Node cannot connect additional I/O expansion cards.
The front of the Storage Expansion Node has a number of LEDs on the lower right, for identification and status purposes, as shown in Figure 5-80. One of these LEDs indicates a light path fault. Internally, there are additional light path diagnostic LEDs that are used for fault identification.
Figure 5-80 LEDs on the front of the Storage Expansion Node
In addition to the LEDs described in Table 5-98, there are LEDs on each of the drive trays: a green LED indicates disk activity, and an amber LED indicates a drive fault. These LEDs can be observed when the drive tray is extended and the unit is operational. With the Storage Expansion Node removed from a chassis and its cover removed, there are internal LEDs located below the segmented cable track. A light path button there can be pressed so that any light path indications can be observed. This button operates even when the unit is not powered up, because a capacitor provides a power source to illuminate the light path LEDs. When the light path diagnostics button is pressed, the light path LED is illuminated, showing that the button is functional. If a fault is detected, the relevant LED also lights. Figure 5-81 and Table 5-99 show the various LEDs and their statuses.
Figure 5-81 Light path LEDs located below the segmented cable track

Table 5-99 Internal light path LED status

LED | Meaning
Flash/RAID adapter | There is a RAID cache card fault.
Control panel | The LED panel card is not present.
Temperature | A temperature event occurred.
Storage expansion | There is a fault on the storage expansion unit.
Light path | Verify that the light path diagnostic function, including the battery, is operating properly.
External SAS connector: There is no external SAS connector on the IBM Flex System Storage Expansion Node. The storage is internal only.
Adapter naming convention (example: EN2024): the middle two digits of the model number identify the vendor, where A = 01, so 02 = Broadcom or Brocade, 05 = Emulex, 09 = IBM, 13 = Mellanox, and 17 = QLogic.
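In other words, the vendor digits are simply the alphabetical position of the first letter of the vendor's name. A tiny Python sketch of that decoding, for illustration only:

def vendor_letter(code: int) -> str:
    # The two vendor digits encode the first letter of the vendor name (A = 01),
    # so 02 -> B (Broadcom, Brocade), 05 -> E (Emulex), 09 -> I (IBM),
    # 13 -> M (Mellanox), 17 -> Q (QLogic).
    if not 1 <= code <= 26:
        raise ValueError("vendor code must be between 01 and 26")
    return chr(ord("A") + code - 1)

def vendor_of(model: str) -> str:
    # For a model such as "EN2024", the vendor code is the middle two digits.
    return vendor_letter(int(model[3:5]))

for model in ("EN2024", "EN4054", "FC3172", "IB6132", "CN4093"):
    print(model, "->", vendor_of(model))   # B, E, Q, M, I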
I/O adapters

Part number | x-config feature code | e-config feature code (a) | Description | x220 | x240 | x440 | p24L | p260 | p460 | Page

Ethernet adapters
49Y7900 | A1BR | 1763 / A10Y | EN2024 4-port 1Gb Ethernet Adapter | Y | Y | Y | Y | Y | Y | 322
90Y3466 | A1QY | EC2D / A1QY | EN4132 2-port 10Gb Ethernet Adapter | Y | Y | Y | N | N | N | 324
None | None | 1762 / None | EN4054 4-port 10Gb Ethernet Adapter | N | N | N | Y | Y | Y | 325
90Y3554 | A1R1 | 1759 / A1R1 | CN4054 10Gb Virtual Fabric Adapter | Y | Y | Y | N | N | N | 327
90Y3558 | A1R0 | 1760 / A1R0 | CN4054 Virtual Fabric Adapter Upgrade (b) | Y | Y | Y | N | N | N | 327
None | None | EC24 / None | CN4058 8-port 10Gb Converged Adapter | N | N | N | Y | Y | Y | 330
None | None | EC26 / None | EN4132 2-port 10Gb RoCE Adapter | N | N | N | Y | Y | Y | 333

Fibre Channel adapters
69Y1938 | A1BM | 1764 / A1BM | FC3172 2-port 8Gb FC Adapter | Y | Y | Y | Y | Y | Y | 336
95Y2375 | A2N5 | EC25 / A2N5 | FC3052 2-port 8Gb FC Adapter | Y | Y | Y | N | N | N | 337
88Y6370 | A1BP | EC2B / A1BP | FC5022 2-port 16Gb FC Adapter | Y | Y | Y | N | N | N | 339

InfiniBand adapters
90Y3454 | A1QZ | EC2C / A1QZ | IB6132 2-port FDR InfiniBand Adapter | Y | Y | Y | N | N | N | 341
None | None | 1761 / None | IB6132 2-port QDR InfiniBand Adapter | N | N | N | Y | Y | Y | 342

SAS
90Y4390 | A2XW | None / A2XW | ServeRAID M5115 SAS Controller (c) | Y | Y | Y | N | N | N |
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. Requires a Feature on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade is needed per adapter.
c. Various enablement kits and Features on Demand upgrades are available for the ServeRAID M5115. For more information, see ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884, found at http://www.redbooks.ibm.com/abstracts/tips0884.html?Open.
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (e-config).
b. 1 Gb is supported on the CN4093's two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support 1 GbE speeds.
c. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru module.
d. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093, and EN4093R switches.
e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
FC5022 16Gb 24-port ESB | 90Y9356 | A2RQ / 3771 | Yes | Yes | Yes
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (e-config).
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (e-config). b. To operate at FDR speeds, the IB6131 switch needs the FDR upgrade, as described in 4.10.12, IBM Flex System IB6131 InfiniBand Switch on page 153.
Table 5-104 lists the ordering part number and feature code.
Table 5-104 IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter ordering information

Part number | HVEC feature code (x-config) | AAS feature code (e-config) (a)
49Y7900 | A1BR | 1763 / A10Y
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
Here are the supported compute nodes and switches:
- Supported compute nodes: See 5.9.3, Supported compute nodes on page 320.
- Supported switches: See 5.9.4, Supported switches on page 320.

The EN2024 4-port 1Gb Ethernet Adapter has the following features:
- Dual Broadcom BCM5718 ASICs
- Quad-port Gigabit 1000BASE-X interface
- Two PCI Express 2.0 x1 host interfaces, one per ASIC
- Full duplex (FDX) capability, enabling simultaneous transmission and reception of data on the Ethernet network
- MSI and MSI-X capabilities, with up to 17 MSI-X vectors
- I/O virtualization support for VMware NetQueue and Microsoft VMQ
- Seventeen receive queues and 16 transmit queues
- Seventeen MSI-X vectors supporting per-queue interrupt to host
- Function Level Reset (FLR)
- ECC error detection and correction on internal static random-access memory (SRAM)
- TCP, IP, and UDP checksum offload
- Large Send offload and TCP segmentation offload
- Receive-side scaling
- Virtual LANs (VLANs): IEEE 802.1q VLAN tagging
- Jumbo frames (9 KB)
- IEEE 802.3x flow control
- Statistic gathering (SNMP MIB II and Ethernet-like MIB [IEEE 802.3x, Clause 30])
- Comprehensive diagnostic and configuration software suite
- Advanced Configuration and Power Interface (ACPI) 1.1a-compliant: multiple power modes
- Wake-on-LAN (WOL) support
- Preboot Execution Environment (PXE) support
- RoHS-compliant
Figure 5-84 shows the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter.
Figure 5-84 The EN2024 4-port 1Gb Ethernet Adapter for IBM Flex System
For more information, see the IBM Redbooks Product Guide IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, TIPS0845, found at: http://www.redbooks.ibm.com/abstracts/tips0845.html?Open
Here are the supported compute nodes and switches:
- Supported x86 compute nodes: See 5.9.3, Supported compute nodes on page 320.
- Supported switches: See 5.9.4, Supported switches on page 320.

The IBM Flex System EN4132 2-port 10Gb Ethernet Adapter has the following features:
- Based on Mellanox ConnectX-3 technology
- IEEE Std. 802.3 compliant
- PCI Express 3.0 (1.1 and 2.0 compatible) through an x8 edge connector, up to 8 GTps
- 10 Gbps Ethernet
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation using Ethernet over InfiniBand (EoIB)
- RoHS-6 compliant

Figure 5-85 shows the IBM Flex System EN4132 2-port 10Gb Ethernet Adapter.
Figure 5-85 The EN4132 2-port 10Gb Ethernet Adapter for IBM Flex System
For more information, see the IBM Redbooks Product Guide IBM Flex System EN4132 2-port 10Gb Ethernet Adapter, TIPS0873, found at: http://www.redbooks.ibm.com/abstracts/tips0873.html?Open
Here are the supported compute nodes and switches:
- Supported Power Systems compute nodes: See 5.9.3, Supported compute nodes on page 320.
- Supported switches: See 5.9.4, Supported switches on page 320.

The IBM Flex System EN4054 4-port 10Gb Ethernet Adapter has the following features and specifications:
- Four-port 10 Gb Ethernet adapter
- Dual-ASIC Emulex BladeEngine 3 controller
- Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation)
- PCI Express 3.0 x8 host interface (The p260 and p460 support PCI Express 2.0 x8.)
- Full-duplex capability
- Bus-mastering support
- Direct memory access (DMA) support
- PXE support
- IPv4/IPv6 TCP and UDP checksum offload, Large send offload, Large receive offload, Receive-Side Scaling (RSS), IPv4 TCP Chimney offload, and TCP Segmentation offload
- VLAN insertion and extraction
- Jumbo frames up to 9000 bytes
- Load balancing and failover support, including adapter fault tolerance (AFT), switch fault tolerance (SFT), adaptive load balancing (ALB), teaming support, and IEEE 802.3ad
- Enhanced Ethernet (draft): Enhanced Transmission Selection (ETS) (P802.1Qaz), Priority-based Flow Control (PFC) (P802.1Qbb), and Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)
- Serial over LAN (SoL) support
- Total maximum power: 23.1 W
Figure 5-86 shows the IBM Flex System EN4054 4-port 10Gb Ethernet Adapter.
Figure 5-86 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter
For more information, see the IBM Redbooks Product Guide IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868, found at: http://www.redbooks.ibm.com/abstracts/tips0868.html?Open
Ordering information: IBM Flex System CN4054 10Gb Virtual Fabric Adapter (part number 90Y3554) and IBM Flex System CN4054 Virtual Fabric Adapter Upgrade (part number 90Y3558).
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
Here are the supported compute nodes and switches: Supported x86 compute nodes: See 5.9.3, Supported compute nodes on page 320. Supported switches: See 5.9.4, Supported switches on page 320.
The IBM Flex System CN4054 10Gb Virtual Fabric Adapter has the following features and specifications:
- Dual-ASIC Emulex BladeEngine 3 controller.
- Operates either as a 4-port 1/10 Gb Ethernet adapter, or supports up to 16 Virtual Network Interface Cards (vNICs).
- In virtual NIC (vNIC) mode, it supports:
  - Virtual port bandwidth allocation in 100 Mbps increments.
  - Up to 16 virtual ports per adapter (four per port).
  - With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, four of the 16 vNICs (one per port) support iSCSI or FCoE.
- Support for two vNIC modes: IBM Virtual Fabric Mode and Switch Independent Mode.
- Wake On LAN support.
- With the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, the adapter adds FCoE and iSCSI hardware initiator support. iSCSI support is implemented as a full offload and presents an iSCSI adapter to the operating system.
- TCP Offload Engine (TOE) support with Windows Server 2003, 2008, and 2008 R2 (TCP Chimney) and Linux. The connection and its state are passed to the TCP offload engine. Data transmit and receive is handled by the adapter. Supported by iSCSI.
- Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation).
- PCI Express 3.0 x8 host interface.
- Full-duplex capability.
- Bus-mastering support.
- DMA support.
- PXE support.
- IPv4/IPv6 TCP and UDP checksum offload: Large send offload, Large receive offload, RSS, IPv4 TCP Chimney offload, and TCP Segmentation offload.
- VLAN insertion and extraction.
- Jumbo frames up to 9000 bytes.
- Load balancing and failover support, including AFT, SFT, ALB, teaming support, and IEEE 802.3ad.
- Enhanced Ethernet (draft): Enhanced Transmission Selection (ETS) (P802.1Qaz), Priority-based Flow Control (PFC) (P802.1Qbb), and Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz).
- Serial over LAN (SoL) support.
- Total maximum power: 23.1 W.
The IBM Flex System CN4054 10Gb Virtual Fabric Adapter supports the following modes of operation:

- IBM Virtual Fabric Mode: This mode works only with an IBM Flex System Fabric EN4093 10Gb Scalable Switch installed in the chassis. In this mode, the adapter communicates with the switch module to obtain vNIC parameters by using Data Center Bridging Exchange (DCBX). A special tag within each data packet is added and later removed by the NIC and switch for each vNIC group. This tag helps maintain separation of the virtual channels. In IBM Virtual Fabric Mode, each physical port is divided into four virtual ports, providing a total of 16 virtual NICs per adapter. The default bandwidth for each vNIC is 2.5 Gbps. Bandwidth for each vNIC can be configured at the EN4093 switch from 100 Mbps to 10 Gbps, up to a total of 10 Gb per physical port. The vNICs can also be configured to have 0 bandwidth if you must allocate the available bandwidth to fewer than eight vNICs. In IBM Virtual Fabric Mode, you can change the bandwidth allocations through the EN4093 switch user interfaces without having to reboot the server. When storage protocols are enabled on the adapter by using the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, six ports are Ethernet, and two ports are either iSCSI or FCoE.

- Switch Independent vNIC Mode: This vNIC mode is supported by the IBM Flex System Fabric EN4093 10Gb Scalable Switch and by the IBM Flex System EN4091 10Gb Ethernet Pass-thru with a top-of-rack switch. Switch Independent Mode offers the same capabilities as IBM Virtual Fabric Mode in terms of the number of vNICs and the bandwidth that each can have. However, Switch Independent Mode extends the existing customer VLANs to the virtual NIC interfaces. The IEEE 802.1Q VLAN tag is essential to the separation of the vNIC groups by the NIC adapter or driver and the switch. The VLAN tags are added to the packet by the applications or drivers at each endstation rather than by the switch.

- Physical NIC (pNIC) mode: In pNIC mode, the expansion card can operate as a standard 10 Gbps or 1 Gbps 4-port Ethernet expansion card. When in pNIC mode, the expansion card functions with the IBM Flex System Fabric EN4093 10Gb Scalable Switch, the IBM Flex System EN4091 10Gb Ethernet Pass-thru with a top-of-rack switch, or the IBM Flex System EN2092 1Gb Ethernet Scalable Switch. In pNIC mode, the adapter with the CN4054 Virtual Fabric Adapter Upgrade, 90Y3558, applied operates in traditional converged network adapter (CNA) mode, with four ports of Ethernet and four ports of storage (iSCSI or FCoE) available to the operating system.
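In IBM Virtual Fabric Mode, the four vNICs carved from one physical port share that port's 10 Gb of bandwidth, allocated in 100 Mbps increments. The following Python sketch checks that a proposed per-port allocation respects those rules; it illustrates the arithmetic only, because the real allocation is configured on the EN4093 switch.

PORT_CAPACITY_MBPS = 10_000    # one 10 Gb physical port
STEP_MBPS = 100                # bandwidth is allocated in 100 Mbps increments
VNICS_PER_PORT = 4             # four vNICs per physical port

def validate_port_plan(vnic_mbps):
    # vnic_mbps: proposed bandwidth, in Mbps, for the four vNICs of one physical port
    if len(vnic_mbps) != VNICS_PER_PORT:
        raise ValueError(f"expected {VNICS_PER_PORT} vNIC values per physical port")
    for bw in vnic_mbps:
        if bw % STEP_MBPS != 0 or not 0 <= bw <= PORT_CAPACITY_MBPS:
            raise ValueError(f"invalid vNIC bandwidth: {bw} Mbps")
    if sum(vnic_mbps) > PORT_CAPACITY_MBPS:
        raise ValueError("allocations exceed the 10 Gb physical port")

validate_port_plan([2500, 2500, 2500, 2500])   # the default 2.5 Gbps per vNIC
validate_port_plan([6000, 2000, 2000, 0])      # an unused vNIC can be set to 0 bandwidth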
Figure 5-87 shows the IBM Flex System CN4054 10Gb Virtual Fabric Adapter.
Figure 5-87 The CN4054 10Gb Virtual Fabric Adapter for IBM Flex System
With the Virtual Fabric Adapter Upgrade applied, the CN4054 supports FCoE to both FC and FCoE targets. For more information, see 7.4, FCoE on page 393. For more information about the adapter, see the IBM Redbooks Product Guide IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868, found at: http://www.redbooks.ibm.com/abstracts/tips0868.html?Open
Here are the supported compute nodes and switches:
- Supported Power Systems compute nodes: See 5.9.3, Supported compute nodes on page 320.
- Supported switches: See 5.9.4, Supported switches on page 320.

Figure 5-88 shows the CN4058 8-port 10Gb Converged Adapter.
Figure 5-88 The CN4058 8-port 10Gb Converged Adapter for IBM Flex System
Features
The IBM Flex System CN4058 8-port 10Gb Converged Adapter has these features:
- Eight-port 10 Gb Ethernet adapter
- Dual-ASIC controller using the Emulex XE201 (Lancer) design
- PCI Express 2.0 x8 host interface (5 GTps)
- MSI-X support
- IBM Fabric Manager support

The adapter has these Ethernet features:
- IPv4/IPv6 TCP and UDP checksum offload, Large Send Offload (LSO), Large Receive Offload, Receive Side Scaling (RSS), and TCP Segmentation Offload (TSO)
- VLAN insertion and extraction
- Jumbo frames up to 9000 bytes
- Priority Flow Control (PFC) for Ethernet traffic
- Network boot
- Interrupt coalescing
- Load balancing and failover support, including adapter fault tolerance (AFT), switch fault tolerance (SFT), adaptive load balancing (ALB), link aggregation, and IEEE 802.1AX

The adapter has these FCoE features:
- Common driver for CNAs and HBAs
- 3,500 N_Port ID Virtualization (NPIV) interfaces (total for the adapter)
- Support for FIP and FCoE Ether Types
- Fabric Provided MAC Addressing (FPMA) support
- 2048 concurrent port logins (RPIs) per port
- 1024 active exchanges (XRIs) per port

iSCSI support: The CN4058 does not support iSCSI hardware offload.

The adapter supports the following IEEE standards:
- PCI Express base spec 2.0, PCI Bus Power Management Interface rev. 1.2, and Advanced Error Reporting (AER)
- IEEE 802.3ap (Ethernet over Backplane)
- IEEE 802.1q (VLAN)
- IEEE 802.1p (QoS/CoS)
- IEEE 802.1AX (Link Aggregation)
- IEEE 802.3x (Flow Control)
- Enhanced I/O Error Handling (EEH)
- Enhanced Transmission Selection (ETS) (P802.1Qaz)
- Priority-based Flow Control (PFC) (P802.1Qbb)
- Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX, and CEE-DCBX (P802.1Qaz)
Table 5-109 Supported switches

Switches and switch upgrades | Ports active (a)
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch (#ESW2) + CN4093 10Gb Converged Scalable Switch (Upgrade 1) (#ESU1) + CN4093 10Gb Converged Scalable Switch (Upgrade 2) (#ESU2) | 6
IBM Flex System Fabric EN4093R 10Gb Scalable Switch (#ESW7) + EN4093 10Gb Scalable Switch (Upgrade 1) (#3596) + EN4093 10Gb Scalable Switch (Upgrade 2) (#3597) | 6
IBM Flex System Fabric EN4093 10Gb Scalable Switch (#3593) + EN4093 10Gb Scalable Switch (Upgrade 1) (#3596) + EN4093 10Gb Scalable Switch (Upgrade 2) (#3597) | 6
IBM Flex System EN4091 10Gb Ethernet Pass-thru (#3700) | 2
IBM Flex System EN2092 1Gb Ethernet Scalable Switch (#3598) + EN2092 1Gb Ethernet Scalable Switch (Upgrade 1) (#3594) | 4
a. This column indicates the number of adapter ports that are active if all the upgrades are installed. See the following list for details.
To take advantage of the capabilities of the CN4058 adapter, upgrade the I/O modules as follows to maximize the number of active internal ports:
- For the CN4093, EN4093, and EN4093R switches: Upgrade 1 and Upgrade 2 are both required, as indicated in Table 5-109 on page 332, for these switches to use six ports on the adapter. If only Upgrade 1 is applied, only four ports per adapter are connected. If neither upgrade is applied, only two ports per adapter are connected.
- For the EN4091 Pass-thru: The EN4091 Pass-thru has only 14 internal ports and therefore supports only ports 1 and 2 of the adapter.
- For the EN2092: Upgrade 1 of the EN2092 is required, as indicated in Table 5-109 on page 332, to use four ports of the adapter. If Upgrade 1 is not applied, only two ports per adapter are connected.
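The number of adapter ports that actually connect therefore depends on the I/O module and the upgrades installed. The following Python lookup is a small sketch of the combinations described above, for illustration only:

def cn4058_active_ports(switch, upgrade1=False, upgrade2=False):
    # Active internal ports per CN4058 adapter, by switch module and installed
    # Feature on Demand upgrades, as described in the text above.
    if switch in ("CN4093", "EN4093", "EN4093R"):
        if upgrade1 and upgrade2:
            return 6          # both upgrades: six of the eight adapter ports connect
        return 4 if upgrade1 else 2
    if switch == "EN4091":    # pass-thru has only 14 internal ports: ports 1 and 2 only
        return 2
    if switch == "EN2092":
        return 4 if upgrade1 else 2
    raise ValueError(f"unknown I/O module: {switch}")

print(cn4058_active_ports("EN4093R", upgrade1=True, upgrade2=True))   # 6
print(cn4058_active_ports("EN2092"))                                  # 2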
FCoE support
The CN4058 supports FCoE to both FC and FCoE targets. For more information, see 7.4, FCoE on page 393.
Clustered IBM DB2 databases, web infrastructure, and high frequency trading are just a few applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response, and more users per server. This adapter improves network performance by increasing available bandwidth while it decreases the associated transport load on the processor. Table 5-110 lists the ordering part number and feature code.
Table 5-110 Ordering information

Part number | HVEC feature code (x-config) | AAS feature code (e-config) (a) | Description
None | None | EC26 / None | EN4132 2-port 10Gb RoCE Adapter
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
Here are the supported compute nodes and switches: Supported Power Systems compute nodes: See 5.9.3, Supported compute nodes on page 320. Supported switches: See 5.9.4, Supported switches on page 320. Figure 5-89 shows the EN4132 2-port 10Gb RoCE Adapter.
Features
The IBM Flex System EN4132 2-port 10Gb RoCE Adapter has the following features:

- RDMA over Converged Ethernet (RoCE): The EN4132 2-port 10Gb RoCE Adapter, which is based on Mellanox ConnectX-2 technology, uses the InfiniBand Trade Association's RDMA over Converged Ethernet (RoCE) technology to deliver similar low latency and high performance over Ethernet networks. Using Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. The RoCE software stack maintains existing and future compatibility with bandwidth and latency-sensitive applications. With link-level interoperability in the existing Ethernet infrastructure, network administrators can use existing data center fabric management solutions.

- Sockets acceleration: Applications using TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10 GbE adapters. The hardware-based stateless offload engines in ConnectX-2 reduce the processor impact of IP packet transport, allowing more processor cycles to work on the application.

- I/O virtualization: ConnectX-2 with Virtual Intelligent Queuing (Virtual-IQ) technology provides dedicated adapter resources and ensured isolation and protection for virtual machines within the server. I/O virtualization with ConnectX-2 gives data center managers better server usage while it reduces cost, power, and cable complexity.
Specifications
The IBM Flex System EN4132 2-port 10Gb RoCE Adapter has the following specifications (based on Mellanox ConnectX-2 technology):
- PCI Express 2.0 (1.1 compatible) through an x8 edge connector with up to 5 GTps
- 10 Gbps Ethernet
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- RDMA over Converged Ethernet (RoCE)
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation (EoIB)
- 128 MAC/VLAN addresses per port
- RoHS-6 compliant

The adapter meets the following IEEE specifications:
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3ad Link Aggregation and Failover
- IEEE 802.3az Energy Efficient Ethernet
- IEEE 802.1Q, .1p VLAN tags and priority
- IEEE 802.1Qau Congestion Notification
- IEEE P802.1Qbb D1.0 Priority-based Flow Control
- IEEE 1588 Precision Clock Synchronization
- Jumbo frame support (10 KB)
Here are the supported compute nodes and switches:
- Supported compute nodes: See 5.9.3, Supported compute nodes on page 320.
- Supported switches: See 5.9.4, Supported switches on page 320.

The IBM Flex System FC3172 2-port 8Gb FC Adapter has the following features:
- QLogic ISP2532 controller
- PCI Express 2.0 x4 host interface
- Bandwidth: 8 Gb per second maximum at half-duplex and 16 Gb per second maximum at full-duplex per port
- 8/4/2 Gbps auto-negotiation
- Support for FCP SCSI initiator and target operation
- Support for full-duplex operation
- Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet Protocol (FCP-IP)
- Support for point-to-point fabric connection (F-port fabric login)
- Support for Fibre Channel Arbitrated Loop (FC-AL) public loop profile: FL-Port login
- Support for Fibre Channel services class 2 and 3
- Configuration and boot support in UEFI
- Power usage: 3.7 W typical
- RoHS 6 compliant
Figure 5-90 shows the IBM Flex System FC3172 2-port 8Gb FC Adapter.
Figure 5-90 The IBM Flex System FC3172 2-port 8Gb FC Adapter
For more information, see the IBM Redbooks Product Guide IBM Flex System FC3172 2-port 8Gb FC Adapter, TIPS0867, found at: http://www.redbooks.ibm.com/abstracts/tips0867.html?Open
Here are the supported compute nodes and switches: Supported x86 compute nodes: See 5.9.3, Supported compute nodes on page 320. Supported switches: See 5.9.4, Supported switches on page 320.
The IBM Flex System FC3052 2-port 8Gb FC Adapter has the following features and specifications:
- Uses the Emulex Saturn 8 Gb Fibre Channel I/O Controller chip
- Multifunction PCIe 2.0 device with two independent FC ports
- Auto-negotiation between 2-Gbps, 4-Gbps, and 8-Gbps FC link attachments
- Complies with the PCIe base and CEM 2.0 specifications
- Enablement of high-speed and dual-port connection to a Fibre Channel SAN
- Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric
- Simplified installation and configuration by using common HBA drivers
- Common driver model that eases management and enables upgrades independent of HBA firmware
- Fibre Channel specifications:
  - Bandwidth: Burst transfer rate of up to 1600 MBps full-duplex per port
  - Support for point-to-point fabric connection: F-Port Fabric Login
  - Support for FC-AL and FC-AL-2 FL-Port Login
  - Support for Fibre Channel services class 2 and 3
- Single-chip design with two independent 8 Gbps serial Fibre Channel ports, each of which provides these features:
  - Reduced instruction set computer (RISC) processor
  - Integrated serializer/deserializer
  - Receive DMA sequencer
  - Frame buffer
- Onboard DMA: DMA controller for each port (transmit and receive)
- Frame buffer first in, first out (FIFO): Integrated transmit and receive frame buffer for each data channel
Figure 5-91 shows the IBM Flex System FC3052 2-port 8Gb FC Adapter.
For more information, see the IBM Redbooks Product Guide IBM Flex System FC3052 2-port 8Gb FC Adapter, TIPS0869, found at: http://www.redbooks.ibm.com/abstracts/tips0869.html?Open
Here are the supported compute nodes and switches: Supported x86 compute nodes: See 5.9.3, Supported compute nodes on page 320. Supported switches: See 5.9.4, Supported switches on page 320.
The IBM Flex System FC5022 2-port 16Gb FC Adapter has the following features:
- 16 Gbps Fibre Channel:
  - Uses 16 Gbps bandwidth to eliminate internal oversubscription
  - Investment protection with the latest Fibre Channel technologies
  - Reduces the number of ISL external switch ports, optics, cables, and power
- Over 500,000 IOPS per port, which maximizes transaction performance and the density of VMs per compute node. Achieves performance of 315,000 IOPS for email exchange and 205,000 IOPS for SQL Database.
- Boot from SAN: automated SAN boot LUN discovery simplifies boot from SAN and reduces image management complexity.
- Brocade Server Application Optimization (SAO) provides quality of service (QoS) levels assignable to VM applications.
- Direct I/O enables native (direct) I/O performance by allowing VMs to bypass the hypervisor and communicate directly with the adapter.
- Brocade Network Advisor simplifies and unifies the management of Brocade adapter, SAN, and LAN resources through a single user interface.
- LUN Masking, an initiator-based LUN masking for storage traffic isolation.
- NPIV allows multiple host initiator N_Ports to share a single physical N_Port, dramatically reducing SAN hardware requirements.
- Target Rate Limiting (TRL) throttles data traffic when accessing slower speed storage targets to avoid back pressure problems.
- RoHS-6 compliant.

Figure 5-92 shows the IBM Flex System FC5022 2-port 16Gb FC Adapter.
For more information, see the IBM Redbooks Product Guide IBM Flex System FC5022 2-port 16Gb FC Adapter, TIPS0891, found at: http://www.redbooks.ibm.com/abstracts/tips0891.html?Open
Here are the supported compute nodes and switches:
- Supported x86 compute nodes: See 5.9.3, Supported compute nodes on page 320.
- Supported switches: See 5.9.4, Supported switches on page 320.

The IB6132 2-port FDR InfiniBand Adapter has the following features and specifications:
- Based on Mellanox ConnectX-3 technology
- Virtual Protocol Interconnect (VPI)
- InfiniBand Architecture Specification V1.2.1 compliant
- Supported InfiniBand speeds (auto-negotiated): 1X/2X/4X SDR (2.5 Gbps per lane), DDR (5 Gbps per lane), QDR (10 Gbps per lane), FDR10 (40 Gbps, 10 Gbps per lane), and FDR (56 Gbps, 14 Gbps per lane)
- IEEE Std. 802.3 compliant
- PCI Express 3.0 x8 host interface, up to 8 GTps bandwidth
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- Unified Extensible Firmware Interface (UEFI)
- WoL
- RoCE
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Ethernet encapsulation (EoIB)
- RoHS-6 compliant
- Power consumption: typical 9.01 W, maximum 10.78 W

Figure 5-93 shows the IBM Flex System IB6132 2-port FDR InfiniBand Adapter.
Figure 5-93 IBM Flex System IB6132 2-port FDR InfiniBand Adapter
For more information, see the IBM Redbooks Product Guide IBM Flex System IB6132 2-port FDR InfiniBand Adapter, TIPS0872, found at: http://www.redbooks.ibm.com/abstracts/tips0872.html?Open
Table 5-115 lists the ordering part number and feature code.
Table 5-115 IBM Flex System IB6132 2-port QDR InfiniBand Adapter ordering information

Part number | HVEC feature code (x-config) | AAS feature code (e-config) (a) | Description
None | None | 1761 / None | IB6132 2-port QDR InfiniBand Adapter
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
Here are the supported compute nodes and switches:
- Supported Power Systems compute nodes: See 5.9.3, Supported compute nodes on page 320.
- Supported switches: See 5.9.4, Supported switches on page 320.

The IBM Flex System IB6132 2-port QDR InfiniBand Adapter has the following features and specifications:
- Based on Mellanox ConnectX-2 technology
- Virtual Protocol Interconnect (VPI)
- InfiniBand Architecture Specification v1.2.1 compliant
- IEEE Std. 802.3 compliant
- PCI Express 2.0 (1.1 compatible) through an x8 edge connector, up to 5 GTps
- Processor offload of transport operations
- CORE-Direct application offload
- GPUDirect application offload
- UEFI
- WoL
- RoCE
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- RoHS-6 compliant

Figure 5-94 shows the IBM Flex System IB6132 2-port QDR InfiniBand Adapter.
Figure 5-94 IBM Flex System IB6132 2-port QDR InfiniBand Adapter
For more information, see the IBM Redbooks Product Guide IBM Flex System IB6132 2-port QDR InfiniBand Adapter, TIPS0890, found at: http://www.redbooks.ibm.com/abstracts/tips0890.html?Open
Chapter 6.
Network integration
This chapter describes different aspects of planning and implementing a network infrastructure for the IBM Flex System Enterprise Chassis. You must take several factors into account to achieve a successful implementation. These factors include network management, performance, high-availability and redundancy features, VLAN implementation, interoperability, and others.

This chapter includes the following sections:
- 6.1, Ethernet switch module selection on page 346
- 6.2, Scalable switches on page 347
- 6.3, VLAN on page 348
- 6.4, High availability and redundancy on page 349
- 6.5, Performance on page 354
- 6.6, IBM switch stacking on page 356
- 6.8, VMready on page 359
The Ethernet switch modules differ in the following selection criteria:
- Gigabit Ethernet to nodes / 10 Gb Ethernet uplinks
- 10 Gb Ethernet to nodes / 10 Gb Ethernet uplinks
- 40 Gb Ethernet uplinks
- Basic Layer 2 switching (VLAN, port aggregation)
- Advanced Layer 2 switching: IEEE features (Failover, QoS)
- Layer 3 IPv4 switching (forwarding, routing, ACL filtering)
- Layer 3 IPv6 switching (forwarding, routing, ACL filtering)
- 10 Gb Ethernet CEE/FCoE
- Omni Ports (configurable as 4/8 FC or 10 GbE)
- Switch stacking
- Switch stacking with FCoE
- vNIC support
- UFP support
- 802.1Qbg support
- VMready
The EN4093 is logically divided into three partitions of fourteen internal 10 Gb ports each (42 10 Gb KR lanes in total), plus a pool of uplink ports:
- Base switch: enables fourteen internal 10 Gb ports (one to each server) and ten external 10 Gb ports, and supports the 2-port 10 Gb LOM and Virtual Fabric capability.
- First upgrade (via FoD): enables the second set of fourteen internal 10 Gb ports (one to each server) and two 40 Gb ports; each 40 Gb port can be used as four 10 Gb ports; supports the 4-port Virtual Fabric adapter.
- Second upgrade (via FoD): enables the third set of fourteen internal 10 Gb ports (one to each server) and four external 10 Gb ports; capable of supporting a six-port card in the future.
Figure 6-1 Logical partitions for the IBM Flex System Fabric EN4093 10Gb Scalable Switch
Figure 6-2 shows a node using a two port LAN on Motherboard (LOM). Port 1 is connected to the first switch. The second port is connected to the second switch.
Figure 6-3 shows a 4-port 10 Gb Ethernet adapter (IBM Flex System CN4054 10Gb Virtual Fabric Adapter) and a 2-port Fibre Channel (FC) I/O Adapter (IBM Flex System FC5022 2-port 16Gb FC Adapter). These adapters deliver six fabrics to each node.
Figure 6-3 Six port connections providing a six-fabric implementation of Ethernet combined with FC
6.3 VLAN
VLANs are commonly used in the Layer 2 network to split up groups of network users into manageable broadcast domains. They are also used to create logical segmentation of workgroups, and to enforce security policies among logical segments. VLAN considerations include the number and types of VLANs supported, tagging protocols supported, and configuration protocols implemented. All switch modules for Enterprise Chassis support the 802.1Q protocol for VLAN tagging. Another use of 802.1Q VLAN tagging is to divide one physical Ethernet interface into several logical interfaces that belong to different VLANs. In other words, a compute node can send and receive tagged traffic from different VLANs on the same physical interface. This process can be done with network adapter management software. This software is the same as used for network interface card (NIC) teaming, as described in 6.5.3, NIC teaming on page 355. Each logical interface displays as a separate network adapter in the operating system with its own set of characteristics. These characteristics include IP addresses, protocols, and services.
Use several logical interfaces when an application requires more than two separate interfaces and you do not want to dedicate a whole physical interface to it. This configuration might be the case if you do not have enough interfaces or have low traffic. VLANs might also be helpful if you must implement strict security policies for separating network traffic. Implementing such policies with VLANs might eliminate the need to implement Layer 3 routing in the network. To ensure that the application supports logical interfaces, check the documentation for possible restrictions that are applied to the NIC teaming configurations. Checking documentation is especially important in clustering solution implementations. For more information about Ethernet switch modules available with the Enterprise Chassis, see 4.10, I/O modules on page 114.
- Virtual Router Redundancy Protocol (VRRP)
- Routing protocols such as Routing Information Protocol (RIP) or Open Shortest Path First (OSPF)
Figure 6-4 IBM redundant paths
Topology 1 in Figure 6-4 has each switch module in the Enterprise Chassis directly connected to one of the top-of-rack switches through aggregation links that use some of the external ports on the switch. The specific number of external ports that are used for link aggregation depends on your redundancy requirements, performance considerations, and the real network environment. This topology is the simplest way to integrate the Enterprise Chassis into an existing network, or to build a new one. Topology 2 in Figure 6-4 has each switch module in the Enterprise Chassis with two direct connections to a pair of top-of-rack switches. This topology is more advanced, and has a higher level of redundancy. However, protocols such as Spanning Tree Protocol or Virtual Link Aggregation Groups must be implemented. Otherwise, network loops and broadcast storms might cause network failures.
Look at topology 1 in Figure 6-4 on page 350. Assume that NIC teaming is on, and that the compute node NIC port that is connected to switch 1 is active and the other is on standby. If something goes wrong with the internal link to switch 1, the teaming driver detects the NIC port failure and runs a failover. However, if an external connection is lost, such as the connection from Enterprise Chassis switch 1 to top-of-rack switch 1, nothing happens: there is no failover, because the internal link is still up and the teaming driver does not detect any failure. Therefore, the network service becomes unavailable. To address this issue, use the Layer 2 Failover technique. Layer 2 Failover can disable all internal ports on the switch module in the case of an upstream link failure. A disabled port means no link, so the NIC teaming driver runs a failover. This process is a special feature that is supported on Enterprise Chassis switch modules. If Layer 2 Failover is enabled and you lose connectivity with top-of-rack switch 1, the NIC teaming driver runs a failover. Service is then available through top-of-rack switch 2 and Enterprise Chassis switch 2. Use Layer 2 Failover with NIC active/standby teaming. Before you use NIC teaming, verify whether it is supported by the operating system and the applications that are deployed.

Remember: Generally, do not use automatic failback for NIC teaming, to avoid issues when you replace a failed switch module. A newly installed switch module has no configuration data, and can cause service disruption.
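A minimal sketch of this interaction in Python, using hypothetical class and attribute names (this is an illustration of the logic, not switch firmware or teaming driver code): when the monitored uplinks fail, the switch with Layer 2 Failover enabled disables its internal ports, the teaming driver sees the link drop, and traffic moves to the standby path.

class ChassisSwitch:
    # Hypothetical model of an Enterprise Chassis switch module with Layer 2 Failover.
    def __init__(self, name, failover_enabled):
        self.name = name
        self.failover_enabled = failover_enabled
        self.uplinks_ok = True
        self.internal_ports_up = True

    def uplink_failure(self):
        self.uplinks_ok = False
        if self.failover_enabled:
            # Layer 2 Failover: take down the internal ports so servers see link loss
            self.internal_ports_up = False

class NicTeam:
    # Hypothetical model of an active/standby NIC team on a compute node.
    def __init__(self, active, standby):
        self.active, self.standby = active, standby

    def path(self):
        # The teaming driver fails over only when the active NIC loses link.
        if self.active.internal_ports_up:
            return f"traffic via {self.active.name}"
        return f"failover: traffic via {self.standby.name}"

sw1 = ChassisSwitch("chassis switch 1", failover_enabled=True)
sw2 = ChassisSwitch("chassis switch 2", failover_enabled=True)
team = NicTeam(active=sw1, standby=sw2)

sw1.uplink_failure()   # lose the uplink to top-of-rack switch 1
print(team.path())     # failover: traffic via chassis switch 2

With failover_enabled set to False, the uplink failure would leave the internal ports up, and the team would keep sending traffic into the dead path, which is exactly the problem described above.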
A switch in the access layer can be connected to more than one switch in the aggregation layer to provide network redundancy. Typically, STP is used to prevent broadcast loops, blocking redundant uplink paths. This protocol has the unwanted consequence of reducing the available bandwidth between the layers by as much as 50%. In addition, STP might be slow to resolve topology changes that occur during a link failure, and can result in considerable Media Access Control (MAC) address flooding. Using Virtual Link Aggregation Groups (VLAGs), the redundant uplinks remain active using all available bandwidth. Using the VLAG feature, the paired VLAG peers display to the downstream device as a single virtual entity for establishing a multi-port trunk. The VLAG-capable switches synchronize their logical view of the access layer port structure and internally prevent implicit loops. The VLAG topology also responds more quickly to link failure, and does not result in unnecessary MAC address flooding. VLAGs are also useful in multi-layer environments for both uplink and downlink redundancy to any regular LAG-capable device as shown in Figure 6-6.
Figure 6-6 VLAGs in a multi-layer environment (multiple VLAGs span the VLAG peers, with downlinks to servers and to an LACP-capable server)
VRRP enables redundant router configurations within a LAN, providing alternative router paths for a host to eliminate a single point of failure within a network. Each participating routing device with the VRRP function is configured with the same virtual router IPv4 address and ID number. One of the routing devices is elected as the master router and controls the shared virtual router IPv4 address. If the master fails, one of the backup routing devices takes control of the virtual router IPv4 address and actively processes the traffic that is addressed to it. Currently, the switch modules use VRRP version 2, which supports only the IPv4 protocol. VRRP version 3 is defined in RFC 5798 and introduces support for IPv6 in addition to IPv4. However, the IPv6 implementation is not yet mature, so the current switch operating systems do not support IPv6 for VRRP. All IBM Flex System Ethernet switches support the VRRP function.
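To make the protocol concepts concrete, the following configuration is a minimal sketch of a VRRPv2 instance using keepalived, a common Linux implementation of VRRP. It is shown only to illustrate the virtual router ID, the priority-based master election, and the shared virtual IPv4 address; the IBM Flex System switch modules are configured through their own management interfaces, and the interface name, router ID, and addresses here are assumptions.

# /etc/keepalived/keepalived.conf (illustrative only)
vrrp_instance UPLINK_GATEWAY {
    state MASTER              # this router starts as the master
    interface eth0            # interface that carries VRRP advertisements
    virtual_router_id 10      # must match on all routers in the group
    priority 110              # highest priority wins the master election
    advert_int 1              # advertisement interval in seconds
    virtual_ipaddress {
        192.0.2.1/24          # shared virtual router IPv4 address
    }
}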
6.5 Performance
Another major topic to be considered during network planning is network performance. Planning network performance is a complicated task, so the following sections provide guidance about the performance features of IBM Flex System network infrastructures. The commonly used features include link aggregation, jumbo frames, NIC Teaming, and network or server load balancing.
6.5.1 Trunking
Trunking (also commonly referred to as EtherChannel on Cisco switches) is a simple way to acquire more network bandwidth between switches. Trunking is a technique that combines several physical links into one logical link to get more bandwidth. A trunk group also provides some level of redundancy for its physical links. That is, if one of the physical links in the trunk group fails, traffic is redistributed across the remaining functional links. There are two main ways of establishing a trunk group: static and dynamic. Static trunk groups can be used in most situations without any limitations, and they are simple and easy to manage. For dynamic trunk groups, the widely used protocol is Link Aggregation Control Protocol (LACP). All IBM Flex System Ethernet switches support trunking.
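The server-side counterpart of a dynamic trunk is an LACP (802.3ad) bond. The following commands are a minimal sketch for a Linux host only; the interface names are assumptions, and the switch ports that the host connects to must also be configured for LACP through the switch's own management interface.

# Create a dynamic (LACP, 802.3ad) aggregation of two ports (eth2/eth3 assumed)
ip link add bond1 type bond mode 802.3ad miimon 100
ip link set eth2 down
ip link set eth2 master bond1
ip link set eth3 down
ip link set eth3 master bond1
ip link set bond1 up

# Verify that the LACP aggregator formed and that both member links are active
cat /proc/net/bonding/bond1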
Besides performance, Server Load Balancing also provides high availability by redistributing client requests to the operational servers if any server or application fails. Server Load Balancing uses a virtual server concept that is similar to the virtual router concept. Together with VRRP, it can provide an even higher level of availability for network applications. VRRP and Server Load Balancing can also be used for inter-chassis redundancy and even disaster recovery solutions.
The recommended stacking topology is a bidirectional ring (see Figure 6-7). To achieve this topology, two external 10 Gb Ethernet ports on each switch must be reserved for stacking. By default, the first two 10Gb Ethernet ports are used.
For enhanced redundancy when you create port trunks, include ports from different stack members in the trunks.

When switches operate in stacking mode, the following features are not supported:
Active Multipath Protocol (AMP)
sFlow port monitoring
Uni-Directional Link Detection (UDLD)
Port flood blocking
BCM rate control
Link Layer Detection Protocol (LLDP)
Protocol-based VLANs
Routing protocols (RIP, OSPF, OSPFv3, and BGP)
IPv6
Virtual Router Redundancy Protocol (VRRP)
Loopback interfaces
Router IDs
Route maps
MAC address notification
Static MAC address adding
Static multicast
Multiple Spanning Tree Protocol (MSTP)
Internet Group Management Protocol (IGMP) Relay and IGMPv3
Virtual NICs
Converged Enhanced Ethernet (CEE)
Fibre Channel over Ethernet (FCoE)
IBM Virtual Fabric mode: dedicated uplink mode versus shared uplink mode

Capability                                          Dedicated uplink   Shared uplink
Supports changing rate dynamically                  Yes                Yes
Requires a dedicated uplink per vNIC group          Yes                No
Support for node OS-based tagging                   Yes                No
Support for failover per vNIC group                 Yes                Yes
Support for more than one uplink per vNIC group     No                 Yes
6.8 VMready
VMready is a unique solution that enables the network to be virtual machine aware. The network can be configured and managed for virtual ports (v-ports) rather than just for physical ports. VMready allows for a define-once-use-many configuration. That means the network attributes are bundled with a v-port. The v-port belongs to a VM, and is movable. Wherever the VM migrates, even to a different physical host, the network attributes of the v-port remain the same.
The hypervisor manages the various virtual entities (VEs) on the host server: virtual machines (VMs), virtual switches, and so on. Currently, the VMready function supports up to 2048 VEs in a virtualized data center environment. The switch automatically discovers the VEs that are attached to switch ports, and distinguishes between regular VMs, Service Console Interfaces, and Kernel/Management Interfaces in a VMware environment.

VEs can be placed into VM groups on the switch to define communication boundaries. VEs in the same VM group can communicate with each other, whereas VEs in different groups cannot. VM groups also allow for configuring group-level settings such as virtualization policies and access control lists (ACLs). The administrator can also pre-provision VEs by adding their MAC addresses (or their IPv4 addresses or VM names in a VMware environment) to a VM group. When a VE with a pre-provisioned MAC address becomes connected to the switch, the switch automatically applies the appropriate group membership configuration. In addition, VMready together with IBM NMotion allows seamless migration and failover of VMs to different hypervisor hosts while preserving network connectivity configurations.

VMready works with all major virtualization products, including VMware, Hyper-V, Xen, KVM, and Oracle VM, without modification of the virtualization hypervisors or guest operating systems. A VMready switch can also connect to a virtualization management server to collect configuration information about associated VEs, and can automatically push VM group configuration profiles to the virtualization management server. This process in turn configures the hypervisors and VEs, providing enhanced VE mobility. All IBM Flex System Ethernet switches support VMready.
Chapter 7.
Storage integration
IBM Flex System Enterprise Chassis offers several possibilities for integration into storage infrastructures, such as Fibre Channel, iSCSI, and Converged Enhanced Ethernet. This chapter addresses major considerations to take into account during IBM Flex System Enterprise Chassis storage infrastructure planning. These considerations include storage system interoperability, I/O module selection and interoperability rules, performance, high availability and redundancy, backup, and Boot from SAN. This chapter covers both internal and external storage.

This chapter includes the following sections:
7.1, IBM Flex System V7000 Storage Node on page 362
7.2, External storage on page 381
7.3, Fibre Channel on page 388
7.4, FCoE on page 393
7.5, iSCSI on page 394
7.6, High availability and redundancy on page 396
7.7, Performance on page 397
7.8, Backup solutions on page 398
7.9, Boot from SAN on page 400
Figure 7-2 shows a V7000 Storage Node installed within an Enterprise Chassis. Power, management, and I/O connectors are provided by the chassis midplane.
The V7000 Storage Node offers the following features:
Physical chassis Plug and Play integration
Automated deployment and discovery
Integration into the Flex System Manager Chassis map
FCoE-optimized offering (plus FC and iSCSI)
Advanced storage efficiency capabilities: thin provisioning, IBM FlashCopy, IBM Easy Tier, IBM Real-time Compression, and nondisruptive migration
External virtualization for rapid data center integration
Metro and Global Mirror for multi-site recovery
Scalable up to 240 SFF drives (HDD and SSD); clustered systems support up to 960 SFF drives
Support for Flex System compute nodes across multiple chassis

The functionality is broadly comparable to the external Storwize V7000 product. Table 7-1 compares the two products in more detail.
Table 7-1 IBM Storwize V7000 versus IBM Flex System V7000 Storage Node

Management software:
IBM Storwize V7000: Storwize V7000 and Storwize V7000 Unified
IBM Flex System V7000 Storage Node: Flex System Manager (integrated server, storage, and networking management) and the Flex System V7000 management GUI (detailed storage setup)

User interface:
Both: Graphical user interface (GUI)

Maximum drives:
Both: 240 per Control Enclosure; 960 per clustered system

Form factor and attachment:
IBM Storwize V7000: SAN-attached 8 Gbps FC, 1 Gbps iSCSI, and optional iSCSI/FCoE; NAS-attached 1 Gbps Ethernet (Storwize V7000 Unified)
IBM Flex System V7000 Storage Node: Physically integrated into the IBM Flex System chassis; SAN-attached 8 Gbps FC, 10 Gbps iSCSI/FCoE

Cache per controller / enclosure / clustered system:
Both: 8 GB / 16 GB / 64 GB

Integrated features:
Both: IBM System Storage Easy Tier, FlashCopy, and thin provisioning

Mirroring:
Both: Metro Mirror and Global Mirror

Virtualization (internal and external) and data migration:
Both: Yes

Compression:
Both: Yes

Unified support:
IBM Storwize V7000: NAS connectivity that is supported by Storwize V7000 Unified; IBM Active Cloud Engine integrated
IBM Flex System V7000 Storage Node: No

External management software:
IBM Storwize V7000: IBM Tivoli Storage Productivity Center Select, IBM Tivoli Storage Manager, and IBM Tivoli Storage Manager FastBack
IBM Flex System V7000 Storage Node: IBM Tivoli Storage Productivity Center Select integrated into Flex System Manager; Tivoli Storage Productivity Center, Tivoli Storage Manager, and IBM Tivoli Storage Manager FastBack supported
When installed within the Enterprise Chassis, the V7000 Storage Node takes up a total of four node bays because it is a double-wide, double-high unit. A total of three V7000 Storage Nodes can be installed within a single Enterprise Chassis. Installation of the V7000 Storage Node might require the removal of the following items from the chassis: up to four front filler panels and up to two compute node shelves. After the fillers and the compute node shelves are removed, two chassis rails must be removed from the chassis. Compute node shelf removal is shown in Figure 7-3.
After the compute node shelf is removed, the two compute node rails (left and right) must be removed from within the chassis by reaching inside and sliding up the blue touchpoint, as shown in Figure 7-4.
The V7000 Storage Node is simply slid into the double high chassis opening and the two locking levers closed, as shown in Figure 7-5.
When the levers are closed and the unit is installed within the Enterprise Chassis, the V7000 Storage Node connects physically and electrically to the chassis midplane, which provides the following items:
Power
Management
The I/O connections between the storage node host interface cards (HICs) and the I/O modules that are installed within the chassis
a. The first Machine Type Model (MTM) number that is listed is the IBM System x sales channel, and the second MTM is the Power Systems channel.
The IBM Flex System V7000 Control Enclosure has the following components:
An enclosure for 24 disks
Two Controller Modules
Up to 24 SFF drives
A battery inside each node canister

Each Control Enclosure supports up to nine Expansion Enclosures that are attached in a single SAS chain. Up to two Expansion Enclosures can be attached to each Control Enclosure within the Enterprise Chassis. Figure 7-6 shows the V7000 Control Enclosure front view.
The IBM Flex System V7000 Expansion Enclosure has the following components:
An enclosure for up to 24 disks with two Expansion Modules installed
Two SAS ports on each Expansion Module

Figure 7-7 shows the front view of the V7000 Expansion Enclosure.
Figure 7-8 shows the layout of the enclosure with the outer and Controller Modules covers removed. The HICs can be seen at the rear of the enclosure, where they connect to the midplane of the Enterprise Chassis.
Figure 7-9 shows the V7000 Storage Node with Controller Modules.
Table 7-3 explains the meanings of the numbers that are shown in Figure 7-9.
Table 7-3 Descriptions of the numbers in Figure 7-9

Number   Description
1        SAS Port 1 and Node Canister 1
2        Node Canister 1
3        Node Canister 2
4        SAS Port 1 and Node Canister 2
The Controller Module has the following components:
One or two host interface cards (HICs) installed in the rear. The first HIC is always two 10 Gb Ethernet ports (FCoE and iSCSI). The second HIC can be either four 2/4/8 Gb FC ports or two 10 Gb Ethernet ports (FCoE or iSCSI).
One internal 10/100/1000 Mbps Ethernet port for management (no iSCSI).
One external 6 Gbps SAS port (four lanes). Usage is optional.
Two external USB ports (not used for normal operation).
One battery.

Each Controller Module has a single SAS connector for the interconnection of expansion units, along with two USB ports. The USB ports are used when servicing the system. When a USB flash drive is inserted into one of the USB ports on a node canister in a Control Enclosure, the node canister searches for a control file on the USB flash drive and runs the command that is specified in the file.
Figure 7-10 shows the Controller Module front view with the LEDs highlighted.
LED number 8:
Off: There is no power to the canister. Make sure that the CMM powered on the storage node. Try reseating the canister. If the state persists, follow the hardware replacement procedures for the parts in the following order: node canister, then Control Enclosure.
On solid: The canister is powered on.
Flashing: The canister is in a powered-down state. Use the CMM to power on the canister.
Fast flashing: The management controller is in the process of communicating with the CMM during the initial insertion of the canister. If the canister remains in this state for more than 10 minutes, try reseating the canister. If the state persists, follow the hardware replacement procedure for the node canister.

LED number 9, Canister status (green):
Off: The canister is not operational.
On solid: The canister is active. You should not power off, or remove, a node canister whose status LED is on solid. You might lose access to data or corrupt volume data. Follow the procedures to shut down a node so that access to data is not compromised.
Flashing: The canister is in the candidate or service state.

LED number 10 (green):
Off: There is no host I/O activity.
Flashing: The canister is actively processing input/output (I/O) traffic.

LED number 11 (amber):
Off: There are no isolated failures on the storage enclosure.
On solid: There are one or more isolated failures in the storage enclosure that require service or replacement.

LED number 12 (amber):
Off: There are no conditions that require the user to log in to the management interface and review the error logs.
On solid: The system requires the attention of the user through one of the management interfaces. There are multiple reasons that the Check Log LED might be illuminated.

LED number 13 (blue):
Off: The canister is not identified by the canister management system.
On solid: The canister is identified by the canister management system.
Flashing: Occurs during power-on and power-on self-test (POST) activities.
Figure 7-11 shows a Controller Module with its cover removed. With the cover removed, the HIC can be removed or replaced as needed. Figure 7-11 shows two HICs installed in the Controller Modules (1) and the direction of removal of a HIC (2).
The battery within the Controller Module contains enough capacity to shut down the node canister twice from fully charged. The batteries do not provide any brownout protection or ride-through timers. When AC power is lost to the node canister, it shuts down. The ride-through behavior is provided by the Enterprise Chassis. The batteries need only 1 second of testing every 3 months, rather than the full discharge and recharge cycle that is needed for the Storwize V7000 batteries. The battery test is performed while the node is online, and only if the other node in the Control Enclosure is online. If the battery fails the test, the node goes offline immediately. The battery is also automatically tested every time that the controller's operating system is powered up.

Special battery shutdown mode: If (and only if) you are shutting down the node canister and are going to remove the battery, you must run the following shutdown command:

satask stopnode poweroff battery

This command puts the battery into a mode where it can safely be removed from the node canister after the power is off. The principal (and probably only) use case for this shutdown is a node canister replacement where you must swap the battery from the old node canister to the new node canister.

Removing the canister without shutdown: If a node canister is removed from the enclosure without shutting it down, the battery keeps the node canister powered while the node canister performs a shutdown.
Figure 7-12 shows the internal architecture of the V7000 Control Enclosure. The HICs provide the I/O connections to the I/O module bays, where the switches are generally installed. The management network and power are also connected through the chassis midplane.
Figure 7-12 Internal architecture of the V7000 Control Enclosure: each of the two RAID controllers contains host controllers, HIC 1 and HIC 2 to the chassis midplane, an IMM, a sensor farm, DIMMs, a battery, a SAS HBA, and a SAS expander, and connects through the disk midplane and power interposer to the 24 disk trays.
Table 7-5 explains the meanings of the numbers in Figure 7-13 on page 372.
Table 7-5 Expansion Module LEDs

LED number 1, SAS Port Status (amber):
Off: There are no faults or conditions that are detected by the expansion canister on the SAS port or the downstream device that is connected to the port.
On solid: There is a fault condition that is isolated by the expansion canister on the external SAS port.
Slow flashing: The port is disabled and will not service SAS traffic.
Flashing: One or more of the narrow ports of the SAS links on the wide SAS port link failed, and the port is not operating as a full wide port.

LED number 2 (green):
Off: Power is not present, or there is no SAS link connectivity established.
On solid: There is at least one active SAS link in the wide port that is established, and there is no external port activity.
Flashing: The expansion port activity LED flashes at a rate proportional to the level of SAS port interface activity as determined by the expansion canister. The port also flashes when routing updates or configuration changes are being performed on the port.

LED number 3 (amber): Same states as LED number 1.

LED number 4 (green): Same states as LED number 2.

LED number 5 (amber):
Off: There are no isolated FRU failures on the expansion canister.
On solid: There are one or more isolated FRU failures in the expansion canister that require service or replacement.

LED number 6 (amber):
Off: There are no failures that are isolated to the internal components of the expansion canister.
On solid: An internal component requires service or replacement.
Flashing: An internal component is being identified on this expansion canister.

LED number 7 (green):
Off: There is no power to the expansion canister.
On solid: The expansion canister is powered on.
Flashing: The expansion canister is in a powered-down state.
Fast flashing: The management controller is in the process of communicating with the Chassis Management Module (CMM) during the initial insertion of the expansion canister.

LED number 8, Identify (blue):
Off: The expansion canister is not identified by the controller management system.
On solid: The expansion canister is identified by the controller management system.
Flashing: Occurs during power-on and power-on self-test (POST) activities.
LED number 9:
Off: There are no faults or conditions that are detected by the expansion canister on the SAS port or the downstream device that is connected to the port.
On solid: There is a fault condition that is isolated by the expansion canister on the external SAS port.
Slow flashing: The port is disabled and will not service SAS traffic.
Flashing: One or more of the narrow ports of the SAS links on the wide SAS port link failed, and the port is not operating as a full wide port.
The Expansion Module has two 6 Gbps SAS ports at the front of the unit. Usage of port 1 is mandatory, and usage of port 2 is optional. These ports are used to connect to the Storage Controller Modules. Mini SAS ports: The SAS ports on the Flex System V7000 expansion canisters are HD Mini SAS ports. IBM Storwize V7000 canister SAS ports are Mini SAS.
Figure 7-14 shows an example of using both the V7000 internal and external Expansion Enclosures, with one Control Enclosure. The initial connections are made to the internal Expansion Enclosures within the Flex System Chassis, and then the SAS cables are chained to the external Expansion Enclosures. This diagram also shows the internal management connections. The cables that are used for linking to the Flex System V7000 Control and Expansion Enclosures are different from the cables that are used to link externally attached enclosures.
A pair of Internal Expansion Cables is shipped as standard with the Expansion Unit. The cables for internal connection are of the HD SAS to HD SAS type.
The cables that are used to link an internal Controller or Expansion unit to an externally attached Expansion Enclosure are of a different type and must be ordered separately. These cables are HD SAS to Mini SAS and are supplied in a package of two. The cables are described in Table 7-6.
Table 7-6 The cables that are used to link between internal and external Storwize V7000 Enclosures

Part number   Feature code   Product name
90Y7682       ADA6           External Expansion Cable Pack (Dual 6M SAS Cables - HD SAS to Mini SAS)
Consideration: It is not possible to connect to the V7000 Storage Node over the Chassis Midplane in FCoE mode without using the CN4093 Converged Scalable Switch. For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC), found at: http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
FlashCopy (included with the base IBM Flex System V7000 Storage Node license)
Provides a volume-level point-in-time copy function for any storage that is virtualized by IBM Flex System V7000 Storage Node. This function is designed to create copies for backup, parallel processing, testing, and development, and have the copies available almost immediately. IBM Flex System V7000 Storage Node includes the following FlashCopy functions:

Full / Incremental copy: This function copies only the changes from either the source or target data since the last FlashCopy operation, and enables completion of point-in-time online backups much more quickly than using traditional FlashCopy.

Multitarget FlashCopy: IBM Flex System V7000 Storage Node supports copying of up to 256 target volumes from a single source volume. Each copy is managed by a unique mapping and, in general, each mapping acts independently and is not affected by other mappings that share the source volume.

Cascaded FlashCopy: This function is used to create copies of copies and supports full, incremental, or nocopy operations.

Reverse FlashCopy: This function allows data from an earlier point-in-time copy to be restored with minimal disruption to the host.

FlashCopy nocopy with thin provisioning: This function provides a combination of using thin-provisioned volumes and FlashCopy together to reduce disk space requirements when making copies. There are two variations of this option:
Space-efficient source and target with background copy: Copies only the allocated space.
Space-efficient target with no background copy: Copies only the space that is used for changes between the source and target, and is generally referred to as snapshots. This function can be used with multi-target, cascaded, and incremental FlashCopy.

Consistency groups: Consistency groups address the issue where application data is on multiple volumes. By placing the FlashCopy relationships into a consistency group, commands can be issued against all of the volumes in the group. This action enables a consistent point-in-time copy of all of the data, even if it might be on a physically separate volume. FlashCopy mappings can be members of a consistency group, or they can be operated in a stand-alone manner, that is, not as part of a consistency group. FlashCopy commands can be issued to a FlashCopy consistency group, which affects all FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not part of a defined FlashCopy consistency group.

Remote Copy feature
Remote Copy is a licensed feature that is based on the number of enclosures that are being used at the smallest configuration location. Remote Copy provides the capability to perform either Metro Mirror or Global Mirror operations.
Metro Mirror: Provides a synchronous remote mirroring function up to approximately 300 km between sites. Because the host I/O completes only after the data is cached at both locations, performance requirements might limit the practical distance. Metro Mirror provides fully synchronized copies at both sites with zero data loss after the initial copy is completed. Metro Mirror can operate between multiple IBM Flex System V7000 Storage Node systems.

Global Mirror: Provides a long-distance asynchronous remote mirroring function up to approximately 8,000 km between sites. With Global Mirror, the host I/O completes locally and the changed data is sent to the remote site later. This function is designed to maintain a consistent recoverable copy of data at the remote site, which lags behind the local site. Global Mirror can operate between multiple IBM Flex System V7000 Storage Node systems.

Data Migration (no charge for temporary usage): IBM Flex System V7000 Storage Node provides a data migration function that can be used to import external storage systems into the IBM Flex System V7000 Storage Node system. You can use this function to perform the following actions:
Move volumes nondisruptively onto a newly installed storage system.
Move volumes to rebalance a changed workload.
Migrate data from other back-end storage to IBM Flex System V7000 Storage Node managed storage.

IBM System Storage Easy Tier (no charge): Provides a mechanism to seamlessly migrate hot spots to the most appropriate tier within the IBM Flex System V7000 Storage Node solution. This migration can be to internal drives within IBM Flex System V7000 Storage Node or to external storage systems that are virtualized by IBM Flex System V7000 Storage Node.

Real Time Compression (RTC): Provides data compression by using the IBM Random-Access Compression Engine (RACE), which can be performed on a per-volume basis in real time on active primary workloads. RTC can provide as much as a 50% compression rate for data that is not already compressed. This function can reduce the amount of capacity that is needed for storage, which can delay further growth purchases. RTC supports all storage that is attached to the IBM Flex System V7000 Storage Node, whether it is internal, external, or external virtualized storage. A compression evaluation tool that is called the IBM Comprestimator Utility can be used to determine the value of using compression on a specific workload for your environment. It can be found at:
http://ibm.com/support/docview.wss?uid=ssg1S4001012
7.1.9 Licenses
IBM Flex System V7000 Storage Node requires licenses for the following features:
Enclosure
External Virtualization
Remote Copy (Advanced Copy Services: Metro Mirror / Global Mirror)
Real Time Compression (RTC)

Table 7-8 gives a summary of the licenses.
Table 7-8 Licenses

License type             Unit                                           License number   License required?
Enclosure                Base+expansion Physical Enclosure Number       5639-VM1         Yes
External Virtualization  Physical Enclosure Number of External Storage  5639-EV1         Optional add-on feature
Remote Copy              Physical Enclosure Number                      5639-RM1         Optional add-on feature
Real Time Compression    Physical Enclosure Number                      5639-CP1         Optional add-on feature
These functions do not need a license:
FlashCopy
Volume Mirroring
Thin Provisioning
Volume Migration
Easy Tier

For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC), found at:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
For further information about configuration limits and restrictions with Version 6.4 of the Flex System V7000 software, go to: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004243
For the latest support matrixes for storage products, see the storage vendor interoperability guides. IBM storage products can be referenced in the System Storage Interoperability Center (SSIC): http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
The levels of integration of Storwize V7000 with IBM Flex System provide these additional features:
Starting level: IBM Flex System single point of management
Higher level: Datacenter management with IBM Flex System Manager Storage Control
Detailed level: Data management with the Storwize V7000 storage user GUI
Upgrade level: Datacenter productivity with Tivoli Storage Productivity Center for Replication and Storage Productivity Center

IBM Storwize V7000 provides a number of configuration options that simplify the implementation process. It also provides automated wizards, called directed maintenance procedures (DMP), to help resolve any events. IBM Storwize V7000 is a clustered, scalable, and midrange storage system, as well as an external virtualization device.

IBM Storwize V7000 Unified is the latest release of the product family. This virtualized storage system is designed to consolidate block and file workloads into a single storage system. This consolidation provides simplicity of management, reduced cost, highly scalable capacity, performance, and high availability. IBM Storwize V7000 Unified Storage also offers improved efficiency and flexibility through built-in solid-state drive (SSD) optimization, thin provisioning, and nondisruptive migration of data from existing storage. The system can virtualize and reuse existing disk systems, providing a greater potential return on investment. For more information about IBM Storwize V7000, see:
http://www.ibm.com/systems/storage/disk/storwize_v7000/overview.html
Built-in thin provisioning can help reduce direct and indirect costs. Synchronous and asynchronous remote mirroring provides protection against primary site outages, disasters, and site failures. Offers FC and iSCSI attach for flexibility in server connectivity. For more information about the XIV, see: http://www.ibm.com/systems/storage/disk/xiv/index.html
Scalable up to 448 drives with the EXP5000 enclosure, and up to 960 TB of high-density storage with the EXP5060 enclosure Support for intermixing drive types (FC, FC-SAS, SED, SATA, and SSD) and host interfaces (Fibre Channel and iSCSI) for investment protection and cost-effective tiered storage Supports business continuance with its optional high-availability software and advanced Enhanced Remote Mirroring function Helps protect customer data with its multi-RAID capability, including RAID 6, and hot-swappable redundant components For more information about the DS5000 series, see: http://www.ibm.com/systems/storage/disk/ds5000/index.html
Creates snapshot copies to automate error-free data restores, and enables application-aware disaster recovery.
Thin Provisioning: Allows applications and users to get more space dynamically and nondisruptively without IT staff intervention.
Ease of installation: Offers installation tools that are designed to simplify installation and setup.
Increased access: Allows heterogeneous access to IP-attached storage and Fibre Channel attached storage subsystems.
Operating system: Optimized and finely tuned for storing and sharing data assets. The OS is designed to enable greater efficiency within your organization, and help lower total cost of ownership (TCO) through improved efficiency and productivity.
Flexibility: Enables cross-platform data access for Microsoft Windows, UNIX, and Linux environments. This access can help reduce network complexity and expense, and allow data to be shared across the organization.
Network-attached storage (NAS): Supports Network File System (NFS) and Common Internet File System (CIFS) protocols for attachment to Microsoft Windows, UNIX, and Linux systems.
IP SAN: Supports Internet Small Computer System Interface (iSCSI) protocols for IP SANs that can be attached to host servers that include Microsoft Windows, Linux, and UNIX systems.
FC SAN: Supports Fibre Channel Protocol (FCP) for accommodating attachment and participation in Fibre Channel SAN environments.
FCoE: Supports Fibre Channel over Ethernet networks.
Expandability: Supports nondisruptive capacity increases and thin provisioning, which you can use to dynamically increase and decrease user capacity assignments. You can increase your storage infrastructure to keep pace with company growth. Designed to maintain availability and productivity during upgrades.
Manageability: Includes integrated system diagnostics and management tools, which are designed to help minimize downtime.
Redundancy: Several redundancy and hot-swappable features provide the highest system availability characteristics.
Copy Services: Provides extensive outboard services that help recover data in disaster recovery environments. SnapMirror provides one-to-one, one-to-many, and many-to-one mirroring over Fibre Channel or IP infrastructures.
NearStore (near-line) feature: SATA drive technology enables online and quick access to archived and nonintensive transactional data.
Deduplication: Provides block-level deduplication of data that is stored in NearStore volumes.
Compliance and data retention: Software and hardware features offer nonerasable and nonrewritable data protection to meet the industry's highest regulatory requirements for retaining company data assets.

For more information about the N series, see:
http://www.ibm.com/systems/storage/network/hardware/index.html
Almost every vendor of storage systems or storage fabrics has extensive compatibility matrixes that include supported HBAs, SAN switches, and operating systems. For more information about IBM System Storage compatibility, see the IBM System Storage Interoperability Center at: http://www.ibm.com/systems/support/storage/config/ssic
For more information, see the Brocade Access Gateway Administrator's Guide.
Considerations for the FC3171 8Gb SAN Pass-thru and FC3171 8Gb SAN Switch
Both of these I/O modules provide seamless integration of the IBM Flex System Enterprise Chassis into an existing Fibre Channel fabric. They avoid any multivendor interoperability issues by using NPIV technology. All ports are licensed on both of these switches (there are no port licensing requirements). The I/O module has 14 internal ports and 6 external ports that are presented at the rear of the chassis.

Attention: If you will need Full Fabric capabilities at any time in the future, purchase the Full Fabric Switch Module (FC3171 8Gb SAN Switch) instead of the Pass-Thru module (FC3171 8Gb SAN Pass-thru). The pass-through module can never be upgraded. You can reconfigure the FC3171 8Gb SAN Switch to become a Pass-Thru module by using the switch GUI or CLI. The module can be converted back to a full function SAN switch at any time. The switch requires a reset when you turn transparent mode on or off.

Operating in pass-through mode adds ports to the fabric, not Domain IDs as switches do. This process is not apparent to the switches in the fabric. This section describes how the NPIV concept works for the Intelligent Pass-thru Module (and the Brocade Access Gateway).

Several basic types of ports are used in Fibre Channel fabrics:
N_Ports (node ports) represent an end-point FC device (such as a host, storage system, or tape drive) that is connected to the FC fabric.
F_Ports (fabric ports) are used to connect N_Ports to the FC switch (that is, the host HBA's N_Port is connected to the F_Port on the switch).
E_Ports (expansion ports) provide interswitch connections. If you must connect one switch to another, E_Ports are used. The E_Port on one switch is connected to the E_Port on another switch.

When one switch is connected to another switch in an existing FC fabric, it uses a Domain ID to uniquely identify itself in the SAN (like a switch address). Because every switch in the fabric has a Domain ID and this ID is unique in the SAN, the number of switches and number of ports is limited, which in turn limits SAN scalability. For example, QLogic theoretically supports up to 239 switches, and McDATA supports up to 31 switches. Another concern with E_Ports is interoperability issues between switches from different vendors. In many cases, only the so-called interoperability mode can be used in these fabrics, thus disabling most of the vendor's advanced features. Also, each switch requires some management tasks to be performed on it. Therefore, an increased number of switches increases the complexity of the management solution, especially in heterogeneous SANs that consist of multivendor fabrics. NPIV technology helps to address these issues.

Initially, NPIV technology was used in virtualization environments to share one HBA with multiple virtual machines, and to assign unique port IDs to each of them. You can use this configuration to separate traffic between virtual machines (VMs). You can deal with VMs in the same way as physical hosts, by zoning the fabric or partitioning storage.
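The same mechanism can be observed from a Linux host with an NPIV-capable HBA and driver through the Fibre Channel transport sysfs interface. The following commands are a minimal, hedged sketch only: the host number (host3) and the WWPN:WWNN values are placeholders, this is not switch-module configuration, and the HBA, its driver, and the attached fabric switch must all support NPIV.

# Check whether the physical FC port supports NPIV and how many virtual ports are in use
# (host3 is an assumed host number; adjust to your system)
cat /sys/class/fc_host/host3/max_npiv_vports
cat /sys/class/fc_host/host3/npiv_vports_inuse

# Create a virtual N_Port with its own WWPN:WWNN (placeholder values); the fabric
# then sees an additional N_Port ID behind the same physical link
echo "c0507603a2b40004:c0507603a2b40005" > /sys/class/fc_host/host3/vport_create

# The new virtual port appears as an additional fc_host entry
ls /sys/class/fc_host/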
For example, if NPIV is not used, every virtual machine shares one HBA with one WWN. This restriction means that you are not able to separate traffic between these systems and isolate LUNs because all of them use the same ID. In contrast, when NPIV is used, every VM has its own port ID, and these port IDs are treated as N_Ports by the FC fabric. You can perform storage partitioning or zoning based on the port ID of the VM. The switch that the virtualized HBAs are connected to must support NPIV as well. Check the documentation that comes with the FC switch.

The IBM Flex System FC3171 8Gb SAN Switch in pass-through mode, the IBM Flex System FC3171 8Gb SAN Pass-thru, and the Brocade Access Gateway use the NPIV technique. The technique presents the nodes' port IDs as N_Ports to the external fabric switches. This process eliminates the need for E_Port connections between the Enterprise Chassis and external switches. In this way, all 14 internal node FC ports are multiplexed and distributed across the external FC links and presented to the external fabric as N_Ports. This configuration means that external switches connected to a chassis that is configured for Fibre Channel pass-through do not see the pass-through module. They see only N_Ports connected to F_Ports. This configuration can help to achieve a higher port count for better scalability without using Domain IDs, and avoids multivendor interoperability issues. However, modules that operate in pass-through mode cannot be directly attached to the storage system. They must be attached to an external NPIV-capable FC switch. See the switch documentation about NPIV support.

Select a SAN module that can provide the required functionality together with seamless integration into the existing storage infrastructure (Table 7-10). There are no strict rules to follow during integration planning. However, several considerations must be taken into account.
Table 7-10 SAN module feature comparison and interoperability FC5022 16Gb SAN Scalable Switch Basic FC connectivity FC-SW-2 interoperability Zoning Maximum number of Domain IDs Advanced FC connectivity Port Aggregation Advanced fabric security Interoperability (existing fabric) Brocade fabric interoperability QLogic fabric interoperability Cisco fabric interoperability Yes No No No No No Yes No No Yes No Yes Yes Yes Nob Yes Not applicable Not applicable Not applicable Not applicable Yesa Yes 239 Yes Yes 239 Not applicable Not applicable Not applicable Not applicable Not applicable Not applicable FC3171 8Gb SAN Switch FC5022 16Gb SAN Scalable Switch in Brocade Access Gateway mode FC3171 8Gb SAN Pass-thru (and FC3171 8Gb SAN Switch in pass-through mode)
a. Indicates that a feature is supported without any restrictions for existing fabric, but with restrictions for added fabric, and vice versa. b. Does not necessarily mean that a feature is not supported. Instead, it means that severe restrictions apply to the existing fabric. Some functions of the existing fabric potentially must be disabled (if used).
Almost all switches support interoperability standards, which means that almost any switch can be integrated into an existing fabric by using interoperability mode. Interoperability mode is a special mode that is used to integrate FC fabrics from different vendors into one fabric. However, only standards-based functionality is available in interoperability mode; advanced features of a storage fabric vendor might not be available. Brocade, McDATA, and Cisco have interoperability modes on their fabric switches. Check the compatibility matrixes for a list of supported and unsupported features in interoperability mode.

Table 7-10 on page 391 provides a high-level overview of standard and advanced functions available for particular Enterprise Chassis SAN switches. It lists how these switches might be used for designing new storage networks or integrating with existing storage networks.

Remember: Advanced (proprietary) FC connectivity features from different vendors might be incompatible with each other, even those features that provide almost the same function. For example, both Brocade and Cisco support port aggregation. However, Brocade uses ISL Trunking and Cisco uses PortChannels, and they are incompatible with each other.

For example, if you integrate the FC3052 2-port 8Gb FC Adapter (Brocade) into a QLogic fabric, you cannot use Brocade proprietary features such as ISL Trunking. However, the QLogic fabric does not lose functionality. Conversely, if you integrate a QLogic fabric into an existing Brocade fabric, placing all Brocade switches in interoperability mode loses the Advanced Fabric Services functions. If you plan to integrate the Enterprise Chassis into a Fibre Channel fabric that is not listed here, QLogic might be a good choice. However, this configuration is possible with interoperability mode only, so extended functions are not supported. A better way is to use the FC3171 8Gb SAN Pass-thru or the Brocade Access Gateway.

Switch selection and interoperability follow these rules:
The FC3171 8Gb SAN Switch is used when the Enterprise Chassis is integrated into an existing QLogic fabric, or when basic FC functionality is required; that is, one Enterprise Chassis with a direct-connected storage server.
The FC5022 16Gb SAN Scalable Switch is used when the Enterprise Chassis is integrated into an existing Brocade fabric, or when advanced FC connectivity is required. You might use this switch when several Enterprise Chassis are connected to high-performance storage systems. If you plan to use advanced features such as ISL Trunking, you might need to acquire specific licenses for these features.

Tip: Using an FC storage fabric from a single vendor often avoids possible operational, management, and troubleshooting issues.

If the Enterprise Chassis is attached to a non-IBM storage system, support is provided by the storage system's vendor. Even if non-IBM storage is listed on IBM ServerProven, it means only that the configuration has been tested; it does not mean that IBM provides support for it. See the vendor compatibility information for supported configurations. For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
7.4 FCoE
One common way to reduce administration costs is by converging technologies that are implemented on separate infrastructures. Just as office phone systems were reduced from a separate cabling plant and components to a common IP infrastructure, Fibre Channel networks are also converging onto Ethernet. FCoE (Fibre Channel over Ethernet) removes the need for separate HBAs in the servers and separate Fibre Channel cables that come out of the server or chassis. Instead, a Converged Network Adapter (CNA) is installed in the server. This adapter presents what appears to be both a NIC and an HBA to the operating system, but the output from the server is 10 Gb Ethernet.

This section lists FCoE support. Table 7-11 lists FCoE support using Fibre Channel targets. Table 7-12 on page 394 lists FCoE support using native FCoE targets (that is, end-to-end FCoE).

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) website, found at:
http://ibm.com/systems/support/storage/ssic/interoperability.wss
Table 7-11 FCoE support using FC targets

Ethernet adapters:
10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310
10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310
CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558

Flex System I/O modules:
EN4093 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)
EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)

FC Forwarder (FCF) and supported SAN fabric:
IBM B-type
Cisco MDS

Operating systems:
Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0

Storage targets:
DS8000, IBM SAN Volume Controller, IBM Storwize V7000, V7000 Storage Node (FC), TS3200, TS3310, TS3500
DS8000, SAN Volume Controller, Storwize V7000, V7000 Storage Node (FC), IBM XIV
Table 7-11 (continued) FCoE support using FC targets

Flex System I/O modules:
EN4093 10Gb Switch (pNIC only)
EN4093R 10Gb Switch (pNIC only)
CN4093 10Gb Converged Switch (pNIC only)

FC Forwarder (FCF) and supported SAN fabric:
IBM B-type
Cisco MDS

Operating systems:
AIX V6.1, AIX V7.1, VIOS 2.2, SLES 11.2, RHEL 6.3

Storage targets:
DS8000, SAN Volume Controller, Storwize V7000, V7000 Storage Node (FC), IBM XIV

Table 7-12 FCoE support using FCoE targets (end-to-end FCoE)

Ethernet adapters:
10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310
10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310
CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558

Operating systems:
Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX V4.1, vSphere 5.0, AIX V6.1, AIX V7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
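On a compute node that runs Linux with a software or converged FCoE stack, the state of the FCoE session can be checked with the fcoe-utils tools. This is a minimal, hedged sketch: the exact tooling depends on the adapter and distribution, and hardware CNAs that expose a native FC function to the operating system do not need these utilities at all.

# List FCoE interfaces with their WWPN/WWNN and link state
fcoeadm -i

# List the discovered FC/FCoE targets and LUNs behind each interface
fcoeadm -t

# Confirm that the SAN LUNs are visible to the SCSI layer
lsscsi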
7.5 iSCSI
iSCSI uses a traditional Ethernet network for block I/O between the storage system and servers. Servers and storage systems are connected to the LAN and use iSCSI to communicate with each other. Because iSCSI uses a standard TCP/IP stack, you can use iSCSI connections across LAN or wide area network (WAN) connections. A basic iSCSI environment consists of iSCSI initiators (the servers), iSCSI targets (for example, IBM System Storage DS3500 iSCSI models), an optional DHCP server, and a management station with iSCSI Configuration Manager.

The software iSCSI initiator is specialized software that uses a server's processor for iSCSI protocol processing. A hardware iSCSI initiator exists as microcode that is built in to the LAN on Motherboard (LOM) on the node or on the I/O adapter, provided that it is supported. Both software and hardware initiator implementations provide iSCSI capabilities for Ethernet NICs. However, the operating system software initiator can be used only after the locally installed operating system is started and running. In contrast, the NIC built-in microcode can be used for boot-from-SAN implementations, but cannot be used for storage access when the operating system is already running.
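As an illustration of the software initiator path, the following commands show a minimal sketch of attaching a Linux compute node to an iSCSI target with the open-iscsi initiator. The portal IP address and the target IQN are placeholders, not values from any particular storage system.

# Show the initiator IQN that this node presents to the target
cat /etc/iscsi/initiatorname.iscsi

# Discover the targets that are offered by the storage system portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.0.2.50

# Log in to a discovered target (placeholder IQN) and verify the session
iscsiadm -m node -T iqn.2005-10.com.example:target0 -p 192.0.2.50 --login
iscsiadm -m session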
Table 7-13 lists iSCSI support using a hardware-based iSCSI initiator. The IBM System Storage Interoperation Center normally lists support only for iSCSI storage that is attached by using hardware iSCSI offload adapters in the servers. Flex System compute nodes support any type of iSCSI storage (1 Gb or 10 Gb) with the software iSCSI initiator, provided that the storage system's requirements for operating system and device driver levels are met.

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) website:
http://ibm.com/systems/support/storage/ssic/interoperability.wss
Table 7-13 Hardware-based iSCSI support

Ethernet adapters:
10Gb onboard LOM (x240)a
10Gb onboard LOM (x440)a
CN4054 10Gb Virtual Fabric Adapter, 90Y3554b

Flex System I/O modules:
EN4093 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)
EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)

Operating systems:
Windows Server 2008 R2, SLES 10 and 11, RHEL 5 and 6, ESX 4.1, vSphere 5.0

Storage targets:
SAN Volume Controller, Storwize V7000, V7000 Storage Node (iSCSI), IBM XIV
a. FCoE upgrade is required, IBM Virtual Fabric Advanced Software Upgrade (LOM), 90Y9310 b. FCoE upgrade is required, IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, 90Y3558
iSCSI on Enterprise Chassis nodes can be implemented on the IBM Flex System CN4054 10Gb Virtual Fabric Adapter and the embedded 10 Gb Virtual Fabric adapter LOM.

Remember: Both of these NIC solutions require a Feature on Demand (FoD) upgrade, which enables and provides the hardware iSCSI initiator function.

Software initiators can be obtained from the operating system vendor. For example, Microsoft offers a software iSCSI initiator for download. They can also be obtained as part of a NIC firmware upgrade (if supported by the NIC). For more information about the IBM Flex System CN4054 10Gb Virtual Fabric Adapter, see 5.6.1, Overview on page 286 and 5.9.14, IBM Flex System IB6132 2-port FDR InfiniBand Adapter on page 341. For IBM System Storage compatibility information, see the IBM System Storage Interoperability Center at:
http://www.ibm.com/systems/support/storage/config/ssic

Tip: Consider using a separate network segment for iSCSI traffic. That is, isolate the NICs, switches (or virtual local area networks (VLANs)), and storage system ports that participate in iSCSI communications from other traffic. If you plan for redundancy, you must use multipath drivers. Generally, they are provided by the operating system vendor for iSCSI implementations, even if you plan to use hardware initiators.
It is possible to implement high availability (HA) clustering solutions by using iSCSI, but certain restrictions might apply. For more information, see the storage system vendor compatibility guides.

When you plan your iSCSI solution, consider the following items:
Verify that the IBM Flex System Enterprise Chassis nodes, the initiators, and the operating system are supported by the iSCSI storage system. For more information, see the compatibility guides from the storage vendor.
Verify that multipath drivers exist and are supported by the operating system and the storage system (when redundancy is planned). For more information, see the compatibility guides from the operating system vendor and the storage vendor.

For more information, see the following publications:
IBM SSIC:
http://www.ibm.com/systems/support/storage/config/ssic
IBM System Storage N series Interoperability Matrix, found at:
http://ibm.com/support/docview.wss?uid=ssg1S7003897
Microsoft Support for iSCSI (from Microsoft), found at:
http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/msfiscsi.mspx
Figure 7-16 IBM Enterprise Chassis LAN infrastructure topology (the nodes connect through the chassis I/O modules to the storage network)
This topology includes a dual-port FC I/O adapter that is installed in the node, and a pair of FC I/O modules that are installed in bays 3 and 4 of the Enterprise Chassis. In the event of a failure, the operating system driver that is provided by the storage system manufacturer is responsible for the automatic failover process. This process is also known as multipathing capability.

If you plan to use redundancy and high availability for the storage fabric, ensure that the failover drivers satisfy the following requirements:
They are available from the vendor of the storage system.
They come with the system or can be ordered separately (remember to order them in such cases).
They support the node operating system.
They support the redundant multipath fabric that you plan to implement (that is, they support the required number of redundant paths).

For more information, see the storage system documentation from the vendor.
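On Linux compute nodes, this multipathing function is commonly provided by the device-mapper multipath driver, either in addition to or instead of a vendor-specific driver. The following commands are a minimal sketch only; package names, the service manager, and the appropriate path policy depend on the distribution and on the storage vendor's recommendations.

# Load the multipath kernel module and create a minimal configuration
modprobe dm_multipath
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes       # present /dev/mapper/mpathN names instead of WWIDs
    path_grouping_policy failover
}
EOF

# Start the daemon and list the multipath devices and the state of each path
service multipathd start
multipath -ll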
7.7 Performance
Performance is an important consideration during storage infrastructure planning. Providing the required end-to-end performance for your SAN can be accomplished in several ways. First, the storage systems failover driver can provide load balancing across redundant paths in addition to high availability. IBM System Storage Multi-path Subsystem Device Driver (SDD) used with DS8000 provides this function. If you plan to use such drivers, ensure that they satisfy the following requirements: They are available from the storage system vendor. They come with the system, or can be ordered separately. They support the node operating system. They support the multipath fabric that you plan to implement. That is, they support the required number of paths implemented. Also, you can use static LUN distribution between two storage controllers in the storage system. Some LUNs are served by controller 1, and others are served by controller 2. A zoning technique can also be used together with static LUN distribution if you have redundant connections between FC switches and the storage system controllers. Trunking or PortChannels between FC or Ethernet switches can be used to increase network bandwidth, increasing performance. Trunks in the FC network use the same concept as in standard Ethernet networks. Several physical links between switches are grouped into one logical link with increased bandwidth. This configuration is typically used when an Enterprise Chassis is integrated into existing advanced FC infrastructures. However, keep in mind that only the FC5022 16Gb SAN Scalable Switch supports trunking. Also be aware that this feature is an optional one that requires the purchase of an additional license. For more information, see the storage system vendor documentation and the switch vendor documentation.
Figure 7-17 shows possible topologies and traffic flows for LAN backups and FC-attached storage devices.
Figure 7-17 Chassis with an FC Switch Module (FCSM) connected to the storage network: backup data is moved from disk storage to the backup server's disk storage through the LAN by the backup agent, and then from disk backup storage to tape backup storage by the backup server
The topology that is shown in Figure 7-17 has the following characteristics:
- Each node that participates in backup, except the backup server itself, has dual connections to the disk storage system. The backup server has only one disk storage connection; the other port of its FC HBA is dedicated to tape storage.
- A backup agent is installed on each node that requires backup.
- The backup traffic flow starts with the backup agent, which transfers backup data from the disk storage to the backup server through the LAN. The backup server stores this data on its own disk storage, for example on the same storage system. The backup server then transfers the data from its storage directly to the tape device.
- Zoning is implemented on the FC Switch Module to separate the disk and tape data flows. Zoning is conceptually similar to VLANs in Ethernet networks.
Figure 7-18 Chassis with two FC Switching Modules (FCSM), the storage network, and a tape autoloader
Figure 7-18 shows the simplest topology for LAN-free backup. With this topology, the backup server controls the backup process, and the backup agent moves the backup data from the disk storage directly to the tape storage. In this case, no redundancy is provided for the disk storage or the tape storage. Zones are not required because the second Fibre Channel Switching Module (FCSM) is used exclusively for the backup fabric. Backup software vendors can use other (or additional) topologies and protocols for backup operations. Consult the backup software vendor documentation for a list of supported topologies and features, and for additional information.
Also check the documentation from the operating system and storage vendors for Boot from SAN support and requirements. See the following sources for additional SAN boot-related information:
- Windows Boot from Fibre Channel SAN Overview and Detailed Technical Instructions for the System Administrator: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=2815
- SAN Configuration Guide (from VMware): http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf
- IBM System Storage Interoperation Center (SSIC), for IBM System Storage compatibility information: http://www.ibm.com/systems/support/storage/config/ssic
LPC       Local Procedure Call
LR        long range
LR-DIMM   load-reduced DIMM
MAC       media access control
MB        megabyte
MSTP      Multiple Spanning Tree Protocol
NIC       network interface card
NL        nearline
NS        not supported
NTP       Network Time Protocol
OPM       Optical Pass-Thru Module
OSPF      Open Shortest Path First
PCI       Peripheral Component Interconnect
PCIe      PCI Express
PDU       power distribution unit
PF        power factor
PSU       power supply unit
QDR       quad data rate
QPI       QuickPath Interconnect
RAID      redundant array of independent disks
RAM       random access memory
RAS       remote access services; row address strobe
RDIMM     registered DIMM
RFC       request for comments
RHEL      Red Hat Enterprise Linux
RIP       Routing Information Protocol
ROC       RAID-on-Chip
ROM       read-only memory
RPM       revolutions per minute
RSS       Receive-Side Scaling
SAN       storage area network
SAS       Serial Attached SCSI
SATA      Serial ATA
SDMC      Systems Director Management Console
SerDes    Serializer-Deserializer
SFF       small form factor
SLC       Single-Level Cell
SLES      SUSE Linux Enterprise Server
SLP       Service Location Protocol
SNMP      Simple Network Management Protocol
SSD       solid-state drive
SSH       Secure Shell
SSL       Secure Sockets Layer
STP       Spanning Tree Protocol
TCG       Trusted Computing Group
TCP       Transmission Control Protocol
TDP       thermal design power
TFTP      Trivial File Transfer Protocol
TPM       Trusted Platform Module
TXT       text
UDIMM     unbuffered DIMM
UDLD      Unidirectional link detection
UEFI      Unified Extensible Firmware Interface
UI        user interface
UL        Underwriters Laboratories
UPS       uninterruptible power supply
URL       Uniform Resource Locator
USB       universal serial bus
VE        Virtualization Engine
VIOS      Virtual I/O Server
VLAG      Virtual Link Aggregation Groups
VLAN      virtual LAN
VM        virtual machine
VPD       vital product data
VRRP      Virtual Router Redundancy Protocol
VT        Virtualization Technology
WW        worldwide
WWN       Worldwide Name
IBM Redbooks
The following publications from IBM Redbooks provide additional information about IBM Flex System. These publications are available from the following website:
http://www.redbooks.ibm.com/portals/puresystems
- IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989
- IBM Flex System Networking in an Enterprise Data Center, REDP-4834
Chassis, Compute Nodes, and Expansion Nodes:
- IBM Flex System Enterprise Chassis, TIPS0863
- IBM Flex System Manager, TIPS0862
- IBM Flex System p24L, p260 and p460 Compute Nodes, TIPS0880
- IBM Flex System PCIe Expansion Node, TIPS0906
- IBM Flex System Storage Expansion Node, TIPS0914
- IBM Flex System x220 Compute Node, TIPS0885
- IBM Flex System x240 Compute Node, TIPS0860
- IBM Flex System x440 Compute Node, TIPS0886
Switches:
- IBM Flex System EN2092 1Gb Ethernet Scalable Switch, TIPS0861
- IBM Flex System EN4091 10Gb Ethernet Pass-thru Module, TIPS0865
- IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switches, TIPS0864
- IBM Flex System FC3171 8Gb SAN Switch and Pass-thru, TIPS0866
- IBM Flex System FC5022 16Gb SAN Scalable Switches, TIPS0870
- IBM Flex System IB6131 InfiniBand Switch, TIPS0871
Adapters:
- IBM Flex System CN4054 10Gb Virtual Fabric Adapter and EN4054 4-port 10Gb Ethernet Adapter, TIPS0868
- IBM Flex System CN4058 8-port 10Gb Converged Adapter, TIPS0909
- IBM Flex System EN2024 4-port 1Gb Ethernet Adapter, TIPS0845
- IBM Flex System EN4132 2-port 10Gb Ethernet Adapter, TIPS0873
- IBM Flex System EN4132 2-port 10Gb RoCE Adapter, TIPS0913
- IBM Flex System FC3052 2-port 8Gb FC Adapter, TIPS0869
- IBM Flex System FC3172 2-port 8Gb FC Adapter, TIPS0867
- IBM Flex System FC5022 2-port 16Gb FC Adapter, TIPS0891
- IBM Flex System IB6132 2-port FDR InfiniBand Adapter, TIPS0872
- IBM Flex System IB6132 2-port QDR InfiniBand Adapter, TIPS0890
- ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884
Other relevant documents:
- IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
IBM education
The following are IBM educational offerings for IBM Flex System. Some course numbers and titles might have changed slightly after publication.
Note: IBM courses that are prefixed with NGTxx are traditional, face-to-face classroom offerings. Courses that are prefixed with NGVxx are Instructor Led Online (ILO) offerings. Courses that are prefixed with NGPxx are Self-paced Virtual Class (SPVC) offerings.
- NGT10/NGV10/NGP10, IBM Flex System - Introduction
- NGT20/NGV20/NGP20, IBM Flex System x240 Compute Node
- NGT30/NGV30/NGP30, IBM Flex System p260 and p460 Compute Nodes
- NGT40/NGV40/NGP40, IBM Flex System Manager Node
- NGT50/NGV50/NGP50, IBM Flex System Scalable Networking
For more information about these, and many other IBM System x educational offerings, visit the global IBM Training website at:
http://www.ibm.com/training
Online resources
These websites are also relevant as further information sources:
- Configuration and Option Guide: http://www.ibm.com/systems/xbc/cog/
- IBM Flex System Enterprise Chassis Power Requirements Guide: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
- IBM Flex System Information Center: http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp
- IBM Flex System Interoperability Guide: http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=sa&subtype=wh&htmlfid=WZL12345USEN
- IBM System Storage Interoperation Center: http://www.ibm.com/systems/support/storage/ssic
- Integrated Management Module II User's Guide: http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346
- ServerProven compatibility page for operating system support: http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml
- ServerProven for IBM Flex System: http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html
- xREF - IBM x86 Server Reference: http://www.redbooks.ibm.com/xref
Back cover
IBM PureFlex System and IBM Flex System Products and Technology
Describes the IBM Flex System Enterprise Chassis and compute node technology
Provides details about available I/O modules and expansion options
Explains networking and storage configurations
To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.
The IBM PureFlex System combines no-compromise system designs with built-in expertise and integrates them into complete, optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications. The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale to meet your needs in the future.
This IBM Redbooks publication describes IBM PureFlex System and IBM Flex System. It highlights the technology and features of the chassis, compute nodes, management features, and connectivity options. Guidance is provided about every major component, and about networking and storage connectivity.
This book is intended for customers, Business Partners, and IBM employees who want to know the details about the new family of products. It assumes that you have a basic understanding of blade server concepts and general IT knowledge.