FlexPod Datacenter with VMware Horizon View 7.3 and VMware vSphere 6.5 Update 1 with Cisco UCS Manager 3.2 for 5000 Seats
Cisco Validated Design for a 5000 Seat Virtual Desktop Infrastructure Built
on Cisco UCS B200 M5 and Cisco UCS Manager 3.2 with NetApp AFF A-
Series on VMware Horizon View 7.3 and VMware vSphere ESXi 6.5 Update
1 Hypervisor Platform
Last Updated: January 2, 2018
About the Cisco Validated Design Program
The Cisco Validated Design (CVD) Program consists of systems and solutions designed, tested, and
documented to facilitate faster, more reliable, and more predictable customer deployments. For more
information visit
http://www.cisco.com/go/designzone.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER
PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR
OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON
FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco
WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We
Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,
Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the
Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers,
Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS Manager, Cisco UCS
Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000
Series, Cisco Nexus 7000 Series, Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco
MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step,
Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study,
LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking
Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet,
Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and
the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and
certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The
use of the word partner does not imply a partnership relationship between Cisco and any other company.
(0809R)
Table of Contents
Executive Summary ............................................................................................................................................................... 10
Introduction ....................................................................................................................................................................... 11
Audience ........................................................................................................................................................................... 11
Purpose of this Document ..................................................................................................................................... 11
Horizon Agent ................................................................................................................................................................ 42
SAN Boot....................................................................................................................................................................... 50
VLANs ........................................................................................................................................................................... 62
VSANs ........................................................................................................................................................................... 62
Edit Chassis Discovery Policy ........................................................................................................................................ 78
Create VSANs................................................................................................................................................................ 89
Disable Unused FCoE Capability on CNA Ports ............................................................................................................ 126
NAS Logical Interface Settings .................................................................................................................................... 142
Building the Virtual Machines and Environment for Workload Testing ............................................................................... 173
Create the Golden Image for VMware Horizon RDS Deployment .................................................................................. 181
Create the Golden Image for Horizon Linked Clone Desktops ...................................................................................... 183
VSImax 4.1.x Description ............................................................................................................................................. 219
Single-Server Recommended Maximum Workload for RDS Persistent Hosted Server Sessions: 285 Users................. 225
Single-Server Recommended Maximum Workload for VDI Persistent and Non-Persistent with 210 Users .................. 229
Cluster Workload Testing with 1680 Persistent RDS Users .............................................................................................. 233
Performance Data from One RDSH Server: 1680 Users RDSH Sessions Cluster Testing .............................................. 237
Cluster Workload Testing with 1660 Persistent and Non-Persistent VDI Instant Desktop Users ....................................... 237
Full Scale Mixed Workload Testing with 5000 Users ........................................................................................................ 243
RDSH Server Performance Monitor Data for One Sample RDSH Server: 5000 Users Mixed Scale Testing ....................... 247
AFF A300 Storage Detailed Test Results for Cluster Scalability Test ................................................................................ 255
AFF A300 Storage Test Results for 1680 RDS Windows 2016 Sessions ...................................................................... 256
NetApp AFF A300 Test Results for 1660 Non-Persistent Windows 10 x64 VMware Horizon Desktops ........................ 258
NetApp AFF A300 Test Results for 1660 Persistent Windows 10x64 VMware Horizon Desktops ................................. 260
NetApp AFF A300 Storage Test Results for 5000 User Full Scale, Mixed Workload Scalability .................................... 262
Appendix A Cisco Nexus Ethernet and MDS Fibre Channel Switch Configurations ............................................................ 271
Appendix B - 5000 Users Scale Test Host Metrics for All Workload Servers and FlexPod AFF A300 Storage Graphs .......... 324
5000 Mixed Workload VMware Horizon RDSH and VDI Non Persistent Instant Clones and Persistent Full Clones Virtual
Machines Testing............................................................................................................................................................. 324
VDI Host Metrics for 5000 Users Scale Test ................................................................................................................ 335
NETAPP AFF A300 Storage Charts for 5000 Users Scale Test ..................................................................................... 362
Executive Summary
Cisco® Validated Designs include systems and solutions that are designed, tested, and documented to
facilitate and improve customer deployments. These designs incorporate a wide range of technologies and
products into a portfolio of solutions that have been developed to address the business needs of customers.
Cisco, NetApp, and VMware have partnered to deliver this document, which serves as a specific step-by-step guide for implementing this solution. This Cisco Validated Design provides an efficient architectural
design that is based on customer requirements. The solution that follows is a validated approach for
deploying Cisco, NetApp and VMware technologies as a shared, high performance, resilient, virtual desktop
infrastructure.
This document provides a reference architecture, design guide, and deployment for up to a 5000 seat
mixed workload end user computing environment on FlexPod Datacenter with Cisco UCS and NetApp® All
Flash FAS (AFF) A300 storage. The solution includes VMware Horizon server-based Remote Desktop Server
Hosted sessions, VMware Horizon persistent Microsoft Windows 10 virtual desktops and VMware Horizon
non-persistent Microsoft Windows 10 instant clone virtual desktops on VMware vSphere 6.5.
The solution is a predesigned, best-practice data center architecture built on the FlexPod reference
architecture. The FlexPod Datacenter used in this validation includes Cisco Unified Computing System
(UCS), the Cisco Nexus® 9000 family of switches, Cisco MDS 9000 family of Fibre Channel (FC) switches
and a NetApp AFF A300 system.
This solution is 100 percent virtualized on fifth generation Cisco UCS B200 M5 blade servers, booting
VMware vSphere 6.5 Update 1 through FC SAN from the AFF A300 storage array. The virtual desktop
sessions are powered by VMware Horizon 7. VMware Horizon Remote Desktop Server Hosted Sessions
(1680 RDS server sessions), 1660 VMware Horizon instant clone, and 1660 full clone Windows 10 desktops
(3320 virtual desktops) were provisioned on the AFF A300 storage array. Where applicable the document
provides best practice recommendations and sizing guidelines for customer deployment of this solution.
This solution delivers the designed 5000-user payload with five fewer blade servers than previous 5000-seat solutions on fourth-generation Cisco UCS Blade Servers, making it more efficient and cost-effective in the data center.
The solution provides outstanding virtual desktop end user experience as measured by the Login VSI
4.1.25.6 Knowledge Worker workload running in benchmark mode.
The 5000-seat solution provides a large-scale building block that can be replicated to confidently scale-out
to tens of thousands of users.
Solution Overview
Introduction
The current industry trend in data center design is towards shared infrastructures. By using virtualization
along with pre-validated IT platforms, enterprise customers have embarked on the journey to the cloud by
moving away from application silos and toward shared infrastructure that can be quickly deployed, thereby
increasing agility and reducing costs. Cisco, NetApp storage, and VMware have partnered to deliver this
Cisco Validated Design, which uses best of breed storage, server and network components to serve as the
foundation for desktop virtualization workloads, enabling efficient architectural designs that can be quickly
and confidently deployed.
Audience
The audience for this document includes, but is not limited to: sales engineers, field consultants, professional
services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure
built to deliver IT efficiency and enable IT innovation.
What’s New?
This is the first VMware Horizon desktop virtualization Cisco Validated Design with Cisco UCS 5th generation
servers and a NetApp AFF A-Series system.
Cisco UCS B200 M5 blade servers with Intel Xeon Scalable Family processors and 2666 MHz memory
Support for the Cisco UCS 3.2(1d) release and Cisco UCS B200-M5 servers
Support for the latest release of NetApp AFF A300 hardware and NetApp ONTAP® 9.1
The data center market segment is shifting toward heavily virtualized private, hybrid and public cloud
computing models running on industry-standard systems. These environments require uniform design
points that can be repeated for ease of management and scalability.
These factors have led to the need for predesigned computing, networking and storage building blocks
optimized to lower the initial design cost, simplify management, and enable horizontal scalability and high
levels of utilization.
Enterprise Datacenter
Solution Summary
This Cisco Validated Design prescribes a defined set of hardware and software that serves as an integrated
foundation for both VMware Horizon RDSH server desktop sessions based on Microsoft Windows Server 2016 and VMware Horizon VDI persistent and non-persistent virtual machines based on the Microsoft Windows 10 operating system.
The mixed workload solution includes NetApp AFF A300 storage, Cisco Nexus® and MDS networking, the
Cisco Unified Computing System (Cisco UCS®), VMware Horizon and VMware vSphere software in a single
package. The design is space optimized such that the network, compute, and storage required can be
housed in one data center rack. Switch port density enables the networking components to accommodate
multiple compute and storage configurations of this kind.
The infrastructure is deployed to provide Fibre Channel-booted hosts with block-level access to shared
storage. The reference architecture reinforces the "wire-once" strategy, because as additional storage is
added to the architecture, no re-cabling is required from the hosts to the Cisco UCS fabric interconnect.
The combination of technologies from Cisco Systems, Inc., NetApp Inc. and VMware Inc. produced a highly
efficient, robust and affordable desktop virtualization solution for a hosted virtual desktop and hosted shared
desktop mixed deployment supporting different use cases. Key components of the solution include the
following:
More power, same size. The Cisco UCS B200 M5 half-width blade with dual 18-core 2.3 GHz Intel® Xeon® Gold 6140 processors and 768 GB of memory for VMware Horizon desktop hosts supports more virtual desktop workloads than the previously released generation of processors on the same hardware. The 18-core 2.3 GHz Intel® Xeon® Gold 6140 processors used in this study provided a balance between increased per-blade capacity and cost.
Fewer servers. Because of the increased compute power in the Cisco UCS B200 M5 servers, we
supported the 5000 seat design with 16% fewer servers compared to previous generation B200 M4s.
Fault-tolerance with high availability built into the design. The various designs are based on using
one Unified Computing System chassis with multiple Cisco UCS B200 M5 blades for virtualized
desktop and infrastructure workloads. The design provides N+1 server fault tolerance for hosted virtual
desktops, hosted shared desktops and infrastructure services.
Stress-tested to the limits during aggressive boot scenario. The 5000-user mixed RDS hosted
virtual sessions and VDI pooled shared desktop environment booted and registered with the VMware
Horizon 7 Administrator in under 20 minutes, providing our customers with an extremely fast, reliable
cold-start desktop virtualization system.
Stress-tested to the limits during simulated login storms. All 5000 simulated users logged in and
started running workloads up to steady state in 48-minutes without overwhelming the processors,
exhausting memory or exhausting the storage subsystems, providing customers with a desktop
virtualization system that can easily handle the most demanding login and startup storms.
Ultra-condensed computing for the datacenter. The rack space required to support the system is
less than a single 42U rack, conserving valuable data center floor space.
All Virtualized: This Cisco Validated Design (CVD) presents a validated design that is 100 percent virtualized on VMware ESXi 6.5. All of the virtual desktops, user data, profiles, and supporting infrastructure components, including Active Directory, Provisioning Servers, SQL Servers, VMware Horizon Connection Servers, VMware Horizon Composer Server, VMware Horizon Replica Servers, VMware Horizon Remote Desktop Server Hosted sessions, and VDI virtual machine desktops, are hosted as virtual machines. This provides customers with complete flexibility for maintenance and capacity additions because the entire system runs on the FlexPod converged infrastructure with stateless Cisco UCS Blade Servers and NetApp FC storage.
Cisco maintains industry leadership with the new Cisco UCS Manager 3.2(1d) software that simplifies scaling, guarantees consistency, and eases maintenance. Cisco's ongoing development efforts with Cisco UCS Manager, Cisco UCS Central, and Cisco UCS Director ensure that customer environments are consistent locally, across Cisco UCS Domains, and around the globe. Our software suite offers increasingly simplified operational and deployment management, and it continues to widen the span of control for customer organizations' subject matter experts in compute, storage, and network.
Our 40G unified fabric story gets additional validation on Cisco UCS 6300 Series Fabric Interconnects
as Cisco runs more challenging workload testing, while maintaining unsurpassed user response times.
NetApp AFF A300 array provides industry-leading storage solutions that efficiently handle the most
demanding I/O bursts (for example, login storms), profile management, and user data management,
deliver simple and flexible business continuance, and help reduce storage cost per desktop.
NetApp AFF A300 array provides a simple to understand storage architecture for hosting all user data
components (VMs, profiles, user data) on the same storage array.
NetApp clustered Data ONTAP software enables you to seamlessly add, upgrade, or remove storage from
the infrastructure to meet the needs of the virtual desktops.
NetApp Virtual Storage Console (VSC) for VMware vSphere hypervisor has deep integrations with
vSphere, providing easy-button automation for key storage tasks such as storage repository
provisioning, storage resizing, and data deduplication, directly from vCenter.
VMware Horizon 7. Latest and greatest virtual desktop and application product. VMware Horizon 7
follows a new unified product architecture that supports both hosted-shared desktops and
applications (RDS) and complete virtual desktops (VDI). This new VMware Horizon release simplifies
tasks associated with large-scale VDI management. This modular solution supports seamless delivery
of Windows apps and desktops as the number of users increase. In addition, Horizon enhancements
help to optimize performance and improve the user experience across a variety of endpoint device
types, from workstations to mobile devices including laptops, tablets, and smartphones.
Optimized to achieve the best possible performance and scale. For hosted shared desktop
sessions, the best performance was achieved when the number of vCPUs assigned to the VMware Horizon 7
RDS virtual machines did not exceed the number of hyper-threaded (logical) cores available on the
server. In other words, maximum performance is obtained when not overcommitting the CPU
resources for the virtual machines running virtualized RDS systems.
Provisioning desktop machines made easy. Remote Desktop Server Hosted (RDSH) shared virtual machines and VMware Horizon 7 Microsoft Windows 10 virtual machines were created for this solution using VMware Instant Clone and Composer pooled desktops.
Today's IT departments face a rapidly evolving workplace environment. The enterprise workforce is becoming increasingly diverse and geographically dispersed, including offshore contractors, distributed call center operations, knowledge and task workers, partners, consultants, and executives connecting from locations around the world at all times.
This workforce is also increasingly mobile, conducting business in traditional offices, conference rooms
across the enterprise campus, home offices, on the road, in hotels, and at the local coffee shop. This
workforce wants to use a growing array of client computing and mobile devices that they can choose based
on personal preference. These trends are increasing pressure on IT to ensure protection of corporate data
and prevent data leakage or loss through any combination of user, endpoint device, and desktop access
scenarios (Figure 1).
These challenges are compounded by desktop refresh cycles to accommodate aging PCs and bounded local storage, and by the migration to new operating systems, specifically Microsoft Windows 10, and productivity tools, specifically Microsoft Office 2016.
Some of the key drivers for desktop virtualization are increased data security and reduced TCO through
increased control and reduced management costs.
Simplified
Cisco UCS provides a radical new approach to industry-standard computing and provides the core of the
data center infrastructure for desktop virtualization. Among the many features and benefits of Cisco UCS are
the drastic reduction in the number of servers needed and in the number of cables used per server, and the
capability to rapidly deploy or reprovision servers through Cisco UCS service profiles. With fewer servers
and cables to manage and with streamlined server and virtual desktop provisioning, operations are
significantly simplified. Thousands of desktops can be provisioned in minutes with Cisco UCS Manager service profiles and storage-based cloning. This approach accelerates the time to
productivity for end users, improves business agility, and allows IT resources to be allocated to other tasks.
Cisco UCS Manager (UCSM) automates many mundane, error-prone data center operations such as
configuration and provisioning of server, network, and storage access infrastructure. In addition, Cisco UCS
B-Series Blade Servers and C-Series Rack Servers with large memory footprints enable high desktop
density that helps reduce server infrastructure requirements.
Simplification also leads to more successful desktop virtualization implementation. Cisco and its technology
partners like VMware Technologies and NetApp have developed integrated, validated architectures,
including predefined converged architecture infrastructure packages such as FlexPod. Cisco Desktop
Virtualization Solutions have been tested with VMware vSphere and VMware Horizon.
Secure
Although virtual desktops are inherently more secure than their physical predecessors, they introduce new
security challenges. Mission-critical web and application servers using a common infrastructure such as
virtual desktops are now at a higher risk for security threats. Inter-virtual-machine traffic now poses an
important security consideration that IT managers need to address, especially in dynamic environments in
which virtual machines, using VMware vMotion, move across the server infrastructure.
Desktop virtualization, therefore, significantly increases the need for virtual machine level awareness of
policy and security, especially given the dynamic and fluid nature of virtual machine mobility across an
extended computing infrastructure. The ease with which new virtual desktops can proliferate magnifies the
importance of a virtualization-aware network and security infrastructure. Cisco data center infrastructure
(Cisco UCS and Cisco Nexus Family solutions) for desktop virtualization provides strong data center,
network, and desktop security, with comprehensive security from the desktop to the hypervisor. Security is
enhanced with segmentation of virtual desktops, virtual machine aware policies and administration, and
network security across the LAN and WAN infrastructure.
Scalable
Growth of a desktop virtualization solution is all but inevitable, so a solution must be able to scale, and scale
predictably, with that growth. The Cisco Desktop Virtualization Solutions built on FlexPod Datacenter infrastructure support high virtual-desktop density (desktops per server), and additional servers and
storage scale with near-linear performance. FlexPod Datacenter provides a flexible platform for growth and
improves business agility. Cisco UCS Manager service profiles allow on-demand desktop provisioning and
make it just as easy to deploy dozens of desktops as it is to deploy thousands of desktops.
Cisco UCS servers provide near-linear performance and scale. Cisco UCS implements the patented Cisco
Extended Memory Technology to offer large memory footprints with fewer sockets (with scalability to up to 1
terabyte (TB) of memory with 2- and 4-socket servers). Using unified fabric technology as a building block,
Cisco UCS server aggregate bandwidth can scale to up to 80 Gbps per server, and the northbound Cisco
UCS fabric interconnect can output 2 terabits per second (Tbps) at line rate, helping prevent desktop
virtualization I/O and memory bottlenecks. Cisco UCS, with its high-performance, low-latency unified fabric-
based networking architecture, supports high volumes of virtual desktop traffic, including high-resolution
video and communications traffic. In addition, Cisco storage partner NetApp helps maintain data availability
and optimal performance during boot and login storms as part of the Cisco Desktop Virtualization Solutions.
Recent Cisco Validated Designs for End User Computing based on FlexPod solutions have demonstrated
scalability and performance, with up to 5000 desktops up and running in 20 minutes.
FlexPod Datacenter provides an excellent platform for growth, with transparent scaling of server, network,
and storage resources to support desktop virtualization, data center applications, and cloud computing.
The simplified deployment of Cisco UCS for desktop virtualization accelerates the time to productivity and
enhances business agility. IT staff and end users are more productive more quickly, and the business can
respond to new opportunities quickly by deploying virtual desktops whenever and wherever they are
needed. The high-performance Cisco systems and network deliver a near-native end-user experience,
allowing users to be productive anytime and anywhere.
The ultimate measure of desktop virtualization for any organization is its efficiency and effectiveness in both
the near term and the long term. The Cisco Desktop Virtualization Solutions are very efficient, allowing rapid
deployment, requiring fewer devices and cables, and reducing costs. The solutions are also very effective,
providing the services that end users need on their devices of choice while improving IT operations, control, and data security. Cisco maintains these advantages through best-in-class partnerships with leaders in virtualization and storage, and through tested and validated designs and services that help customers throughout the solution lifecycle. Long-term strategic investments also ensure a flexible, scalable, and secure architecture as the platform for desktop virtualization.
Physical Topology
Figure 2 illustrates the physical architecture.
Two Cisco UCS B200 M4 Blade Servers (two infrastructure servers hosting the infrastructure VMs)
For desktop virtualization, the deployment includes VMware Horizon 7 running on VMware vSphere 6.5.
The design is intended to provide a large-scale building block for both VMware Horizon RDS Hosted server
sessions and Windows 10 non-persistent and persistent VDI desktops in the following ratio:
The data provided in this document will allow our customers to adjust the mix of RDSH and VDI desktops to
suit their environment. For example, additional blade servers and chassis can be deployed to increase
compute capacity, additional disk shelves can be deployed to improve I/O capability and throughput, and
special hardware or software features can be added to introduce new features. This document guides you
through the detailed steps for deploying the base architecture. This procedure covers everything from
physical cabling to network, compute and storage device configurations.
Configuration Guidelines
This Cisco Validated Design provides details for deploying a fully redundant, highly available 5000 seat
mixed workload virtual desktop solution with VMware on a FlexPod Datacenter architecture. Configuration
guidelines are provided that refer the reader to which redundant component is being configured with each
step. For example, storage controller 01 and storage controller 02 are used to identify the two AFF A300 storage controllers provisioned in this document; Cisco Nexus A or Cisco Nexus B identifies the pair of Cisco Nexus switches that are configured; and Cisco MDS A or Cisco MDS B identifies the pair of Cisco MDS switches that are configured.
The Cisco UCS 6332-16UP Fabric Interconnects are similarly configured. Additionally, this document details
the steps for provisioning multiple Cisco UCS hosts, and these are identified sequentially: VM-Host-Infra-01,
VM-Host-Infra-02, VM-Host-RDSH-01, VM-Host-VDI-01 and so on. Finally, to indicate that you should
include information pertinent to your environment in a given step, <text> appears as part of the command
structure.
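To illustrate this convention, the following is a minimal, hedged PowerCLI sketch of how a step written with <text> placeholders maps to a real environment; the module, server names, and credentials shown are hypothetical examples rather than values from this validation.

# Hypothetical illustration of the <text> placeholder convention used in this guide.
# A step documented as:
#   Connect-VIServer -Server <vcenter-fqdn> -User <vsphere-admin>
# would be run with your own values, for example:
Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter.flexpod.local -User administrator@vsphere.local
# Hosts provisioned later in this guide follow the sequential naming convention described above:
Get-VMHost -Name "VM-Host-Infra-01", "VM-Host-RDSH-01" | Select-Object Name, ConnectionState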
Solution Components
This section describes the components used in the solution outlined in this study.
What is FlexPod?
FlexPod is a best practice data center architecture that includes the following components:
Cisco Unified Computing System (Cisco UCS)
Cisco Nexus switches
Cisco MDS switches
NetApp All Flash FAS (AFF) systems
These components are connected and configured according to the best practices of both Cisco and NetApp
to provide an ideal platform for running a variety of enterprise workloads with confidence. FlexPod can scale
up for greater performance and capacity (adding compute, network, or storage resources individually as
needed), or it can scale out for environments that require multiple consistent deployments (such as rolling
out of additional FlexPod stacks). The reference architecture covered in this document leverages Cisco
Nexus 9000 for the network switching element and pulls in the Cisco MDS 9000 for the SAN switching
component.
One of the key benefits of FlexPod is its ability to maintain consistency during scale. Each of the component
families shown (Cisco UCS, Cisco Nexus, and NetApp AFF) offers platform and resource options to scale the
infrastructure up or down, while supporting the same features and functionality that are required under the
configuration and connectivity best practices of FlexPod.
Why FlexPod?
The following lists the benefits of FlexPod:
Operational Simplicity
Lowest TCO
Dramatic savings in power, cooling, and space with 100 percent Flash
Enterprise-Grade Resiliency
Suitable for even large resource-intensive workloads such as real-time analytics or heavy
transactional databases
Cisco UCS is a next-generation data center platform that unites computing, networking, and storage access.
The platform, optimized for virtual environments, is designed using open industry-standard technologies and
aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 40 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It
is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management
domain.
Compute: The system is based on an entirely new class of computing system that incorporates blade
servers based on Intel® Xeon® processor E5-2600/4600 v3 and E7-2800 v3 family CPUs.
Network: The system is integrated on a low-latency, lossless, 40-Gbps unified network fabric. This
network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks,
which are separate networks today. The unified fabric lowers costs by reducing the number of network
adapters, switches, and cables needed, and by decreasing the power and cooling requirements.
Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability,
performance, and operational control of virtual environments. Cisco security, policy enforcement, and
diagnostic features are now extended into virtualized environments to better support changing
business and IT requirements.
Storage access: The system provides consolidated access to local storage, SAN storage, and
network-attached storage (NAS) over the unified fabric. With storage access unified, Cisco UCS can
access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Small Computer
System Interface over IP (iSCSI) protocols. This capability provides customers with choice for storage
access and investment protection. In addition, server administrators can preassign storage-access
policies for system connectivity to storage resources, simplifying storage connectivity and
management and helping increase productivity.
Management: Cisco UCS uniquely integrates all system components, enabling the entire solution to
be managed as a single entity by Cisco UCS Manager. The manager has an intuitive GUI, a CLI, and a
robust API for managing all system configuration processes and operations.
A cohesive, integrated system that unifies the technology in the data center; the system is managed,
serviced, and tested as a whole
Scalability through a design for hundreds of discrete servers and thousands of virtual machines and
the capability to scale I/O bandwidth to match demand
Cisco UCS Manager provides unified, embedded management of all software and hardware components of
the Cisco Unified Computing System across multiple chassis, rack servers, and thousands of virtual
machines. Cisco UCS Manager manages Cisco UCS as a single entity through an intuitive GUI, a command-
line interface (CLI), or an XML API for comprehensive access to all Cisco UCS Manager functions.
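As a brief, hedged illustration of that programmatic access, the following PowerShell sketch uses Cisco UCS PowerTool (a separate module that wraps the Cisco UCS Manager XML API) to inventory the blades in a domain; the fabric interconnect address and credentials are assumptions for illustration only.

# Minimal sketch: querying Cisco UCS Manager through its XML API with Cisco UCS PowerTool.
# The fabric interconnect name and credentials below are hypothetical.
Import-Module Cisco.UCSManager
$handle = Connect-Ucs -Name ucs-fi.flexpod.local -Credential (Get-Credential)
# List the discovered blade servers and their operational state
Get-UcsBlade | Select-Object ChassisId, SlotId, Model, OperState
Disconnect-Ucs -Ucs $handle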
The fabric interconnects provide the management and communication backbone for the Cisco UCS B-Series
Blade Servers and Cisco UCS 5100 Series Blade Server Chassis. All chassis, and therefore all blades,
attached to the fabric interconnects become part of a single, highly available management domain. In
addition, by supporting unified fabric, the Cisco UCS 6300 Series provides both LAN and SAN connectivity
for all blades in the domain.
For networking, the Cisco UCS 6300 Series uses a cut-through architecture, supporting deterministic, low-
latency, line-rate 40 Gigabit Ethernet on all ports, 2.4 plus terabit (Tb) switching capacity, and 320 Gbps of
bandwidth per chassis IOM, independent of packet size and enabled services. The product series supports
Cisco low-latency, lossless, 40 Gigabit Ethernet unified network fabric capabilities, increasing the reliability,
efficiency, and scalability of Ethernet networks. The fabric interconnects support multiple traffic classes over
a lossless Ethernet fabric, from the blade server through the interconnect. Significant TCO savings come
from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs),
cables, and switches can be consolidated.
Cisco UCS combines Cisco UCS B-Series Blade Servers and C-Series Rack Servers with networking and
storage access into a single converged system with simplified management, greater cost efficiency and
agility, and increased visibility and control. The Cisco UCS B200 M5 Blade Server is one of the newest
servers in the Cisco UCS portfolio.
The Cisco UCS B200 M5 delivers performance, flexibility, and optimization for data centers and remote sites.
This enterprise-class server offers market-leading performance, versatility, and density without compromise
for workloads ranging from web infrastructure to distributed databases. The Cisco UCS B200 M5 can quickly
deploy stateless physical and virtual workloads with the programmable ease of use of the Cisco UCS
Manager software and simplified server access with Cisco® Single Connect technology. Based on the Intel Xeon Gold 6140 processor, it offers up to 3 TB of memory using 128 GB DIMMs, up to two
disk drives, and up to 320 Gbps of I/O throughput. The Cisco UCS B200 M5 offers exceptional levels of
performance, flexibility, and I/O throughput to run your most demanding applications.
In addition, Cisco UCS has the architectural advantage of not having to power and cool excess switches,
NICs, and HBAs in each blade server chassis. With a larger power budget per blade server, it provides
uncompromised expandability and capabilities, as in the new Cisco UCS B200 M5 server with its leading
memory-slot capacity and drive capacity.
Product Overview
The Cisco UCS B200 M5 Blade Server delivers performance, flexibility, and optimization for deployments in
data centers, in the cloud, and at remote sites. This enterprise-class server offers market-leading
performance, versatility, and density without compromise for workloads including Virtual Desktop
Infrastructure (VDI), web infrastructure, distributed databases, converged infrastructure, and enterprise
applications such as Oracle and SAP HANA. The Cisco UCS B200 M5 server can quickly deploy stateless
physical and virtual workloads through programmable, easy-to-use Cisco UCS Manager software and
simplified server access through Cisco SingleConnect technology. It includes:
Intel 3D XPoint-ready support, with built-in support for next-generation nonvolatile memory
technology
Two GPUs
Main Features
The Cisco UCS B200 M5 server is a half-width blade. Up to eight servers can reside in the 6-Rack-Unit
(6RU) Cisco UCS 5108 Blade Server Chassis, offering one of the highest densities of servers per rack unit of
blade chassis in the industry. You can configure the Cisco UCS B200 M5 to meet your local storage
requirements without having to buy, power, and cool components that you do not need.
24 DIMM slots for industry-standard DDR4 memory at speeds up to 2666 MHz, with up to 3 TB of total
memory when using 128-GB DIMMs
Modular LAN On Motherboard (mLOM) card with Cisco UCS Virtual Interface Card (VIC) 1340, a 2-
port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE) capable mLOM mezzanine adapter
Optional rear mezzanine VIC with two 40-Gbps unified I/O ports or two sets of 4 x 10-Gbps unified I/O
ports, delivering 80 Gbps to the server; adapts to either 10- or 40-Gbps fabric connections
Two optional, hot-pluggable, hard-disk drives (HDDs), solid-state drives (SSDs), or NVMe 2.5-inch
drives with a choice of enterprise-class RAID or passthrough controllers
Cisco FlexStorage local drive storage subsystem, which provides flexible boot and local storage
capabilities and allows you to boot from dual, mirrored SD cards
For detailed information, refer to the Cisco UCS B200 M5 Blade Server Spec Sheet and the Cisco UCS B200
M5 Blade Server Data Sheet.
The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over
256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network
interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco®
Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric
interconnect ports to virtual machines, simplifying server virtualization deployment and management.
Figure 9 illustrates the Cisco UCS VIC 1340 Virtual Interface Cards Deployed in the Cisco UCS B-Series
B200 M5 Blade Servers.
Figure 9 Cisco UCS VIC 1340 Deployed in the Cisco UCS B200 M5
Cisco Switching
Architectural Flexibility
Includes top-of-rack or middle-of-row fiber-based server access connectivity for traditional and leaf-
spine architectures
Leaf node support for Cisco ACI architecture is provided in the roadmap
Increase scale and simplify management through Cisco Nexus 2000 Fabric Extender support
Feature Rich
Enhanced Cisco NX-OS Software is designed for performance, resiliency, scalability, manageability,
and programmability
ACI-ready infrastructure helps users take advantage of automated policy-based systems management
Cisco Nexus 9372PX-E supports IP-based endpoint group (EPG) classification in ACI mode
Simplified Operations
Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration
file installation
An intelligent API offers switch management through remote procedure calls (RPCs, using JSON or XML) over an HTTP/HTTPS infrastructure (see the sketch after this list)
Python Scripting for programmatic access to the switch command-line interface (CLI)
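The following hedged sketch shows one way to exercise that API from PowerShell by posting a JSON-RPC call to the switch's NX-API endpoint; it assumes the nxapi feature has been enabled on the switch, and the hostname and credentials are placeholders.

# Minimal sketch: issue "show version" to a Cisco Nexus 9000 through NX-API (JSON-RPC over HTTPS).
# Assumes 'feature nxapi' is enabled on the switch; the hostname and credentials are hypothetical.
$pair = "admin:MySwitchPassword"
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$body = '[{"jsonrpc":"2.0","method":"cli","params":{"cmd":"show version","version":1},"id":1}]'
$resp = Invoke-RestMethod -Uri "https://nexus-a.flexpod.local/ins" -Method Post `
        -Headers @{ Authorization = "Basic $auth" } -ContentType "application/json-rpc" -Body $body
$resp.result.body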
Investment Protection
A Cisco 40 Gb bidirectional transceiver allows reuse of an existing 10 Gigabit Ethernet multimode cabling plant for 40 Gigabit Ethernet. Support for 1 Gb and 10 Gb access connectivity is provided for data centers migrating access switching infrastructure to faster speeds. The following is supported:
6 fixed 40-Gbps QSFP+ for uplink connectivity that can be turned into 10 Gb ports through a QSFP to
SFP or SFP+ Adapter (QSA)
Latency of 1 to 2 microseconds
The Nexus 9372PX switch will go end-of-sale in February 2018. For an all-40 Gb infrastructure, we recommend substituting the Nexus 9332PQ switch, which features 32 x 40 Gb ports.
The MDS 9148S has a pay-as-you-grow model that lets you scale from a 12-port base license to 48 ports through incremental 12-port licenses, so customers pay for and activate only the required ports.
The MDS 9148S has dual power supplies and fan trays to provide physical redundancy. Software features such as ISSU and ISSD help with upgrading and downgrading code without reloading the switch and without interrupting live traffic.
Features
In-Service Software Upgrade and dual redundant hot-swappable power supplies for high availability
Specifications at-a-Glance
Performance and Port Configuration
Up to 256 buffer credits per group of 4 ports (64 per port default, 253 maximum for a single port in
the group)
Supports configurations of 12, 24, 36, or 48 active ports, with pay-as-you-grow, on-demand
licensing
Advanced Functions
VMware vSphere 6.5 introduces many enhancements to vSphere Hypervisor, VMware virtual machines,
vCenter Server, virtual storage, and virtual networking, further extending the core capabilities of the vSphere
platform.
When VMware announced vSphere 6.5, it was one of the most feature-rich releases of vSphere in quite some time. The vCenter Server Appliance takes charge in this release with several new features and a refreshed look and feel. Users of both Linux and Mac will also be ecstatic, since the installer is now supported on those platforms along with Microsoft Windows. The vCenter Server Appliance also gains features that are exclusive to it, such as:
Migration
vSphere Client
With vSphere 6.5, VMware provides a fully supported HTML5-based vSphere Client that runs alongside the vSphere Web Client. The vSphere Client is built right into vCenter Server 6.5 (both Windows and Appliance) and is enabled by default. While the vSphere Client does not yet have full feature parity, the team has prioritized many of the day-to-day tasks of administrators and continues to seek feedback on what administrators need most, and the vSphere Client is updated outside of the normal vCenter Server release cycle. To make sure it is easy and simple for customers to stay up to date, the vSphere Client can be updated without any effect on the rest of vCenter Server.
The following are some of the benefits of the new vSphere Client:
Users of the Fling have been extremely positive about its performance
Scalability Improvements
ESXi 6.5 dramatically increases the scalability of the platform. With vSphere Hypervisor 6.0, clusters
can scale to as many as 64 hosts, up from 32 in previous releases. With 64 hosts in a cluster, vSphere
6.0 can support 8000 virtual machines in a single cluster. This capability enables greater consolidation
ratios, more efficient use of VMware vSphere Distributed Resource Scheduler (DRS), and fewer
clusters that must be separately managed. Each vSphere Hypervisor 6.5 instance can support up to
480 logical CPUs, 12 terabytes (TB) of RAM, and 1024 virtual machines. By using the newest
hardware advances, ESXi 6.5 enables the virtualization of applications that previously had been
thought to be non-virtualizable.
Security Enhancements
Account management: ESXi 6.5 enables management of local accounts on the ESXi server using new ESXi CLI commands. The capability to add, list, remove, and modify accounts across all hosts in a cluster can be centrally managed using a vCenter Server system. Previously, the account and permission management functions for ESXi hosts were available only for direct host connections. The setup, removal, and listing of local permissions on ESXi servers can also be centrally managed. (A hedged PowerCLI sketch of these account and lockout settings follows this list.)
Account lockout: ESXi Host Advanced System Settings have two new options for the management
of failed local account login attempts and account lockout duration. These parameters affect
Secure Shell (SSH) and vSphere Web Services connections, but not ESXi direct console user
interface (DCUI) or console shell access.
Password complexity rules: In previous versions of ESXi, password complexity changes had to be
made by manually editing the /etc/pam.d/passwd file on each ESXi host. In vSphere 6.0, an entry in
Host Advanced System Settings enables changes to be centrally managed for all hosts in a cluster.
Improved auditability of ESXi administrator actions: Prior to vSphere 6.0, actions at the vCenter
Server level by a named user appeared in ESXi logs with the vpxuser username: for example,
[user=vpxuser]. In vSphere 6.5, all actions at the vCenter Server level for an ESXi server appear in
the ESXi logs with the vCenter Server username: for example, [user=vpxuser: DOMAIN\User]. This
approach provides a better audit trail for actions run on a vCenter Server instance that conducted
corresponding tasks on the ESXi hosts.
Flexible lockdown modes: Prior to vSphere 6.5, only one lockdown mode was available. Feedback
from customers indicated that this lockdown mode was inflexible in some use cases. With vSphere
6.5, two lockdown modes are available:
o In normal lockdown mode, DCUI access is not stopped, and users on the DCUI access list can
access the DCUI.
o In strict lockdown mode, the DCUI is stopped.
o Exception users: vSphere 6.0 offers a new function called exception users. Exception users are local accounts or Microsoft Active Directory accounts with permissions defined locally on the host to which these users have host access. These exception users are not recommended for general user accounts, but they are recommended for use by third-party applications (service accounts, for example) that need host access when either normal or strict lockdown mode is enabled. Permissions on these accounts should be set to the bare minimum required for the application to perform its task, such as an account that needs only read-only permissions on the ESXi host.
Smart card authentication to DCUI: This function is for U.S. federal customers only. It enables DCUI
login access using a Common Access Card (CAC) and Personal Identity Verification (PIV). The ESXi
host must be part of an Active Directory domain.
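The following is a minimal, hedged PowerCLI sketch of the account, lockout, and password-complexity settings described in the list above; the host name, account, and values are illustrative assumptions, not recommendations from this validation.

# Minimal sketch (PowerCLI): local account and security settings on an ESXi 6.5 host.
# The host name, user name, and setting values below are hypothetical examples.
Connect-VIServer -Server esxi-host-01.flexpod.local -User root
# Create a local ESXi account (available when connected directly to the host)
New-VMHostAccount -Id svc-backup -Password 'Str0ngPassw0rd!' -Description "Backup service account"
# Tune account lockout and password complexity through Host Advanced System Settings
$vmhost = Get-VMHost -Name esxi-host-01.flexpod.local
Get-AdvancedSetting -Entity $vmhost -Name Security.AccountLockFailures | Set-AdvancedSetting -Value 5 -Confirm:$false
Get-AdvancedSetting -Entity $vmhost -Name Security.AccountUnlockTime | Set-AdvancedSetting -Value 900 -Confirm:$false
Get-AdvancedSetting -Entity $vmhost -Name Security.PasswordQualityControl |
    Set-AdvancedSetting -Value "retry=3 min=disabled,disabled,disabled,7,7" -Confirm:$false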
VMware Horizon
VMware Horizon desktop virtualization solutions are built on a unified architecture, so they are simple to manage and flexible enough to meet the needs of all your organization's users. You use the same architecture and management tools to manage public, private, and hybrid cloud deployments as you do for on-premises deployments.
VMware Horizon Virtual machines and RDSH known as server-based hosted sessions: These are
applications hosted from Microsoft Windows servers to any type of device, including Windows PCs,
Macs, smartphones, and tablets. Some VMware editions include technologies that further optimize the
experience of using Windows applications on a mobile device by automatically translating native
mobile-device display, navigation, and controls to Windows applications; enhancing performance over
mobile networks; and enabling developers to optimize any custom Windows application for any mobile
environment.
VMware Horizon RDSH session users also known as server-hosted desktops: These are inexpensive,
locked-down Windows virtual desktops hosted from Windows server operating systems. They are well
suited for users, such as call center employees, who perform a standard set of tasks.
Instant Clones
A new type of desktop virtual machines that can be provisioned significantly faster than the
traditional Composer linked clones.
Recreating a desktop pool with a new OS image can be accomplished in a fraction of the time it
takes a Composer desktop pool because the parent image can be prepared well ahead of the
scheduled time of pool recreation.
Administrators can select the VMware Blast display protocol as the default or available protocol for
pools, farms, and entitlements.
End users can select the VMware Blast display protocol when connecting to remote desktops and
applications.
For VMware Identity Manager integration, True SSO streamlines the end-to-end login experience.
After users log in to VMware Identity Manager using a smart card or an RSA SecurID or RADIUS
token, users are not required to also enter Active Directory credentials in order to use a remote
desktop or application.
System health status for True SSO appears in the Horizon Administrator dashboard.
Can be used in a single domain, in a single forest with multiple domains, and in a multiple-forest,
multiple-domain setup.
Smart Policies
Control of the clipboard cut-and-paste, client drive redirection, USB redirection, and virtual printing
desktop features through defined policies.
Conditional policies based on user location, desktop tagging, pool name, and Horizon Client
registry values.
Configure the clipboard memory size for VMware Blast and PCoIP sessions
Horizon administrators can configure the server clipboard memory size by setting GPOs for VMware
Blast and PCoIP sessions. Horizon Client 4.1 users on Windows, Linux, and Mac OS X systems can
configure the client clipboard memory size. The effective memory size is the lesser of the server
and client clipboard memory size values.
Network recovery is now supported for VMware Blast sessions initiated from iOS, Android, Mac OS
X, Linux, and Chrome OS clients. Previously, network recovery was supported only for Windows
client sessions. If you lose your network connection unexpectedly during a VMware Blast session,
Horizon Client attempts to reconnect to the network and you can continue to use your remote
desktop or application. The network recovery feature also supports IP roaming, which means you
can resume your VMware Blast session after switching to a WiFi network.
Horizon administrators can configure Horizon Client not to display the Remember user name check box, and therefore not to remember the user's login name.
Horizon administrators can configure Connection Server to allow Horizon Client Mac OS X systems to
remember a user's user name, password, and domain information. If users choose to have their
credentials saved, the credentials are added to the login fields in Horizon Client on subsequent
connections.
Microsoft Windows 10
The Horizon User Profile Migration tool migrates Windows 7, 8/8.1, Server 2008 R2, or Server 2012
R2 user profiles to Windows 10 user profiles.
View Composer. View Composer and linked clones provide automated and efficient management
of RDS server farms.
Graphics Support. Existing 3D vDGA and GRID vGPU graphics solutions on VDI desktops have
been extended to RDS hosts, enabling graphics-intensive applications to run on RDS desktops and
Hosted Apps.
Enhanced Load Balancing. A new capability provides load balancing of server farm applications
based on memory and CPU resources.
One-Way AD Trusts. One-way AD trust domains are now supported. This feature enables
environments with limited trust relationships between domains without requiring Horizon
Connection Server to be in an external domain.
Hosted App Support. Support for application remoting allows applications to be launched using
global entitlements across a pod federation.
HTML Access (Blast) Support. Users can use HTML Access to connect to remote desktops and
applications in a Cloud Pod Architecture deployment.
Access Point is a hardened Linux-based virtual appliance that protects virtual desktop and application
resources to allow secure remote access from the Internet. Access Point provides a new
authenticating DMZ gateway to Horizon Connection Server. Smart card support on Access Point is
available as a Tech Preview. Security server will continue to be available as an alternative
configuration. For more information, see Deploying and Configuring Access Point.
FIPS
Install-time FIPS mode allows customers with high security requirements to deploy Horizon 6.
Graphics Enhancements
AMD vDGA enables vDGA pass-through graphics for AMD graphics hardware.
Horizon Administrator shows additional licensing information, including license key, named user
and concurrent connection user count.
Additional Features
Horizon Administrator security protection layer. See VMware Knowledge Base (KB) article 2144303
for more information:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144303
DPI scaling is supported if it is set at the system level and the scaling level is greater than 100.
An RDS host is a server computer that hosts applications and desktop sessions for remote access. An
RDS host can be a virtual machine or a physical server.
An RDS host has the Microsoft Remote Desktop Services role, the Microsoft Remote Desktop Session
Host service, and Horizon Agent installed. Remote Desktop Services was previously known as
Terminal Services. The Remote Desktop Session Host service allows a server to host applications and
remote desktop sessions. With Horizon Agent installed on an RDS host, users can connect to
applications and desktop sessions by using the display protocol PCoIP or Blast Extreme. Both
protocols provide an optimized user experience for the delivery of remote content, including images,
audio and video.
The performance of an RDS host depends on many factors. For information on how to tune the
performance of different versions of Windows Server, see
http://msdn.microsoft.com/library/windows/hardware/gg463392.aspx.
Horizon 7 supports at most one desktop session and one application session per user on an RDS host.
When users submit print jobs concurrently from RDS desktops or applications that are hosted on the
same RDS host, the ThinPrint server on the RDS host processes the print requests serially rather than
in parallel. This can cause a delay for some users. Note that the print server does not wait for a print
job to complete before processing the next one. Print jobs that are sent to different printers will print in
parallel.
If a user launches an application and also an RDS desktop, and both are hosted on the same RDS host,
they share the same user profile. If the user launches an application from the desktop, conflicts may
result if both applications try to access or modify the same parts of the user profile, and one of the
applications may fail to run properly.
The process of setting up applications or RDS desktops for remote access involves the following
tasks:
Installing Applications
If you plan to create application pools, you must install the applications on the RDS hosts. If you
want Horizon 7 to automatically display the list of installed applications, you must install the
applications so that they are available to all users from the Start menu. You can install an
application at any time before you create the application pool. If you plan to manually specify an
application, you can install the application at any time, either before or after creating an application
pool.
IMPORTANT: When you install an application, you must install it on all the RDS hosts in a farm and in the
same location on each RDS host. If you do not, a health warning will appear on the View Administrator
dashboard. In such a situation, if you create an application pool, users might encounter an error when they
try to run the application.
When you create an application pool, Horizon 7 automatically displays the applications that are available
to all users rather than individual users from the Start menu on all of the RDS hosts in a farm. You can
choose any applications from that list. In addition, you can manually specify an application that is not
available to all users from the Start menu. There is no limit on the number of applications that you can install on an RDS host.
RDS Hosts
RDS hosts are server computers that have Windows Remote Desktop Services and View Agent
installed. These servers host applications and desktop sessions that users can access remotely. To
use RDS desktop pools or applications, your end users must have access to Horizon Client 3.0 or later
software.
Desktop Pools
There are three types of desktop pools: automated, manual, and RDS. Automated desktop pools use a
vCenter Server virtual machine template or snapshot to create a pool of identical virtual machines.
Manual desktop pools are a collection of existing vCenter Server virtual machines, physical computers,
or third-party virtual machines. In automated or manual pools, each machine is available for one user
to access remotely at a time. RDS desktop pools are not a collection of machines, but instead, provide
users with desktop sessions on RDS hosts. Multiple users can have desktop sessions on an RDS host
simultaneously.
Application Pools
Application pools let you deliver applications to many users. The applications in application pools run
on a farm of RDS hosts.
Farms
Farms are collections of RDS hosts and facilitate the management of those hosts. Farms can have a
variable number of RDS hosts and provide a common set of applications or RDS desktops to users.
When you create an RDS desktop pool or an application pool, you must specify a farm. The RDS hosts
in the farm provide desktop and application sessions to users.
Flash Redirection
You can compile a black list to ensure that the URLs specified in the list will not be able to redirect
Flash content. You must enable the GPO setting FlashMMRUrlListEnableType to use either a white
list or black list.
The VMwareAgentCIT policy setting enables remote connections to Internet Explorer to use the
Client's IP address instead of the IP address of the remote desktop machine.
The FlashMMRUrlListEnableType and FlashMMRUrlList policy settings specify and control the white
list or black list that enables or disables the list of URLs from using Flash Redirection.
Horizon PowerCLI
View PowerCLI is deprecated. Horizon PowerCLI replaces View PowerCLI and includes cmdlets
that you can use with VMware PowerCLI.
For more information about Horizon PowerCLI cmdlets, read the VMware PowerCLI Cmdlets
Reference.
For information on the API specifications to create advanced functions and scripts to use with
Horizon PowerCLI, see the API Reference at the VMware Developer Center
For more information on sample scripts that you can use to create your own Horizon PowerCLI
scripts, see the Horizon PowerCLI community on GitHub.
KDE support
K Desktop Environment (KDE) support is now also available on CentOS 7, RHEL 7, Ubuntu 14.04,
Ubuntu 16.04, and SLED 11 SP4 platforms.
MATE support
MATE desktop environment is supported on Ubuntu 14.04 and 16.04 virtual machines.
Horizon Agent
HTML5 Multimedia Redirection
You can install the HTML5 Multimedia Redirection feature by selecting the HTML5 Multimedia
Redirection custom setup option in the Horizon Agent installer. With HTML5 Multimedia Redirection, if
an end user uses the Chrome browser, HTML5 multimedia content is sent from the remote desktop to
the client system, reducing the load on the ESXi host. The client system plays the multimedia content
and the user has a better audio and video experience.
SHA-256 support
Horizon Agent has been updated to support the SHA-256 cryptographic hash algorithm. SHA-256 is
also supported in Horizon Client 4.6 and Horizon 7 version 7.2 and later.
The default User Environment Manager timeout value has been increased. This change makes sure
that the USB redirection smart policy takes effect even when the login process takes a long time.
With Horizon Client 4.6, the User Environment Manager timeout value is configured only on the
agent and is sent from the agent to the client.
You can now bypass User Environment Manager control of USB redirection by setting a registry key
on the agent machine. This change helps ensure that smart card SSO works on Teradici zero
clients.
Composer
For enhanced security, you can enable the digest access authentication method for Composer.
Persona Management
Persona Management supports guest operating systems that use the "v6" version of the user
profile.
You can use the migration tool to migrate the "v2" and "v5" user profiles versions to the "v6" user
profile version. The tool is installed with the Persona binary file.
View application and process names and resource use within a virtual or published desktop to
identify which applications and processes are using up machine resources.
View updated metrics such as Horizon Client version and the Blast protocol.
View additional session metrics such as the VM information, CPU, or memory usage.
You can assign predefined administrator roles to Horizon Help Desk Tool administrators to delegate
the troubleshooting tasks between administrator users. You can also create custom roles and add
privileges based on the predefined administrator roles.
You can verify the product license key for Horizon Help Desk Tool and apply a valid license.
Monitoring
If the event database shuts down, Horizon Administrator maintains an audit trail of the events that
occur before and after the event database shutdown.
Instant Clones
Windows Server operating systems are supported for instant clones in this release. For an updated
list of supported Windows Server operating systems, see the VMware Knowledge Base (KB) article
2150295.
You can copy, paste, or enter the path for the AD tree in the AD container field when you create an
instant-clone desktop pool.
If there are no internal VMs in all four internal folders created in vSphere Web Client, these folders
are unprotected and you can delete these folders.
You can use the enhanced instant-clone maintenance utility IcUnprotect.cmd to unprotect or delete
template, replica, or parent VMs or folders from vSphere hosts.
Instant clones are compatible with Storage DRS (sDRS). Therefore, instant clones can reside in a
datastore that is part of an sDRS cluster.
You can configure Windows Start menu shortcuts for global entitlements. When an entitled user
connects to a Connection Server instance in the pod federation, Horizon Client for Windows places
these shortcuts in the Start menu on the user's Windows client device.
You can restrict access to entitled desktop pools, application pools, global entitlements, and global
application entitlements from certain client computers.
You can configure Windows start menu shortcuts for entitled desktop and application pools. When
an entitled user connects to a Connection Server instance, Horizon Client for Windows places these
shortcuts in the Start menu on the user's Windows client device.
Blast Extreme provides network continuity during momentary network loss on Windows clients.
Performance counters displayed using PerfMon on Windows agents for Blast session, imaging,
audio, CDR, USB, and virtual printing provide an accurate representation of the current state of the
system that also updates at a constant rate.
Details regarding the data collected through CEIP and the purposes for which it is used by VMware
are set forth at the Trust Assurance Center.
Security
With the USB over Session Enhancement SDK feature, you do not need to open TCP port 32111 for
USB traffic in a DMZ-based security server deployment. This feature is supported for both virtual
desktops and published desktops on RDS hosts.
Database Support
The Always On Availability Groups feature for Microsoft SQL Server 2014 is supported in this
release of Horizon 7. For more information, refer to the Release Notes.
Windows 10 version 1703, Semi-Annual Channel (broad deployment), formerly known as Current Branch for Business (CBB)
For the complete list of Windows 10 versions supported on Horizon across all VDI desktop types (full clones,
linked clones, and instant clones), see the following links: https://kb.vmware.com/s/article/2149393 and
https://kb.vmware.com/s/article/2150295?r=2&Quarterback.validateRoute=1&KM_Utility.getArticleData=1&KM_Utility.getGUser=1&KM_Utility.getArticleLanguage=2&KM_Utility.getArticle=1
-streaming technology. After the composed pooled desktops are installed and configured, a snapshot of the
shared master disk image (the OS and application image) is taken, and that snapshot file is made accessible
to the host(s).
Figure 15 Horizon VDI and RDSH Desktop Delivery Based on Display Protocol (PCoIP/Blast/RDP)
The NetApp AFF A-Series lineup includes the A200, A300, A700, and A700s. These controllers and their
specifications are listed in Table 5. For more information about the A-Series AFF controllers, see:
http://www.netapp.com/us/products/storage-systems/all-flash-array/aff-a-series.aspx
https://hwu.netapp.com/Controller/Index
               AFF A200    AFF A300     AFF A700     AFF A700s
NAS Scale-out  2-8 nodes   2-24 nodes   2-24 nodes   2-24 nodes
SAN Scale-out  2-8 nodes   2-12 nodes   2-12 nodes   2-12 nodes
This solution utilizes the NetApp AFF A300, seen in Figure 8 and Figure 9. This controller provides the high-
performance benefits of 40GbE and all flash SSDs, offering better performance than previous models, and
occupying only 3U of rack space versus 6U with the AFF8040. When combined with the 2U disk shelf of
3.8TB disks, this solution can provide ample horsepower and over 90TB of raw capacity, all while occupying
only 5U of valuable rack space. This makes it an ideal controller for a shared workload converged
infrastructure. The A700s would be an ideal fit for situations where more performance is needed.
The FlexPod reference architecture supports a variety of NetApp FAS controllers such as the FAS9000,
FAS8000, FAS2600, and FAS2500; AFF platforms such as the AFF8000; and legacy NetApp storage.
For more information about the AFF A-Series product family, see:
http://www.netapp.com/us/products/storage-systems/all-flash-array/aff-a-series.aspx
The 40GbE cards are installed in expansion slot 2, and the ports are e2a and e2e.
Storage Efficiency
Storage efficiency has always been a primary architectural design point of ONTAP. A wide array of features
allows businesses to store more data using less space. In addition to deduplication and compression,
businesses can store their data more efficiently by using features such as unified storage, multi-tenancy, thin
provisioning, and NetApp Snapshot® technology.
Starting with ONTAP 9, NetApp guarantees that the use of NetApp storage efficiency technologies on AFF
systems reduces the total logical capacity used to store customer data by 75 percent, a data reduction ratio
of 4:1. This space reduction is a combination of several different technologies, such as deduplication,
compression, and compaction, which provide additional reduction beyond the basic features provided by ONTAP.
Compaction, which was introduced in ONTAP 9, is the latest patented storage efficiency technology released
by NetApp. In the NetApp WAFL® file system, all I/O takes up 4KB of space, even if it does not actually
require 4KB of data. Compaction combines multiple blocks that are not using their full 4KB of space together
into one block. This one block can be stored more efficiently on the disk to save space. This process is
illustrated in Figure 18.
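As a point of reference, the space savings delivered by these technologies can be reviewed from the ONTAP CLI
once the system is in production. The following commands are a minimal sketch only; they assume an ONTAP 9.x
release that supports aggregate-level efficiency reporting and the Infra-SVM storage virtual machine used later
in this document, and they only report efficiency without changing any configuration:
storage aggregate show-efficiency
volume efficiency show -vserver Infra-SVM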
An SVM can support multiple data protocols concurrently. Volumes within the SVM can be joined together to
form a single NAS namespace, which makes all of an SVM's data available through a single share or mount
point to NFS and CIFS clients. SVMs also support block-based protocols, and LUNs can be created and
exported by using iSCSI, FC, or FCoE. Any or all of these data protocols can be configured for use within a
given SVM.
Because it is a secure entity, an SVM is only aware of the resources that are assigned to it and has no
knowledge of other SVMs and their respective resources. Each SVM operates as a separate and distinct
entity with its own security domain. Tenants can manage the resources allocated to them through a
delegated SVM administration account. Each SVM can connect to unique authentication zones such as
Active Directory, LDAP, or NIS. A NetApp cluster can contain multiple SVMs. If you have multiple SVMs, you
can delegate an SVM to a specific application. This allows administrators of the application to access only
the dedicated SVMs and associated storage, increasing manageability, and reducing risk.
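For readers working from the CLI rather than System Manager, the following commands illustrate how an SVM of
this kind is typically created and verified. This is a reference sketch only; the SVM name Infra-SVM and the
aggregate name aggr1_node01 follow conventions used elsewhere in this document, while the root volume name
rootvol is an assumption and should be adjusted to match your environment:
vserver create -vserver Infra-SVM -rootvolume rootvol -aggregate aggr1_node01 -rootvolume-security-style unix
vserver show -vserver Infra-SVM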
SAN Boot
NetApp recommends implementing SAN boot for Cisco UCS servers in the FlexPod Datacenter solution.
Doing so enables the operating system to be safely secured by the NetApp All Flash FAS storage system,
providing better performance. In this design, FC SAN boot is validated.
In FC SAN boot, each Cisco UCS server boots by connecting the NetApp All Flash FAS storage to the Cisco
MDS switch. The 16G FC storage ports, in this example 0g and 0h, are connected to Cisco MDS switch. The
FC LIFs are created on the physical ports and each FC LIF is uniquely identified by its target WWPN. The
storage system target WWPNs can be zoned with the server initiator WWPNs in the Cisco MDS switches.
The FC boot LUN is exposed to the servers through the FC LIF using the MDS switch; this enables only the
authorized server to have access to the boot LUN. Refer to Figure 19 for the port and LIF layout.
Unlike NAS network interfaces, the SAN network interfaces are not configured to fail over during a failure.
Instead, if a network interface becomes unavailable, the host chooses a new optimized path to an available
network interface. ALUA is a standard supported by NetApp that provides information about SCSI targets,
which allows a host to identify the best path to the storage.
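As an illustration, the following ONTAP commands can be used to confirm the FC LIFs that present these paths
and to verify which nodes report paths for a given LUN (Selective LUN Map). They assume the Infra-SVM naming
used later in this document and are shown for reference only:
network interface show -vserver Infra-SVM -data-protocol fcp -fields home-node,home-port,status-oper
lun mapping show -vserver Infra-SVM -fields reporting-nodes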
Architecture and Design Considerations for Desktop Virtualization
Knowledge Workers today do not just work in their offices all day; they attend meetings, visit branch
offices, work from home, and even coffee shops. These anywhere workers expect access to all of
their same applications and data wherever they are.
External Contractors are increasingly part of your everyday business. They need access to certain
portions of your applications and data, yet administrators still have little control over the devices they
use and the locations they work from. Consequently, IT is stuck making trade-offs on the cost of
providing these workers a device vs. the security risk of allowing them access from their own devices.
Task Workers perform a set of well-defined tasks. These workers access a small set of applications
and have limited requirements from their PCs. However, since these workers are interacting with your
customers, partners, and employees, they have access to your most critical data.
Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to
connect to a network. In addition, these workers expect the ability to personalize their PCs, by
installing their own applications and storing their own data, such as photos and music, on these
devices.
Shared Workstation users are often found in state-of-the-art university and business computer labs,
conference rooms, or training centers. Shared workstation environments have the constant
requirement to re-provision desktops with the latest operating systems and applications as the needs
of the organization change.
After the user classifications have been identified and the business requirements for each user classification
have been defined, it becomes essential to evaluate the types of virtual desktops that are needed based on
user requirements. There are essentially five potential desktop environments for each user:
Traditional PC: A traditional PC is what typically constitutes a desktop environment: physical device
with a locally installed operating system.
Hosted Shared Desktop: A hosted, server-based desktop is a desktop where the user interacts
through a delivery protocol. With hosted, server-based desktops, a single installed instance of a
server operating system, such as Microsoft Windows Server 2016, is shared by multiple users
simultaneously. Each user receives a desktop "session" and works in an isolated memory space.
Remote Desktop Server Hosted sessions: A hosted virtual desktop is a virtual desktop running
on a virtualization layer (ESXi). The user does not sit in front of the desktop; instead,
the user interacts with it through a delivery protocol.
Published Applications: Published applications run entirely on the VMware Horizon RDS hosted server
virtual machines and the user interacts through a delivery protocol. With published applications, a
single installed instance of an application, such as Microsoft Office, is shared by multiple users
simultaneously. Each user receives an application "session" and works in an isolated memory space.
Streamed Applications: Streamed desktops and applications run entirely on the user's local client
device and are sent from a server on demand. The user interacts with the application or desktop
directly, but the resources may only be available while the device is connected to the network.
Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user's local device
and continues to operate when disconnected from the network. In this case, the user's local device is
used as a type 1 hypervisor and is synced with the data center when the device is connected to the
network.
For the purposes of the validation represented in this document, both VMware Horizon hosted virtual
desktops and Remote Desktop Server Hosted sessions were validated. Each of the sections provides
some fundamental design decisions for this environment.
The process of analyzing the variety of application and data pairs for an organization will likely be
complicated by the inclusion of cloud applications, for example, SalesForce.com. This application and data
analysis is beyond the scope of this Cisco Validated Design, but should not be omitted from the planning
process. There are a variety of third party tools available to assist organizations with this crucial exercise.
Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications
and data?
Are the required skill sets to execute the VDI project available? Can we hire or contract for them?
Do we have end user experience performance metrics identified for each desktop sub-group?
Below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group:
How many virtual desktops will be deployed in the pilot? In production? All Windows 8/10?
Will VMware Horizon RDSH be used for hosted shared server applications? If so, which applications
will be installed?
What is the server OS planned for the RDS server roles? Windows Server 2012 or Windows Server 2016?
Will VMware Horizon Composer or Instant Clones or another method be used for virtual desktop
deployment?
Are there sufficient IOPS available for the write-intensive VDI workload?
What is the SQL Server version for the database? SQL Server 2012 or 2016?
Is user profile management (for example, non-roaming profile based) part of the solution?
Hypervisor Selection
VMware vSphere has been identified as the hypervisor for both RDS Hosted Sessions and VDI based desktops:
VMware vSphere: VMware vSphere comprises the management infrastructure or virtual center server
software and the hypervisor software that virtualizes the hardware resources on the servers. It offers
features like Distributed Resource Scheduler, vMotion, high availability, Storage vMotion, VMFS, and a
multi-pathing storage layer. More information on vSphere can be obtained at the VMware web site:
http://www.vmware.com/products/datacentervirtualization/vsphere/overview.html.
For this CVD, the hypervisor used was VMware ESXi 6.5 Update 1.
Server OS and Desktop OS machines were configured in this CVD to support Remote Desktop Server
Hosted (RDSH) shared sessions and Hosted Virtual Desktops (both non-persistent and persistent).
Application types: Applications that might not work well with other
applications or might interact with the operating system, such as the
.NET Framework. These types of applications are ideal for hosting
on virtual machines.
For the Cisco Validated Design described in this document, a mix of Remote Desktop Server Hosted
sessions (RDSH) using RDS based Server OS and VMware Horizon pooled Linked Clone Virtual Machine
Desktops using VDI based desktop OS machines were configured and tested.
The mix consisted of a combination of both use cases. The following sections discuss design decisions
relative to the VMware Horizon deployment, including the CVD test environment.
Solution Hardware and Software
Products Deployed
While each customer's environment might vary in its exact configuration, the reference architecture contained
in this document, once built, can easily be scaled as requirements and demands change. This includes scaling
both up (adding additional resources within a Cisco UCS Domain) and out (adding additional Cisco UCS Domains
and NetApp AFF A300 storage arrays).
The FlexPod Datacenter solution includes Cisco networking, Cisco UCS and NetApp AFF A300 storage,
which efficiently fit into a single data center rack, including the access layer network switches.
This CVD details the deployment of multiple configurations extending to 5000 users for a mixed VMware
Horizon desktop workload featuring the following software:
VMware Horizon 7 Shared Remote Desktop Server Hosted Sessions (RDSH) on NetApp AFF A300 FC
storage
VMware Horizon 7 Non-Persistent Virtual Desktops (VDI) on NetApp AFF A300 on FC storage
VMware Horizon 7 Persistent Virtual Desktops (VDI) on NetApp AFF A300 on FC storage
Windows 10 64-bit virtual machine Operating Systems for Non- Persistent and Persistent virtual
machine users
Figure 20 details the physical hardware and cabling deployed to enable this solution.
Figure 20 Virtual Desktop Workload Architecture for the 5000-Seat VMware Horizon 7 Deployment on FlexPod
Hardware Deployed
The solution contains the following hardware as shown in Figure 20:
Four Cisco UCS 5108 Blade Server Chassis with two Cisco UCS-IOM-2304 IO Modules
Two Cisco UCS B200 M4 Blade servers with Intel Xeon E5-2660v3 2.60-GHz 10-core processors,
128GB 2133MHz RAM, and one Cisco VIC1340 mezzanine card for the hosted infrastructure,
providing N+1 server fault tolerance
Seven Cisco UCS B200 M5 Blade Servers with Intel Xeon Gold 6140 2.30-GHz 18-core processors,
768GB 2666MHz RAM, and one Cisco VIC1340 mezzanine card for the VMware Horizon Remote
Desktop Server Hosted Sessions workload, providing N+1 server fault tolerance at the workload
cluster level
Nine Cisco UCS B200 M5 Blade Servers with Intel Xeon Gold 6140 2.30-GHz 18-core processors,
768GB 2666MHz RAM, and one Cisco VIC1340 mezzanine card for the VMware Horizon Instant Clones
VDI desktops workload, providing N+1 server fault tolerance at the workload cluster level
Nine Cisco UCS B200 M5 Blade Servers with Intel Xeon Gold 6140 2.30-GHz 18-core processors,
768GB 2666MHz RAM, and one Cisco VIC1340 mezzanine card for the VMware Horizon Full Clones
VDI desktops workload, providing N+1 server fault tolerance at the workload cluster level
NetApp AFF A300 Storage System with dual redundant controllers, 1x DS224C disk shelf, and 24x 3.8
TB solid-state drives.
Table 7 lists the software and firmware versions used in the study.
Logical Architecture
The logical architecture of this solution is designed to support up to 5000 users within four Cisco UCS 5108
Blade server chassis containing 27 blades, which provides physical redundancy for the blade servers for
each workload type.
Figure 21 outlines the logical architecture of the test environment, including the Login VSI session launcher
self-contained end user experience benchmarking platform.
This document is intended to allow you to fully configure your environment. In this process, various
steps require you to insert customer-specific naming conventions, IP addresses, and VLAN schemes,
as well as to record appropriate MAC addresses.
Figure 22 identifies the server roles in the 27 server deployment to support the 5000 seat workload. We also
break out the infrastructure virtual machine fault tolerant design.
Chassis-1/2/3, Slot 7 and Chassis-4 Slot 7 and 8 are not being used.
VLANs
The VLAN configuration recommended for the environment includes a total of seven VLANs as outlined in
Table 9.
VSANs
Two virtual SANs for communications and fault tolerance were used in this design.
VMware Clusters
The following four VMware Clusters were used in one vCenter data center to support the solution and testing
environment:
Infrastructure Cluster: Infrastructure VMs (vCenter Server Appliance, Active Directory (2), DNS, DHCP, VMware
Horizon Connection Servers, VMware Horizon Replica Servers, VMware Horizon Composer Server, and
Microsoft SQL Server).
RDSH: VMware Horizon RDSH (Remote Desktop Server Hosted) VMs (Windows Server 2016 RDS
Roles) provisioned with VMware View Composer.
VDI Non-Persistent: VMware Horizon VDI VMs (Windows 10 64-bit non-persistent Instant Clone
virtual desktops).
VDI Persistent: VMware Horizon VDI VMs (Windows 10 64-bit persistent virtual desktops
provisioned with VMware Horizon Composer).
Launcher Cluster: Login VSI Cluster (The Login VSI launcher infrastructure was connected using the
same set of switches and vCenter instance, but was hosted on separate local storage and servers)
Solution Configuration
This section details the configuration and tuning that was performed on the individual components to
produce a complete, validated solution. Figure 24 illustrates the configuration topology for this solution.
Component Layers
Figure 24 Solution Component Layers
Figure 24 above captures the architectural diagram for the purpose of this study. The architecture is divided
into three distinct layers:
Solution Cabling
The following sections detail the physical connectivity configuration of the FlexPod 5000 seat VMware
Horizon 7 environment.
The information in this section is provided as a reference for cabling the physical equipment in this Cisco
Validated Design environment. To simplify cabling requirements, the tables include both local and remote
device and port locations.
The tables in this section contain the details for the prescribed and supported configuration of the AFF A300
to the Cisco 6332-16UP Fabric Interconnects via Cisco MDS 9148S FC switches.
This document assumes that out-of-band management ports are plugged into an existing management
infrastructure at the deployment site. These interfaces will be used in various configuration steps.
Be sure to follow the cabling directions in this section. Failure to do so will result in necessary changes
to the deployment procedures that follow because specific port locations are mentioned.
Figure 25 illustrates a cabling diagram for a VMware Horizon configuration using the Cisco Nexus 9000,
Cisco MDS 9100 Series, and NetApp AFF A300 array.
For devices requiring GbE connectivity, use the GbE Copper SFP+s (GLC-T=).
Figure 26 Cable connectivity between Cisco UCS 6332-16UP Fabric Interconnects and Cisco Nexus
9372PX Switches
We used four 16Gb FC connections from each Fabric Interconnect to each MDS switch and utilized two
16Gb FC connections from the AFF A300 storage controller to each MDS switch.
Figure 28 Fibre Channel Cable Connectivity from AFF A300 to Cisco MDS 9148S to Cisco 6332 -16UP
Fabric Interconnects
www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-installation-guides-list.html) and it is beyond the scope of this document.
For more information about each step, refer to the Cisco UCS Manager Configuration Guides, GUI and
Command Line Interface (CLI): Cisco UCS Manager - Configuration Guides - Cisco
1. Connect a console cable to the console port on what will become the primary fabric interconnect.
2. If the fabric interconnect was previously deployed and you want to erase it in order to redeploy it, follow
these steps:
4.
5.
6.
9.
10. Console onto the second fabric interconnect, select console as the configuration method, and provide the
following inputs.
11. Open a web browser and go to the Virtual IP address configured above.
1. Open a web browser and navigate to the Cisco UCS 6332-16UP Fabric Interconnect cluster address.
2. Click the Launch UCS Manager link to download the Cisco UCS Manager software.
4. When prompted, enter admin as the user name and enter the administrative password.
1. On the Equipment tab, expand the Fabric Interconnects node and click Fabric Interconnect A.
2. On the General tab in the Actions pane, click Set FC End Host mode.
Both Fabric Interconnects automatically reboot sequentially when you confirm you want to operate in
this mode.
1. After the restarts are complete, from the General tab, Actions pane, click Configure Unified ports.
Ports to the right of the slider will become FC ports. For our study, we configured the last four ports on the
Expansion Module as FC ports.
5. Click Finish, then click Yes to confirm. This action will cause a reboot of the Expansion Module.
After the expansion module reboot, your FC Ports configuration should look like the screenshot below:
7. Insert Cisco SFP 16 Gbps FC (DS-SFP-FC16-SW) modules into ports 1 through 4 on both Fabric
Interconnects and cable as prescribed later in this document.
1. In Cisco UCS Manager, in the navigation pane, click the Equipment node and select Equipment in the list
on the left.
6. Click OK.
1. In Cisco UCS Manager, in the navigation pane, click the Equipment tab.
1. In Cisco UCS Manager, in the navigation pane, click the Admin tab.
3. In the Properties pane, select the appropriate time zone in the Timezone menu.
7. Click OK.
1. In Cisco UCS Manager, in the navigation pane, click the Equipment tab.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module.
4. Select ports 17 through 24 that are connected to the Cisco IO Modules of the four B-Series 5108 Chassis,
right-click them, and select Configure as Server Port.
6. In the left pane, navigate to Fabric Interconnect A. In the right pane, navigate to the Physical Ports tab >
Ethernet Ports tab. Confirm that the ports have been configured correctly in the Role column.
7. Repeat the above steps for Fabric Interconnect B. The screenshot below shows the server ports for
Fabric B.
To configure the network ports used to uplink the Fabric Interconnects to the Cisco Nexus 9372PX switches,
complete the following steps:
1. In Cisco UCS Manager, in the navigation pane, click the Equipment tab.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module.
4. Select ports 35 through 36 that are connected to the Nexus 9372PX switches, right-click them, and
select Configure as Network Port.
6. In the left pane, navigate to Fabric Interconnect A. In the right pane, navigate to the Physical Ports tab >
Ethernet Ports tab. Confirm that the ports have been configured correctly in the Role column.
7. Verify the Ports connected to Cisco Nexus upstream switches are now configured as network ports.
8. Successful configuration should result in ports 35-36 configured as network ports as shown in the
screen shot below:
9. Repeat the above steps for Fabric Interconnect B. The screenshot shows the network uplink ports for
Fabric B.
To configure the necessary port channels in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
7. Click Next.
9. Click Finish.
Repeat steps 1-9 for Fabric Interconnect B, substituting 12 for the port channel number and FI-B-Uplink for
the name. The configuration should look like the screenshot below:
To configure the necessary port channels in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Under SAN > SAN Cloud, right-click Fabric A and select Create Resource Pools.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
7. Enter the seed MAC address and provide the number of MAC addresses to be provisioned.
1. Click the LAN tab in UCS Manager, expand the Pools node, expand the root node, right-click IP Pools,
then click Create IP Pool.
4. Complete the starting IP address, size, subnet mask, default gateway, primary and secondary DNS
values for your network, then click OK.
5. Click Finish.
6. Click OK.
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
3. Under WWPN Pools, right click WWPN Pools and select Create WWPN Pool.
6. Click Next.
9. Click Finish.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
8. Click Next.
11. Specify a size for the UUID block that is sufficient to support the available blade or server resources.
Consider creating unique server pools to achieve the granularity that is required in your environment.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
7. Click Next.
8. Select two (or more) servers to be used for the VMware management cluster and click >> to add them to
the Infra_Pool server pool.
9. Click Finish.
11. Create additional server pools for Horizon Linked Clone servers and Horizon RDSH servers.
Create VLANs
To configure the necessary virtual local area networks (VLANs) for the Cisco UCS environment, complete the
following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
3. Right-click VLANs.
5. Enter MGMT as the name of the VLAN to be used for in-band management traffic.
6. Keep the Common/Global option selected for the scope of the VLAN.
10. Repeat the above steps to create all VLANs and configure the Default VLAN as native.
Create VSANs
To configure the necessary virtual storage area networks (VSANs) for the Cisco UCS environment, complete
the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
In this procedure, two VSANs are created. When these VSANs are created, be sure to add them to the
port-channel uplink created earlier.
5. Enter VSAN-400-A as the name of the VSAN to be used for in-band management traffic.
9. Repeat the above steps on Fabric B with VSAN-401-B to create the VSANs necessary for this solution.
11. Go to the Port-Channel for each Fabric and assign the VSAN appropriately.
To create a firmware management policy for a given server configuration in the Cisco UCS environment,
complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
4. On the Best Effort row, enter 9216 in the box under the MTU column.
6. Click OK.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
7. Click Finish.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
10. Under VLANs, select the checkboxes for MGMT, Default, Infra, VDI and vMotion.
26. Under VLANs, select the checkboxes for MGMT, Default, VDI, Infra, and vMotion.
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2.
3.
The name needs to match exactly. We will use the vHBA templates created in the previous step.
5. Add the SAN Boot Targets to the primary and secondary. The SAN boot targets will also include primary
and secondary options in order to maximize resiliency and number of paths.
6. Using the following command, find and record the WWPN for each FC LIF:
7. When the AFF A300 WWNs have been recorded, use fcp_01a for the first Boot Target WWPN:
8. Add a secondary SAN Boot Target by clicking Add SAN Boot Target to SAN Primary while the primary
SAN Boot option is highlighted. This time enter the AFF A300 WWPN for fcp_02a.
9. Repeat these steps for the secondary SAN boot target and use WWPN fcp_01b and fcp_02b in the
primary and secondary SAN boot options.
10. For information about configuring boot and data LUNs on the NetApp A300 storage system, please refer
to section NetApp A300 Storage System Configuration.
4. Select the UUID pool created earlier from the dropdown in the UUID Assignment dialog.
5. Click Next.
7. Select the Expert radio button.
8. Click Add.
9. Name it vNIC-A.
27. Use the Boot Policy drop-down list to select the Boot Policy created earlier, then click Finish.
1. From the Servers tab in UCS Manager, under the Service Profile Templates node, right-click the Service
Profile Template created in the previous step, then click Create Service Profiles from Template.
2. Provide a naming prefix, a starting number, and the number of service profiles to create, then click OK.
The requested number of service profiles (for example, 25) are created in the Service Profiles root
organization.
Site Preparation
Circuit Breaker, Power Outlet Balancing, System Cabinet Power Cord Plugs, and Console Pinout
Requirements
AFF Systems
1. Confirm that the hardware and software components that you would like to use are supported with the
version of ONTAP that you plan to install by using the HWU application at the NetApp Support site.
2. Access the HWU application to view the System Configuration guides. Click the Controllers tab to view
the compatibility between different version of the ONTAP software and the NetApp storage appliances
with your desired specifications.
Controllers
Follow the physical installation procedures for the controllers found in the AFF A300 Series product
documentation at the NetApp Support site.
Disk Shelves
NetApp storage systems support a wide variety of disk shelves and disk drives. The complete list of disk
shelves that are supported by the AFF A300 is available at the NetApp Support site.
When using SAS disk shelves with NetApp storage controllers, refer to the SAS Disk Shelves Universal SAS
and ACP Cabling Guide for cabling guidelines.
Configure Node 01
To configure node 01, complete the following steps:
1. Connect to the storage system console port. You should see a Loader-A prompt. However, if the storage
system is in a reboot loop, press Ctrl-C to exit the autoboot loop when the following message displays:
autoboot
If ONTAP 9.1 is not the version of software being booted, continue with the following steps to install
new software. If ONTAP 9.1 is the version being booted, select option 8 and y to reboot the node.
Then continue with step 14.
6. Select e0M for the network port you want to use for the download.
<url-boot-software>
10. Press Enter for the user name, indicating no user name.
11. Enter y to set the newly installed software as the default to be used for subsequent reboots.
When installing new software, the system might perform firmware upgrades to the BIOS and adapter
cards, causing reboots and possible stops at the Loader-A prompt. If these actions occur, the system
might deviate from this procedure.
14. Select option 4 for Clean Configuration and Initialize All Disks.
15. Enter y to zero disks, reset config, and install a new file system.
The initialization and creation of the root aggregate can take 90 minutes or more to complete, depending
on the number and type of disks attached. When initialization is complete, the storage system reboots.
Note that SSDs take considerably less time to initialize. You can continue with the node 02 configuration
while the disks for node 01 are zeroing.
Configure Node 02
To configure node 02, complete the following steps:
1. Connect to the storage system console port. You should see a Loader-A prompt. However, if the storage
system is in a reboot loop, press Ctrl-C to exit the autoboot loop when the following message displays:
autoboot
If ONTAP 9.1 is not the version of software being booted, continue with the following steps to install
new software. If ONTAP 9.1 is the version being booted, select option 8 and y to reboot the node.
Then continue with step 14.
6. Select e0M for the network port you want to use for the download.
<url-boot-software>
10. Press Enter for the user name, indicating no user name.
11. Enter y to set the newly installed software as the default to be used for subsequent reboots.
When installing new software, the system might perform firmware upgrades to the BIOS and adapter
cards, causing reboots and possible stops at the Loader-A prompt. If these actions occur, the system
might deviate from this procedure.
14. Select option 4 for Clean Configuration and Initialize All Disks.
15. Enter y to zero disks, reset config, and install a new file system.
The initialization and creation of the root aggregate can take 90 minutes or more to complete, depending
on the number and type of disks attached. When initialization is complete, the storage system reboots.
Note that SSDs take considerably less time to initialize.
Set Up Node
From a console port program attached to the storage controller A (node 01) console port, run the node
setup script. This script appears when ONTAP 9.1 boots on the node for the first time.
You can return to cluster setup at any time by typing “cluster setup”.
To accept a default or omit a question, do not enter a value.
This system will send event messages and weekly reports to NetApp Technical Support.
To disable this feature, enter "autosupport modify -support disable" within 24 hours.
Enabling AutoSupport can significantly speed problem determination and resolution should a problem
occur on your system.
For further information on AutoSupport, see:
http://support.netapp.com/autosupport/
Otherwise press Enter to complete cluster setup using the command line interface:
2. To complete the cluster setup, open a web browser and navigate to https://<node01-mgmt-ip>.
Cluster setup can also be performed using the command line interface. This document describes the cluster
setup using the NetApp System Manager guided setup.
The nodes are discovered automatically; if they are not, click the Refresh link. By default, the cluster
interfaces are created on all new factory-shipped storage controllers.
If all the nodes are not discovered, then configure the cluster using the command line.
Cluster license and feature licenses can also be installed after completing the cluster creation.
5. Click Submit.
Cluster Management
Node Management
Enter the node management IP addresses and port details for all the nodes.
DNS Details
NTP Details
7. Click Submit.
8. In the Support page, configure the AutoSupport and Event Notifications sections.
9. Click Submit.
The node management interface can be on the same subnet as the cluster management interface, or
it can be on a different subnet. In this document, we assume that it is on the same subnet.
2. Log in as the admin user with the password you provided earlier.
disk zerospares
Advanced Data Partitioning creates a root partition and two data partitions on each SSD drive in an All
Flash FAS configuration. Disk auto assign should have assigned one data partition to each node in an
HA pair. If a different disk assignment is required, disk auto assignment must be disabled on both
nodes in the HA pair by running the disk option modify command. Spare partitions can then be
moved from one node to another by running the disk removeowner and disk assign commands.
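The commands below sketch how spare data partitions can be reviewed and, if necessary, reassigned between
the nodes of the HA pair. The disk name 1.0.23 is a placeholder only; verify current ownership with the show
command before making any changes:
storage disk show -container-type spare -owner <st-node01>
storage disk option modify -node <st-node01> -autoassign off
storage disk option modify -node <st-node02> -autoassign off
storage disk removeowner -disk 1.0.23 -data true
storage disk assign -disk 1.0.23 -owner <st-node02> -data true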
1. Verify the Current Mode and Current Type properties of the ports by running the ucadmin show
command:
ucadmin show
                       Current  Current   Pending  Pending   Admin
Node          Adapter  Mode     Type      Mode     Type      Status
------------  -------  -------  --------  -------  --------  -----------
<st-node01>   0e       cna      target    -        -         online
<st-node01>   0f       cna      target    -        -         online
<st-node01>   0g       fc       target    -        -         online
<st-node01>   0h       fc       target    -        -         online
<st-node02>   0e       cna      target    -        -         online
<st-node02>   0f       cna      target    -        -         online
<st-node02>   0g       fc       target    -        -         online
<st-node02>   0h       fc       target    -        -         online
8 entries were displayed.
2. Verify that the Current Mode and Current Type properties for all ports are set properly. Ports 0g and 0h
are used for FC connectivity and should be set to mode fc if not already configured. The port type for all
protocols should be set to target. Change the port personality by running the following command:
The ports must be offline to run this command. To take an adapter offline, run the fcp adapter
modify –node <home-node-of-the-port> -adapter <port-name> -state down command.
Ports must be converted in pairs (for example, 0e and 0f).
After conversion, a reboot is required. After reboot, bring the ports online by running fcp adapter
modify -node <home-node-of-the-port> -adapter <port-name> -state up.
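The port personality change itself is made with the ucadmin modify command. The following is a minimal
sketch for converting ports 0g and 0h on the first node to FC targets, if they are not already in fc mode;
repeat for the second node and adjust the port names to match your system:
ucadmin modify -node <st-node01> -adapter 0g -mode fc -type target
ucadmin modify -node <st-node01> -adapter 0h -mode fc -type target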
A storage virtual machine (SVM) is referred to as a Vserver (or vserver) in the GUI and CLI.
system service-processor network modify –node <st-node01> -address-family IPv4 –enable true –dhcp
none –ip-address <node01-sp-ip> -netmask <node01-sp-mask> -gateway <node01-sp-gateway>
system service-processor network modify –node <st-node02> -address-family IPv4 –enable true –dhcp
none –ip-address <node02-sp-ip> -netmask <node02-sp-mask> -gateway <node02-sp-gateway>
The service processor IP addresses should be in the same subnet as the node management IP ad-
dresses.
Create Aggregates
An aggregate containing the root volume is created during the ONTAP setup process. To create additional
aggregates, determine the aggregate name, the node on which to create it, and the number of disks it
should contain.
This solution was validated using 1 data aggregate on each controller, with 23 data partitions per aggregate.
To create the required aggregates, run the following commands:
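The aggregate creation commands are not reproduced here; a minimal sketch, assuming one data aggregate per
controller with the 23 data partitions per aggregate described above, is shown below. The aggregate names are
placeholders and should be adjusted to your naming convention (for example, aggr1_AFF300_01 and
aggr1_AFF300_02 as used later in this document):
storage aggregate create -aggregate aggr1_node01 -node <st-node01> -diskcount 23
storage aggregate create -aggregate aggr1_node02 -node <st-node02> -diskcount 23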
You should have the minimum number of hot spare disks for hot spare disk partitions recommended
for your aggregate.
For all flash aggregates, you should have a minimum of one hot spare disk or disk partition. For non-
flash homogenous aggregates, you should have a minimum of two hot spare disks or disk partitions.
For NetApp Flash Pool™ aggregates, you should have a minimum of two hot spare disks or disk parti-
tions for each disk type.
The aggregate cannot be created until disk zeroing completes. Run the aggr show command to display
aggregate creation status. Do not proceed until both aggr1_node1 and aggr1_node2 are
online.
aggr show
aggr rename –aggregate aggr0 –newname <node01-rootaggrname>
Both <st-node01> and <st-node02> must be capable of performing a takeover. Continue with step
3 if the nodes are capable of performing a takeover.
This step is not applicable for clusters with more than two nodes.
cluster ha show
5. Only enable HA mode for two-node clusters. Do not run this command for clusters with more than two
nodes because it causes problems with failover.
6. Verify that hardware assist is correctly configured and, if needed, modify the partner IP address.
timezone <timezone>
For example, in the eastern United States, the time zone is America/New_York.
date <ccyymmddhhmm.ss>
3. Configure the Network Time Protocol (NTP) servers for the cluster.
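For example (a minimal sketch; replace <ntp-server-ip> with the address of an NTP server reachable from the
cluster management network, and repeat for additional servers):
cluster time-service ntp server create -server <ntp-server-ip>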
1. Configure basic SNMP information, such as the location and contact. When polled, this information is
visible as the sysLocation and sysContact variables in SNMP.
2. Configure SNMP traps to send to remote hosts, such as a DFM server or another fault management
system.
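The following commands are a representative sketch of this SNMP configuration. The contact, location, and
trap destination values are placeholders to be replaced with site-specific values:
snmp contact <snmp-contact>
snmp location "<snmp-location>"
snmp traphost add <snmp-trap-destination-fqdn>
snmp init 1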
Configure AutoSupport
NetApp AutoSupport® sends support summary information to NetApp through HTTPS. To configure
AutoSupport, run the following command:
system node autosupport modify -node * -state enable –mail-hosts <mailhost> -transport https -support
enable -noteto <storage-admin-email>
To be effective, CDP must also be enabled on directly connected networking equipment such as
switches and routers.
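To enable CDP on the NetApp storage controllers themselves, a command along the following lines can be used
from the cluster shell (shown for reference):
node run -node * options cdpd.enable on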
ifgrp create -node <st-node01> -ifgrp a0a -distr-func port -mode multimode_lacp
ifgrp add-port -node <st-node01> -ifgrp a0a -port e2a
ifgrp add-port -node <st-node01> -ifgrp a0a -port e2e
ifgrp create -node <st-node02> -ifgrp a0a -distr-func port -mode multimode_lacp
ifgrp add-port -node <st-node02> -ifgrp a0a -port e2a
ifgrp add-port -node <st-node02> -ifgrp a0a -port e2e
ifgrp show
Create VLANs
To create VLANs, create NFS VLAN ports and add them to the NFS broadcast domain:
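A minimal sketch of these commands is shown below. It assumes the interface group a0a created above, the
<infra-nfs-vlan-id> placeholder for the NFS VLAN ID, and a broadcast domain named Infra_NFS; the broadcast
domain name is an assumption and should match the one defined in your environment:
network port vlan create -node <st-node01> -vlan-name a0a-<infra-nfs-vlan-id>
network port vlan create -node <st-node02> -vlan-name a0a-<infra-nfs-vlan-id>
network port broadcast-domain add-ports -broadcast-domain Infra_NFS -ports <st-node01>:a0a-<infra-nfs-vlan-id>,<st-node02>:a0a-<infra-nfs-vlan-id>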
2. Remove the unused data protocols from the SVM - CIFS, iSCSI, and NDMP.
3. Add the two data aggregates to the Infra-SVM aggregate list for the NetApp Virtual Storage Console
(VSC).
If the NFS license was not installed during cluster configuration, make sure to install the license before
starting the NFS service.
5. Turn on the SVM vstorage parameter for the NetApp NFS VAAI plug-in.
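This is typically done with a single command against the SVM; for example (shown as a reference sketch):
vserver nfs modify -vserver Infra-SVM -vstorage enabled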
1. Create a volume to be the load-sharing mirror of the infrastructure SVM root volume on each node.
volume create –vserver Infra-SVM –volume rootvol_m01 –aggregate aggr1_node01 –size 1GB –type DP
volume create –vserver Infra-SVM –volume rootvol_m02 –aggregate aggr1_node02 –size 1GB –type DP
2. Create a job schedule to update the root volume mirror relationships every 15 minutes.
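A representative set of commands for this step is shown below. The root volume name rootvol is an assumption
based on the mirror volume names used above; adjust it to the actual root volume name of the SVM:
job schedule interval create -name 15min -minutes 15
snapmirror create -source-path Infra-SVM:rootvol -destination-path Infra-SVM:rootvol_m01 -type LS -schedule 15min
snapmirror create -source-path Infra-SVM:rootvol -destination-path Infra-SVM:rootvol_m02 -type LS -schedule 15min
snapmirror initialize-ls-set -source-path Infra-SVM:rootvol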
If the FC license was not installed during cluster configuration, make sure to install the license before
creating the FC service.
2. Generally, a self-signed certificate is already in place. Verify the certificate and obtain parameters (for
example, <serial-number>) by running the following command:
For each SVM shown, the certificate common name should match the DNS FQDN of the SVM. Delete the
two default certificates and replace them with either self-signed certificates or certificates from a certificate
authority (CA). To delete the default certificates, run the following commands:
security certificate delete -vserver Infra-SVM -common-name Infra-SVM -ca Infra-SVM -type server -
serial <serial-number>
Deleting expired certificates before creating new certificates is a best practice. Run the security
certificate delete command to delete the expired certificates. In the following command, use
TAB completion to select and delete each default certificate.
3. To generate and install self-signed certificates, run the following commands as one-time commands.
Generate a server certificate for the Infra-SVM and the cluster SVM. Use TAB completion to aid in the
completion of these commands.
security certificate create -common-name <cert-common-name> -type server -size 2048 -country <cert-
country> -state <cert-state> -locality <cert-locality> -organization <cert-org> -unit <cert-unit> -
email-addr <cert-email> -expire-days <cert-days> -protocol SSL -hash-function SHA256 -vserver Infra-
SVM
4. To obtain the values for the parameters required in step 5 (<cert-ca> and <cert-serial>), run the security
certificate show command.
5. Enable each certificate that was just created by using the server-enabled true and client-enabled false
parameters. Use TAB completion to aid in the completion of these commands.
security ssl modify -vserver <clustername> -server-enabled true -client-enabled false -ca <cert-ca> -
serial <cert-serial> -common-name <cert-common-name>
system services firewall policy delete -policy mgmt -service http –vserver <clustername>
It is normal for some of these commands to return an error message stating that the entry does not
exist.
7. Change back to the normal admin privilege level and set up the system to allow SVM logs to be available
by web.
Configure NFSv3
To configure NFSv3 on the SVM, complete the following steps:
1. Create a new rule for the infrastructure NFS subnet in the default export policy.
vserver export-policy rule create –vserver Infra-SVM -policyname default –ruleindex 1 –protocol nfs -
clientmatch <infra-nfs-subnet-cidr> -rorule sys –rwrule sys -superuser sys –allow-suid false
2. Assign the FlexPod export policy to the infrastructure SVM root volume.
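As a reference, assigning the default export policy to the SVM root volume can be done as follows; rootvol is
assumed to be the name of the SVM root volume and should be adjusted to match your environment:
volume modify -vserver Infra-SVM -volume rootvol -policy default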
FlexVol volumes are created to house boot LUNs for ESXi servers, datastore LUNs for virtual desktops and
RDS hosts, and for CIFS shares hosting user profile data. For specific details about the volumes created
during this validation, see the Storage Configuration section below.
volume create -vserver Infra-SVM -volume infra_ds01 -aggregate aggr1_AFF300_01 -size 6TB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume rdsh_1 -aggregate aggr1_AFF300_02 -size 6TB -state online -
policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume esxi_boot -aggregate aggr1_AFF300_01 -size 500GB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume vdi_fc_01 -aggregate aggr1_AFF300_01 -size 11TB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume vdi_fc_02 -aggregate aggr1_AFF300_02 -size 11TB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume vdi_fc_03 -aggregate aggr1_AFF300_01 -size 11TB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume vdi_fc_04 -aggregate aggr1_AFF300_02 -size 11TB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume vdi_fc_05 -aggregate aggr1_AFF300_01 -size 11TB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra-SVM -volume vdi_fc_06 -aggregate aggr1_AFF300_02 -size 11TB -state
online -policy default -space-guarantee none -percent-snapshot-space 0
lun create -vserver Infra-SVM -volume esxi_boot -lun VM-Host-Infra-01 -size 10GB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume esxi_boot -lun VM-Host-Infra-02 -size 10GB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume esxi_boot -lun VDI-1 -size 10GB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume esxi_boot -lun VDI-2 -size 10GB -ostype vmware -space-reserve disabled
…
lun create -vserver Infra-SVM -volume vdi_fc_01 -lun vdi_fc_01 -size 10TB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume vdi_fc_02 -lun vdi_fc_02 -size 10TB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume vdi_fc_03 -lun vdi_fc_03 -size 10TB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume vdi_fc_04 -lun vdi_fc_04 -size 10TB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume vdi_fc_05 -lun vdi_fc_05 -size 10TB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume vdi_fc_06 -lun vdi_fc_06 -size 10TB -ostype vmware -space-reserve disabled
Create igroups
Igroups map host initiators to the LUNs they are allowed to access. Igroups can use the FCP protocol, the iSCSI protocol, or both. An igroup is created for each ESXi host to map its boot LUN, and a separate igroup is created for the entire ESXi cluster to map the data LUNs to every host in the cluster.
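The igroup creation commands are not shown in this excerpt; a minimal sketch follows, with placeholder WWPN variables that are assumptions (substitute the initiator WWPNs from your Cisco UCS service profiles):
lun igroup create -vserver Infra-SVM -igroup VM-Host-Infra-01 -protocol fcp -ostype vmware -initiator <vm-host-infra-01-wwpna>,<vm-host-infra-01-wwpnb>
lun igroup create -vserver Infra-SVM -igroup VDI-Cluster -protocol fcp -ostype vmware -initiator <vdi-host-01-wwpna>,<vdi-host-01-wwpnb>,...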
lun map -vserver Infra-SVM -volume esxi_boot -lun VM-Host-Infra-01 -igroup VM-Host-Infra-01 -lun-id 0
lun map -vserver Infra-SVM -volume esxi_boot -lun VM-Host-Infra-02 -igroup VM-Host-Infra-02 -lun-id 0
lun map -vserver Infra-SVM -volume esxi_boot -lun VDI-1 -igroup VDI-1 -lun-id 0
lun map -vserver Infra-SVM -volume esxi_boot -lun VDI-2 -igroup VDI-2 -lun-id 0
lun map -vserver Infra-SVM -volume esxi_boot -lun VDI-3 -igroup VDI-3 -lun-id 0
…
lun map -vserver Infra-SVM -volume infra_ds01 -lun infra_ds01 -igroup VDI-Cluster -lun-id 1
lun map -vserver Infra-SVM -volume rdsh_1 -lun rdsh_1 -igroup VDI-Cluster -lun-id 2
lun map -vserver Infra-SVM -volume vdi_fc_01 -lun vdi_fc_01 -igroup VDI-Cluster -lun-id 3
lun map -vserver Infra-SVM -volume vdi_fc_02 -lun vdi_fc_02 -igroup VDI-Cluster -lun-id 4
lun map -vserver Infra-SVM -volume vdi_fc_03 -lun vdi_fc_03 -igroup VDI-Cluster -lun-id 5
lun map -vserver Infra-SVM -volume vdi_fc_04 -lun vdi_fc_04 -igroup VDI-Cluster -lun-id 6
lun map -vserver Infra-SVM -volume vdi_fc_05 -lun vdi_fc_05 -igroup VDI-Cluster -lun-id 7
lun map -vserver Infra-SVM -volume vdi_fc_06 -lun vdi_fc_06 -igroup VDI-Cluster -lun-id 8
Schedule Deduplication
On NetApp All Flash FAS systems, deduplication is enabled by default. To schedule deduplication, complete
the following step:
1. After the volumes are created, assign a once-a-day deduplication schedule to esxi_boot, infra_datastore_1, and infra_datastore_2:
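The schedule commands themselves are not shown in this excerpt; a minimal sketch follows, with the schedule string given as an assumption and the volume names taken from the step above (adjust both to the volumes actually present in your SVM):
volume efficiency modify -vserver Infra-SVM -volume esxi_boot -schedule sun-sat@0
volume efficiency modify -vserver Infra-SVM -volume infra_datastore_1 -schedule sun-sat@0
volume efficiency modify -vserver Infra-SVM -volume infra_datastore_2 -schedule sun-sat@0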
Create FC LIFs
Run the following commands to create four FC LIFs (two on each node):
network interface create -vserver Infra-SVM -lif fcp_lif01a -role data -data-protocol fcp -home-node <st-node01> -home-port 0g -status-admin up
network interface create -vserver Infra-SVM -lif fcp_lif01b -role data -data-protocol fcp -home-node <st-node01> -home-port 0h -status-admin up
network interface create -vserver Infra-SVM -lif fcp_lif02a -role data -data-protocol fcp -home-node <st-node02> -home-port 0g -status-admin up
network interface create -vserver Infra-SVM -lif fcp_lif02b -role data -data-protocol fcp -home-node <st-node02> -home-port 0h -status-admin up
network interface create -vserver Infra-SVM -lif nfs_lif01 -role data -data-protocol nfs -home-node <st-node01> -home-port a0a-<infra-nfs-vlan-id> -address <node01-nfs_lif01-ip> -netmask <node01-nfs_lif01-mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
network interface create -vserver Infra-SVM -lif nfs_lif02 -role data -data-protocol nfs -home-node <st-node02> -home-port a0a-<infra-nfs-vlan-id> -address <node02-nfs_lif02-ip> -netmask <node02-nfs_lif02-mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
network interface create -vserver Infra-SVM -lif svm-mgmt -role data -data-protocol none -home-node <st-node02> -home-port e0c -address <svm-mgmt-ip> -netmask <svm-mgmt-mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy mgmt -auto-revert true
The SVM management IP in this step should be in the same subnet as the storage cluster management IP.
2. Create a default route to allow the SVM management interface to reach the outside world.
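The route creation command is not shown in this excerpt; a minimal sketch, with the gateway placeholder given as an assumption:
network route create -vserver Infra-SVM -destination 0.0.0.0/0 -gateway <svm-mgmt-gateway>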
3. Set a password for the SVM vsadmin user and unlock the user.
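A minimal sketch of this step (the first command prompts for the new password):
security login password -username vsadmin -vserver Infra-SVM
security login unlock -username vsadmin -vserver Infra-SVM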
A cluster serves data through at least one and possibly several SVMs. We have just gone through
creating a single SVM. If you would like to configure your environment with multiple SVMs, this is a
good time to create additional SVMs.
Storage Configuration
This section contains details about the specific storage system configuration used in this validation. It does not include all possible configuration options, only those necessary to support this solution.
Cluster Details
A cluster consists of one or more nodes grouped together as HA pairs to form a scalable cluster. Creating a cluster enables the nodes to pool their resources and distribute work across the cluster, while presenting administrators with a single entity to manage. Clustering also enables continuous service to end users if individual nodes go offline.
Storage Details
Table 21 lists the storage details for each HA pair.
AFF A300-01: 1 x DS224-12 shelf, 24 SSDs, 83.84TB
AFF A300-02: 1 x DS224-12 shelf, 24 SSDs, 83.84TB
SAS adapters:
AFF A300-01 (AFF A300): 1 x X2069 (PMC PM8072; PCI-E quad-port SAS)
AFF A300-02 (AFF A300): 1 x X2069 (PMC PM8072; PCI-E quad-port SAS)
Firmware Details
Table 24 lists the relevant firmware details for each node.
AFF A300-01: AFF A300 firmware 11.1; IOM12 A:0210, B:0210; X357_S163A3T8ATE drive firmware NA51; SP 5.1
AFF A300-02: AFF A300 firmware 11.1; IOM12 A:0210, B:0210; X357_S163A3T8ATE drive firmware NA51; SP 5.0X21
The table below lists the network port ifgrp settings.
Network Routes
You control how LIFs in an SVM use your network for outbound traffic by configuring routing tables and static
routes.
Routing tables. Routes are configured for each SVM and identify the SVM, subnet, and destination.
Because routing tables are for each SVM, routing changes to one SVM do not alter the route table of
another SVM.
Routes are created in an SVM when a service or application is configured for the SVM. Like data
SVMs, the admin SVM of each IPspace has its own routing table because LIFs can be owned by admin
SVMs and might need route configurations different from those on data SVMs.
If you have defined a default gateway when creating a subnet, a default route to that gateway is added
automatically to the SVM that uses a LIF from that subnet.
Static route. A defined route between a LIF and a specific destination IP address. The route can use a
gateway IP address.
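As an illustration (not part of the original procedure), the per-SVM routing table described above can be inspected with a command such as the following, using the SVM name from this design:
network route show -vserver Infra-SVM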
The table below lists the network routes for Data ONTAP 8.3 or later.
Cluster AFF A300, SVM Infra: destination 0.0.0.0/0, gateway 10.10.62.1, metric 20 (associated with the cluster_mgmt, CIFS1-01, and CIFS2-02 LIFs).
During cluster initialization, the system creates two default broadcast domains:
The default broadcast domain contains ports that are in the default IPspace. These ports are used
primarily to serve data. Cluster management and node management ports are also in this broadcast
domain.
The cluster broadcast domain contains ports that are in the cluster IPspace. These ports are used for
cluster communication and include all cluster ports from all nodes in the cluster.
The table below lists the network port broadcast domains for Data ONTAP 8.3 or later.
Aggregate Configuration
Aggregates are containers for the disks managed by a node. You can use aggregates to isolate workloads
with different performance demands, to tier data with different access patterns, or to segregate data for
regulatory purposes.
For business-critical applications that need the lowest possible latency and the highest possible
performance, you might create an aggregate consisting entirely of SSDs.
To tier data with different access patterns, you can create a hybrid aggregate, deploying flash as high-
performance cache for a working data set, while using lower-cost HDDs or object storage for less
frequently accessed data. A Flash Pool consists of both SSDs and HDDs. A Fabric Pool consists of an
all-SSD aggregate with an attached object store.
If you need to segregate archived data from active data for regulatory purposes, you can use an
aggregate consisting of capacity HDDs, or a combination of performance and capacity HDDs.
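As an illustration of the all-SSD case (not a command from this validation; the aggregate name, node, and disk count are assumptions that must match your system):
storage aggregate create -aggregate aggr1_AFF300_01 -node AFF-A300-01 -diskcount 23 -disktype SSD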
aggr0_A300_01 (node AFF-A300-01): online, normal, raid_dp, 24 shared SSDs, HA policy cfo, 414.47GB
aggr0_A300_02 (node AFF-A300-02): online, normal, raid_dp, 24 shared SSDs, HA policy cfo, 368.42GB
aggr1_AFF300_01 (node AFF-A300-01): online, normal, raid_dp, 24 shared SSDs, HA policy sfo, 32.51TB
aggr1_AFF300_02 (node AFF-A300-02): online, normal, raid_dp, 24 shared SSDs, HA policy sfo, 32.51TB
In a cluster, an SVM facilitates data access. A cluster must have at least one SVM to serve data. SVMs use
the storage and network resources of the cluster. However, the volumes and LIFs are exclusive to the SVM.
Multiple SVMs can coexist in a single cluster without being bound to any node in a cluster. However, they
are bound to the physical cluster on which they exist.
SVM Configuration
Table 30 lists the SVM configuration.
Volume Configuration
A FlexVol volume is a data container associated with an SVM. It gets its storage from a single associated aggregate, which it might share with other FlexVol volumes or infinite volumes. It can be used to contain files in a NAS environment or LUNs in a SAN environment.
Cluster Name | SVM Name | Volume Name | Containing Aggregate | Type | Nominal Size | Snapshot Policy | Export Policy | Security Style
AFF A300 | Infra | esxi_boot | aggr1_AFF300_01 | RW | 500.00GB | default | default | unix
AFF A300 | Infra | vdi_fc_ds01 | aggr1_AFF300_01 | RW | 11.00TB | default | default | unix
AFF A300 | Infra | vdi_fc_ds02 | aggr1_AFF300_01 | RW | 11.00TB | default | default | unix
AFF A300 | Infra | vdi_fc_ds03 | aggr1_AFF300_01 | RW | 11.00TB | default | default | unix
AFF A300 | Infra | vdi_fc_ds04 | aggr1_AFF300_02 | RW | 11.00TB | default | default | unix
AFF A300 | Infra | vdi_fc_ds05 | aggr1_AFF300_02 | RW | 11.00TB | default | default | unix
AFF A300 | Infra | vdi_fc_ds06 | aggr1_AFF300_02 | RW | 11.00TB | default | default | unix
Protocol Configuration
NAS
Data in ONTAP can be accessed by clients that use the Common Internet File System (CIFS), Server Message Block (SMB), or Network File System (NFS) protocols. Clients can access all files on an SVM regardless of the protocol they are connecting with or the type of authentication they require.
Logical Interfaces
A LIF is an IP address with associated characteristics, such as a role, a home port, a home node, a routing group, a list of ports to fail over to, and a firewall policy. You configure LIFs on the ports over which the cluster sends and receives network communications.
Interface groups
VLANs
A LIF role determines the kind of traffic that is supported over the LIF (including SAN protocols such as FC), along with the failover rules that apply and the firewall restrictions that are in place.
LIF failover refers to the automatic migration of a LIF in response to a link failure on the LIF's current network
port. When such a port failure is detected, the LIF is migrated to a working port.
A failover group contains a set of network ports (physical, VLANs, and interface groups) on one or more
nodes. A LIF can subscribe to a failover group. The network ports that are present in the failover group
define the failover targets for the LIF.
CIFS Servers
CIFS clients can access files on a SVM using the CIFS protocol provided ONTAP can properly authenticate
the user.
CIFS Options
Most of these options are only available starting with Data ONTAP 8.2.
Local group members can be local users, domain users and groups, and domain machine accounts.
Local users and groups can also be assigned privileges. Privileges control access to SVM resources and can
override the permissions that are set on objects. A user or member of a group that is assigned a privilege is
granted the specific rights that the privilege allows.
CIFS Shares
A CIFS share is a named access point in a volume and/or namespace that enables CIFS clients to view,
browse, and manipulate files on an SVM.
Cluster Name | SVM Name | Share Name | Path | Share Properties | Symlink Properties | Share ACL
AFF A300 | Infra | %w | %w | homedirectory | symlinks | Everyone:Full Control
AFF A300 | Infra | admin$ | / | browsable | | UTD
AFF A300 | Infra | c$ | / | oplocks, browsable, changenotify, show_previous_versions | symlinks | Administrators:Full Control
AFF A300 | Infra | home | /home | oplocks, browsable, changenotify, show_previous_versions | symlinks | Everyone:Full Control
AFF A300 | Infra | ipc$ | / | browsable | | UTD
AFF A300 | Infra | Profile$ | /home/Profile | oplocks, browsable, changenotify, show_previous_versions | symlinks | Everyone:Full Control
Home directory search paths are a set of absolute paths from the root of the SVM that direct the ONTAP search for home directories. You specify one or more search paths by using the vserver cifs home-directory search-path add command. If you specify multiple search paths, ONTAP tries them in the order specified until it finds a valid path.
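A minimal sketch of adding a search path, assuming the /home path used by the home share above:
vserver cifs home-directory search-path add -vserver Infra -path /home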
SAN
Storage area network (SAN) is a term used to describe a purpose-built storage controller that provides block-based data access. ONTAP supports traditional FC as well as iSCSI and FCoE within a unified architecture.
LUNs
LUNs are created and exist within a given FlexVol volume and are used to store data which is presented to
servers or clients. LUNs provide storage for block-based protocols such as FC or iSCSI.
AFF A300 | Infra | /vol/vdi_fc_ds01/vdi_fc_ds01 | True | True | vmware | False | 10.00TB
AFF A300 | Infra | /vol/vdi_fc_ds02/vdi_fc_ds02 | True | True | vmware | False | 10.00TB
AFF A300 | Infra | /vol/vdi_fc_ds03/vdi_fc_ds03 | True | True | vmware | False | 10.00TB
AFF A300 | Infra | /vol/vdi_fc_ds04/vdi_fc_ds04 | True | True | vmware | False | 10.00TB
AFF A300 | Infra | /vol/vdi_fc_ds05/vdi_fc_ds05 | True | True | vmware | False | 10.00TB
AFF A300 | Infra | /vol/vdi_fc_ds06/vdi_fc_ds06 | True | True | vmware | False | 10.00TB
Initiator Groups
Initiator groups (igroups) are tables of FC protocol host WWPNs or iSCSI host node names. You can define
igroups and map them to LUNs to control which initiators have access to LUNs.
If you are using multipathing software or have clustered hosts, each initiator port or software initiator of each clustered host needs redundant paths to the same LUN.
You can create igroups that specify which initiators have access to the LUNs either before or after you create
LUNs, but you must create igroups before you can map a LUN to an igroup.
Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you
cannot map a LUN to multiple igroups that have the same initiator. An initiator cannot be a member of
igroups of differing OS types.
Cluster Name | SVM Name | LIF Name | Status | Current Node | Current Port | Is Home | WWPN
AFF A300 | Infra | fcp_01a | up/up | AFF-A300-01 | 0g | True | 20:01:00:a0:98:af:bd:e8
AFF A300 | Infra | fcp_01b | up/up | AFF-A300-01 | 0h | True | 20:02:00:a0:98:af:bd:e8
AFF A300 | Infra | fcp_02a | up/up | AFF-A300-02 | 0g | True | 20:03:00:a0:98:af:bd:e8
AFF A300 | Infra | fcp_02b | up/up | AFF-A300-02 | 0h | True | 20:04:00:a0:98:af:bd:e8
FCP / FCoE
Table 42 lists the details of the storage controller target ports and the WWPN address of each.
AFF-A300-01 | 0e | 0 | ptp | auto | 50:0a:09:82:80:13:41:27 | offlined by user/system
AFF-A300-01 | 0f | 0 | ptp | auto | 50:0a:09:81:80:13:41:27 | offlined by user/system
AFF-A300-02 | 0e | 0 | ptp | auto | 50:0a:09:82:80:d3:67:d3 | offlined by user/system
AFF-A300-02 | 0f | 0 | ptp | auto | 50:0a:09:81:80:d3:67:d3 | offlined by user/system
Volume Efficiency
You can run deduplication, data compression, and data compaction together or independently on a FlexVol
volume or an infinite volume to achieve optimal space savings. Deduplication eliminates duplicate data
blocks and data compression compresses the data blocks to reduce the amount of physical storage that is
required. Data compaction stores more data in less space to increase storage efficiency.
Beginning with ONTAP 9.2, all inline storage-efficiency features, such as inline deduplication and inline
compression, are enabled by default on AFF volumes.
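A minimal sketch of verifying and adjusting these settings on one of the datastore volumes used in this design (the inline options shown are assumptions to check against your ONTAP release):
volume efficiency show -vserver Infra-SVM -volume vdi_fc_01
volume efficiency modify -vserver Infra-SVM -volume vdi_fc_01 -inline-compression true -inline-dedupe true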
LUN Efficiency
Thin provisioning enables storage administrators to provision more storage on a LUN than is physically
present on the volume. By overprovisioning the volume, storage administrators can increase the capacity
utilization of that volume. As the blocks are written to the LUN, ONTAP adds more space to the LUN from
available space on the volume.
With thin provisioning, you can present more storage space to the hosts connecting to the SVM than what is actually available on the SVM. Storage provisioning with thinly provisioned LUNs enables storage administrators to provide only the storage that the LUN actually needs. As ONTAP writes blocks to the LUN, the space consumed by the LUN on the volume grows automatically.
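For illustration, a minimal sketch of checking the thin-provisioned state and space usage of one of the LUNs and volumes used in this design:
lun show -vserver Infra-SVM -volume vdi_fc_01 -fields size,space-reserve
volume show -vserver Infra-SVM -volume vdi_fc_01 -fields size,available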
In this solution, we utilized Cisco MDS 9148S switches for Fibre Channel switching. For racking, cabling, and initial setup of the MDS switches, refer to the Quick Start Guide:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9148/quick/guide/MDS_9148_QSG.pdf
1. When the MDS switch is racked and can be logged into, it can now be configured to communicate with
the Cisco UCS Fabric Interconnects.
2. In this study, we used two separate fabrics, each with its own unique VSAN: Fabric A is configured for VSAN 400 and Fabric B for VSAN 401. In our initial Cisco UCS configuration, you will see where we connected fibre cables on ports 1 and 4 and configured an FC port channel; the Fabric Interconnect A port channel is configured for VSAN 400 and the Fabric Interconnect B port channel for VSAN 401.
Physically, the Fabric Interconnects extended ports 1 and 2 run to the MDS switch ports 1 and 2.
We used a total of eight 16Gb FC links, with four connections from each storage controller to the MDS 9148S SAN switches, for high availability and maximum throughput.
After the ports and port channels are configured, the next step is to configure the zones and the active zoneset database on the MDS switches. The commands listed below detail how to add a single host on both 9148S A and 9148S B. You will need to configure all hosts that will access the NetApp array in the same way. The entire MDS 9148S FC switch configuration is included in Appendix A.
MDS-A
zoneset name AFF-A300_VDI vsan 400
member {ESXi hostname-fc0}
exit
zoneset activate name AFF-A300_VDI vsan 400
zone commit vsan 400
exit
copy running-config startup-config
MDS-B
zoneset name AFF-A300_VDI vsan 401
member {ESXi hostname-fc1}
exit
zoneset activate name AFF-A300_VDI vsan 401
zone commit vsan 401
exit
copy running-config startup-config
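The per-host zones referenced by the member statements above must already exist; the following is a minimal sketch of creating one such zone on MDS-A, where the device-alias names are placeholders and the target aliases for the AFF A300 FC LIFs are assumptions:
zone name {ESXi hostname-fc0} vsan 400
member device-alias {ESXi hostname-fc0}
member device-alias AFF-A300-fcp_lif01a
member device-alias AFF-A300-fcp_lif02a
exit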
Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use
the built-in keyboard, video, mouse (KVM) console and virtual media features in Cisco UCS Manager to map
remote installation media to individual servers and connect to their boot logical unit numbers (LUNs).
2. Type your email or customer number and the password and then click Log in.
https://my.vmware.com/web/vmware/details?productId=614&downloadGroup=ESXI65U1
This ESXi 6.5 Cisco custom image includes updates for the fNIC and neNIC drivers. The versions that
are part of this image are: neNIC: 1.0.6.0-1; fNIC: 1.6.0.34
2. The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the Cisco UCS environment to run the IP KVM.
3. Open a Web browser and enter the IP address for the Cisco UCS cluster address. This step launches the
Cisco UCS Manager application.
4. Log in to Cisco UCS Manager by using the admin user name and password.
3. Browse to the ESXi installer ISO image file and click Open.
6. Boot the server by selecting Boot Server and click OK, then click OK again.
Install ESXi
To install VMware ESXi to the SAN-bootable LUN of the hosts, complete the following steps on each host:
1. On reboot, the machine detects the presence of the ESXi installation media. Select the ESXi installer
from the menu that is displayed.
2. After the installer is finished loading, press Enter to continue with the installation.
3. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.
8. The installer issues a warning that existing partitions will be removed from the volume. Press F11 to continue with the installation.
9. After the installation is complete, clear the Mapped checkbox (located in the Virtual Media tab of the
KVM console) to unmap the ESXi installation image.
The ESXi installation image must be unmapped to make sure that the server reboots into ESXi and not
into the installer.
10. The Virtual Media window might issue a warning stating that it is preferable to eject the media from the
guest. Because the media cannot be ejected and it is read-only, click Yes to unmap the image.
11. From the KVM tab, press Enter to reboot the server.
To configure the ESXi host with access to the management network, complete the following steps:
1. After the server has finished rebooting, press F2 to customize the system.
3. Select the Configure the Management Network option and press Enter.
6. From the Configure Management Network menu, select IP Configuration and press Enter.
7. Select the Set Static IP Address and Network Configuration option by using the space bar.
10. Enter the default gateway for the first ESXi host.
13. Using the spacebar, unselect Enable IPv6 (restart required) and press Enter.
Since the IP address is assigned manually, the DNS information must also be entered manually.
17. Enter the fully qualified domain name (FQDN) for the first ESXi host.
20. Press Y to confirm the changes and return to the main menu.
21. The ESXi host reboots. After reboot, press F2 and log back in as root.
22. Select Test Management Network to verify that the management network is set up correctly and press
Enter.
1. Open a web browser on the management workstation and navigate to the VM-Host-01 management IP
address.
This application is downloaded from the VMware website, and Internet access is required on the management workstation.
5. Click Next.
8. Click Install.
9. Click Finish.
1. Open the recently downloaded VMware vSphere Client and enter the IP address of VM-Host-01 as the
host you are trying to connect to: <<var_vm_host_01_ip>>.
The nenic driver version 1.0.6.0-1 and the fnic driver version 1.6.0.34 were used in this configuration.
https://my.vmware.com/web/vmware/details?downloadGroup=ESXI65U1&productId=614&rPId=18152
https://my.vmware.com/web/vmware/details?productId=614&downloadGroup=DT-ESXI65-CISCO-NENIC-1060
https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI65-CISCO-NENIC-1020&productId=614
https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-CISCO-FNIC-16034&productId=491
3. Open the neNIC driver bundle. This bundle includes the VMware driver bundle which will be uploaded to
ESXi hosts.
4. Open the fNIC driver bundle. This bundle includes the VMware driver bundle which will be uploaded to
ESXi hosts.
5. Save the location of these driver bundles for uploading to ESXi in the next section.
If the link above has changed, go to www.cisco.com for the latest ISO image of Cisco UCS-related
drivers. This ISO will either have the drivers included or may have an HTML file with the location of the
latest network drivers.
3. From Resources > Storage, right-click datastore1 and select Browse Datastore.
5. Navigate to the saved location for each downloaded VIC driver and select:
a. VMware ESXi 6.5 NIC nenic 1.0.6.0 Driver for Cisco nenic
b. VMware ESXi 6.0 fnic 1.6.0.34 FC Driver for Cisco
6. Click Open on each and click Yes to upload the file to datastore1.
8. Make sure the files have been uploaded to both ESXi hosts.
9. From the management workstation, open the VMware vSphere Remote CLI that was previously installed.
10. At the command prompt, run the following commands to account for each host:
To get the host thumbprint, type the command without the --thumbprint option, then copy and paste the thumbprint into the command.
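The update commands themselves are not reproduced in this excerpt; a minimal sketch follows, with the host address, credentials, and offline-bundle file names all given as assumptions (substitute the actual bundle names uploaded in the previous step):
esxcli -s <host-ip> -u root -p <password> --thumbprint <host-thumbprint> software vib update -d /vmfs/volumes/datastore1/<nenic-offline-bundle>.zip
esxcli -s <host-ip> -u root -p <password> --thumbprint <host-thumbprint> software vib update -d /vmfs/volumes/datastore1/<fnic-offline-bundle>.zip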
11. Back in the vSphere Client for each host, right click the host and select Reboot.
Verify the installed driver versions by entering vmkload_mod -s nenic and vmkload_mod -s fnic at the command prompt.
1. Open the recently downloaded VMware vSphere Client and enter the IP address of VM-Host-01 as the
host you are trying to connect to.
1. From the vSphere 6 download page on the VMware Web site, download the vCenter ISO file for the
vCenter Server appliance onto your system.
2. Mount the vSphere ISO file via Windows Explorer, navigate to the folder vcsa-ui-installer/win32, and click the installer file to start the VCSA appliance installer.
3. Click Install.
Install Stage 1: Deploy vCenter Server with Embedded Platform Services Controller opens.
6. Enter the IP address or FQDN of the ESXi host on which the vCenter appliance will reside, along with the user name and password credentials for that host, and click Next.
8. Provide a name for the vCenter appliance, then click Next to continue.
9. Select Install vCenter Server with an Embedded Platform Services Controller (unless your environment already has a PSC).
10. Create a new SSO domain (unless your environment already has an SSO domain; multiple SSO domains can coexist).
12. Select the proper appliance size for your deployment. In our study, Large was selected.
It is important to note at this step that you should create a DNS A record for your appliance prior to running the install. The services will fail to start up, and your install will fail, if the name cannot resolve properly.
16. Provide the Necessary Network Gateways and DNS Server Information. Review the install settings and
click Finish.
17. When your install completes successfully, you can now login to your Web Client and begin adding hosts
and configuring clusters.
Install Stage 2: Set Up vCenter Server Appliance with an Embedded PSC opens.
29. To create a Datacenter, click the icon in the center pane which has the green plus symbol above it.
31. Click the vCenter server available in the list. Click OK to continue.
33. Right-click Datacenters > VDI-DC in the list in the center pane, then click New Cluster.
36. Select vSphere HA. Retain the default values. Configure Cluster Specific Setting.
If mixing different generations of Cisco UCS B200 servers within a vCenter cluster, it is necessary to enable VMware Enhanced vMotion Compatibility (EVC) mode. For more information about setting up EVC mode, refer to Enhanced vMotion Compatibility (EVC) Processor Support.
40. Right-click Infra in the center pane and click Add Host.
42. Type root as the user name and the root password as the password. Click Next to continue.
48. Review the configuration parameters then click Finish to add the host.
50. When completed, the vCenter cluster configuration is comprised of the following clusters, including a
cluster to manage the workload launcher hosts:
Install and configure the infrastructure virtual machines by following the process provided in the table below:
The master target image for the RDSH server roles was configured with Windows Server 2016, and the master target image for the VDI VMs was configured with the Windows 10 64-bit OS, as listed in the table below:
The prerequisite for installing the Horizon Connection Server or Composer server is to have Windows Server 2008 through 2016 servers ready. In this study, we used Windows Server 2016 for the Horizon Connection Server, Replica Servers, and Composer Server.
1. Download the Horizon Connection Server installer from VMware and click Install on the Connection Server Windows Server image. In this study, we used Connection Server version 7.3.1 build 6760913. For the download, see Download VMware Horizon 7.3.1 Standard.
3. Click Next.
5. Click Next.
To install additional VMware Horizon Replica servers, during the installation, select the Horizon 7 Connection Server option to sync the Replica Servers with the existing Standard server by providing the Horizon Connection Server's FQDN or IP address.
8. Click Install.
1. Follow the steps shown in section VMware Horizon Connection Server Configuration. During the Horizon
Replica Server installation, select the Replica Server option in order to configure this server as a Replica
Server (shown below) and complete all other steps shown above.
2. Click Next and follow the steps (shown for Horizon Standard server) to complete installing additional
Horizon Replica Servers.
1. Download the Composer installer from VMware and click Install on the Composer Windows Server image. In this study, we used Composer 7.3.1 build 6744335.
4. Click Install.
6. Click Finish.
We used Microsoft Windows Server 2016 Standard Edition to configure the RDS server roles for the RDSH VMs to be deployed from the master image.
1. Log in to the Windows Server 2016 RDS base VM (master image), click Server Properties, and then click Add Roles and Features.
5. Click Next and complete the RDS server roles for RDSH Server Sessions enablement.
1. Select an ESXi host in the Infrastructure cluster and create a virtual machine to use as the Golden Image with the Windows 10 OS. We used the Windows 10 64-bit OS for our testing.
Memory: 2048MB
Processor: 2vCPU
Hard Disk: 32 GB
Network Adapter: 1 VMXNET3 adapter attached to the VDI-DVS port group on the VMware Distributed Virtual Switch
2. Attach the already downloaded Windows 10 LTSB Version Build 1607 media to the virtual machine to complete the Windows 10 master image installation.
3. Click Next.
Windows is installing.
https://my.vmware.com/web/vmware/downloads
https://my.vmware.com/group/vmware/details?downloadGroup=VIEW-732-STD&productId=681&download=true&fileId=977d331ce20874ba55052514e85bbab4&secureParam=dbc8854351a3ea1f0d6214f4fd9c0d59&uuId=2560f9b4-f3d3-4a9c-ae79-4ef35994ed2b&downloadType=
3. Click Next.
7. Click Install.
8. Click Finish to complete the Horizon Agent installation on the Master image.
The same agent installation steps are applicable for the RDSH Server 2016 base Image you intend to
use for RDS VMs.
Prerequisites
To set the Scope Options on the DHCP server hosting the Horizon target machines (for example, RDSH and
VDI virtual machines), complete the following steps:
4. Select the cluster and datastore where the first phase of provisioning will occur.
6. For Windows 10, follow the steps for cloning the Master Image for further deployment.
9. When the template is ready, convert it back to a virtual machine and take a snapshot of the VM to deploy the VDI virtual machines from the Horizon Administrator Console.
11. Provide a Name for the Master Image snapshot and click OK.
2. Log in to the VMware Horizon Connection Server Administrator console to create the RDSH farm and RDSH pool or the VDI desktop pools.
2. Click Next.
3. Select the pool type by clicking the Add button (for example, automated pool or RDS pool).
4. Select User Assignment for desktop pool. (We created Floating assigned.) Click Next.
5. Select vCenter Server and the type of Desktop deployment. (We created Horizon Instant Clones).
7. Provide the number of desktops to be provisioned (shown above POOL1-VDI, total of 1680 desktops).
10. Provide the parent VM, snapshot and host/cluster info, data store information for the virtual machines to
create.
11. In Advanced Storage Options, select the View Storage Accelerator and reclaim disk space parameters
as required.
12. Select AD container for desktops to place in a Domain Controller computer location.
13. Review all the deployment specifications and click Finish to complete the deployment.
14. Select Entitle users after this wizard finishes, to enable desktop user group/ users to access this pool.
<Domain Controller> Active Directory Users and Computers > vdilab.local > LoginVSI > Computers > VDI-ICPOOL1 through VDI-ICPOOL1680
You can entitle the user access at the RDS Pool level for RDS users/groups who can access the RDS
VMs.
3. Click Next.
4. Provide ID and Description for the RDS FARM. Select the Display Protocol which is required for users to connect to the RDS Sessions.
5. Click Next.
6. Provide the naming pattern for the RDS Desktops /VMs you want to create and the number of RDS VMs
you want to create on the RDS host or RDS cluster. For example, 72 RDS server virtual machines created
in this study.
7. Click Next.
9. Provide all the information about vCenter settings with parent RDS master image, snapshot, Host
/Cluster information, and Datastores for VMs to be stored.
11. Select the Active Directory Domain Controller container intended for storing the RDS VMs (the VMs are stored in the separate RDSHVSI computer container in the Domain Controller) and select the sysprep customization specs for creating the VMware Composer provisioned RDS VMs.
12. Review the RDS Farm automatic deployment specifications and click Finish to complete the RDS pool.
When the RDS farm is created, you need to create an RDS pool that absorbs the RDS VM farm so that the pool can be managed further.
A snapshot of the RDS VMs created in the AD container is selected in the deployment process.
<Domain Controller> Active Directory Users and Computers > vdilab.local > RDSHVSI > Computers >
-72
3. Leave default settings for the Desktop Pool Settings. Click Next.
4. Select the RDS Farm. You have the option to create a farm or select the farm which is already created.
We chose the option of selecting an RDS farm (FA-RDS) for this desktop pool. Click Next.
5. Click Ready to Complete and select Entitle users after this wizard finishes to provide users/user group
permission to access this RDS pool. Click Next.
Printers
Some user settings and data can be redirected by means of folder redirection. However, if folder redirection is not used, these settings are stored within the user profile.
The first stage in planning a profile management deployment is to decide on a set of policy settings that
together form a suitable configuration for your environment and users. The automatic configuration feature
simplifies some of this decision-making for VMware Horizon desktop deployments. Screenshots of the user profile management policy settings are shown below. Basic profile management policy settings are documented here:
Figure 33 Cisco UCS B200 M5 Blade Server for Single Server Scalability VMware Horizon 7 VDI (Non-Persistent) Instant Clones with Windows 10 64-bit OS
Figure 34 Cisco UCS B200 M5 Blade Server for Single Server Scalability VMware Horizon 7 VDI (Persistent) Full Clones with Windows 10 64-bit OS
Hardware components:
2 (Infrastructure Hosts) Cisco UCS B200 M4 Blade Servers (2 Intel Xeon processor E5-2660 v3 CPUs
at 2.60 GHz, with 128GB of memory per blade server [16 GB x 8 DIMMs at 2400 MHz]) for
infrastructure host blades
Total of 12 VMware Horizon RDS server VMs configured, supporting 285 sessions
1 (one) Cisco UCS B200 M5 Blade Server (2 Intel Xeon Gold 6140 processors at 2.3 GHz, with 768GB of memory per blade server [12 x 64 GB DIMMs at 2666 MHz]) for the workload host blade
And
Total of 210 VMware Horizon VDI virtual machines (non-persistent instant clones and persistent full clones, single-server testing)
1 (one) Cisco UCS B200 M5 Blade Server (2 Intel Xeon Gold 6140 processors at 2.3 GHz, with 768GB of memory per blade server [12 x 64GB DIMMs at 2666 MHz]) for the workload host blade
1 NetApp AFF A300 storage system (24 x 3.8TB SSDs) providing 65 TB usable disk capacity
Software components:
VMware Horizon RDS Hosted Shared Desktops or VMware Horizon 7 VDI Hosted Virtual Desktops
Microsoft Windows Server 2016, 6vCPU, 24GB RAM, 40 GB base disk for each RDS Server VM
Microsoft Windows 10 64 bit, 2vCPU, 2 GB RAM, 32 GB disk for Persistent Clones Testing
Figure 36 VDI Cluster Test with VDI Instant Clone Non-Persistent Cluster Test Configuration with Nine
Blades
Figure 37 VDI Cluster Test with VDI Full Clone Persistent Cluster Test Configuration with Nine Blades
Hardware components:
2 (Infrastructure Hosts) Cisco UCS B200 M4 Blade Servers (2 Intel Xeon processor E5-2660 v3 CPUs
at 2.60 GHz, with 128GB of memory per blade server [16 GB x 8 DIMMs at 2400 MHz]) for
infrastructure host blades
Seven (7) Cisco UCS B200 M5 Blade Server (2 Intel Xeon 6140 Gold processor CPUs at 2.3 GHz, with
768GB of memory per blade server [12 x 64GB DIMMs at 2666 MHz]) for workload host blades
Total of 72 VMware Horizon RDS server VMs configured on the 7-host RDSH cluster
or
Nine (9) Cisco UCS B200 M5 Blade Server (2 Intel Xeon 6140 Gold processor CPUs at 2.3 GHz, with
768GB of memory per blade server [12 x 64GB DIMMs at 2666 MHz]) for workload host blades
Total of 1660 VDI virtual machines configured in 1 VMware Horizon instant-clone pool
And
Nine (9) Cisco UCS B200 M5 Blade Server (2 Intel Xeon 6140 Gold processor CPUs at 2.3 GHz, with
768GB of memory per blade server [12 X 64 GB DIMMs at 2666 MHz] for workload host blades
Total of 1660 VDI virtual machines configured in 1 VMware Horizon persistent (full-clone) pool
Software components:
VMware Horizon RDS Hosted Shared Desktops or VMware Horizon 7 VDI Hosted Virtual Desktops
Microsoft Windows 10 64-bit, 2vCPU, 2 GB RAM, 32 GB disk (32GB instant clones / 32GB full clones)
Hardware components:
2 (Infrastructure Hosts) Cisco UCS B200 M4 Blade Servers (2 Intel Xeon processor E5-2660 v3 CPUs
at 2.60 GHz, with 128GB of memory per blade server [16 GB x 8 DIMMs at 2400 MHz] for
infrastructure host blades
Seven (7) Cisco UCS B200 M5 Blade Server (2 Intel Xeon 6140 Gold processor CPUs at 2.3 GHz, with
768GB of memory per blade server [12 X 64 GB DIMMs at 2666 MHz] for workload host blades
Total of 72 VMware Horizon RDS Server VMs configured on 7 ESXi Hosts on the RDSH Cluster
And
Nine (9) Cisco UCS B200 M5 Blade Server (2 Intel Xeon 6140 Gold processor CPUs at 2.3 GHz, with
768GB of memory per blade server [12 x 64 GB DIMMs at 2666 MHz] for workload host blades
Total of 1660 VDI virtual machines configured in the VMware Horizon instant-clone pool
And
Nine (9) Cisco UCS B200 M5 Blade Server (2 Intel Xeon 6140 Gold processor CPUs at 2.3 GHz, with
768GB of memory per blade server [64 GB x 12 DIMMs at 2666 MHz]) for workload host blades
Total of 1660 VDI virtual machines configured in the VMware Horizon full-clone pool
Software components:
VMware Horizon RDS Hosted Shared Desktops or VMware Horizon 7 VDI Hosted Virtual Desktops
Testing Methodology and Success Criteria
The testing results focused on the entire virtual desktop lifecycle by capturing metrics during desktop boot-up, user logon and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the RDSH server sessions under test.
Test metrics were gathered from the virtual desktop, storage, and load generation software to assess the
overall success of an individual test cycle. Each test cycle was not considered passing unless all of the
planned test users completed the ramp-up and steady state phases (described below) and unless all metrics
were within the permissible thresholds as noted as success criteria.
Three successfully completed test cycles were conducted for each hardware configuration and results were
found to be relatively consistent from one test to the next.
You can obtain additional information and a free test license from http://www.loginvsi.com
Testing Procedure
The following protocol was used for each test cycle in this study to ensure consistent results.
All launchers for the test were shut down. They were then restarted in groups of 10 each minute until the required number of launchers was running.
In addition, Cisco requires that the Login VSI Benchmark method is used for all single-server and scale testing. This ensures that our tests represent real-world scenarios. For each of the three consecutive runs on single-server tests, the same process was followed. Complete the following steps:
All Infrastructure VMs used in test run (AD, SQL, Horizon Connection brokers, Horizon Composer,
etc.)
3. Time 0:05: Boot RDS Machines using VMware Horizon 7 Administrator Console.
5. Time 0:35 Single Server or Scale target number of RDS servers registered and available in the VMware Horizon Administrator Console.
No more than 60 minutes of rest time is allowed after the last desktop is registered and available in the VMware Horizon 7 Administrator Console dashboard. Typically a 20-30 minute rest period for Windows 10 desktops and 10 minutes for RDS VMs is sufficient.
6. Time 1:35 Start Login VSI 4.1.25.6 Office Worker Benchmark Mode Test, setting auto-logoff time at 900
seconds, with Single Server or Scale target number of desktop VMs utilizing sufficient number of
Launchers (at 20-25 sessions/Launcher).
7. Time 2:23 Single Server or Scale target number of desktop VMs launched (48 minute benchmark launch rate).
All sessions launched must become active for a valid test run within this window.
9. Time 2:40 Login VSI Test Ends (based on Auto Logoff 900 Second period designated above).
All sessions launched and active must be logged off for a valid test run. The VMware Horizon 7 Administrator Dashboard must show that all desktops have been returned to the registered/available state as evidence of this condition being met.
12. Time 3:15 Copy all log files off to archive; set virtual desktops to maintenance mode through broker;
shutdown all Windows 10 machines.
Success Criteria
Cisco will run tests at a session count level that effectively utilizes the blade capacity measured by CPU
utilization, memory utilization, storage utilization, and network utilization. We will use Login VSI to launch
version 4.1.5 Knowledge Worker workloads. The number of launched sessions must equal active sessions
within two minutes of the last session launched in a test as observed on the VSI Management console.
The VMware Horizon Administrator Console or Horizon Connection Server Console must be monitored
throughout the steady state to make sure of the following:
No sessions move to unregistered, unavailable or available state at any time during steady state
Within 20 minutes of the end of the test, all sessions on all launchers must have logged out automatically and
the Login VSI Agent must have shut down.
Cisco requires three consecutive runs with results within +/-1% variability to pass the Cisco Validated Design
performance criteria. For white papers written by partners, two consecutive runs within +/-1% variability are
accepted. (All test data from partner run testing must be supplied along with proposed white paper.)
We will publish Cisco Validated Designs with our recommended workload following the process above and will note that we did not reach a VSImax dynamic in our testing.
FlexPod with Cisco UCS B200 M5 and VMware Horizon 7 on VMware ESXi 6.5 Update 1 Test Results
The purpose of this testing is to provide the data needed to validate VMware Horizon Remote Desktop Session Host (RDSH) server sessions and VMware Horizon virtual desktop (VDI) models with VMware Horizon Composer 7, using ESXi and vCenter to virtualize Microsoft Windows 10 desktops and Microsoft Windows Server 2016 sessions on Cisco UCS B200 M5 Blade Servers with a NetApp AFF A300 storage system.
The information contained in this section provides data points that a customer may reference in designing
their own implementations. These validation results are an example of what is possible under the specific
environment conditions outlined here, and do not represent the full characterization of VMware Horizon
products with VMware vSphere.
Three test sequences, each containing three consecutive test runs generating the same result, were
performed to establish single blade performance and multi-blade, linear scalability.
Login VSI is different in approach. Login VSI is not primarily designed to be a steady state benchmark
(however, if needed, Login VSI can act like one). Login VSI was designed to perform benchmarks for SBC or
VDI workloads through system saturation. Login VSI loads the system with simulated user workloads using
well known desktop applications like Microsoft Office, Internet Explorer and Adobe PDF reader. By gradually
increasing the amount of simulated users, the system will eventually be saturated. When the system is
saturated, the response time of the applications will increase significantly. This latency in application response times is a clear indication of whether the system is (close to being) overloaded. As a result, by nearly overloading a system it is possible to find out what its true maximum user capacity is.
After a test is performed, the response times can be analyzed to calculate the maximum active
session/desktop capacity. Within Login VSI this is calculated as VSImax. When the system is coming closer
to its saturation point, response times will rise. When reviewing the average response time, it becomes clear that response times escalate at the saturation point. For Remote Desktop Services (RDS) and VDI workloads this is valid and useful information. This index simplifies comparisons and makes it possible to understand the true impact of configuration changes on the hypervisor host or guest level.
An alternative to the Login VSI method would be to generate user actions client-side through the remoting protocol. These methods are always specific to a product and vendor-dependent. More importantly, some protocols simply do not have a method to script user actions client-side. For Login VSI, the choice has been made to execute the scripts completely server-side. This is the only practical and platform-independent solution for a benchmark like Login VSI.
The five operations from which the response times are measured are:
Notepad File Open (NFO): Loading and initiating VSINotepad.exe and opening the open-file dialog. This operation is handled by the OS and by the VSINotepad.exe itself through execution. This operation appears almost instant from an end user's point of view.
Notepad Start Load (NSLD): Loading and initiating VSINotepad.exe and opening a file. This operation is also handled by the OS and by the VSINotepad.exe itself through execution. This operation appears almost instant from an end user's point of view.
Zip High Compression (ZHC): This action copies a random file and compresses it (with 7zip) with high compression enabled. The compression very briefly spikes CPU and disk I/O.
Zip Low Compression (ZLC): This action copies a random file and compresses it (with 7zip) with low compression enabled. The compression very briefly spikes disk I/O and creates some load on the CPU.
CPU: Calculates a large array of random data and spikes the CPU for a short period of time.
These measured operations within Login VSI hit considerably different subsystems such as CPU (user and kernel), memory, disk, the OS in general, the application itself, print, GDI, and so on. These operations are deliberately short by nature. When such operations become consistently long, the system is saturated because of excessive queuing on some kind of resource, and as a result the average response times escalate. This effect is clearly visible to end users. If such operations consistently consume multiple seconds, the user will regard the system as slow and unresponsive.
Figure 39 Sample of a VSI Max Response Time Graph, Representing a Normal Test
Figure 40 Sample of a VSI Test Response Time Graph (where there was a clear performance issue)
When the test is finished, VSImax can be calculated. When the system is not saturated, and it could complete the full test without exceeding the average response time latency threshold, VSImax is not reached and the number of sessions ran successfully.
The response times are very different per measurement type; for instance, Zip with high compression can be around 2800 ms, while the Zip action without compression can take only 75 ms. The response times of these actions are weighted before they are added to the total. This ensures that each activity has an equal impact on the total response time.
In comparison to previous VSImax models, this weighting better represents system performance. All actions have very similar weights in the VSImax total. The following weighting of the response times is applied.
The following actions are part of the VSImax v4.1 calculation and are weighted as follows (US notation):
CPU: 0.75
This weighting is applied to the baseline and normal Login VSI response times.
With the introduction of Login VSI 4.1 we also created a new method to calculate the base phase of an
environment. With the new workloads (Taskworker, Powerworker, etc.) enabling 'basephase' for a more
reliable baseline has become obsolete. The calculation is explained below: in total, the 15 lowest VSI response time samples are taken from the entire test, the lowest 2 samples are removed, and the 13 remaining samples are averaged. The result is the baseline.
The VSImax average response time in Login VSI 4.1.x is calculated based on the number of active users that are logged on to the system. For example, if there are 60 active sessions, then the latest 5 + 24 (40 percent of 60) = 31 response time measurements are used for the average calculation.
To remove noise (accidental spikes) from the calculation, the top 5 percent and bottom 5 percent of the VSI response time samples are removed from the average calculation, with a minimum of 1 top and 1 bottom sample. As a result, with 60 active users, the last 31 VSI response time samples are taken. From those 31 samples, the top 2 samples and the lowest 2 results are removed (5 percent of 31 = 1.55, rounded to 2). At 60 users, the average is then calculated over the 27 remaining results.
VSImax v4.1.x is reached when the VSIbase + a 1000 ms latency threshold is not reached by the average
VSI response time result. Depending on the tested system, VSImax response time can grow 2 - 3x the
baseline average. In end-user computing, a 3x increase in response time in comparison to the baseline is
typically regarded as the maximum performance degradation to be considered acceptable.
In VSImax v4.1.x this latency threshold is fixed to 1000 ms, which allows better and fairer comparisons between two different systems, especially when they have different baseline results. Ultimately, in VSImax v4.1.x, the performance of the system is not decided by the total average response time, but by the latency it has under load. For all systems, this is now 1000 ms (weighted).
The threshold for the total response time is: average weighted baseline response time + 1000ms.
When the system has a weighted baseline response time average of 1500ms, the maximum average
response time may not be greater than 2500ms (1500+1000). If the average baseline is 3000 the maximum
average response time may not be greater than 4000ms (3000+1000).
When the threshold is not exceeded by the average VSI response time during the test, VSImax is not hit and the number of sessions ran successfully. This approach is fundamentally different in comparison to previous VSImax methods, as it was always required to saturate the system beyond the VSImax threshold.
Lastly, VSImax v4.1.x is now always reported together with the average baseline VSI response time result. This enables comparison of systems and gives a more complete understanding of the system. The baseline performance helps to understand the best performance the system can give to an individual user. VSImax indicates what the total user capacity is for the system. These two are not automatically connected and related.
When a server with a very fast dual-core CPU, running at 3.6 GHz, is compared to a 10-core CPU running at 2.26 GHz, the dual-core machine will give an individual user better performance than the 10-core machine. This is indicated by the baseline VSI response time. The lower this score is, the better the performance an individual user can expect.
However, the server with the slower 10 core CPU will easily have a larger capacity than the faster dual core
system. This is indicated by VSImax v4.1.x, and the higher VSImax is, the larger overall user capacity can be
expected.
With Login VSI 4.1.x, a new VSImax method is introduced: VSImax v4.1. This methodology gives much better insight into system performance and scales to extremely large systems.
This recommended maximum workload approach allows you to determine the server N+1 fault tolerance
load the blade can successfully support in the event of a server outage for maintenance or upgrade.
Our recommendation is that the Login VSI Average Response and VSI Index Average should not exceed the baseline plus 1680 milliseconds to ensure that the end-user experience is outstanding. Additionally, during steady state, the processor utilization should average no more than 90-95%. (Memory should never be oversubscribed for desktop virtualization workloads.)
Callouts have been added throughout the data charts to indicate each phase of testing.
Boot: Start all RDS and VDI virtual machines at the same time.
Logon: The Login VSI phase of the test, where sessions are launched and start executing the workload over a 48-minute duration.
Steady state: The phase where all users are logged in and performing various workload tasks such as using Microsoft Office, web browsing, PDF printing, playing videos, and compressing files (typically for a 15-minute duration).
Logoff: Sessions finish executing the Login VSI workload and log off.
Test Results
Single-Server Recommended Maximum Workload for RDS Persistent Hosted Server Sessions:
285 Users
Figure 41 Single Server Recommended Maximum Workload for RDS with 285 Users
The recommended maximum workload for a B200 M5 blade server with dual 6140 Gold processors and
768GB of RAM is 285 Server 2016 Remote Desktop Server Hosted Session Desktops. Each dedicated blade
server ran 12 Server 2016 Virtual Machines. Each virtual server was configured with 6 vCPUs and 24GB
RAM.
Figure 42 Single Server | VMware Horizon 7 RDS Hosted Sessions | VSI Score
Performance data for the server running the workload is shown below:
Figure 43 Single Server | VMware Horizon 7 RDSH processor | Host CPU Utilization
Figure 46 Single Server | VMware Horizon 7 RDSH processor (Total%) Processor Time
Figure 47 Single Server | VMware Horizon 7 RDSH processor (Total%) User Time
Single-Server Recommended Maximum Workload for VDI Persistent and Non-Persistent with 210
Users
Figure 48 Single Server Recommended Maximum Workload for VDI Non-Persistent with 210 Users
The recommended maximum workload for a Cisco UCS B200 M5 blade server with dual Intel Xeon Gold 6140 processors and 768GB of RAM is 210 Windows 10 64-bit virtual machines, each with 2 vCPUs and 2GB of RAM. The Login VSI and blade performance data for the server running the workload is shown below:
Figure 50 Single Server | VMware Horizon VDI Non Persistent Desktops | Host CPU Utilization
Figure 51 Single Server | VMware Horizon VDI Non Persistent Desktops | Host Memory Utilization
Figure 52 Single Server | VMware Horizon VDI non persistent desktops | Host Network Utilization
Figure 53 VDI- Single Server Recommended Maximum Workload for VDI Non-Persistent with 210 Users
Figure 55 Single Server | VMware Horizon VDI Non Persistent Desktops | Host CPU Utilization
The workload for the test is 1680 RDS users. To achieve the target, sessions were launched against the
single RDS cluster only. As per the Cisco Test Protocol for VDI solutions, all sessions were launched within
48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode) and all launched
sessions became active within two minutes subsequent to the last logged in session.
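As a quick illustration of the test protocol, the sketch below computes the session launch pace implied by launching all sessions within the 48-minute window. The helper function is an assumption made for this example; the session counts come from the cluster and full-scale tests in this document.

```python
# Illustrative calculation of the Login VSI launch pace implied by the test
# protocol above (all sessions launched within a 48-minute window).

def launch_interval_seconds(total_sessions, window_minutes=48):
    """Seconds between consecutive session launches for an even launch pace."""
    return window_minutes * 60 / total_sessions

for sessions in (1680, 1660, 5000):
    print(f"{sessions} sessions -> one launch every "
          f"{launch_interval_seconds(sessions):.2f} s "
          f"({sessions / 48:.1f} sessions/minute)")
```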
The configured system efficiently and effectively delivered the following results:
Figure 57 RDSH Cluster | 1680 RDS Users | VMware Horizon RDSH VSI Score
Figure 58 RDSH Cluster | 1680 RDS Users | Workload Host | Host CPU Utilization
Figure 59 RDSH Cluster | 1680 RDS Users | Workload Host | Host Memory Utilization
Figure 60 RDSH Cluster | 1680 RDS Users | RDS Host | Host Network Utilization
Figure 61 Cluster | 1680 RDS Users | RDS Host | Host Fibre Channel Network Utilization
Performance Data from One RDSH Server: 1680 Users RDSH Sessions Cluster Testing
Figure 62 RDSH Server Processor (Total%) Time
Cluster Workload Testing with 1660 Persistent and Non-Persistent VDI Instant
Desktop Users
This section provides the key performance metrics that were captured on the Cisco UCS, the NetApp AFF A300 storage, and the infrastructure VMs during the persistent and non-persistent desktop testing. The cluster testing comprised 1660 VDI non-persistent desktop sessions using 9 workload blades.
Figure 64 VMware Horizon VDI Non-Persistent Cluster Testing with 1660 Users
The workload for the test is 1660 non-persistent desktop users. To achieve the target, sessions were launched against the single non-persistent cluster only. As per the Cisco Test Protocol for VDI solutions, all sessions were launched within 48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode) and all launched sessions became active within two minutes subsequent to the last logged in session.
The configured system efficiently and effectively delivered the following results:
Figure 65 VDI Cluster | VMware Horizon 1660 VDI Non-Persistent Users | VSI Score
Figure 66 VDI Cluster | 1660 VDI Non-Persistent Users | VDI Host | Host CPU Utilization
Figure 67 VDI Cluster | 1660 VDI Non-Persistent Users | VDI Host | Host Memory Utilization
Figure 68 VDI Cluster | 1660 VDI Non-Persistent Users | VDI Host | Host Network Utilization
Figure 69 VDI Cluster | 1660 VDI Non-Persistent Users | VDI Host | Host Fibre Channel Network Utilization
The workload for the test is 1660 persistent desktop users. To achieve the target, sessions were launched
against the single persistent cluster only. As per the Cisco Test Protocol for VDI solutions, all sessions were
launched within 48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode) and all
launched sessions became active within two minutes subsequent to the last logged in session.
The configured system efficiently and effectively delivered the following results:
Figure 70 VMware Horizon VDI Persistent Cluster Testing with 1660 Users
Figure 71 VDI Cluster | VMware Horizon 1660 VDI Persistent Users | VSI Score
Figure 72 VDI Cluster | 1660 VDI Persistent Users | VDI Host | Host CPU Utilization
Figure 73 VDI Cluster | 1660 VDI Persistent Users | VDI Host | Host Memory Utilization
Figure 74 VDI Cluster | 1660 VDI Persistent Users | VDI Host | Host Network Utilization
Figure 75 VDI Cluster | 1660 VDI Persistent Users | VDI Host | Host Fibre Channel Network Utilization
The combined mixed workload for the solution is 5000 users. To achieve the target, sessions were launched
against all workload clusters concurrently. As per the Cisco Test Protocol for VDI solutions, all sessions were
launched within 48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode) and all
launched sessions became active within two minutes subsequent to the last logged in session.
The configured system efficiently and effectively delivered the following results:
Figure 78 Full Scale | 5000 Mixed Users RDSH Server Host | Host CPU Utilization
Figure 79 Full Scale | 5000 Mixed Users RDSH Server Host | Host Memory Utilization
Figure 80 Full Scale | 5000 Mixed Users RDSH Server Host | Host Network Utilization
Figure 81 Full Scale | 5000 Mixed Users RDSH Server Host | Host vHBA Adapter Utilization
RDSH Server Performance Monitor Data for One Sample RDSH Server: 5000 Users
Mixed Scale Testing
Figure 82 RDSH Server Processor (Total%) Time
Figure 84 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 85 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 86 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 87 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
Figure 89 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 90 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
Figure 91 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 92 represents the read and write latency on both controllers in the storage system. As noted in the
graph, read latency never exceeded 0.6ms and write latency never exceeded 0.4ms during the full-scale
testing.
Figure 93 AFF A300 5000 Users Mixed Workload Scale Test | Storage controller Read Latency
Figure 94 AFF A300 5000 Users Mixed Workload Scale Test | Storage controller Write Latency
Figure 95 AFF A300 5000 Users Mixed Workload Scale Test | Cluster Throughput
Figure 95 represents the total throughput of both storage controllers during the full-scale testing. As noted,
total throughput peaked at approximately 1.75GB/sec during the full-scale testing.
Figure 96 AFF A300 5000 Users Mixed Workload Scale Test | Total IOPS
Figure 96 represents the total number of I/O operations on both storage controllers during the full-scale testing. As noted in the graph, the A300 delivered approximately 145,000 IOPS at the peak of the testing.
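The reported peak throughput and peak IOPS can be related to an approximate average I/O size, as the rough arithmetic below shows. Note that this is only an approximation for illustration; the two peaks were not necessarily recorded at the same instant.

```python
# Rough arithmetic relating the reported peak throughput and IOPS to an
# approximate average I/O size during the 5000-user mixed test.

peak_throughput_gb_s = 1.75      # from the Figure 95 discussion
peak_iops = 145_000              # from the Figure 96 discussion

avg_io_kb = peak_throughput_gb_s * 1024 * 1024 / peak_iops
print(f"Approximate average I/O size: {avg_io_kb:.1f} KB")  # roughly 12-13 KB
```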
Figure 97 shows the remaining CPU performance capacity of the storage system. This metric can help administrators determine the growth capacity of an existing storage system. It accounts for HA failover requirements when determining headroom, to allow for consistent performance even in the event of a controller failure. As seen in this graph, there is little remaining headroom on this storage system. If additional desktops were to be deployed, an additional A300 storage system would be required to maintain the expected performance.
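The sketch below illustrates the HA-aware reasoning behind such a headroom metric. The 50 percent per-controller ceiling is a common rule of thumb for two-controller HA pairs and is used here only as an assumption; it is not the exact algorithm behind the array's headroom counter.

```python
# Hedged sketch of HA-aware headroom reasoning: in a two-controller HA pair,
# each controller is often sized so it can absorb its partner's work on
# failover, which caps sustainable per-controller utilization near 50%.
# This is a rule-of-thumb illustration, not the array's internal headroom metric.

def ha_headroom_pct(controller_util_pct, failover_ceiling_pct=50.0):
    """Remaining utilization (percentage points) before the HA ceiling is hit."""
    return max(0.0, failover_ceiling_pct - controller_util_pct)

for util in (30, 45, 48):
    print(f"Controller at {util}% -> {ha_headroom_pct(util):.0f} points of HA headroom")
```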
AFF A300 Storage Detailed Test Results for Cluster Scalability Test
This section highlights and provides analysis of the AFF A300 system performance results for each of the
cluster test cases identified previously in this document. Specifically, it depicts and discusses the results for
the following test scenarios:
From a storage perspective, it is critical to maintain latency of less than a millisecond for an optimal end-user experience, no matter the IOPS and bandwidth being driven. The test results indicate that the AFF A300 storage delivers that level of latency while driving a substantial amount of IOPS and bandwidth for the thousands of desktops hosted on the AFF A300 system.
The sections that follow show screenshots of the AFF A300 storage test data for all three use case scenarios
with information about IOPS, bandwidth, and latency at the peak of each use case test. In all three use cases
(cluster-level testing with RDSH, VDI instant clones and full clones, and the mixed workload), the criteria followed prior to launching the Login VSI workload test are the same.
This stage is followed by a forty-minute window for all RDSH and VDI VMs to settle, after which we begin the 2880-second Login VSI simulation phase, during which all sessions ramp up and log in.
AFF A300 Storage Test Results for 1680 RDS Windows 2016 Sessions
This test uses Login VSI as the workload generator in Benchmark mode with the Knowledge Worker user
type and with VMware Remote Desktop Session Hosts (RDSH) Server Sessions as the VDI delivery
mechanism. This first highlighted cluster test shows that the AFF A300 can easily handle this workload with
exceptional end-user experience as confirmed from Login VSI.
The first AFF A300 GUI screenshot shows the performance during the 1680 RDSH sessions running on top
of 72 Windows 2016 servers. As with all scenarios, there were three separate 1680 RDSH simulation runs
completed in total, all with very similar results. As indicated in charts from one of these simulations, we
maintained latency of less than or close to one millisecond (ms) for both read and write operations
throughout this entire run. This resulted in a confirmed outstanding end-user experience for the simulated
VMware RDSH users, as independently verified by Login VSI. During this component of the testing, we observed peak totals of approximately 35K IOPS and 600 MB/s of bandwidth. Latency remained below 0.20ms, which lends itself to a very good end-user experience.
Figure 100 AFF A300 1680 RDSH Cluster Test | Read/Write Latency
Figure 101 AFF A300 1680 RDSH Cluster Test Controller Processor Average
Figure 102 AFF A300 1680 RDSH Cluster Test CPU Headroom
NetApp AFF A300 Test Results for 1660 Non-Persistent Windows 10 x64 VMware Horizon
Desktops
The next cluster-level simulation was to run 1660 non-persistent Windows 10 x64 desktops against the same AFF A300. All Login VSI parameters were kept consistent, with all 1660 desktops created using VMware Horizon Composer provisioning. As indicated by the following storage metrics, the AFF A300 system was clearly able to handle this workload and continued to provide sub-millisecond latency for another impressive Login VSI result. During this component of the testing, we observed peak totals of approximately 42K IOPS and 600 MB/s of bandwidth. Latency remained below 0.20ms, which results in a very good end-user experience.
Figure 103 NetApp AFF A300| 1660 Users VDI Non-Persistent Cluster Test | Read/Write IOPS
Figure 104 AFF A300 | 1660 Users VDI Non-Persistent Cluster Test | Bandwidth Transmitted/Received
Figure 105 AFF A300 | 1660 Users VDI Non-Persistent Cluster Test | Latency (ms)
Figure 106 NetApp AFF A300| 1660 Users VDI Non-persistent Cluster Test Controller Processor Average
Figure 107 NetApp AFF A300| 1660 Users VDI Non-Persistent Cluster Test Controller CPU Headroom
NetApp AFF A300 Test Results for 1660 Persistent Windows 10x64 VMware Horizon Desktops
The next cluster-level simulation was to run 1660 persistent Windows 10 x64 desktops against the same AFF A300. All Login VSI parameters were kept consistent, with all 1660 desktops created using VMware Horizon Composer provisioning. As indicated by the following storage metrics, the AFF A300 system was clearly able to handle this workload and continued to provide sub-millisecond latency for another impressive Login VSI result. During this component of the testing, we observed peak totals of approximately 21K IOPS and 430 MB/s of bandwidth. Latency remained below 0.27ms, which results in a very good end-user experience.
Figure 108 NetApp AFF A300| 1660 Users VDI Persistent Cluster Test | Read/Write IOPS
Figure 109 NetApp AFF A300| 1660 Users VDI Persistent Cluster Test | Throughput
Figure 110 NetApp AFF A300| 1660 Users VDI Persistent Cluster Test | Latency (ms)
Figure 111 NetApp AFF A300| 1660 Users VDI Persistent Cluster Test | Controller Processor Average
Figure 112 NetApp AFF A300| 1660 Users VDI Persistent Cluster Test | Controller CPU headroom
NetApp AFF A300 Storage Test Results for 5000 User Full Scale, Mixed Workload Scalability
The next simulation shows the results of combining all of our previous cluster tests (1680 VMware Horizon RDS sessions, 1660 non-persistent VDI instant-clone virtual machine sessions, and 1660 persistent full-clone virtual machine sessions) into a full 5000-user VMware Horizon RDSH and VDI simulation on the same AFF A300 array. The performance and consistent results indicate an outstanding user experience at a very large scale.
The screenshot below shows the AFF A300 GUI with the cursor providing detailed metrics at the start of the simulation, during the boot storm of the desktops. Despite driving nearly 2GB/s of bandwidth during the test itself, the array maintained desktop responsiveness and low latency throughout the entire test.
During this component of the testing, we observed peak totals of approximately 110K IOPS and 2000 MB/s of bandwidth. Latency remained below 0.7ms, which represents a very good end-user experience for a mixed use case scale testing scenario.
Figure 113 AFF A300 5000 Users Mixed Workload Read/Write IOPS
Figure 114 AFF A300 | 5000 Users Mixed Workload Scale Test | Throughput MB/S
Figure 115 AFF A300 5000 Users Mixed Workload Latency (ms)
The VMware Horizon Administrator Console reports that all 5000 sessions/desktops (1680 RDSH sessions and 3320 VDI virtual machines, in two pools of 1660 non-persistent and 1660 persistent desktops) were logged in during the Login VSI testing.
Although this was a pristine lab environment, data reduction and overall array utilization during this full test scenario were impressive: less than 30 percent of the overall array space was utilized, leaving substantial room for additional user applications, data, and even additional workloads to be hosted on this array without any capacity concerns.
Scalability Considerations and Guidelines
Cisco UCS Manager software supports up to 20 Cisco UCS chassis within a single Cisco UCS domain with the Cisco UCS 6332UP Fabric Interconnect. A single Cisco UCS domain can therefore grow to 160 blades (20 chassis of 8 blades each) for an enterprise deployment.
Cisco UCS Central, the manager of managers, extends Cisco UCS domains and vastly increases the
reach of the Cisco UCS system. Simplify daily operations by centrally managing and automating
routine tasks and expediting problem resolution. Our powerful platform eliminates disparate
management environments. Use it to support up to 10,000 Cisco UCS servers (blade, rack,
composable, and Mini) and manage multiple Cisco UCS instances or domains across globally-
distributed locations.
As scale grows, the value of the combined Cisco UCS fabric, Cisco Nexus physical switches, and
Cisco Nexus virtual switches increases dramatically to define the Quality of Services required to
deliver excellent end user experience 100 percent of the time.
To accommodate the Cisco Nexus 9000 upstream connectivity in the way we describe in the network configuration section, two or more Ethernet uplinks need to be configured on the Cisco UCS 6332UP Fabric Interconnect.
The backend storage has to be scaled accordingly, based on the IOPS considerations described in the NetApp AFF A300 storage scaling section; refer to the NetApp AFF A300 documentation for scalability guidelines. A simple, illustrative IOPS-based sizing sketch follows this list.
For VMware Horizon pooled desktops, size the disk and memory according to the desktop pool requirements.
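The sketch below shows one way to estimate aggregate steady-state IOPS for a desktop pool. The per-desktop IOPS figure and burst factor are planning assumptions chosen for illustration only; they are not values measured in this CVD.

```python
# Sizing sketch: estimate aggregate steady-state IOPS for a desktop pool from
# an assumed per-desktop IOPS figure. The per-desktop value and burst factor
# below are planning assumptions for illustration, not numbers from this CVD.

def required_iops(desktop_count, iops_per_desktop=10, burst_factor=1.5):
    """Aggregate IOPS with a simple burst multiplier for login/boot storms."""
    return desktop_count * iops_per_desktop * burst_factor

for desktops in (1660, 3320, 5000):
    print(f"{desktops} desktops -> ~{required_iops(desktops):,.0f} IOPS (with burst factor)")
```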
VMware Horizon
When designing and deploying this CVD environment, best practices were followed including the following:
VMware recommends using N+1 schema for virtualization host servers to accommodate resiliency. In
all Reference Architectures (such as this CVD), this recommendation is applied to all host servers.
All Provisioning Server network adapters are configured with a static IP address and are used for management.
Summary
FlexPod delivers a platform for Enterprise End User Computing deployments and cloud datacenters using
Cisco UCS Blade and Rack Servers, Cisco Fabric Interconnects, Cisco Nexus 9000 switches, Cisco MDS
switches and FC-attached NetApp AFF A300. FlexPod is designed and validated using compute, network
and storage best practices and high availability to reduce deployment time, project risk and IT costs while
maintaining scalability and flexibility for addressing a multitude of IT initiatives. This CVD validates the
design, performance, management, scalability, and resilience that FlexPod provides to customers wishing to
deploy enterprise-class VDI for 5000 users at a time.
Deploy and transition services to implement validated architectures and prepare your storage
environment
Operations services to deliver continuous operations while driving operational excellence and
efficiency.
In addition, Cisco Advanced Services and NetApp Advanced Consulting Services provide in-depth
knowledge transfer and education services that give you access to our global technical resources and
intellectual property.
About the Authors
Ramesh is a VMware Horizon and Cisco UCS subject matter expert in the Cisco Computer Systems Product Group, focused on desktop virtualization with VMware Horizon.
Acknowledgements
For their support and contribution to the design, validation, and creation of this Cisco Validated Design, we
would like to acknowledge the significant contribution and expertise that resulted in developing this
document:
Mike Brennan, Product and Technical Marketing Manager, Desktop Virtualization and Graphics
Solutions, Cisco Systems, Inc.
References
https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-
servers/b200m5-specsheet.pdf
https://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-b-series-blade-servers/datasheet-
listing.html
https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-b200-m5-blade-server/model.html
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B200M5.pdf
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/release/notes/CiscoUCSManager-RN-3-1.html
http://www.cisco.com/c/en/us/products/interfaces-modules/ucs-virtual-interface-card-1340/index.html
http://www.cisco.com/c/en/us/products/switches/nexus-9372px-switch/index.html
http://www.cisco.com/c/en/us/products/switches/nexus-1000v-switch-vmware-vsphere/index.html
http://www.cisco.com/c/en/us/products/storage-networking/product-listing.html
http://www.cisco.com/c/en/us/products/storage-networking/mds-9000-series-multilayer-switches/datasheet-
listing.html
FlexPod
https://www.flexpod.com
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi65u1design.html
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi65u1_n9fc.html
VMware References
https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-6-pubs.html
http://pubs.vmware.com/Release_Notes/en/horizon-7-view/horizon-703-view-release-notes.html
https://labs.vmware.com/flings/vmware-os-optimization-tool
https://pubs.vmware.com/view-51/index.jsp?topic=%2Fcom.vmware.view.planning.doc%2FGUID-6CAFE558-
A0AB-4894-A0F4-97CF556784A9.html
https://pubs.vmware.com/horizon-7-view/index.jsp?topic=%2Fcom.vmware.horizon-
view.desktops.doc%2FGUID-DFAD071A-7F60-4720-86AB-8F1597BFC95C.html
Microsoft References
https://technet.microsoft.com/en-us/library/hh831620(v=ws.11).aspx
https://technet.microsoft.com/en-us/library/dn281793(v=ws.11).aspx
https://support.microsoft.com/en-us/kb/2833839
https://technet.microsoft.com/en-us/library/hh831447(v=ws.11).aspx
https://www.loginvsi.com/documentation/Start_your_first_test
http://www.netapp.com/us/products/data-management-software/ontap.aspx
https://mysupport.netapp.com/documentation/docweb/index.html?productID=62379&language=en-
US
http://www.netapp.com/us/products/management-software/
http://www.netapp.com/us/products/management-software/vsc/
Appendix A Cisco Nexus Ethernet and MDS Fibre Channel Switch Configurations
version 7.0(3)I1(3b)
switchname DV-POD-2-N9K-A
vdc DV-Pod-2-N9K-A id 1
limit-resource vlan minimum 16 maximum 4094
limit-resource vrf minimum 2 maximum 4096
limit-resource port-channel minimum 0 maximum 511
limit-resource u4route-mem minimum 248 maximum 248
limit-resource u6route-mem minimum 96 maximum 96
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
feature telnet
cfs ipv4 distribute
cfs eth distribute
feature udld
feature interface-vlan
feature hsrp
feature lacp
feature dhcp
feature vpc
feature lldp
clock protocol none vdc 1
no password strength-check
username admin password 5 $1$tYYajkfc$7P7nLjWYvfTWAlvFDnwJZ. role network-admin
vlan 61
name Infra-Mgmt
vlan 62
name CIFS
vlan 63
name NFS
vlan 66
name vMotion
vlan 68
name LauncherPXE
vlan 69
name Launcher81
vlan 102
name VDI
vlan 164
name Out-Of-Band-Mgmt
interface Vlan1
description default native vlan
no shutdown
no ip redirects
ip address 10.29.164.253/24
no ipv6 redirects
interface Vlan60
description In Band Mgmt vlan 60
no shutdown
no ip redirects
ip address 10.10.60.2/24
hsrp version 2
hsrp 60
preempt
ip 10.10.60.1
interface Vlan61
description Infrastructure vlan 61
no shutdown
no ip redirects
ip address 10.10.61.2/24
no ipv6 redirects
hsrp version 2
hsrp 61
preempt
ip 10.10.61.1
interface Vlan62
description CIFS vlan 62
no shutdown
no ip redirects
ip address 10.10.62.2/24
no ipv6 redirects
hsrp version 2
hsrp 62
preempt
priority 110
ip 10.10.62.1
interface Vlan63
description NFS vlan 63
no shutdown
no ip redirects
ip address 10.10.63.2/24
no ipv6 redirects
hsrp version 2
hsrp 63
preempt
ip 10.10.63.1
interface Vlan66
description vMotion network vlan 66
no shutdown
ip address 10.10.66.2/24
hsrp version 2
hsrp 66
preempt
ip 10.10.66.1
interface Vlan68
description VSI Launchers vlan 68
no shutdown
ip address 10.10.68.2/23
hsrp version 2
hsrp 68
preempt
ip 10.10.68.1
ip dhcp relay address 10.10.61.30
interface Vlan69
description LoginVSI Launchers 10.10.81.network vlan 69
no shutdown
ip address 10.10.81.2/24
hsrp version 2
hsrp 69
preempt
ip 10.10.81.1
ip dhcp relay address 10.10.61.30
interface Vlan102
description VDI vlan 102
no shutdown
ip address 10.2.0.2/19
no ipv6 redirects
hsrp version 2
hsrp 102
preempt delay minimum 240
priority 110
ip 10.2.0.1
ip dhcp relay address 10.10.61.30
ip dhcp relay address 10.10.61.31
interface port-channel10
description VPC peer-link
interface port-channel11
description FI-A_6k_UCS-Uplink
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 11
interface port-channel2
description FI-B_6k_UCS-Uplink
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 12
interface port-channel15
description FI-A_6k_Launchers-Uplink
mtu 9216
vpc 15
interface port-channel16
description FI-B_6k_Launchers-Uplink
mtu 9216
vpc 16
interface port-channel51
speed 40000
no negotiate auto
mtu 9216
vpc 51
interface port-channel52
speed 40000
no negotiate auto
mtu 9216
vpc 52
interface port-channel53
mtu 9216
vpc 53
interface port-channel54
mtu 9216
vpc 54
interface Ethernet1/1
interface Ethernet1/2
interface Ethernet1/3
interface Ethernet1/4
interface Ethernet1/5
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
interface Ethernet1/14
interface Ethernet1/15
interface Ethernet1/16
interface Ethernet1/17
interface Ethernet1/18
interface Ethernet1/19
interface Ethernet1/20
interface Ethernet1/21
interface Ethernet1/22
interface Ethernet1/23
interface Ethernet1/24
interface Ethernet1/25
interface Ethernet1/26
interface Ethernet1/27
interface Ethernet1/28
interface Ethernet1/29
interface Ethernet1/30
interface Ethernet1/31
interface Ethernet1/32
interface Ethernet1/33
interface Ethernet1/34
interface Ethernet1/35
description Uplink_from_FI-A_to-N9KA
mtu 9216
interface Ethernet1/36
description Uplink_from_FI-B_to-N9KB
mtu 9216
interface Ethernet1/37
interface Ethernet1/38
interface Ethernet1/39
interface Ethernet1/40
interface Ethernet1/41
interface Ethernet1/42
interface Ethernet1/43
interface Ethernet1/44
interface Ethernet1/45
description Uplink_from_LoginVSI_Launchers_FI-A
mtu 9216
interface Ethernet1/46
description Uplink_from_LoginVSI_Launchers_FI-B
mtu 9216
interface Ethernet1/47
interface Ethernet1/48
description TOR
switchport access vlan 164
interface Ethernet1/49
interface Ethernet1/50
interface Ethernet1/51
description FI-A-N9K-Uplink
speed 40000
interface Ethernet1/52
description FI-B-N9K-Uplink
speed 40000
no negotiate auto
mtu 9216
interface Ethernet1/53
switchport mode trunk
mtu 9216
channel-group 53 mode active
interface Ethernet1/54
switchport mode trunk
mtu 9216
channel-group 54 mode active
interface mgmt0
vrf member management
ip address 10.29.164.65/24
line console
line vty
version 7.0(3)I1(3b)
switchname DV-Pod-2-N9K-B
vdc N9K-B id 1
limit-resource vlan minimum 16 maximum 4094
limit-resource vrf minimum 2 maximum 4096
limit-resource port-channel minimum 0 maximum 511
limit-resource u4route-mem minimum 248 maximum 248
limit-resource u6route-mem minimum 96 maximum 96
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
feature telnet
cfs ipv4 distribute
cfs eth distribute
feature udld
feature interface-vlan
feature hsrp
feature lacp
feature dhcp
feature vpc
feature lldp
clock protocol none vdc 1
no password strength-check
username admin password 5 $1$fp3LrGLC$PF8eML85qkPBgdH/bZAKK/ role network-admin
ip domain-lookup
system default switchport shutdown
no service unsupported-transceiver
copp profile strict
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
ntp peer 10.10.160.2
ntp server 10.10.160.3
ntp server 171.68.38.66 use-vrf management
ntp master 8
vlan 1,60-70,102,164
vlan 60
name In-Band-Mgmt
vlan 61
name Infra-Mgmt
vlan 62
name CIFS
vlan 63
name NFS
vlan 66
name vMotion
vlan 68
name LauncherPXE
vlan 69
name Launcher81
vlan 102
name VDI
vlan 164
name Out-Of-Band-Mgmt
interface Vlan1
description default native vlan
no shutdown
no ip redirects
ip address 10.29.164.254/24
no ipv6 redirects
interface Vlan60
description In Band Mgmt vlan 60
no shutdown
no ip redirects
ip address 10.10.60.3/24
hsrp version 2
hsrp 60
preempt
ip 10.10.60.1
interface Vlan61
description Infrastructure Mgmt vlan 61
no shutdown
no ip redirects
ip address 10.10.61.3/24
no ipv6 redirects
hsrp version 2
hsrp 61
preempt
ip 10.10.61.1
interface Vlan62
interface Vlan69
description LoginVSI Launchers 10.10.81-network vlan 69
no shutdown
no ip redirects
ip address 10.10.81.3/24
no ipv6 redirects
hsrp version 2
hsrp 69
preempt
ip 10.10.81.1
ip dhcp relay address 10.10.61.30
interface Vlan102
description VDI vlan 102
no shutdown
no ip redirects
ip address 10.2.0.3/19
no ipv6 redirects
hsrp version 2
hsrp 102
preempt delay minimum 240
priority 110
ip 10.2.0.1
ip dhcp relay address 10.10.61.30
ip dhcp relay address 10.10.61.31
interface port-channel10
description VPC peer-link
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type network
vpc peer-link
interface port-channel11
description FI-A_6k_UCS-Uplink
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 11
interface port-channel12
description FI-B_6k_UCS-Uplink
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 12
interface port-channel15
description FI-A_6k_Launchers-Uplink
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 15
interface port-channel16
description FI-B_6k_Launchers-Uplink
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 16
interface port-channel51
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 51
interface port-channel52
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
spanning-tree port type edge trunk
mtu 9216
vpc 52
interface port-channel53
switchport mode trunk
spanning-tree port type edge trunk
mtu 9216
vpc 53
interface port-channel54
switchport mode trunk
spanning-tree port type edge trunk
mtu 9216
vpc 54
interface Ethernet1/1
interface Ethernet1/2
interface Ethernet1/3
interface Ethernet1/4
interface Ethernet1/5
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
interface Ethernet1/14
interface Ethernet1/15
interface Ethernet1/16
interface Ethernet1/17
interface Ethernet1/18
interface Ethernet1/19
interface Ethernet1/20
interface Ethernet1/21
interface Ethernet1/22
interface Ethernet1/23
interface Ethernet1/24
interface Ethernet1/25
interface Ethernet1/26
interface Ethernet1/27
interface Ethernet1/28
interface Ethernet1/29
interface Ethernet1/30
interface Ethernet1/31
interface Ethernet1/32
interface Ethernet1/33
interface Ethernet1/34
interface Ethernet1/35
description Uplink_fromFI-A-to_N9KA
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
mtu 9216
channel-group 11 mode active
interface Ethernet1/36
description Uplink_fromFI-B-to_N9KB
switchport mode trunk
switchport trunk allowed vlan 1-2,60-70,102,164
mtu 9216
channel-group 12 mode active
interface Ethernet1/37
interface Ethernet1/38
interface Ethernet1/39
interface Ethernet1/40
interface Ethernet1/41
interface Ethernet1/42
interface Ethernet1/43
interface Ethernet1/44
interface Ethernet1/45
description Uplink_from_LoginVSI_Launchers_FI-A
mtu 9216
interface Ethernet1/46
description Uplink_from_LoginVSI_Launchers_FI-B
mtu 9216
interface Ethernet1/47
interface Ethernet1/48
description TOR
switchport access vlan 164
interface Ethernet1/49
interface Ethernet1/50
description VPC Peer Link between 9ks
interface Ethernet1/51
description FI-A-N9KUPlink
speed 40000
no negotiate auto
mtu 9216
interface Ethernet1/52
description FI-B-N9KUPlink
speed 40000
no negotiate auto
mtu 9216
interface Ethernet1/53
switchport mode trunk
mtu 9216
channel-group 53 mode active
interface Ethernet1/54
switchport mode trunk
mtu 9216
channel-group 54 mode active
interface mgmt0
vrf member management
ip address 10.29.164.66/24
line console
line vty
version 6.2(9a)
feature npiv
feature fport-channel-trunk
ip domain-lookup
vsan database
device-alias database
device-alias commit
interface port-channel1
interface port-channel2
interface port-channel30
vsan database
switchname MDS-A
line console
line vty
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/11
interface fc1/12
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
interface fc1/3
interface fc1/4
interface fc1/5
interface fc1/6
interface fc1/7
interface fc1/8
interface fc1/9
interface fc1/10
interface fc1/13
interface fc1/17
interface fc1/18
interface fc1/19
interface fc1/20
interface fc1/21
interface fc1/22
interface fc1/23
interface fc1/24
interface fc1/25
interface fc1/26
interface fc1/27
interface fc1/28
interface fc1/29
interface fc1/30
interface fc1/31
interface fc1/32
interface fc1/33
interface fc1/34
interface fc1/35
interface fc1/36
interface fc1/37
interface fc1/38
interface fc1/39
interface fc1/40
interface fc1/41
interface fc1/42
interface fc1/47
interface fc1/48
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/1
interface fc1/2
interface fc1/11
interface fc1/12
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
member a300_VDI-1-hba1
member a300_VDI-2-hba1
member a300_VDI-3-hba1
member a300_VDI-4-hba1
member a300_VDI-5-hba1
member a300_VDI-6-hba1
member a300_VDI-7-hba1
member a300_Infra01-8-hba1
member a300_VDI-9-hba1
member a300_VDI-10-hba1
member a300_VDI-11-hba1
member a300_VDI-12-hba1
member a300_VDI-13-hba1
member a300_VDI-14-hba1
member a300_VDI-15-hba1
member a300_Infra02-16-hba1
member a300_VDI-17-hba1
member a300_VDI-18-hba1
member a300_VDI-19-hba1
member a300_VDI-20-hba1
member a300_VDI-21-hba1
member a300_VDI-22-hba1
member a300_VDI-23-hba1
member a300_VDI-24-hba1
member a300_VDI-25-hba1
member a300_VDI-1-hba1
member a300_VDI-2-hba1
member a300_VDI-3-hba1
member a300_VDI-4-hba1
member a300_VDI-5-hba1
member a300_VDI-6-hba1
member a300_VDI-7-hba1
member a300_Infra01-8-hba1
member a300_VDI-9-hba1
member a300_VDI-10-hba1
member a300_VDI-11-hba1
member a300_VDI-12-hba1
member a300_VDI-13-hba1
member a300_VDI-14-hba1
member a300_VDI-15-hba1
member a300_Infra02-16-hba1
member a300_VDI-17-hba1
member a300_VDI-18-hba1
member a300_VDI-19-hba1
member a300_VDI-20-hba1
member a300_VDI-21-hba1
member a300_VDI-22-hba1
member a300_VDI-23-hba1
member a300_VDI-24-hba1
member a300_VDI-25-hba1
interface fc1/1
port-license acquire
no shutdown
interface fc1/2
port-license acquire
no shutdown
interface fc1/3
port-license acquire
no shutdown
interface fc1/4
port-license acquire
no shutdown
interface fc1/5
port-license acquire
channel-group 2 force
no shutdown
interface fc1/6
port-license acquire
channel-group 2 force
no shutdown
interface fc1/7
port-license acquire
channel-group 2 force
no shutdown
interface fc1/8
port-license acquire
channel-group 2 force
no shutdown
interface fc1/9
port-license acquire
interface fc1/10
port-license acquire
interface fc1/11
port-license acquire
interface fc1/12
port-license acquire
interface fc1/13
port-license acquire
no shutdown
interface fc1/14
port-license acquire
no shutdown
interface fc1/15
port-license acquire
no shutdown
interface fc1/16
port-license acquire
no shutdown
interface fc1/17
port-license acquire
channel-group 1 force
no shutdown
interface fc1/18
port-license acquire
channel-group 1 force
no shutdown
interface fc1/19
port-license acquire
channel-group 1 force
no shutdown
interface fc1/20
port-license acquire
channel-group 1 force
no shutdown
interface fc1/21
port-license acquire
no shutdown
interface fc1/22
port-license acquire
no shutdown
interface fc1/23
port-license acquire
no shutdown
interface fc1/24
port-license acquire
interface fc1/25
port-license acquire
no shutdown
interface fc1/26
port-license acquire
no shutdown
interface fc1/27
port-license acquire
no shutdown
interface fc1/28
port-license acquire
no shutdown
interface fc1/29
port-license acquire
interface fc1/30
port-license acquire
interface fc1/31
port-license acquire
interface fc1/32
port-license acquire
interface fc1/33
port-license acquire
interface fc1/34
port-license acquire
interface fc1/35
port-license acquire
interface fc1/36
port-license acquire
interface fc1/37
port-license acquire
no shutdown
interface fc1/38
port-license acquire
no shutdown
interface fc1/39
port-license acquire
no shutdown
interface fc1/40
port-license acquire
no shutdown
interface fc1/41
port-license acquire
no shutdown
interface fc1/42
port-license acquire
no shutdown
interface fc1/43
port-license acquire
no shutdown
interface fc1/44
port-license acquire
no shutdown
interface fc1/45
port-license acquire
no shutdown
interface fc1/46
port-license acquire
no shutdown
interface fc1/47
port-license acquire
no shutdown
interface fc1/48
port-license acquire
no shutdown
interface mgmt0
ip default-gateway 10.29.164.1
MDS-A#
version 6.2(9a)
feature npiv
feature fport-channel-trunk
no password strength-check
ip domain-lookup
vsan database
device-alias database
device-alias commit
interface port-channel1
interface port-channel2
interface port-channel11
vsan database
switchname MDS-B
line console
line vty
interface fc1/13
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/3
interface fc1/4
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
interface fc1/3
interface fc1/4
interface fc1/5
interface fc1/6
interface fc1/7
interface fc1/8
interface fc1/9
interface fc1/10
interface fc1/17
interface fc1/18
interface fc1/19
interface fc1/20
interface fc1/21
interface fc1/22
interface fc1/23
interface fc1/24
interface fc1/25
interface fc1/26
interface fc1/27
interface fc1/28
interface fc1/29
interface fc1/30
interface fc1/31
interface fc1/32
interface fc1/33
interface fc1/34
interface fc1/35
interface fc1/36
interface fc1/37
interface fc1/38
interface fc1/39
interface fc1/40
interface fc1/41
interface fc1/42
interface fc1/47
interface fc1/48
interface fc1/13
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/1
interface fc1/2
interface fc1/11
interface fc1/12
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
member a300_VDI-1-hba2
member a300_VDI-2-hba2
member a300_VDI-3-hba2
member a300_VDI-4-hba2
member a300_VDI-5-hba2
member a300_VDI-6-hba2
member a300_VDI-7-hba2
member a300_Infra01-8-hba2
member a300_VDI-9-hba2
member a300_VDI-10-hba2
member a300_VDI-11-hba2
member a300_VDI-12-hba2
member a300_VDI-13-hba2
member a300_VDI-14-hba2
member a300_VDI-15-hba2
member a300_Infra02-16-hba2
member a300_VDI-17-hba2
member a300_VDI-18-hba2
member a300_VDI-19-hba2
member a300_VDI-20-hba2
member a300_VDI-21-hba2
member a300_VDI-22-hba2
member a300_VDI-23-hba2
member a300_VDI-24-hba2
member a300_VDI-25-hba2
member a300_VDI-1-hba2
member a300_VDI-2-hba2
member a300_VDI-3-hba2
member a300_VDI-4-hba2
member a300_VDI-5-hba2
member a300_VDI-6-hba2
member a300_VDI-7-hba2
member a300_Infra01-8-hba2
member a300_VDI-9-hba2
member a300_VDI-10-hba2
member a300_VDI-11-hba2
member a300_VDI-12-hba2
member a300_VDI-13-hba2
member a300_VDI-14-hba2
member a300_VDI-15-hba2
member a300_Infra02-16-hba2
member a300_VDI-17-hba2
member a300_VDI-18-hba2
member a300_VDI-19-hba2
member a300_VDI-20-hba2
member a300_VDI-21-hba2
member a300_VDI-22-hba2
member a300_VDI-23-hba2
member a300_VDI-24-hba2
member a300_VDI-25-hba2
interface fc1/1
port-license acquire
no shutdown
interface fc1/2
port-license acquire
no shutdown
interface fc1/3
port-license acquire
no shutdown
interface fc1/4
port-license acquire
no shutdown
interface fc1/5
port-license acquire
channel-group 2 force
no shutdown
interface fc1/6
port-license acquire
channel-group 2 force
no shutdown
interface fc1/7
port-license acquire
channel-group 2 force
no shutdown
interface fc1/8
port-license acquire
channel-group 2 force
no shutdown
interface fc1/9
port-license acquire
no shutdown
interface fc1/10
port-license acquire
no shutdown
interface fc1/11
port-license acquire
interface fc1/12
port-license acquire
interface fc1/13
port-license acquire
no shutdown
interface fc1/14
port-license acquire
no shutdown
interface fc1/15
port-license acquire
no shutdown
interface fc1/16
port-license acquire
no shutdown
interface fc1/17
port-license acquire
channel-group 1 force
no shutdown
interface fc1/18
port-license acquire
channel-group 1 force
no shutdown
interface fc1/19
port-license acquire
channel-group 1 force
no shutdown
interface fc1/20
port-license acquire
channel-group 1 force
no shutdown
interface fc1/21
port-license acquire
no shutdown
interface fc1/22
port-license acquire
no shutdown
interface fc1/23
port-license acquire
no shutdown
interface fc1/24
port-license acquire
interface fc1/25
port-license acquire
no shutdown
interface fc1/26
port-license acquire
no shutdown
interface fc1/27
port-license acquire
no shutdown
interface fc1/28
port-license acquire
no shutdown
interface fc1/29
port-license acquire
interface fc1/30
port-license acquire
interface fc1/31
port-license acquire
interface fc1/32
port-license acquire
interface fc1/33
port-license acquire
interface fc1/34
port-license acquire
interface fc1/35
port-license acquire
interface fc1/36
port-license acquire
interface fc1/37
port-license acquire
no shutdown
interface fc1/38
port-license acquire
no shutdown
interface fc1/39
port-license acquire
no shutdown
interface fc1/40
port-license acquire
no shutdown
interface fc1/41
port-license acquire
no shutdown
interface fc1/42
port-license acquire
no shutdown
interface fc1/43
port-license acquire
no shutdown
interface fc1/44
port-license acquire
no shutdown
interface fc1/45
port-license acquire
no shutdown
interface fc1/46
port-license acquire
no shutdown
interface fc1/47
port-license acquire
no shutdown
interface fc1/48
port-license acquire
no shutdown
interface mgmt0
ip default-gateway 10.29.164.1
MDS-B#
Appendix B - 5000 Users Scale Test Host Metrics for All Workload
Servers and FlexPod AFF A300 Storage Graphs
This section highlights and provides analysis of the FlexPod AFF A300 storage performance results for each of the cluster test cases identified in this Cisco Validated Design.
From a storage perspective, it is critical to maintain latency near to or less than a millisecond in order to guarantee a good end-user experience. As you will see, the FlexPod AFF A300 storage delivers that level of performance despite driving a substantial amount of IOPS and bandwidth for the thousands of desktops hosted on the single FlexPod AFF A300 storage system.
The following charts were compiled by extracting front-end array telemetry from the storage logs and are equivalent to the values shown in the FlexPod AFF A300 storage GUI. Results were plotted in this format to highlight individual storage performance metrics of interest during each simulation as well as to clearly show the various phases of each simulation. Note that across the top of each graph we have identified the three separate phases of the Login VSI simulation run. The first phase (green arrows and text box) is the 2880-second Login VSI simulation phase, when all sessions are ramping up and logging in. Next, all sessions in the simulation hold steady state for 900 seconds, and then users begin logging out of the environment. For brevity, we did not show the entire logout operation, because array activity is minimal during that time and the Login VSI simulation had completed.
Front-end statistics were pulled off the AFF A300 storage array and plotted in this appendix to show a more detailed summary of how the key array metrics performed during each cluster-level simulation. As in the results section, individual stages of each simulation are clearly denoted in each chart.
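The following is a minimal sketch, not the actual tooling used for this CVD, of how per-second telemetry could be plotted with the simulation phases annotated in this way. The synthetic data, column names, and phase durations in the sketch are stand-ins based on the test description above.

```python
# Minimal sketch (not the actual tooling used) of plotting per-second array
# telemetry with the Login VSI phases annotated. The synthetic data below
# stands in for a hypothetical telemetry export; values are illustrative only.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

seconds = np.arange(0, 2880 + 900 + 600)          # login + steady state + partial logoff
df = pd.DataFrame({
    "time_s": seconds,
    "read_latency_ms": 0.2 + 0.1 * np.random.rand(seconds.size),
    "write_latency_ms": 0.15 + 0.1 * np.random.rand(seconds.size),
})

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(df["time_s"], df["read_latency_ms"], label="Read latency (ms)")
ax.plot(df["time_s"], df["write_latency_ms"], label="Write latency (ms)")

# Phase boundaries taken from the test description: 2880 s of session ramp-up
# and login, then 900 s of steady state, then logoff.
for boundary, label in ((0, "Login"), (2880, "Steady state"), (2880 + 900, "Logoff")):
    ax.axvline(boundary, linestyle="--", linewidth=1)
    ax.annotate(label, (boundary, ax.get_ylim()[1]), xytext=(3, -12),
                textcoords="offset points")

ax.set_xlabel("Elapsed time (s)")
ax.set_ylabel("Latency (ms)")
ax.legend()
plt.tight_layout()
plt.show()
```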
5000 Mixed Workload VMware Horizon RDSH and VDI Non–Persistent Instant
Clones and Persistent Full Clones Virtual Machines Testing
Figure 117 Full Scale | 5000 Mixed Users | RDSH Host | Host CPU Utilization
Figure 118 Full Scale | 5000 Mixed Users | RDSH Host | Host Memory Utilization
Figure 119 Full Scale | 5000 Mixed Users | RDSH Host | Host Network Utilization
Figure 120 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
RDSH-Host 02
Figure 121 Full Scale | 5000 Mixed Users | RDSH Host | Host CPU Utilization
Figure 122 Full Scale | 5000 Mixed Users | RDSH Host | Host Memory Utilization
Figure 123 Full Scale | 5000 Mixed Users | RDSH Host | Host Network Utilization
Figure 124 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
RDSH-03
Figure 125 Full Scale | 5000 Mixed Users | RDSH Host | Host CPU Utilization
Figure 126 Full Scale | 5000 Mixed Users | RDSH Host | Host Memory Utilization
Figure 127 Full Scale | 5000 Mixed Users | RDSH Host | Host Network Utilization
Figure 128 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
RDSH-04
Figure 129 Full Scale | 5000 Mixed Users | RDSH Host | Host CPU Utilization
Figure 130 Full Scale | 5000 Mixed Users | RDSH Host | Host Memory Utilization
Figure 131 Full Scale | 5000 Mixed Users | RDSH Host | Host Network Utilization
Figure 132 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
RDSH-05
Figure 133 Full Scale | 5000 Mixed Users | RDSH Host | Host CPU Utilization
Figure 134 Full Scale | 5000 Mixed Users | RDSH Host | Host Memory Utilization
Figure 135 Full Scale | 5000 Mixed Users | RDSH Host | Host Network Utilization
Figure 136 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
RDSH-06
Figure 137 Full Scale | 5000 Mixed Users | RDSH Host | Host CPU Utilization
Figure 138 Full Scale | 5000 Mixed Users | RDSH Host | Host Memory Utilization
Figure 139 Full Scale | 5000 Mixed Users | RDSH Host | Host Network Utilization
Figure 140 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
RDSH-07
Figure 141 Full Scale | 5000 Mixed Users | RDSH Host | Host CPU Utilization
Figure 142 Full Scale | 5000 Mixed Users | RDSH Host | Host Memory Utilization
Figure 143 Full Scale | 5000 Mixed Users | RDSH Host | Host Network Utilization
Figure 144 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
ESXTOP Util% Charts for All VDI (Non–Persistent and Persistent Workload Hosts)
VDI-01
Figure 145 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 146 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 147 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 148 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-02
Figure 149 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 150 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 151 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 152 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-03
Figure 153 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 154 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 155 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 156 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host 04
Figure 157 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 158 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 159 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 160 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-05
Figure 161 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 162 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 163 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 164 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-06
Figure 165 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 166 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 167 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 168 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI-07
Figure 169 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 170 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 171 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 172 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-08
Figure 173 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 174 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 175 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
Figure 176 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI-09
Figure 177 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 178 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 179 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 180 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-10
Figure 181 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 182 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 183 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 184 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-11
Figure 185 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 186 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 187 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 188 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-12
Figure 189 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 190 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 191 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-13
Figure 192 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 193 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 194 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 195 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-14
Figure 196 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 197 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 198 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 199 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-15
Figure 200 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 201 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 202 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 203 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-16
Figure 204 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 205 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 206 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 207 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-17
Figure 208 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 209 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 210 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 211 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
VDI Host-18
Figure 212 Full Scale | 5000 Mixed Users | VDI Host | Host CPU Utilization
Figure 213 Full Scale | 5000 Mixed Users | VDI Host | Host Memory Utilization
Figure 214 Full Scale | 5000 Mixed Users | VDI Host | Host Network Utilization
Figure 215 Full Scale | 5000 Mixed Users | VDI Host | Host Fibre Channel Network Utilization
NetApp AFF A300 Storage Charts for 5000 Users Scale Test
Figure 216 AFF A300 5000 Users Mixed Workload Scale Test | Total IOPS
Figure 217 AFF A300 5000 Users Mixed Workload Scale Test | Total Throughput
Figure 218 AFF A300 5000 Users Mixed Workload Scale Test | Controller Average Latency
Figure 219 AFF A300 5000 Users Mixed Workload Scale Test | Controller Read / Write Latency
Figure 220 AFF A300 5000 Users Mixed Workload Scale Test | Controller Read Latency
Figure 221 AFF A300 5000 Users Mixed Workload Scale Test | Controller Write Latency
Figure 222 AFF A300 5000 Users Mixed Workload Scale Test | Controller CPU Headroom Util. %
Figure 223 AFF A300 5000 Users Mixed Workload Scale Test | Controller Average Processor Busy