OpenStack Reference Architecture for Service Providers
Last update: 13 June 2019
Version 2.1
Lijun Gu
Bin Zhou
Brahmanand Gorti
Miroslav Halas
Mike Perks
Shuang Yang
Traditional networking is undergoing a major technological change in which purpose-built network appliances are rapidly being replaced by high-volume server platforms running virtualized network functions as virtual machines, known as Virtual Network Functions (VNFs). This trend is called Network Functions Virtualization (NFV), and telecom operators and large enterprises stand to benefit from it. OpenStack is proving to be the virtualization platform of choice when service providers deploy NFV.
Compared with traditional IT data centers, the NFV architecture requires high network bandwidth and compute capacity to deliver maximum packet throughput and low latency. The Data Plane Development Kit (DPDK) and Single-Root Input/Output Virtualization (SR-IOV) are two of the key network acceleration techniques used by NFV.
This document describes the Lenovo NFVI Reference Architecture (RA) for Communications Service Providers (CSPs). It uses the Red Hat OpenStack Platform with DPDK and SR-IOV enabled on Lenovo hardware, including industry-leading servers, storage, networking, and Physical Infrastructure Management (PIM) tools from Lenovo.
Lenovo and Red Hat have collaborated to promote best practices and validate a reference architecture for deploying private cloud infrastructure by leveraging Red Hat OpenStack Platform 13. Red Hat OpenStack Platform 13 is offered with 3 years of production support, with the option to purchase extended lifecycle support (ELS) for the 4th and 5th years.
The Lenovo NFVI platform provides an ideal infrastructure solution for NFV deployments. Lenovo servers
provide the full range of form factors, features and functions that are necessary to meet the needs of small
operators all the way up to large service providers. Lenovo uses industry standards in systems management
on all these server platforms and enables seamless integration into cloud management tools such as
OpenStack. Lenovo also provides data center network switches that are designed specifically for robust,
scale-out server configurations and converged storage interconnect fabrics.
The Lenovo XClarity™ Administrator solution consolidates systems management across multiple Lenovo
servers that span the data center. XClarity, which serves as PIM in an NFV deployment, enables automation
of firmware updates on servers via compliance policies, patterns for system configuration settings, hardware
inventory, bare-metal OS and hypervisor provisioning, and continuous hardware monitoring. XClarity easily
extends via the published REST API to integrate into other management tools.
The intended audience for this document is IT and networking professionals, solution architects, sales
engineers, and consultants. Readers should have a basic knowledge of Red Hat Enterprise Linux and
OpenStack.
• Consolidated and fully integrated hardware resources with balanced workloads for compute, network,
and storage.
• Guidelines to configure OVS-DPDK and SR-IOV as distributed in Red Hat OpenStack Platform 13.
• Elimination of single points of failure in every layer by delivering continuous access to virtual
machines (VMs).
• Hardware redundancy and full utilization.
• Rapid OpenStack cloud deployment, including updates, patches, security, and usability
enhancements with enterprise-level support from Red Hat and Lenovo.
• Unified management and monitoring for VMs.
Management portal    Web-based dashboard for workloads management    • OpenStack dashboard (Horizon) for most routine management operations
Scalability    Solution components can scale for growth    • Compute nodes and storage nodes can be scaled independently within a rack or across racks without service downtime
Ease of installation    Reduced complexity for solution deployment    • A dedicated deployment server with a web-based deployment tool and a rich command line provides greater flexibility and control over how you deploy OpenStack in your cloud    • Optional deployment services
The NFV-MANO layer provides the capability of NFV management and orchestration (MANO) so that
software implementation of network functions can be decoupled from the compute, storage, and network
resources provided via NFVI. The NFV Orchestrator (NFVO) performs orchestration functions of NFVI
resources across multiple Virtual Infrastructure Managers (VIMs) and lifecycle management of network
services. The VNF Manager (VNFM) performs orchestration and management functions of VNFs. The VIM
performs orchestration and management functions of NFVI resources within a domain.
Figure 1 shows the architecture of the Lenovo NFVI solution along with other components that make up the
entire NFV stack.
The Lenovo NFVI solution is composed of various components, which CSPs can deploy according to their
specific use-case requirements.
The Lenovo NFVI solution supports various options to provide highly available persistent storage for the NFV
cloud. The storage solutions include:
The option to deploy without a Ceph Cluster is targeted towards users who do not have a need for persistent
storage backing their VM workloads and users that do not wish to use compute resources for a Ceph Cluster.
It leverages additional physical disks on OpenStack Controller Nodes to build a Swift cluster to provide
backing storage for OpenStack images, metrics and backup data while user workloads run using the
ephemeral storage on the compute nodes.
A typical selection of storage solution is to deploy a dedicated Ceph Cluster as the storage backend of the
NFV cloud. The Ceph Cluster is highly scalable and provides unified storage interfaces, which makes it
suitable for various user scenarios for block, object, and file storage.
The hyperconverged Ceph storage option is targeted at users who need persistent data storage for their VMs but do not want, or do not have space for, additional storage nodes in their infrastructure. Additional disks from the compute nodes are used to provide highly available, persistent storage for workloads and the rest of the OpenStack deployment. The deployment collocates Ceph on the compute nodes and configures the nodes for optimal utilization of resources. However, a hyperconverged configuration is not recommended for compute nodes that run demanding NFV workloads.
In this Lenovo NFVI architecture, Red Hat Ceph Storage provides block storage for images, volumes, and snapshots as well as object storage services via dedicated Storage Nodes, which are optimized for disk capacity and support up to twenty-four drives combining NVMe, SSD, and HDD backed storage tiers.
SR-IOV
Single Root I/O Virtualization (SR-IOV) is an extension to the PCI Express (PCIe) specification. SR-IOV enables a single PCIe device to appear as multiple separate PCI devices. An SR-IOV enabled device can dedicate isolated access to its resources in the form of "Virtual Functions" (VFs). These VFs are then assigned to virtual machines (VMs), which allows direct memory access (DMA) to the network data. VM guests gain the performance advantage of direct PCI device assignment while using only a single VF on the physical NIC. Compared with a traditional virtualized environment without SR-IOV, where a packet has to traverse an additional software switching layer in the hypervisor, SR-IOV significantly reduces latency and CPU overhead on the data path.
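As an illustration only (a minimal sketch assuming root access on a compute node and using the interface name enp47s0f0 from the deployment examples later in this document), VFs can be inspected and created manually through sysfs. In this reference architecture the VF count is normally set by the director templates (NeutronSriovNumVFs) rather than by hand.

# Check how many VFs the physical function supports and how many are currently enabled
cat /sys/class/net/enp47s0f0/device/sriov_totalvfs
cat /sys/class/net/enp47s0f0/device/sriov_numvfs

# Manually create 16 VFs on the physical function (illustrative only; the overcloud
# deployment does this via the NeutronSriovNumVFs parameter)
echo 16 > /sys/class/net/enp47s0f0/device/sriov_numvfs

# The VFs appear as additional PCI functions that can be passed through to VMs
lspci | grep -i "Virtual Function"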
OVS-DPDK
Open vSwitch (OVS) is an open source software switch that is commonly used as a virtual switch within a virtualized server environment. OVS supports the capabilities of a regular L2-L3 switch and supports SDN protocols such as OpenFlow to create user-defined overlay networks. OVS uses Linux kernel networking to switch packets between virtual machines and across hosts using physical NICs.
DPDK is a set of data plane libraries and NIC drivers for fast packet processing. DPDK applications rely on the poll mode driver (PMD), which runs a dedicated user space process (or thread) to poll the queues of the NIC. When packets arrive, the PMD receives them continuously. If there are no packets, the PMD simply polls in an endless loop, which usually results in high power consumption. In this way, packet processing bypasses the hypervisor's kernel and IP (Internet Protocol) stack.
Open vSwitch can be bundled with DPDK for better performance, resulting in a DPDK-accelerated OVS (OVS-DPDK). Red Hat OpenStack Platform supports OVS-DPDK deployment on compute nodes so that NFV workloads can benefit from the DPDK data path at the application level. Recent advanced features of OVS-DPDK, such as vHost multi-queue, provide further improvements in the network throughput of guest VMs. At a high level, OVS-DPDK replaces the standard OVS kernel data path with a DPDK-based data path and creates a user-space vSwitch on the host that uses DPDK internally for its packet forwarding. The architecture is mostly transparent to users because the basic OVS features and interfaces (such as OpenFlow, OVSDB, and the command line) remain mostly the same.
Figure 2 shows the diagram of IO data path for VM applications running on Red Hat OpenStack Platform.
OVS-DPDK requires one or more cores dedicated to the poll mode threads and thus isolates the allocated CPU cores from being scheduled for other tasks. This avoids context switching and reduces the cache miss rate.
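For illustration, the commands below are a minimal sketch (run directly on an OVS-DPDK compute node) of how the DPDK data path and the PMD core mask can be set or inspected with ovs-vsctl. In this reference architecture these values are applied automatically by the director through the OvsPmdCoreList and OvsDpdkSocketMemory parameters shown later in this document; the core mask 0xF0000 (cores 16-19) is only an example.

# Enable the DPDK data path in OVS and dedicate cores 16-19 (mask 0xF0000) to PMD threads
sudo ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xF0000
sudo ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"

# Verify PMD thread placement and per-PMD statistics
sudo ovs-appctl dpif-netdev/pmd-rxq-show
sudo ovs-appctl dpif-netdev/pmd-stats-show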
NUMA awareness
NUMA, or Non-Uniform Memory Access, is a shared memory architecture that describes the placement of
main memory modules with respect to processors in a multiprocessor system. When planning NFVI
deployment, the Cloud Administrator needs to understand the NUMA topology of Compute node to partition
the CPU and memory resources for optimum performance. For example, the following NUMA topology
information should be collected at hardware introspection:
• RAM (in kilobytes)
• Physical CPU cores and their sibling threads
• NICs associated with the NUMA node
To achieve high performance in an NFVI environment, the Cloud Administrator needs to partition the resources between the host and the guests. For SR-IOV Compute nodes and OVS-DPDK Compute nodes the partitioning of resources differs, although it follows the same principle: packets traversing the entire data path should stay on a single NUMA node. The VNFs should use NICs associated with the same NUMA node that they use for memory and CPU pinning.
Figure 3 shows an example of the Lenovo recommended NUMA partitioning on a dual-socket Compute node. In the case of an OVS-DPDK deployment, OVS-DPDK performance depends on reserving a block of memory and pinning CPUs for the PMDs local to the NUMA node. Three VNFs use NICs associated with the same NUMA node. It is also recommended that both interfaces in a bond come from NICs on the same NUMA node. In the case of an SR-IOV deployment, no memory or CPU cores are required for PMDs, which leaves more vCPUs available in the pool for other VNFs.
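As a minimal sketch (assuming shell access on the compute node and the NIC name used elsewhere in this document), the NUMA topology and NIC locality can be checked with standard Linux tools before deciding on the partitioning:

# CPU cores, sibling threads, and memory per NUMA node
lscpu | grep -i numa
numactl --hardware

# NUMA node that a given NIC is attached to (-1 means no locality is reported)
cat /sys/class/net/enp47s0f0/device/numa_node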
CPU pinning
CPU pinning refers to reserving physical cores for specific VNF guest instances or host processes. The
configuration is implemented in two parts: ensuring that virtual guests can only run on dedicated cores;
ensuring that common host processes do not run on those cores.
CPU Pinning is often used together with DPDK applications and PMD. The DPDK applications and PMD use
the thread affinity feature of the Linux kernel to bind the thread to a specific core in order to avoid context
switching.
Figure 4 shows an example of CPU pinning for both host processes and guest vCPUs. The kernel configuration parameter isolcpus specifies a list of physical CPU cores (in orange) isolated from the kernel scheduler. Host processes scheduled by the kernel (in red) do not run on the isolated CPU cores. In the case of an SR-IOV deployment, no host processes need to be pinned. For an OVS-DPDK deployment, the DPDK PMDs should be pinned to the isolated physical CPU cores (in orange). In general, all VNFs should have exclusive use of their CPUs to ensure the performance SLA.
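The following is a short sketch of how the resulting pinning can be verified on a deployed compute node; the commands are standard Linux and libvirt tools, and the instance domain name is illustrative.

# Cores removed from the kernel scheduler by the isolcpus= kernel argument
cat /sys/devices/system/cpu/isolated
grep -o "isolcpus=[^ ]*" /proc/cmdline

# Show which physical CPUs the vCPUs of a running instance are pinned to
sudo virsh list --all                  # find the libvirt domain name of the instance
sudo virsh vcpupin instance-00000001   # domain name is illustrative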
Huge pages
In general, the CPU allocates RAM in pages, typically chunks of 4 KB. Modern CPU architectures support much larger page sizes. In an NFVI environment, huge page support is required for the large memory pool allocations used for data packet buffers. By using huge page allocations, performance is increased because fewer pages, and therefore fewer Translation Lookaside Buffer (TLB) lookups, are needed. This in turn reduces the time it takes to translate a virtual page address to a physical page address. Without huge pages enabled as a kernel parameter, high TLB miss rates would occur, thereby slowing performance.
Huge pages are typically enabled together with DPDK and NUMA CPU pinning to provide accelerated high
data path performance.
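The following minimal sketch shows how huge page allocation can be verified on a compute node after deployment; the 1 GB page size matches the KernelArgs examples later in this document.

# Kernel boot arguments that reserve 1 GB huge pages
grep -o "default_hugepagesz=[^ ]* hugepagesz=[^ ]* hugepages=[^ ]*" /proc/cmdline

# Per-NUMA-node view of allocated 1 GB huge pages, plus the overall summary
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
grep Huge /proc/meminfo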
Table 3 lists the core components of Red Hat OpenStack Platform as shown in Figure 5.
Component    Code name    Description
Compute service    Nova    Provisions and manages VMs, which creates a redundant and horizontally scalable cloud-computing platform. It is hardware and hypervisor independent and has a distributed and asynchronous architecture that provides HA and tenant-based isolation.
Block storage service    Cinder    Provides persistent block storage for VM instances. The ephemeral storage of deployed instances is non-persistent; therefore, any data generated by the instance is destroyed after the instance terminates. Cinder uses persistent volumes attached to instances for data longevity, and instances can boot from a Cinder volume rather than from a local image.
Image service    Glance    Provides discovery, registration, and delivery services for virtual disk images. The images can be stored on multiple back-end storage units and cached locally to reduce image staging time.
Object storage service    Swift    Provides cloud storage software built for scale and optimized for durability, availability, and concurrency across the entire data set. It can store and retrieve data with a simple API, and is ideal for storing unstructured data that can grow without bound.
Dashboard service    Horizon    Provides a graphical user interface for users and administrators to perform operations such as creating and launching instances, managing networking, and setting access control.
File Share Service    Manila    A file share service that presents the management of file shares (for example, NFS and CIFS) as a core service to OpenStack.
Table 4 lists the optional components in the Red Hat OpenStack Platform release. Actual deployment use
cases will determine when and how these components are used.
Table 4. Optional components
Component    Code name    Description
Bare-metal provisioning service    Ironic    OpenStack Bare Metal Provisioning enables the user to provision physical, or bare-metal, machines for a variety of hardware vendors with hardware-specific drivers.
Data Processing    Sahara    Provides the provisioning and management of Hadoop clusters on OpenStack. Hadoop stores and analyzes large amounts of unstructured and structured data in clusters.
Table 5 lists the OpenStack concepts to help the administrator further manage the tenancy or segmentation in
a cloud environment.
Availability Zone In OpenStack, an availability zone allows a user to allocate new resources with
defined placement. The “instance availability zone” defines the placement for
allocation of VMs, and the “volume availability zone” defines the placement for
allocation of virtual block storage devices.
Host Aggregate A host aggregate further partitions an availability zone. It consists of key-value pairs
assigned to groups of machines and used by the scheduler to enable advanced
scheduling.
Region Regions segregate the cloud into multiple compute deployments. Administrators use
regions to divide a shared-infrastructure cloud into multiple sites each with separate
API endpoints and without coordination between sites. Regions share the Keystone
identity service, but each has a different API endpoint and a full Nova compute
installation.
MariaDB    MariaDB is open source database software shipped with Red Hat Enterprise Linux as a replacement for MySQL. MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It uses synchronous replication between every instance in the cluster to achieve an active-active multi-master topology, which means every instance can accept read and write requests, and failed nodes do not affect the function of the cluster.
RabbitMQ RabbitMQ is a robust open source messaging system based on the AMQP standard,
and it is the default and recommended message broker in Red Hat OpenStack
Platform.
Redis    Redis is an open-source in-memory database that provides an alternative to Memcached for web application performance optimization.
Red Hat Ceph Storage is integrated with Red Hat OpenStack Platform. The OpenStack Cinder storage
component and Glance image services can be implemented on top of the Ceph distributed storage.
OpenStack users and administrators can use the Horizon dashboard or the OpenStack command-line
interface to request and use the storage resources without requiring knowledge of where the storage is
deployed or how the block storage volume is allocated in a Ceph cluster.
The Nova, Cinder, Swift, and Glance services on the controller and compute nodes use the Ceph driver as
the underlying implementation for storing the actual VM or image data. Ceph divides the data into placement
groups to balance the workload of each storage device. Data blocks within a placement group are further
distributed to logical storage units called Object Storage Devices (OSDs), which often are physical disks or
drive partitions on a storage node.
The OpenStack services can use a Ceph cluster in the following ways:
• VM Images: OpenStack Glance manages images for VMs. The Glance service treats VM images as immutable binary blobs, which can be uploaded to or downloaded from a Ceph cluster accordingly.
• Volumes: OpenStack Cinder manages volumes (that is, virtual block devices) attached to running
VMs or used to boot VMs. Ceph serves as the back-end volume provider for Cinder.
If the hypervisor fails, it is convenient to trigger the Nova evacuate function and almost seamlessly run the VM on another server. When the Ceph back end is enabled for both Glance and Nova, there is no need to cache an image from Glance to a local file, which saves time and local disk space. In addition, Ceph can use copy-on-write cloning so that starting an instance from a Glance image does not initially consume any additional disk space.
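As an illustration (a sketch assuming the overcloud credentials are loaded and using hypothetical image and volume names; the pool names match the puppet-ceph-external.yaml example later in this document), the Ceph-backed image and volume services can be exercised and verified as follows:

# Upload a raw image; with the rbd backend Glance stores it in the Ceph images pool
# (raw format allows Ceph to create copy-on-write clones of the image)
openstack image create --disk-format raw --container-format bare --file vnf.raw vnf-image

# Create a bootable volume from the image in the Ceph volumes pool
openstack volume create --image vnf-image --size 10 test-vol

# On a node with Ceph client access, confirm the objects landed in the expected pools
rbd -p cloud5_images ls
rbd -p cloud5_volumes ls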
Listed below are the most important new features in the Red Hat OpenStack Platform 13. For more details,
please see: Red Hat OpenStack Platform 13 Release Notes.
• Fast forward upgrade path supported by Red Hat OpenStack Director, specifically from Red Hat
OpenStack Platform 10 to Red Hat OpenStack Platform 13. This enables easy upgrade to current
long life version from previous LTS version for customers.
• Controller nodes deployed in Red Hat Virtualization are now supported by Director node provisioning. A new driver (staging-ovirt) is included in the Director Bare Metal (ironic) service.
• Fully containerized services are provided. All Red Hat OpenStack Platform services are deployed as
containers.
• L3 routed spine-leaf network is supported for director to provision and introspect nodes with multiple
networks. This feature, in conjunction with composable networks, allows users to provision and
configure a complete L3 routed spine-leaf architecture for the overcloud.
• Red Hat Ceph Storage 3.0 is the default supported version of Ceph for Red Hat OpenStack Platform. Red Hat Ceph Storage 2.x remains compatible with the newer Ceph client when used as external Ceph Storage. Storage nodes deployed by Director are version 3.0 by default. Red Hat Ceph Storage 3.0 also supports scale-out Ceph Metadata Server (MDS) and RADOS Gateway nodes.
• The Shared File System service (manila) of Red Hat OpenStack Platform 13 now supports mounting shared file systems backed by the Ceph File System (CephFS) via the NFSv4 protocol. Multi-tenancy is supported.
The Lenovo ThinkSystem SR650 server (as shown in Figure 6 and Figure 7) is an enterprise class 2U two-
socket versatile server that incorporates outstanding reliability, availability, and serviceability (RAS), security,
and high efficiency for business-critical applications and cloud deployments. Unique Lenovo AnyBay
technology provides the flexibility to mix-and-match SAS/SATA HDDs/SSDs and NVMe SSDs in the same
drive bays. Four direct-connect NVMe ports on the motherboard provide ultra-fast read/writes with NVMe
drives and reduce costs by eliminating PCIe switch adapters. Plus, storage can be tiered for greater
application performance, to provide the most cost-effective solution.
Combined with the Intel® Xeon® Scalable processors product family, the Lenovo ThinkSystem SR650 server
offers a high density of workloads and performance that is targeted to lower the total cost of ownership (TCO)
per VM. Its flexible, pay-as-you-grow design and great expansion capabilities solidify dependability for any
kind of virtualized workload, with minimal downtime. Additionally, it supports two 300W high-performance
GPUs and ML2 NIC adapters with shared management.
The Lenovo ThinkSystem SR650 server provides internal storage density of up to 100 TB (with up to 26 x 2.5-inch drives) in a 2U form factor with its impressive array of workload-optimized storage configurations. The ThinkSystem SR650 offers easy management and saves floor space and power consumption.
The SR650 server supports up to two processors, each with up to 28 cores or 56 threads with Hyper-Threading enabled, up to 38.5 MB of last level cache (LLC), memory speeds of up to 2666 MHz, and up to 3 TB of memory capacity. The SR650 also supports up to six PCIe slots. Its on-board Ethernet solution provides 2/4 standard embedded Gigabit Ethernet ports and 2/4 optional embedded 10 Gigabit Ethernet ports without occupying PCIe slots. All these advanced features make the server ideal for running the data and bandwidth intensive VNF workloads and storage functions of an NFVI platform.
For more information, see the following website: ThinkSystem SR650 Product Guide
The Lenovo ThinkSystem SR630 server (as shown in Figure 8) is an ideal 2-socket 1U rack server for small
businesses up to large enterprises that need industry-leading reliability, management, and security, as well as
maximizing performance and flexibility for future growth. The SR630 server is designed to handle a wide
range of workloads, such as databases, virtualization and cloud computing, virtual desktop infrastructure
(VDI), infrastructure security, systems management, enterprise applications, collaboration/email, streaming
media, web, and HPC. It improves productivity by supporting up to two processors, 56 cores, and 112 threads, and up to 3 TB of memory capacity with memory speeds of up to 2666 MHz, which makes it capable of hosting Red Hat OpenStack Platform controller services. The ThinkSystem SR630 offers up to twelve 2.5-inch hot-swappable SAS/SATA HDDs or SSDs together with up to four on-board NVMe PCIe ports that allow direct connections to U.2 NVMe PCIe SSDs.
The Lenovo ThinkSystem SR630 is ideal for OpenStack Controller and Compute nodes and utility nodes.
For more information, see the following website: ThinkSystem SR630 Product Guide
The 10Gb and 25Gb Ethernet switches are used for the internal and external networks of the Red Hat OpenStack Platform cluster, and a 1Gb Ethernet switch is used for out-of-band server management. The Networking Operating System software features of these Lenovo switches deliver seamless, standards-based integration into upstream switches.
The Lenovo RackSwitch G8052 (as shown in Figure 9) is a top-of-rack data center switch that delivers
unmatched line-rate Layer 2/3 performance at an attractive price. It has 48x 10/100/1000BASE-T RJ-45 ports
and four 10 Gigabit Ethernet SFP+ ports (it also supports 1 GbE SFP transceivers), and includes hot-swap
redundant power supplies and fans as standard, which minimizes your configuration requirements. Unlike
most rack equipment that cools from side-to-side, the G8052 has rear-to-front or front-to-rear airflow that
matches server airflow.
The Lenovo ThinkSystem NE0152T RackSwitch is a 1U rack-mount Gigabit Ethernet switch that delivers line-
rate performance with feature-rich design that supports virtualization, high availability, and enterprise class
Layer 2 and Layer 3 functionality in a cloud management environment.
The NE0152T RackSwitch has 48x RJ-45 Gigabit Ethernet fixed ports and 4x SFP+ ports that support 1 GbE
and 10 GbE optical transceivers, active optical cables (AOCs), and direct attach copper (DAC) cables.
The NE0152T RackSwitch runs the Lenovo Cloud Networking Operating System (CNOS) that provides a
simple, open and programmable network infrastructure with cloud-scale performance. It supports the Open
Network Install Environment (ONIE), which is an open, standards-based boot code that provides a
deployment environment for loading certified ONIE networking operating systems onto networking devices.
The Lenovo RackSwitch G8272 uses 10Gb SFP+ and 40Gb QSFP+ Ethernet technology and is specifically
designed for the data center. It is an enterprise class Layer 2 and Layer 3 full featured switch that delivers
line-rate, high-bandwidth, low latency switching, filtering, and traffic queuing without delaying data. Large data
center-grade buffers help keep traffic moving, while the hot-swap redundant power supplies and fans (along
with numerous high-availability features) help provide high availability for business sensitive traffic.
The RackSwitch G8272 is ideal for latency sensitive applications, such as high-performance computing
clusters, financial applications and NFV deployments. In addition to the 10 Gb Ethernet (GbE) and 40GbE
connections, the G8272 can use 1GbE connections.
The NE1032 RackSwitch has 32x SFP+ ports that support 1 GbE and 10 GbE optical transceivers, active
optical cables (AOCs), and direct attach copper (DAC) cables.
The NE1032T RackSwitch has 24x 1/10 Gb Ethernet (RJ-45) fixed ports and 8x SFP+ ports that support
1 GbE and 10 GbE optical transceivers, active optical cables (AOCs), and direct attach copper (DAC) cables.
The NE1072T RackSwitch has 48x 1/10 Gb Ethernet (RJ-45) fixed ports and 6x QSFP+ ports that support
40 GbE optical transceivers, active optical cables (AOCs), and direct attach copper (DAC) cables. The
QSFP+ ports can also be split out into four 10 GbE ports by using QSFP+ to 4x SFP+ DAC or active optical
breakout cables.
The Lenovo ThinkSystem NE2572 RackSwitch is designed for the data center and provides 10 Gb/25 Gb
Ethernet connectivity with 40 Gb/100 Gb Ethernet upstream links. It is ideal for big data, cloud, and enterprise
workload solutions. It is an enterprise class Layer 2 and Layer 3 full featured switch that delivers line-rate,
high-bandwidth switching, filtering, and traffic queuing without delaying data. Large data center-grade buffers
help keep traffic moving, while the hot-swap redundant power supplies and fans (along with numerous high-
availability software features) help provide high availability for business sensitive traffic.
The NE2572 RackSwitch has 48x SFP28/SFP+ ports that support 10 GbE SFP+ and 25 GbE SFP28 optical
transceivers, active optical cables (AOCs), and direct attach copper (DAC) cables. The switch also offers 6x
QSFP28/QSFP+ ports that support 40 GbE QSFP+ and 100 GbE QSFP28 optical transceivers, active optical
cables (AOCs), and direct attach copper (DAC) cables. The QSFP28/QSFP+ ports can also be split out into two 50 GbE or four 25 GbE connections (for 100 GbE QSFP28) or four 10 GbE connections (for 40 GbE QSFP+) by using breakout cables.
Figure 16 shows how the different software components relate to each other and form an OpenStack cluster
with integrated management functions.
Figure 16. Deployment Model for Lenovo NFVI Solution Using Red Hat OpenStack Platform
In this solution, the Nova-Compute, Open vSwitch agents and SR-IOV agents run on the compute nodes.
Compute nodes have either OVS-DPDK or SR-IOV enabled to better satisfy the performance requirement
upon the data path from VNFs. The agents receive instrumentation requests from the controller node via
RabbitMQ messages to manage the compute and network virtualization of instances that are running on the
compute nodes.
The compute nodes can be aggregated into pools of various sizes for better management, performance, or
isolation. A Red Hat Ceph Storage cluster is created on the storage nodes. It is largely self-managed and supervised by the Ceph monitors installed on the controller nodes. The Red Hat Ceph Storage cluster provides block storage for the Glance image store, for VM instances via the Cinder and Nova services, and for the Telemetry service.
Utility node
The utility node is responsible for the initial deployment of the controller nodes, compute nodes, and storage nodes by leveraging the OpenStack bare metal provisioning service. It is also capable of running Lenovo system management tools such as Lenovo XClarity Administrator.
The Lenovo NFVI solution for service providers uses Red Hat OpenStack Platform Director as the toolset for installing and managing a production OpenStack environment (the overcloud). The Red Hat OpenStack
Platform Director is based primarily on the OpenStack TripleO project and uses a minimal OpenStack
installation to deploy an operational OpenStack environment, including controller nodes, compute nodes, and
storage nodes as shown in the diagrams. Director can be installed directly on the bare metal server or can run
as a guest VM on the utility node. The Ironic component enables bare metal server deployment and
management. This tool simplifies the process of installing and configuring the Red Hat OpenStack Platform
while providing a means to scale in the future.
The Lenovo NFVI solution also integrates hardware management and cloud management to enable users to manage the physical and virtual infrastructure efficiently. The additional services can run as guest VMs together with the undercloud Director on the utility node in a small-scale deployment. They can be deployed on additional utility nodes in a large-scale cloud environment.
Compute nodes inside the compute pool can be grouped into one or more “host aggregates” according to
business need. For example, hosts may be grouped based on hardware features, capabilities or performance
characteristics (e.g. NICs and OVS-DPDK configuration for acceleration of data networks used by VNFs).
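For example, the Cloud Administrator could group the OVS-DPDK compute nodes as follows. This is only a sketch: the aggregate, zone, and flavor names are illustrative, the host name follows the HostnameFormatDefault pattern used later in this document, and matching a flavor to an aggregate property additionally requires the AggregateInstanceExtraSpecsFilter scheduler filter, which is not part of the filter lists shown in this reference architecture.

# Create an aggregate (and availability zone) for the DPDK-enabled compute nodes
openstack aggregate create --zone nfv-dpdk dpdk-aggregate
openstack aggregate add host dpdk-aggregate overcloud-computeovsdpdk-0

# Optionally tag the aggregate and a flavor so matching workloads land on these hosts
openstack aggregate set --property dpdk=true dpdk-aggregate
openstack flavor set nfv.small --property aggregate_instance_extra_specs:dpdk=true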
Controller nodes
The controller nodes act as the central entry point for processing all internal and external cloud operation requests. The controller nodes manage the lifecycle of all VM instances that are running on the compute
nodes and provide essential services, such as authentication and networking to the VM instances. The
controller nodes rely on support services, such as DHCP, DNS, and NTP. Typically, the controller nodes are
implemented as a cluster of three nodes to support High Availability.
The controller cluster also hosts proxy and message broker services for scheduling compute and storage pool
resources and provides the data store for cloud environment settings. In addition, the controller cluster also
provides virtual routers and some other networking functions for all the VMs.
The local storage pool consists of local drives in a server; therefore, it is simple to configure and provides high-speed data access. Local storage pools are suitable for workloads with demanding storage performance requirements. However, this approach lacks high availability across servers, limits the ability to migrate workloads, and is usually constrained by the storage capacity of the local disks.
The Red Hat Ceph Storage pool consists of multiple storage nodes that provide persistent storage resources
from their local drives. In this reference architecture, all cloud data are stored in a single Ceph cluster for
simplicity and ease of management. Figure 17 shows the details of the integration between the Red Hat
OpenStack Platform and Red Hat Ceph Storage.
Ceph uses a write-ahead journal for local operations; a write operation hits the file system journal first and is then copied to the backing file store. To achieve optimal performance, SSDs are recommended for the operating system and the Ceph journal data. Please refer to the Red Hat Ceph Storage Configuration Guide for Ceph OSD and SSD journal configuration details.
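A brief sketch of how the cluster state and the OSD/journal layout can be checked, assuming admin access to a Ceph monitor or storage node:

# Overall cluster health, capacity, and OSD-to-node mapping
ceph -s
ceph df
ceph osd tree

# On a storage node, list OSD data devices and their journal devices
# (ceph-disk applies to RHCS 3.x; newer releases use "ceph-volume lvm list" instead)
sudo ceph-disk list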
6.4 Networking
In a typical NFV environment, network traffic needs to be isolated to fulfil the security and quality of service
(QoS) requirements. To fulfil the OpenStack network requirement, a set of logical networks is created
accordingly, some meant for internal traffic, such as the storage network or OpenStack internal API traffic,
and others for external and tenant traffic. VLAN technology provides a simple way for logical grouping and
isolation of the various networks.
Table 7. VLANs
Network    Description
Provisioning    Provides DHCP and PXE boot functions to help discover bare-metal systems for use in the OpenStack installer to provision the system.
Tenant    The subnet for allocating the VM private IP addresses. Through this network, the VM instances can talk to each other.
External    The subnet for allocating the VM floating IP addresses. It is the only network where external users can access their VM instances.
Storage    The front-side storage network where Ceph clients (through the Glance API, Cinder API, or Ceph CLI) access the Ceph cluster. Ceph Monitors operate on this network.
Storage Management    The back-side storage network to which Ceph routes its heartbeat, object replication, and recovery traffic.
Internal API    Used for communication between the OpenStack services using API communication, RPC messages, and database communication.
NFV data    Used for NFV data plane traffic for VM applications that require high performance.
Lenovo recommends using one 1GbE switch, e.g. Lenovo RackSwitch G8052, to provide networking for the
server BMC (Baseboard Management Controller) through the dedicated 1GbE port on each server and for the
cloud provisioning through on-board 1GbE ports.
Red Hat OpenStack Platform provides IPv6 support to deploy and configure the overcloud. IPv6-native VLANs are supported as well for network isolation. For details, refer to IPv6 Networking for the Overcloud.
Figure 18 shows the Lenovo XClarity Administrator interface, where Lenovo ThinkSystem servers are
managed from the dashboard.
The Lenovo Physical Infrastructure Provider for CloudForms 4.7 provides IT administrators the ability to
integrate the management features of Lenovo XClarity Administrator with the hybrid-cloud management
capabilities of Red Hat CloudForms. Lenovo expands physical-infrastructure management for on-premise
cloud configurations extending the visibility of connected physical and virtual infrastructures. This facilitates
the configuration, monitoring, event management, and power monitoring needed to reduce cost and
complexity through server consolidation and simplified management across physical and virtual component
boundaries.
Figure 19 shows the architecture and capabilities of Red Hat CloudForms. These features are designed to
work together to provide robust management and maintenance of your virtual infrastructure.
Figure 20 demonstrates a physical infrastructure topology generated by Red Hat CloudForms and integrated
with the Lenovo XClarity provider.
Figure 20 Topology View of Lenovo XClarity Providers Integrated in Red Hat CloudForms
To install Red Hat CloudForms 4.7 on Red Hat OpenStack platform, please refer to Installing CloudForms.
For performance benchmarking, it is critical to separate the VNF workload data traffic from all other traffic, including the management traffic, so that the performance KPIs are not polluted by other traffic sources. The Lenovo RackSwitch G8052 is used to aggregate the control and management traffic, and the Lenovo ThinkSystem NE2572 RackSwitch carries the VNF data traffic.
The NSB Engine runs in a virtual machine which includes the orchestration, management and visualization
components of NSB suite. The user can orchestrate a VNF workload in the targeted cloud environment using
the management network. Once the selected benchmark tests are completed, the NSB engine automatically
collects all the KPIs, saves the benchmark data in a database, and displays the results in a dashboard for
review.
The System Under Test (SUT) is the NFVI compute node running VNFs which are orchestrated by NSB
engine. The NSB engine generates the VNF configuration according to the test plan. Typical configuration
parameters include NIC ports, number of RX/TX queues, cores for PMDs, etc. Due to the existing limitation of
the NSB test suite, the bare-metal compute node and Red Hat OpenStack environment need to be configured
properly to provide the matching configuration on NFVI. The detailed configurations can be found in sections
7.2.5 and 7.2.6.
The Traffic Generator (TG) is a bare-metal compute node running Open Source traffic generator software
such as Pktgen or Trex to generate VNF data traffic at line rate and display real time metrics on the NIC ports.
Similar to the SUT, TG can be configured at run-time by the NSB engine. The traffic generator compute node
should use the same BIOS settings as the SUT.
Test setup
One of the main goals of the performance benchmarking is to provide a reproducible configuration and performance KPIs on the tested hardware and software environment. The performance KPIs and benchmarking results provide a quantitative reference for users who want to verify a setup in production or for proofs of concept before making procurement decisions.
Lenovo chose the open source application L3fwd as the sample VNF. L3fwd is a simple application that performs layer-3 packet forwarding using DPDK. It is widely used and well accepted as a useful VNF for benchmarking. For more information, see dpdk.org/doc/guides/sample_app_ug/l3_forward.html.
The L3fwd application is executed in following three environments to provide a comprehensive comparison of
typical NFVI deployments:
• Bare-metal
• OVS-DPDK compute
• SR-IOV compute
The best performance is expected when executing VNFs on bare-metal compute nodes because there is the least overhead in data traffic processing. Performance benchmarks obtained from bare-metal setups are often used to evaluate new hardware configurations and new software stacks. Executing VNFs in an SR-IOV environment usually provides lower throughput than the bare-metal setup, but is still able to achieve much higher throughput than OVS-DPDK because the data path bypasses the host's virtual switch.
Figure 22 shows the three test environments and the respective traffic flows.
NIC ports in the SUT are connected to the traffic generator ports through a Lenovo ThinkSystem NE2572
RackSwitch. Bi-directional traffic is sent from traffic generator and the aggregated throughputs at the receiving
side of traffic generator are calculated to give the overall throughput for different packet sizes from 64 bytes to
1518 bytes.
The benchmarking methodology documented in the RFC2544 standard was adopted in the testing setup and
benchmark data collection. In particular, the test cases are designed to check the maximum IO throughput for
a single core. Two Intel XXV710 NIC cards are attached to the first processor of the compute node. Only the
first port is used on each NIC for transporting the VNF traffic. Each port has one queue assigned and all the
TX/RX queues are assigned to the same logical core.
Hardware components
Table 8 lists the hardware configuration of the ThinkSystem SR650 SUT.
Software components
Table 9 provides the detailed version for each software components of the testbed.
BIOS settings
Table 10 lists the key BIOS (UEFI) settings for the NFV compute nodes in order to achieve optimal
performance.
OpenStack settings
In order to achieve optimal performance for the OVS-DPDK configuration, the sibling logical CPU cores are not used for PMD cores. This ensures that at most one logical CPU from each physical CPU core is assigned to a PMD.
Below are the SR-IOV specific parameters and OVS-DPDK specific parameters for Red Hat OpenStack
deployment.
ComputeSriovParameters:
  NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter"
  KernelArgs: "isolcpus=4-19,24-39,44-59,64-79 nohz_full=4-19,24-39,44-59,64-79 default_hugepagesz=1GB hugepagesz=1G hugepages=192 iommu=pt intel_iommu=on"
  SriovNeutronNetworkType: 'flat'
  NovaVcpuPinSet: "4-15,24-35,44-55,64-75"
  IsolCpusList: "4-19,24-39,44-59,64-79"
  NeutronSriovNumVFs:
    - enp47s0f1:16:switchdev
    - enp47s0f0:16:switchdev
  NeutronPhysicalDevMappings: "prov704:enp47s0f1,prov703:enp47s0f0"
  NovaPCIPassthrough:
ComputeOvsDpdkParameters:
  NovaVcpuPinSet: "4-15,24-35,44-55,64-75"
  NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter"
  KernelArgs: "isolcpus=4-19,24-39,44-59,64-79 nohz_full=4-19,24-39,44-59,64-79 default_hugepagesz=1GB hugepagesz=1G hugepages=192 iommu=pt intel_iommu=on"
  IsolCpusList: "4-19,24-39,44-59,64-79"
  OvsEnableDpdk: True
  TunedProfileName: "cpu-partitioning"
  NovaReservedHostMemory: 4096
  OvsDpdkSocketMemory: "2048,2048"
  OvsDpdkMemoryChannels: "4"
  OvsPmdCoreList: "16,17,18,19,56,57,58,59"
  VhostuserSocketGroup: "hugetlbfs"
Figure 24. Comparison of Line Rate over Bare Metal, SR-IOV, and OVS-DPDK
The rack shown in Figure 25 provides a reference configuration consisting of two NE2572 switches, one G8052 switch, one utility node, three controller nodes, three storage nodes, and five NFV compute nodes. Additional compute nodes and storage nodes can be easily added to expand the available capacity. Two flavors of NFV compute configuration, i.e. SR-IOV and OVS-DPDK, are verified in this setup.
Storage    25GbE    5    NIC 2/3 bond    NIC 2/3 bond    NIC 2/3 bond
Internal API    25GbE    7    NIC 2/3 bond    NIC 2/3 bond    N/A
NFV data    25GbE    8-10    NIC 4/5 bond on OVS-DPDK compute nodes; NIC 4/5 without bond on SR-IOV compute nodes    N/A    N/A
Figure 26 shows the recommended network topology diagram with VLANs for the Lenovo NFVI solution.
As shown in the above network connectivity diagram, the storage traffic on the Compute Nodes shares the bonded interface with the Tenant, External, and Internal API networks, and a separate provider network is allocated for NFV data. This assumes that typical VNFs have lower I/O requirements for storage but much higher demands for high throughput and low latency on the data traffic. Two bonded 25GbE interfaces are dedicated to a provider DPDK network on OVS-DPDK compute nodes. For SR-IOV compute nodes, the physical NIC resources are passed through to VMs via virtual functions, so NIC bonding is not applied. The number of VFs provided by each of the 25GbE interfaces can be specified during cloud deployment.
Pre-deployment configurations
Before starting the deployment of compute nodes with enhanced I/O performance, the required prerequisites must be met by setting options in the server BIOS (see the BIOS settings in Table 10).
The implementation of NIC bonding on SR-IOV compute nodes is slightly different from that on DPDK compute nodes. On SR-IOV enabled compute nodes, the NIC ports are connected to the switch ports without bonding, while NIC bonding is configured inside the VNFs that have multiple VF connections. On DPDK compute nodes, NIC bonding is done at the compute host.
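As a minimal sketch of the in-guest approach (run inside a VNF guest; interface and connection names are illustrative), an active-backup bond over two VF ports can be created with NetworkManager:

# Create the bond and enslave the two VF interfaces (eth1/eth2 are illustrative)
nmcli connection add type bond con-name bond0 ifname bond0 mode active-backup
nmcli connection add type bond-slave con-name bond0-port1 ifname eth1 master bond0
nmcli connection add type bond-slave con-name bond0-port2 ifname eth2 master bond0
nmcli connection up bond0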
Cloud configurations
To enable NFV deployment in Red Hat OpenStack Platform, the Cloud Administrator needs to make the following changes in the Red Hat OpenStack Platform overcloud deployment templates:
Updating roles_data.yaml
The basic method of adding services involves creating a copy of the default service list for a node role and then adding services. For example, the Cloud Administrator can add the ComputeOvsDpdk and ComputeSriov roles to the default roles_data.yaml file.
# Role: ComputeOvsDpdk #
- name: ComputeOvsDpdk
  description: |
    Compute OvS DPDK Role
  CountDefault: 1
  networks:
    - InternalApi
    - Tenant
    - Storage
  HostnameFormatDefault: '%stackname%-computeovsdpdk-%index%'
  disable_upgrade_deployment: True
  deprecated_nic_config_name: 'compute-dpdk.yaml'
# Role: ComputeSriov #
- name: ComputeSriov
  description: |
    Compute SR-IOV Role
  CountDefault: 1
  networks:
    - InternalApi
    - Tenant
    - Storage
  HostnameFormatDefault: '%stackname%-computesriov-%index%'
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::Aide
Modification of network-environment.yaml
The network-environment.yaml file defines the isolated networks and related parameters. DPDK related parameters need to be set under parameter_defaults. The parameters need to be properly tuned on Lenovo hardware to achieve optimal performance. The following example parameters apply only to the Intel Xeon Gold 6138T CPU @ 2.00GHz used in this reference architecture, which has two NUMA nodes, each with 40 logical CPU cores when hyperthreading is enabled.
resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/compute.yaml

parameter_defaults:
  ComputeSriovParameters:
    NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter"
    KernelArgs: "isolcpus=4-19,24-39,44-59,64-79 nohz_full=4-19,24-39,44-59,64-79 default_hugepagesz=1GB hugepagesz=1G hugepages=192 iommu=pt intel_iommu=on"
    SriovNeutronNetworkType: 'flat'
    NovaVcpuPinSet: "4-15,24-35,44-55,64-75"
    IsolCpusList: "4-19,24-39,44-59,64-79"
    NeutronSriovNumVFs:
      - enp47s0f1:16:switchdev
      - enp47s0f0:16:switchdev
    NeutronPhysicalDevMappings: "prov704:enp47s0f1,prov703:enp47s0f0"
    NovaPCIPassthrough:
      - devname: "enp47s0f1"
        physical_network: "prov704"
      - devname: "enp47s0f0"
        physical_network: "prov703"
    TunedProfileName: "cpu-partitioning"
    NeutronSupportedPCIVendorDevs: ['8086:158b']
  ComputeOvsDpdkParameters:
    NovaVcpuPinSet: "4-15,24-35,44-55,64-75"
    NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter"
    KernelArgs: "isolcpus=4-19,24-39,44-59,64-79 nohz_full=4-19,24-39,44-59,64-79 default_hugepagesz=1GB hugepagesz=1G hugepages=192 iommu=pt intel_iommu=on"
    IsolCpusList: "4-19,24-39,44-59,64-79"
    OvsEnableDpdk: True
    TunedProfileName: "cpu-partitioning"
    NovaReservedHostMemory: 4096
    OvsDpdkSocketMemory: "2048,2048"
    OvsDpdkMemoryChannels: "4"
    OvsPmdCoreList: "16,17,18,19,36,37,38,39,56,57,58,59,76,77,78,79"
    DpdkBondInterfaceOvsOptions: "bond_mode=balance-tcp lacp=active"
    BondInterfaceOvsOptions: "mode=4 lacp_rate=1 updelay=1500 miimon=200"
    VhostuserSocketGroup: "hugetlbfs"
The mappings of networks to NICs should be updated according to the network configuration. The following is
an example of configuration of ovs-user-bridge with NIC-bonding on OVS-DPDK compute nodes:
- type: ovs_user_bridge
  name: br-link
  use_dhcp: false
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      ovs_options: {get_param: DpdkBondInterfaceOvsOptions}
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: interface
              name: enp47s0f0
        - type: ovs_dpdk_port
          name: dpdk1
          members:
            - type: interface
              name: enp47s0f1
On SR-IOV compute nodes, the NIC ports that have SR-IOV enabled should not be associated to any OVS or
Linux bridge.
- type: interface
  name: enp47s0f0
  use_dhcp: false
  defroute: false
  nm_controlled: true
  hotplug: true
- type: interface
  name: enp47s0f1
  use_dhcp: false
  defroute: false
  nm_controlled: true
  hotplug: true
Modification of puppet-ceph-external.yaml
resource_registry:
  OS::TripleO::Services::CephExternal: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-external.yaml
  OS::TripleO::Services::CephMon: OS::Heat::None

parameter_defaults:
  CephClusterFSID: 'ba41970b-7d3b-4101-a96e-1c4ba58108ac'
  CephClientKey: 'AQBKuHFbxj3ROBAAfeBUESciizuB/62cZL9KFA=='
  CephExternalMonHost: '192.168.80.182,192.168.80.183,192.168.80.184'
  # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  GlanceBackend: rbd
  GnocchiBackend: rbd
  NovaRbdPoolName: cloud5_nova
  CinderRbdPoolName: cloud5_volumes
  CinderBackupRbdPoolName: cloud5_backups
  GlanceRbdPoolName: cloud5_images
  GnocchiRbdPoolName: cloud5_metrics
  CephClientUserName: cloud5_openstack
  CinderEnableIscsiBackend: false
  CephAdminKey: ''
Cloud deployment
The following is an example of the deployment script. For a full template example of Red Hat OpenStack
Platform 13 with DPDK and SR-IOV enabled on Lenovo ThinkSystem SR650, please visit
https://github.com/lenovo/ServiceProviderRA.
#!/bin/bash
source ~/stackrc
cd /usr/share/openstack-tripleo-heat-templates
sudo ./tools/process-templates.py -r /home/stack/templates/roles_data.yaml -n /home/stack/templates/network_data.yaml
cd /home/stack
openstack overcloud deploy \
--templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-dpdk-permissions.yaml \
-r /home/stack/templates/roles_data.yaml \
-e /home/stack/templates/network-environment.yaml \
-e /home/stack/templates/puppet-ceph-external.yaml \
-e /home/stack/templates/overcloud_images.yaml \
-e /home/stack/templates/node-info.yaml \
--ntp-server pool.ntp.org
The following Nova flavor extra specs are set for the NFV guest instances in this reference architecture:
• hw:cpu_policy=dedicated
• hw:cpu_thread_policy=require
• hw:mem_page_size=large
• hw:numa_nodes=1
• hw:numa_mempolicy=strict
The vHost multi-queue feature can be enabled in VNFs by setting the hw_vif_multiqueue_enabled property on the Glance images.
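For example (a sketch using a hypothetical flavor name and image name), these properties could be applied as follows:

# Hypothetical flavor for NFV guests; the extra specs mirror the list above
openstack flavor create nfv.small --vcpus 4 --ram 8192 --disk 20
openstack flavor set nfv.small \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_thread_policy=require \
  --property hw:mem_page_size=large \
  --property hw:numa_nodes=1 \
  --property hw:numa_mempolicy=strict

# Enable vHost multi-queue on the guest image (image name is illustrative)
openstack image set vnf-image --property hw_vif_multiqueue_enabled=true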
• Ceph OSD (Object Storage Daemon), which is deployed on three dedicated Ceph nodes. The OSD nodes perform the replication, rebalancing, recovery, and reporting. Lenovo recommends three ThinkSystem SR650 servers to host the Ceph OSDs. If a deployment requires higher storage capacity, more OSDs can be added to the Ceph cluster.
• Ceph Monitor maintains a master copy of the Ceph storage map with the current state of the storage cluster. In this example, the OpenStack controller nodes are used to host the Ceph monitor function.
For detailed deployment steps for OpenStack Platform 13 deployment, please see the Red Hat
documentation “Director Installation and Usage”.
Utility node
Code Description Quantity
7X02CTO1WW ThinkSystem SR630 – 3yr Warranty 1
AUW0 ThinkSystem SR630 2.5” Chassis with 8 bays 1
AWEP Intel Xeon Gold 5118 12C 105W 2.3GHz Processor 2
AUNC ThinkSystem 16GB TruDDR4 2666 MHz (2Rx8 1.2V) RDIMM 12
B0WY ThinkSystem Intel XXV710-DA2 PCIe 25Gb 2-port 1
AUKH ThinkSystem 1Gb 4-port RJ45 LOM 1
AV1X Lenovo 3m Passive 25G SFP28 DAC Cable 2
AVWB ThinkSystem 1100W (230V/115V) Platinum Hot-Swap Power Supply 2
AUMV ThinkSystem M.2 with Mirroring Enablement Kit 1
B11V ThinkSystem M.2 5100 480GB SATA 6Gbps Non-Hot-Swap SSD 2
Optional local storage
AUNJ ThinkSystem RAID 930-8i 2GB Flash PCIe 12Gb Adapter 1
AUM2 ThinkSystem 2.5” 1.8TB 10K SAS 12Gb Hot Swap 512e HDD 8
Compute node
Code Description Quantity
7X06CTO1WW ThinkSystem SR650 - 3yr Warranty 1
AUVV ThinkSystem SR650 2.5" Chassis with 8, 16 or 24 bays 1
AWEM Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz 2
AUND ThinkSystem 32GB TruDDR4 2666 MHz (2Rx4 1.2V) RDIMM 12
B0WY ThinkSystem Intel XXV710-DA2 PCIe 25Gb 2-port 2
AUKH ThinkSystem 1Gb 4-port RJ45 LOM 1
AV1X Lenovo 3m Passive 25G SFP28 DAC Cable 4
AVWF ThinkSystem 1100W (230V/115V) Platinum Hot-Swap Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
AUNJ ThinkSystem RAID 930-8i 2GB Flash PCIe 12Gb Adapter 1
B49M ThinkSystem 2.5" Intel S4610 480GB Mainstream SATA 6Gb HS SSD 2
Optional local storage
B58G ThinkSystem U.2 Intel P4510 2.0TB Entry NVMe PCIe 3.0 x4 Hot Swap SSD 2
Storage node
Code Description Quantity
7X06CTO1WW ThinkSystem SR650 - 3yr Warranty 1
AUVV ThinkSystem SR650 2.5" Chassis with 8, 16 or 24 bays 1
AWER Intel Xeon Silver 4116 12C 85W 2.1GHz Processor 2
AUNC ThinkSystem 16GB TruDDR4 2666 MHz (2Rx8 1.2V) RDIMM 12
AUNK ThinkSystem RAID 930-16i 4GB Flash PCIe 12Gb Adapter 1
B49M ThinkSystem 2.5" Intel S4610 480GB Mainstream SATA 6Gb HS SSD 2
AUM2 ThinkSystem 2.5" 1.8TB 10K SAS 12Gb Hot Swap 512e HDD 12
B4Y4 ThinkSystem 2.5" SS530 400GB Performance SAS 12Gb Hot Swap SSD 2
B0WY ThinkSystem Intel XXV710-DA2 PCIe 25Gb 2-port 2
AUKG ThinkSystem 1Gb 2-port RJ45 LOM 1
AV1X Lenovo 3m Passive 25G SFP28 DAC Cable 4
AVWF ThinkSystem 1100W (230V/115V) Platinum Hot-Swap Power Supply 2
6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable 2
References in this document to Lenovo products or services do not imply that Lenovo intends to make them available in
every country.
Lenovo, the Lenovo logo, AnyBay, AnyRAID, BladeCenter, NeXtScale, RackSwitch, Rescue and Recovery, ThinkSystem,
System x, ThinkCentre, ThinkVision, ThinkVantage, ThinkPlus and XClarity are trademarks of Lenovo.
Red Hat, Red Hat Enterprise Linux and the Shadowman logo are trademarks of Red Hat, Inc., registered in the U.S. and
other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. The OpenStack mark
is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United
States and other countries, and is used with the OpenStack Foundation's permission. We are not affiliated with, endorsed
or sponsored by the OpenStack Foundation, or the OpenStack community
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation
in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Intel, Intel Inside (logos), and Xeon are trademarks of Intel Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
All customer examples described are presented as illustrations of how those customers have used Lenovo products and
the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.
Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement
material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources
for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor
announcements and vendor worldwide homepages. Lenovo has not tested these products and cannot confirm the
accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of
non-Lenovo products should be addressed to the supplier of those products.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice, and
represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the
specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a
commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such
commitments are only made in Lenovo product announcements. The information is presented here to communicate
Lenovo’s current investment and development activities as a good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled environment.
The actual throughput or performance that any user will experience will vary depending upon considerations such as the
amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload
processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance
improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-Lenovo websites are provided for convenience only and do not in any manner
serve as an endorsement of those websites. The materials at those websites are not part of the materials for this Lenovo
product and use of those websites is at your own risk.