Redpaper
IBM Redbooks
September 2020
REDP-5599-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
Notices . . . . . v
Trademarks . . . . . vi
Preface . . . . . vii
Authors . . . . . vii
Now you can become a published author, too! . . . . . ix
Comments welcome . . . . . ix
Stay connected to IBM Redbooks . . . . . x
Chapter 3. Reference installation guide for Red Hat OpenShift V4.3 on Power Systems servers . . . . . 33
3.1 Introduction . . . . . 34
3.2 Red Hat OpenShift V4.3 deployment with internet connection stand-alone installation . . . . . 34
3.2.1 PowerVM configuration for network installation . . . . . 35
3.2.2 BOOTP infrastructure server for installing CoreOS by using the network . . . . . 35
3.2.3 Network infrastructure prerequisites for Red Hat OpenShift Container Platform installation in a production environment . . . . . 39
3.2.4 Creating a network infrastructure for a test environment . . . . . 39
3.2.5 Installing Red Hat OpenShift on PowerVM LPARs by using the BOOTP network installation . . . . . 44
3.2.6 Configuring the internal Red Hat OpenShift registry . . . . . 49
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Cognos®, DataStage®, DB2®, Db2®, IBM®, IBM Cloud®, IBM Cloud Pak®, IBM Spectrum®, IBM Watson®, InfoSphere®, Interconnect®, OpenCAPI™, POWER®, POWER8®, POWER9™, PowerHA®, PowerVM®, Redbooks®, Redbooks (logo)®, SPSS®, SystemMirror®, Watson™
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Windows and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Ansible, OpenShift, Red Hat, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its
subsidiaries in the United States and other countries.
RStudio and the RStudio logo are registered trademarks of RStudio, Inc.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redpaper publication describes how to deploy Red Hat OpenShift V4.3 on IBM
Power Systems servers.
This book presents reference architectures for deployment and initial sizing guidelines for
servers, storage, and IBM Cloud® Paks. Moreover, this publication delivers information about
the initially supported Power Systems configurations for Red Hat OpenShift V4.3 deployment
(bare metal, IBM PowerVM® LE LPARs, and others).
This book serves as a guide for deploying Red Hat OpenShift V4.3, and provides
getting-started guidelines and recommended practices for implementing it on Power Systems
and complementing it with the supported IBM Cloud Paks.
The publication addresses topics for developers, IT architects, IT specialists, sellers, and
anyone who wants to implement Red Hat OpenShift V4.3 and IBM Cloud Paks on IBM
Power Systems. This book also provides technical content to transfer how-to skills to
support teams, and solution guidance to sales teams.
This book complements the documentation that is available at IBM Knowledge Center, and
also aligns with the educational offerings that are provided by IBM Systems Technical
Education (SSE).
Authors
This paper was produced by a team of specialists from around the world working at IBM
Redbooks, Poughkeepsie Center.
Daniel Casali is a Thought Leader Information Technology Specialist who has worked for 15 years at
IBM with Power Systems, High Performance Computing, Big Data, and Storage. His role at
IBM is to bring to reality solutions that address clients' needs by exploring new technologies
for different workloads. He is also fascinated by real multicloud implementations, and is
always trying to abstract and simplify the new challenges of the heterogeneous architectures
that are intrinsic to this new consumption model, be it on premises or in the public cloud.
Federico Fros is an IT Specialist who works for IBM Global Technology Services leading
the UNIX and Storage team for the IBM Innovation Center in Uruguay. He has worked at IBM
for more than 15 years, including 12 years of experience in IBM Power Systems and IBM
Storage. He is an IBM Certified Systems Expert for UNIX and High Availability. His areas of
expertise include IBM AIX®, Linux, PowerHA® SystemMirror®, IBM PowerVM, SAN
networks, Cloud Computing, and the IBM Storage family, including IBM Spectrum® Scale.
Miguel Gomez Gonzalez is an IBM Power Systems Integration engineer based in Mexico
with over 11 years of experience in Linux and IBM Power Systems technologies. He has
extensive experience in implementing several IBM Linux and AIX clustering solutions. His
areas of expertise are Power Systems performance optimization, virtualization, high availability,
system administration, and test design. Miguel holds a Master's degree in Information
Technology Management from ITESM.
Felix Gonzalez is an IT Specialist who currently works on the UNIX and Storage team for IBM
Global Technology Services in Uruguay. He is studying Electronic Engineering at
Universidad de la República in Uruguay. He is a Red Hat Linux Certified Specialist with 5
years of experience in UNIX systems, including FreeBSD, Linux, and AIX. His areas of
expertise include AIX, Linux, FreeBSD, Red Hat Ansible, IBM PowerVM, SAN, Cloud
Computing, and IBM Storage.
Paulo Sergio Lemes Queiroz is a Systems Consultant for AI Solutions on Power Systems
with IBM in Brazil. He has 19 years of experience in UNIX and Linux, ranging from systems
design and development to systems support. His areas of expertise include AIX Power
Systems, AIX, GPFS, RHCS, KVM, and Linux. He is a Certified Advanced Technical Expert
for Power Systems with AIX (CATE) and a Red Hat Certified Engineer (RHCE).
Sudipto Pal is an Application Architect and Technical Project Manager in GBS with over 20
years of leadership experience in designing innovative business solutions for clients from
various industries, domains, and geographical locations. He is skilled in Cloud computing, Big
Data, application development, and virtualization. He successfully delivered several critical
projects with IBM clients from the USA and Europe. He led the Cognos® administration
competency and mentored several candidates. He co-authored Exploiting IBM PowerVM
Virtualization Features with IBM Cognos 8 Business Intelligence, SG24-7842.
Bogdan Savu is a Cloud Infrastructure Architect at IBM POWER® IaaS and works for IBM
Systems in Romania. He has over 13 years of experience in designing, developing, and
implementing Cloud Computing, Virtualization, Automation, and Infrastructure solutions.
Bogdan holds a bachelor's degree in Computer Science from the Polytechnic University of
Bucharest. He is an IBM Certified Advanced Technical Expert for Power Systems, holds the
ITIL 4 Managing Professional certificate, and is TOGAF 9 Certified, a Red Hat Certified
Engineer, a Red Hat Certified Specialist in Ansible Automation, a Red Hat Certified Specialist
in Red Hat OpenShift Administration, and a VMware Certified Professional. His areas of
expertise include Cloud Computing, Virtualization, DevOps, and scripting.
Richard Wale is a Senior IT Specialist, supporting many IBM development teams at the IBM
Hursley Lab, UK. He holds a B.Sc. (Hons) degree in Computer Science from Portsmouth
University, England. He joined IBM in 1996 and has been supporting production IBM AIX
systems since 1998. His areas of expertise include IBM Power Systems, PowerVM, AIX, and
IBM i. He has participated in co-writing many IBM Redbooks publications since 2002.
Wade Wallace
IBM Redbooks, Poughkeepsie Center
Theresa Xu
IBM Canada
Claus Huempel
IBM Germany
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
With over 25 years since the release of the original models, IBM Power Systems continues to
be designed for traditional Enterprise workloads and the most demanding, data-intensive
computing. The range of models offers flexibility, scalability, and innovation.
We made a statement of intent in the previously mentioned publication that subsequent
volumes would be published in due course. We felt this approach better served the agile nature
of the Red Hat OpenShift product: the window is always moving, and the next release is already
on the horizon. At the time of writing, Volume 2 is in development; however, with some of the
changes and improvements that are provided by Red Hat OpenShift V4.3, we felt an interim
publication was needed to follow the product release.
Red Hat OpenShift is one of the most reliable enterprise-grade container platforms. It is
designed and optimized to easily deploy web applications and services. Categorized as a
cloud development Platform as a Service (PaaS), it is based on industry standards, such as
Docker and Kubernetes.
This publication explains what is new with Red Hat OpenShift on IBM Power Systems, and
provides updated installation instructions and sizing guides.
Note: Red Hat OpenShift V4.3 for IBM Power Systems was officially announced by IBM on
28th April 2020. It was subsequently announced two days later by Red Hat on their Red
Hat OpenShift blog.
Red Hat OpenShift V4.3 provides several important changes compared to the previous V3.x
release written about in Volume 1. The most significant change is the move from Red Hat
Enterprise Linux to Red Hat CoreOS for the operating system that is used for the cluster
nodes.
CoreOS is a lightweight Linux that is specifically designed for hosting containers across a
cluster of nodes. As it is patched and configured as part of Red Hat OpenShift, it brings
consistency to the deployed environment and reduces the overhead of ongoing ownership.
For more information about Red Hat OpenShift V4.3 on IBM Power Systems, see Chapter 2,
“Supported configurations and sizing guidelines” on page 5.
If major changes are required, a revised edition of this IBM Redpaper publication can be
published. However, we always recommend checking official resources (release notes,
online documentation, and so on) for any changes to what is presented here.
With the advent of the POWER4 processor in 2001, IBM introduced logical partitions (LPARs)
outside of their mainframe family to another audience. What was seen as radical then grew
into the expected today. The term virtualization is now commonplace across most platforms
and operating systems. These days, virtualization is the core foundation for Cloud Computing.
IBM Power Systems is built for the most demanding, data-intensive, computing on earth. The
servers are cloud-ready and help you unleash insight from your data pipeline, from managing
mission-critical data, to managing your operational data stores and data lakes, to delivering
the best server for cognitive computing.
IBM POWER9, the foundation for the No. 1 and No. 2 supercomputers in the world, is the only
processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA
NVLink, PCIe Gen4, and IBM OpenCAPI™.
IBM Power Systems servers based on POWER9 processors are built for today’s most
advanced applications from mission-critical enterprise workloads to big data and AI, as
shown in Figure 2-1.
Robust scale-out and enterprise servers can support a wide range of mission-critical
applications running on IBM AIX, IBM i, and Linux operating systems. They also can provide
building blocks for private and hybrid cloud environments.
Scale-out servers deliver the performance and capacity that is required for big data and
analytics workloads.
The IBM Power Systems E950 server is the correct fit for growing midsize businesses,
departments, and large enterprises that need a building-block platform for their data center.
The IBM Power Systems E980 server is designed for large enterprises that need flexible,
reliable servers for a private or hybrid cloud infrastructure.
Scale-out AIX, IBM i, and Linux servers are designed to scale out and integrate into an
organization’s cloud and AI strategy, delivering exceptional performance and reliability.
Sockets: 1 / 2 / 2 / 1 or 2
Memory slots: 16 / 32 / 32 / 32
Memory max.: 1 TB / 4 TB / 4 TB / 4 TB
PCIe G4 slots: 2 / 4 / 4 / 4
Supported operating systems: AIX, IBM i, and Linux / AIX, IBM i, and Linux / AIX, IBM i, and Linux / Linux
Scale-out servers for SAP HANA servers are designed to deliver outstanding performance
and a large memory footprint of up to 4 TB in a dense form factor, as shown in Table 2-3 on
page 9. These servers help deliver insights fast at the same time maintaining high reliability.
They are also scalable: When it is time to grow, organizations can expand database capacity
and the size of their SAP HANA environment without having to provision a new server.
Key features: Optimized for SAP HANA; high performance, tight security; dense form factor with large memory footprint; for Linux-focused customers. / High performance for SAP HANA; strong security with large memory footprint; for Linux-focused customers.
Form factors: 2U / 4U
Sockets: 1 upgradeable or 2 / 2
Memory slots: 32 / 32
Memory max.: 4 TB / 4 TB
PCIe G4 slots: 4 / 4
Supported operating systems: AIX, IBM i, and Linux / AIX, IBM i, and Linux
IBM Power Systems Scale-out servers for big data deliver the outstanding performance and
scalable capacity for intensive big data and AI workloads, as shown in Table 2-4.
Purpose-built with a storage-rich server design and industry-leading compute capabilities,
these servers are made to explore and analyze a tremendous amount of data, all at a lower
cost than equivalent x86 alternatives.
Table 2-4 IBM Power Systems: Scale-out servers for big data
                        LC921                  LC922
Form factors            1U                     2U
Sockets                 1 upgradeable or 2     2
Memory slots            32                     16
Memory max.             2 TB                   2 TB
PCIe G4 slots           4                      6
Accelerated servers can also play a vital role in HPC and supercomputing. With the correct
accelerated servers, researchers and scientists can explore more complex, data-intensive
problems and deliver results faster than before.
The IBM Power Systems Accelerated Compute Server helps reduce the time to value for
enterprise AI initiatives. The IBM PowerAI Enterprise platform combines this server with
popular open source deep learning frameworks and efficient AI development tools to
accelerate the processes of building, training, and inferring deep learning neural networks.
Using PowerAI Enterprise, organizations can deploy a fully optimized and supported AI
platform with blazing performance, proven dependability, and resilience.
The new IBM Power System IC922 server is built to deliver powerful computing, scaling
efficiency, and storage capacity in a cost-optimized design to meet the evolving data
challenges of the artificial intelligence (AI) era (see Table 2-5).
The IC in IC922 stands for inference and cloud. The I can also stand for I/O.
Form factors: 2U / 2U
Sockets: 2 / 2
Memory slots: 16 / 32
Memory max.: 1 TB / 2 TB
Red Hat OpenShift V4.3 for Power Systems is an enterprise-grade platform that provides a
secure, private platform-as-a-service cloud on IBM Power Systems servers.
Red Hat OpenShift V4.3 includes the following key features:
Red Hat Enterprise Linux CoreOS, which offers a fully immutable, lightweight, and
container-optimized Linux OS distribution.
Cluster upgrades and cloud automation.
IBM Cloud Paks support on Power Systems platforms. IBM Cloud Paks are a
containerized bundling of IBM middleware and open source content.
The Red Hat OpenShift architecture builds on top of Kubernetes and consists of the
following node roles:
Bootstrap
Red Hat OpenShift Container Platform uses a temporary bootstrap node during initial
configuration to provide the required information to the master node (control plane). It
boots by using an Ignition configuration file that describes how to create the cluster. The
bootstrap node creates the master nodes, and the master nodes create the worker nodes.
The master nodes install more services in the form of a set of Operators. The Red Hat
OpenShift bootstrap node runs CoreOS V4.3.
Important: Because of the consensus that is required by the RAFTa algorithm, the
etcd service must be deployed in odd numbers to maintain quorum. For this reason, the
minimum number of etcd instances for production environments is three.
a. The Raft Consensus Algorithm: https://raft.github.io/
Worker
Red Hat OpenShift worker nodes run containerized applications that are created and
deployed by developers. A Red Hat OpenShift worker node contains the Red Hat
OpenShift node components, including the container engine CRI-O, the Kubelet (which
starts and stops container workloads), and a service proxy that manages communication
for Pods across worker nodes. A Red Hat OpenShift application node runs CoreOS V4.3
or Red Hat Enterprise Linux V7.x.
A deployment host is any virtual or physical host that is typically required for the installation of
Red Hat OpenShift. The Red Hat OpenShift installation assumes that many, if not all the
external services, such as DNS, load balancing, HTTP server, and DHCP are available in a
data center and therefore they do not need to be duplicated on a node in the Red Hat
OpenShift cluster.
However, experience shows that creating a deployment host node and consolidating the Red
Hat OpenShift required external services on it greatly simplifies installation. After installation
is complete, the deployment host node can continue to serve as a load balancer for the Red
Hat OpenShift API service (running on each of the master nodes) and the application ingress
controller (also running on the three master nodes). As part of providing this single front door
to the Red Hat OpenShift cluster, it can serve as a jump server that controls access between
some external network and the Red Hat OpenShift cluster network.
Figure 2-2 Red Hat OpenShift Container Platform for IBM Power Systems
Nodes can run on top of PowerVC, PowerVM, Red Hat Virtualization, or KVM, or run in a bare
metal environment. Table 2-6 shows the IBM Power Systems infrastructure landscape for Red Hat
OpenShift Container Platform V4.3.
Table 2-6 IBM Power Systems infrastructure landscape for OpenShift V4.3
IaaS: PowerVC* / N/A / RHV-M / N/A
Guest operating system: CoreOS V4.3 or later (all columns)
File storage: NFS, Spectrum Scale** (all columns)
2.2.1 Differences between Red Hat OpenShift Container Platform V4.3 and
V3.11
This section highlights some of the high-level differences between Red Hat
OpenShift V4.3 and Red Hat OpenShift V3.11 on IBM Power Systems:
One key difference is the change in the base OS, as it transitions from Red Hat Enterprise
Linux V7 to CoreOS for the master nodes. CoreOS is a stripped-down version of Red Hat
Enterprise Linux that is optimized for container orchestration. CoreOS is bundled with Red
Hat OpenShift V4.x and is not separately charged.
The container runtime moves from Docker to CRI-O. CRI-O is a lightweight alternative to
the use of Docker as the runtime for Kubernetes.
The container CLI also transitions to Podman. The key difference between Podman and
Docker for the CLI is that Podman does not require a daemon to be running. It also shares
many of the underlying components with other container engines, including CRI-O.
Installation and configuration are done by using the Ignition-based openshift-install
deployment for Red Hat OpenShift V4.3, which replaces the Ansible-based
openshift-ansible installer.
Table 2-7 Red Hat OpenShift Container Platform V4.3 versus V3.11 stack differences
Red Hat OpenShift V4.3 Red Hat OpenShift V3.11
In Figure 2-7, the solid boxes represent the nodes that are required at a minimum to run Red Hat
OpenShift Container Platform V3.11. The dashed lines represent the recommended configuration
for production.
When deploying Red Hat OpenShift V3.11 on Power Systems, it is required to have only one
master, one infrastructure, and one worker node. It is also common practice to consolidate
the master and infrastructure nodes on a single VM.
Although the worker node can also be consolidated onto a single VM, it is not recommended
because Red Hat OpenShift core licenses are determined by the number of cores on the
worker nodes, not the control nodes. Although this installation is the minimal installation, it
is recommended to run at least three copies of this worker node on three separate systems for
high availability and fault tolerance when moving to a production-level deployment.
In Red Hat OpenShift V4.3, the three master nodes become a requirement, and at least two
worker nodes must be present in the cluster. Red Hat OpenShift environments with multiple
workers often require a distributed shared file system.
In most cases, the application developer does not have any control over which worker in the
Red Hat OpenShift cluster the application Pod is dispatched. Therefore, regardless of which
worker the application can be deployed to, the persistent storage that is needed by the
application must be available.
One supported storage provider for Red Hat OpenShift V4.3 on the POWER architecture is an
NFS server. The PowerVC CSI driver is also supported and can be used for block storage. To use
NFS, you must create the NFS server; in this book, this was done by using Spectrum Scale
CES.
In Figure 2-4, the solid boxes represent nodes that are required at a minimum to run Red Hat
OpenShift Container Platform V4.3 with NFS storage.
Note: The Spectrum Scale cluster that is shown in Figure 2-4 is a representation of a cluster with
a minimum of two nodes sharing the volumes. More nodes can be required, depending on
the size of the cluster.
On KVM-based systems, such as the AC922 and IC922, the host operating system is Red
Hat Enterprise Linux and CoreOS is the guest operating system. CoreOS as the host
operating system is for the bare metal deployment.
AC922/IC922: guest operating system CoreOS V4.3; host operating system Red Hat Enterprise Linux V7.x, Red Hat Enterprise Linux V8.1, or CoreOS V4.3; deployment on bare metal, RHV, or OSP 16; file storage NFS and Spectrum Scale; block storage Spectrum Virtualize CSI and PowerVC CSI
Figure 2-5 on page 18 clarifies the supported host operating system and guest operating
system for KVM-based Power Systems. For PowerVM-based systems, such as the enterprise
E/S/L systems, no host operating system is used, only a guest operating system. Red Hat
Enterprise Linux V7 as a guest operating system is supported in Red Hat OpenShift V3.11
only. Red Hat OpenShift V4.3 requires CoreOS.
Note: At the time of this writing, Spectrum Scale support is limited to deployment as an
NFS server. IBM intends to support Spectrum Scale on CoreOS in future releases.
2.3.2 Getting started with Red Hat OpenShift V4.3 on Power Systems with a
minimal configuration
This section describes the minimal configuration of Red Hat OpenShift V4.3 on Power
Systems to get you started using the container platform. This initial sizing helps you to use a
Red Hat OpenShift V4.3 instance (stand-alone) without a large capital investment. This option
also helps you to scale to an HA production level deployment in the future.
To deploy in a minimal configuration of Red Hat OpenShift V4.3, you need the following
machines:
A temporary bootstrap machine.
Three masters or control plane machines.
Two workers or compute machines.
The bootstrap node is required only during the installation step; it is not needed after the
cluster is up and running.
PowerVM, bare metal, and KVM-based Power Systems are supported by Red Hat OpenShift
V4.3. Consider the following points:
On PowerVM-based systems, such as the enterprise E/S/L systems, no host operating
system is used and CoreOS is the guest operating system.
On KVM-based systems, such as the AC922 and IC922, the host operating system is Red
Hat Enterprise Linux and CoreOS is the guest operating system.
CoreOS as the host operating system is for the bare metal deployments.
Figure 2-5 shows an example production-level deployment on scale-out (IC922) systems.
This configuration attempts to be as cost efficient as possible. The 22 cores and 240 GB
represent the total amount of compute resources that are available for worker nodes on that
system. This must not be interpreted as a single VM with 22 cores and 240 GB allocated;
instead, it is the maximum to which the worker nodes can be scaled up.
Figure 2-5 Minimal Red Hat OpenShift V4.3 deployment on POWER scale-out architecture
Important: When sizing with an IC922, only four VMs can be supported on a single
system for PowerVC GA 1.5.
Figure 2-6 Minimal Red Hat OpenShift V4.3 deployment on POWER scale-Up architecture
To complete a restricted network installation, you must create a registry that mirrors the
contents of the Red Hat OpenShift registry and contains the installation media. You can
create this mirror on a bastion host, which can access both the internet and your closed network, or by
using other methods that meet your restrictions.
For more information about supported configurations, see Installing a cluster on IBM Power.
For more information about use cases, see this web page.
Power Systems servers can have up to eight threads per core, which is a benefit for containers
that use a CPU count that is based on the operating system CPU count, as shown in Example 2-1.
You can see that this count is based on the number of threads that are available in the system.
Up to four times more containers can be used per core while maintaining the same requests and
limits settings in your YAML files when compared to x86.
For examples of different workloads running up to two and a half times faster on Power Systems
when running eight threads per core on a PowerVM system compared to the same number of
cores on x86, see the following resources:
YouTube
IBM IT Infrastructure web page
Although performance is not a simple 2.5-to-1 comparison and workloads can vary, this
section explains how to use this performance advantage.
With Kubernetes, Pods are assigned CPU resources on a CPU thread basis. On deployment
and Pod definitions, CPU refers to an operating system CPU that maps to a CPU hardware
thread.
For an x86 system, an x86 core running with hyperthreading is equivalent to two Kubernetes
CPUs. Therefore, when running with x86 hyperthreading, a Kubernetes CPU is equivalent to
half of an x86 core.
If your Pod CPU resource was defined to run on x86, you must consider the effects of the
POWER’s performance advantage and the effects of Kubernetes resources being assigned
on a thread basis. For example, for a workload where POWER has a 2X advantage over x86
when running with PowerVM SMT-4, you can assign the same number of Kubernetes CPUs
to POWER that you do to x86 to get equivalent performance.
From a hardware perspective, you are assigning the performance of half the number of cores
to POWER that you assigned to x86. Whereas for a workload where POWER has twice the
advantage over x86 when running with PowerVM SMT-8, you must assign twice the number
of Kubernetes CPUs to POWER that you do to x86 to realize equivalent performance.
Although you are assigning twice the number of Kubernetes CPUs to POWER, from a
hardware perspective, you are assigning the performance of half the number of cores to
POWER that you assigned to x86.
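As an illustration of this thread-based accounting, the following minimal Pod specification is a sketch only (the Pod name, image, and CPU values are assumptions, not taken from the book's examples); the same cpu value buys half the physical cores on POWER at SMT-4 that it does on x86 with hyperthreading:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-sizing-example                     # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest     # hypothetical image
    resources:
      requests:
        cpu: "4"     # 4 Kubernetes CPUs = 4 hardware threads = 1 POWER core at SMT-4
      limits:
        cpu: "4"     # on x86 with hyperthreading, the same "4" maps to 2 physical cores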
Note: IBM Cloud Pak for Data V3.0.1 defaults to running in SMT-4 mode on POWER. This
default can change in the future.
Translating this abstract concept of core performance that is divided across threads is difficult.
A summary is shown in Figure 2-7.
As seen in Figure 2-7, for the workload that is shown, because the PowerVM system can
deliver twice the performance of x86 when running with SMT-4, 2 Kubernetes CPUs for
PowerVM (which is equivalent to half of a physical core) can deliver the same performance as
two Kubernetes CPUs for x86 (which is equivalent to one physical core).
All development performance testing that is done on PowerVM systems uses SMT4 and can
use half of the cores of an x86 system under the same assumptions. On OpenPOWER processor
systems, all the tests for IBM Cloud Pak for Data are done with SMT2. By using these tests as a
baseline, we provide directions for sizing guidelines in 2.4.2, “Sizing for IBM Cloud Paks” on
page 23.
Appendix A, “Configuring Red Hat CoreOS” on page 73 shows the steps for controlling SMT on a
node by using labels. The label enables different SMT levels across your cluster for different purposes.
You can use the label to select where your application runs depending on its needs. By using
this method, you can have a super-packing configuration of nodes, such as nodes that run
SMT8 alongside nodes that run a specific SMT level because of application constraints.
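Appendix A describes the supported machineconfig-based procedure; purely as a sketch of the labeling idea (the label key and values here are hypothetical), nodes can be labeled by SMT level and workloads steered to them with a nodeSelector:

# Label nodes according to the SMT level they run (example label key/values):
oc label node worker1 example.com/smt=8
oc label node worker2 example.com/smt=4

# Schedule a deployment only on SMT-4 nodes by adding a nodeSelector:
oc patch deployment myapp --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"example.com/smt":"4"}}}}}'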
25 8 4 2 16
100 16 8 4 32
250 32 16 8 64
To determine the number of worker nodes in the cluster, it is important to determine how
many Pods are expected to fit per node. This number depends on the application because the
application's memory, CPU, and storage requirements must be considered. Red Hat also
provides a guideline for the maximum number of Pods per node, which is 250. It is
recommended not to exceed this number because doing so results in lower overall performance.
Table 2-12 outlines the number of VMs that are required for deployment and sizing of each
VM for IBM Cloud Pak for Multicloud Manager.
Bastion 1 2 4 100
Master 3 16 32 300
Worker 8 4 16 200
Table 2-13 lists the conversion of the number of vCPUs to physical cores.
86 43 43 22
Master 3 8 16 200
Worker 3 8 16 100
Table 2-15 lists the conversion of the number of vCPUs to physical cores.
56 28 28 14
The recommended sizing for installing IBM Cloud Pak for Integration is listed in Table 2-16.
Master 3 8 16 200
Worker 8 8 16 200
Table 2-17 lists the conversion of the number of vCPUs to physical cores.
100 50 50 25
The recommended sizing for installing the IBM Cloud Pak for Data is shown in Table 2-18.
IBM Cloud Pak for Data is the most variable in terms of the sizing because it is highly
dependent on the add-ons that are required for a project. The Power Systems release
features almost 50 add-on modules for AI (IBM Watson®), Analytics, Dashboarding,
Governance, Data Sources, Development Tools, and Storage. Part of this effort must include
sizing for add-ons, but for now, the information that is provided in Table 2-18 on page 25 is used
for a base installation.
Master 3 8 32 n/a
Worker 3 16 64 200
Table 2-19 shows the conversion of the number of vCPUs to physical cores.
80 40 40 20
The recommended sizing for installing the IBM Cloud Pak for Automation is shown in
Table 2-20.
Master 3 8 16 200
Worker 5 8 16 100
Table 2-21 shows the conversion of the number of vCPUs to physical cores.
72 36 36 18
In most cases, the developer of the application does not have any control over which worker
in the Red Hat OpenShift cluster the application Pod is dispatched. Therefore, regardless of
which worker the application can be deployed to, the persistent storage that is required by the
application must be available. However, some environments do not require the added
complexity that is required to provision a shared, distributed storage environment.
You use the deployment.yaml file but change it to use the ppc64le image
docker.io/ibmcom/nfs-client-provisioner-ppc64le:latest, as shown in Example 2-3 on
page 27. You must have a preexisting NFS export to use this provisioner.
Change <NFS_SERVER> and <NFS_BASE_PATH> to match your exported NFS. You can use
Spectrum Scale as your NFS server, as described in 2.5.4, “IBM Spectrum Scale and NFS”
on page 30.
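Example 2-3 is not reproduced here; the following fragment is a minimal sketch of the relevant part of deployment.yaml for the nfs-client provisioner, with <NFS_SERVER> and <NFS_BASE_PATH> kept as placeholders and the provisioner name as an assumption:

      containers:
      - name: nfs-client-provisioner
        image: docker.io/ibmcom/nfs-client-provisioner-ppc64le:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-storage          # must match the provisioner field of the storage class
        - name: NFS_SERVER
          value: <NFS_SERVER>         # IP address or host name of the NFS server
        - name: NFS_PATH
          value: <NFS_BASE_PATH>      # exported base path on the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: <NFS_SERVER>
          path: <NFS_BASE_PATH>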
For the storage class creation, use the class.yaml file, as shown in Example 2-5.
To define the provisioner, apply the rbac file, security constraints, deployment, and class, as
shown in Example 2-6.
Note: We define the provisioner on the default project. We recommend creating a
new project and defining the provisioner there. Remember to change all namespaces from
default to your namespace. Also, remember to change from default to your namespace on
the add-scc-to-user command, as shown in Example 2-6.
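Example 2-6 is not reproduced here; a minimal sketch of the sequence, assuming the default namespace and a service account named nfs-client-provisioner (adjust both to your project):

oc create -f rbac.yaml
oc adm policy add-scc-to-user hostmount-anyuid \
  system:serviceaccount:default:nfs-client-provisioner
oc create -f deployment.yaml
oc create -f class.yaml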
Example 2-7 Test latency for IBM Cloud Pak for Data
[root@client ~]# dd if=/dev/zero of=/mnt/testfile bs=4096 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
4096000 bytes (4.1 MB, 3.9 MiB) copied, 1.5625 s, 2.5 MB/s
To test bandwidth, check that your result is comparable or better than the result, as shown in
Example 2-8.
Example 2-8 Test bandwidth for IBM Cloud Pak for Data
[root@client ~]# dd if=/dev/zero of=/mnt/testfile bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 5.14444 s, 209 MB/s
Note: Native Spectrum Scale support for CoreOS is intended for all architectures, s390x,
ppc64le and x86_64, and can be a great option for persistent storage.
IBM Spectrum Scale has many features beyond common data access, including data
replication, policy-based storage management, and multi-site operations. You can create a
cluster of AIX nodes, Linux nodes, Windows server nodes, or a mix of all three. IBM Spectrum
Scale can run on virtualized instances, which provide common data access in environments
to take advantage of logical partitioning or other hypervisors. Multiple IBM Spectrum Scale
clusters can share data within a location or across wide area network (WAN) connections.
For more information about the benefits and features of IBM Spectrum Scale, see IBM
Knowledge Center.
For more information about setting up IBM Spectrum Scale as an NFS server, see IBM
Knowledge Center.
For more information about the Container Storage Interface in Red Hat OpenShift, see this
web page.
For more information about the PowerVC CSI and how to configure it, see IBM Knowledge
Center.
2.5.6 Backing up and restoring your Red Hat OpenShift cluster and
applications
Many parts of the cluster must be backed up so that it can be restored. This section describes
that process at a high level.
For more information about the procedure to back up etcd, see this web page.
Check that the two files that are generated are backed up in a safe place outside of the cluster so
that they can be retrieved and used if needed.
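As a hedged reference only (the script name and output location can differ between 4.3 z-streams; follow the linked procedure), the backup is run on one of the master nodes, for example:

$ oc debug node/master1.ocp4.ibm.lab
sh-4.4# chroot /host
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
# the script writes an etcd snapshot and a static pod resources archive to the target directory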
For more information about the procedure to restore to a previous state, see this web page.
Some databases, such as DB2, have their own backup tool (db2 backup) to create online
backups. This tool can also be used in the containers. For more information, see IBM Knowledge
Center.
If you are using NFS or Spectrum Scale, you can use any other backup tool to back up the
directory after the backup completes successfully. Direct your backup to a PV that is
available externally and has enough space to perform the operation.
Some applications (for example, IBM Cloud Pak for Data) provide tools to quiesce the
workload (cpdbr quiesce and cpdbr unquiesce). You can combine this tool with your backup
tooling to create a backup strategy for your PVCs.
Some applications do not need to maintain consistency, and a backup of the PV files is
enough.
Note: This type of backup is not advisable if you intend to always maintain an
application-consistent method of restore.
Some applications can fail to start with this method because they need consistency.
On all architectures, the control plane runs only Red Hat CoreOS and does not permit the use
of Red Hat Linux for this role, unlike the previous Red Hat OpenShift V3.x release. The
installation of this operating system depends on a file that is called an Ignition file, which drives
the configuration of CoreOS. Because CoreOS is an immutable operating system, no rpm
package management is needed, as you would expect on a Red Hat Enterprise Linux operating
system.
The minimum number of nodes that are required for a production Red Hat OpenShift cluster
is three master nodes and two worker nodes. The only option for worker nodes in a ppc64le
environment is CoreOS; therefore, you have a flat environment with CoreOS on the master and
worker nodes. The collection of the master nodes is called the control plane. A bootstrap node is
needed to bring up the control plane, but it is destroyed after the control plane is fully up.
This chapter highlights the differences and uses BOOTP for the boot process instead of the
PXE boot that is suggested in the documentation for PowerVM LPARs. However, we use the
documented PXE boot for the bare metal installation.
The minimum resource requirement to start a Red Hat OpenShift V4.3 cluster on Power
Systems servers is listed in Table 3-1.
This section also describes how to create a Linux environment to segregate the NIM server
from the Red Hat OpenShift installation server. You can also unify them and use an existing
NIM server to be your network installation infrastructure (BOOTP and TFTP), as described in
3.3, “Using NIM as your BOOTP infrastructure” on page 50.
This test environment uses other servers to act as the DNS and load balancer, considering
that we do not control the enterprise network resources, only the Power Systems servers.
Because it is a single point of failure, do not use this method in a production environment.
Instead, use the highly available network services that are provided by the enterprise
networking, infrastructure, and security teams.
The DHCP/BOOTP service in this case is another single server. This setup works for production
environments if the nodes are installed with fixed IP addresses because the server is used during
installation only. The installation server can be your NIM server if you have one.
3.2.2 BOOTP infrastructure server for installing CoreOS by using the network
During the initial boot, the machines require a DHCP/TFTP server or a static IP address to be
set to establish a network connection to download their ignition configuration files. For this
installation, all our nodes have direct internet access, and we installed DHCP/TFTP on
the installation server to serve IP addresses. This server also serves HTTP, and for this
environment, its IP address is 192.168.122.5. We used ppc64le Red Hat Enterprise Linux
V7.7 for simplicity of configuring the GRUB network environment.
This document does not describe in great detail how to set up a boot environment. For more
information about how this process is done, see Red Hat's Configuring Network Boot on IBM
Power Systems Using GRUB2.
Complete the following steps to set up a boot environment:
1. Install a DHCP server, a TFTP server, and an HTTP server by using YUM.
2. Follow the procedure that is shown in Example 3-1 to create a GRUB2 network boot
directory inside the tftp root.
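Example 3-1 is not reproduced here; on Red Hat Enterprise Linux, the network boot directory is typically created with grub2-mknetdir, sketched here under the assumption that the TFTP root is the default /var/lib/tftpboot:

grub2-mknetdir --net-directory=/var/lib/tftpboot
# creates /var/lib/tftpboot/boot/grub2/powerpc-ieee1275/ (including core.elf);
# the grub.cfg file is then placed in that directory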
4. Example 3-2 on page 36 shows that the installation image and the ignition file point to an
HTTP server. Use httpd and change the port to run on 8080. Leave the default document root
directory, /var/www/html.
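A minimal sketch of the HTTP server setup that is described in this step, assuming the default /etc/httpd/conf/httpd.conf and an active firewalld:

sed -i 's/^Listen 80$/Listen 8080/' /etc/httpd/conf/httpd.conf
systemctl enable --now httpd
firewall-cmd --add-port=8080/tcp --permanent && firewall-cmd --reload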
Note: Remember to repeat the entries for all nodes because these entries are only
examples for each node type to bootstrap the cluster. You must have at least three master and
two worker nodes. In our case, we have three masters and two worker nodes. The env2
entry on the IP parameter is the interface in our environment and changes with the virtual ID
that you assign to it on the LPAR.
Example 3-2 on page 36 shows some referenced files. These installation files are
available to download from this web page.
There, you find the client, the installer, and the CoreOS assets. Remember to retrieve all of the
files that are referenced in grub.cfg: rhcos*metal.ppc64le.raw.gz,
rhcos*initramfs.ppc64le.img, and rhcos*kernel-ppc64le.
The rhcos*metal.ppc64le.raw.gz file must be placed in the root directory of the HTTP
server, and rhcos*initramfs.ppc64le.img and rhcos*kernel-ppc64le must be in the
TFTP root directory.
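Example 3-2 itself is not reproduced here; the following menuentry is a sketch of one grub.cfg stanza for the bootstrap node, using the MAC address, IP address, interface name (env2), and HTTP server (192.168.122.5:8080) from this environment. The gateway, DNS server, and exact file names are placeholders or assumptions that depend on your network and on the CoreOS release that you downloaded; a similar stanza, with its own MAC address, IP address, and ignition file, is repeated for each master and worker node.

if [ ${net_default_mac} == 52:54:00:af:db:b6 ]; then
  menuentry 'Install CoreOS - bootstrap' {
    linux rhcos-4.3.18-ppc64le-installer-kernel-ppc64le rd.neednet=1 \
      ip=192.168.122.10::<gateway>:255.255.255.0:bootstrap.ocp4.ibm.lab:env2:none \
      nameserver=<dns-server> coreos.inst=yes coreos.inst.install_dev=sda \
      coreos.inst.image_url=http://192.168.122.5:8080/rhcos-4.3.18-ppc64le-metal.ppc64le.raw.gz \
      coreos.inst.ignition_url=http://192.168.122.5:8080/bootstrap.ign
    initrd rhcos-4.3.18-ppc64le-installer-initramfs.ppc64le.img
  }
fi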
5. Install dhcpd and configure dhcpd.conf to match your configuration, as shown in
Example 3-3.
allow bootp;
option conf-file code 209 = text;
host bootstrap {
hardware ethernet 52:54:00:af:db:b6;
fixed-address 192.168.122.10;
option host-name "bootstrap.ocp4.ibm.lab";
allow booting;
}
host master1 {
hardware ethernet 52:54:00:02:23:c7;
fixed-address 192.168.122.11;
option host-name "master1.ocp4.ibm.lab";
allow booting;
}
host master2 {
hardware ethernet 52:54:00:06:c2:ee;
fixed-address 192.168.122.12;
option host-name "master2.ocp4.ibm.lab";
allow booting;
}
host master3 {
hardware ethernet 52:54:00:df:15:3e;
fixed-address 192.168.122.13;
option host-name "master3.ocp4.ibm.lab";
allow booting;
}
host worker1 {
hardware ethernet 52:54:00:07:b4:ec;
fixed-address 192.168.122.14;
option host-name "worker1.ocp4.ibm.lab";
allow booting;
}
host worker2 {
hardware ethernet 52:54:00:68:5c:7c;
fixed-address 192.168.122.15;
option host-name "worker2.ocp4.ibm.lab";
allow booting;
}
host worker3 {
hardware ethernet 52:54:00:68:ac:7d;
fixed-address 192.168.122.16;
option host-name "worker3.ocp4.ibm.lab";
allow booting;
}
The MAC address must match grub.cfg and dhcpd.conf files for a correct installation.
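The host entries of Example 3-3 normally sit inside a subnet declaration; the following fragment is a sketch for the 192.168.122.0/24 lab network, with the gateway and DNS addresses left as placeholders and the boot file path assuming the grub2-mknetdir layout:

subnet 192.168.122.0 netmask 255.255.255.0 {
  option routers <gateway-ip>;
  option domain-name-servers <dns-ip>;
  option domain-name "ocp4.ibm.lab";
  next-server 192.168.122.5;                        # TFTP/BOOTP server in this environment
  filename "boot/grub2/powerpc-ieee1275/core.elf";  # GRUB2 network boot image
}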
Note: The MAC addresses that are used differ because they are normally randomly
generated upon LPAR definition. You can use your preferred method to define the LPARs
and collect the MAC address from the configuration to use on these files.
Note: Use the highly available enterprise services that are available in your infrastructure to
provide the services that are described in this section (load balancer, DNS, and network
connectivity). This process is normally done before the installation by the enterprise
networking, security, and active directory teams.
This section shows copies of the documentation tables for easier reference. Check for
updates in the source documentation.
Important: Do not follow this procedure for production environments unless you created a
full network infrastructure.
address=/bootstrap.ocp4.ibm.lab/192.168.122.10
ptr-record=10.122.168.192.in-addr.arpa,bootstrap.ocp4.ibm.lab
address=/master1.ocp4.ibm.lab/192.168.122.11
address=/etcd-0.ocp4.ibm.lab/192.168.122.11
srv-host=_etcd-server-ssl._tcp.ocp4.ibm.lab,etcd-0.ocp4.ibm.lab,2380
ptr-record=11.122.168.192.in-addr.arpa,master1.ocp4.ibm.lab
address=/master2.ocp4.ibm.lab/192.168.122.12
address=/etcd-1.ocp4.ibm.lab/192.168.122.12
srv-host=_etcd-server-ssl._tcp.ocp4.ibm.lab,etcd-1.ocp4.ibm.lab,2380
ptr-record=12.122.168.192.in-addr.arpa,master2.ocp4.ibm.lab
address=/master3.ocp4.ibm.lab/192.168.122.13
address=/etcd-2.ocp4.ibm.lab/192.168.122.13
srv-host=_etcd-server-ssl._tcp.ocp4.ibm.lab,etcd-2.ocp4.ibm.lab,2380
ptr-record=13.122.168.192.in-addr.arpa,master3.ocp4.ibm.lab
address=/worker1.ocp4.ibm.lab/192.168.122.14
ptr-record=14.122.168.192.in-addr.arpa,worker1.ocp4.ibm.lab
address=/worker2.ocp4.ibm.lab/192.168.122.15
ptr-record=15.122.168.192.in-addr.arpa,worker2.ocp4.ibm.lab
address=/worker3.ocp4.ibm.lab/192.168.122.16
ptr-record=16.122.168.192.in-addr.arpa,worker3.ocp4.ibm.lab
address=/api.ocp4.ibm.lab/192.168.122.3
address=/api-int.ocp4.ibm.lab/192.168.122.3
address=/.apps.ocp4.ibm.lab/192.168.122.3
Table 3-3 Load balancing entries requirement
Port Machines Internal External Description
Example 3-5 shows the configuration file that implements what the installation documentation
describes when haproxy is used.
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
frontend openshift-api
bind *:6443
default_backend openshift-api
mode tcp
option tcplog
backend openshift-api
balance source
mode tcp
server ocp43-bootstrap 192.168.0.10:6443 check
server ocp43-master01 192.168.0.11:6443 check
server ocp43-master02 192.168.0.12:6443 check
server ocp43-master03 192.168.0.13:6443 check
frontend openshift-configserver
bind *:22623
default_backend openshift-configserver
mode tcp
option tcplog
backend openshift-configserver
balance source
mode tcp
server ocp43-bootstrap 192.168.0.10:22623 check
server ocp43-master01 192.168.0.11:22623 check
server ocp43-master02 192.168.0.12:22623 check
server ocp43-master03 192.168.0.13:22623 check
frontend openshift-http
bind *:80
default_backend openshift-http
mode tcp
option tcplog
backend openshift-http
balance source
mode tcp
server ocp43-worker01 192.168.0.14:80 check
server ocp43-worker02 192.168.0.15:80 check
server ocp43-worker03 192.168.0.16:80 check
frontend openshift-https
bind *:443
default_backend openshift-https
mode tcp
option tcplog
backend openshift-https
balance source
mode tcp
server ocp43-worker01 192.168.0.14:443 check
server ocp43-worker02 192.168.0.15:443 check
server ocp43-worker03 192.168.0.16:443 check
After your configuration is complete, confirm that all services are started. Be aware that you
might need to change SELinux Boolean configurations to get haproxy to serve on any port.
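A sketch of one common adjustment (verify the Boolean name on your release) and the service start:

setsebool -P haproxy_connect_any 1
systemctl enable --now haproxy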
3.2.5 Installing Red Hat OpenShift on PowerVM LPARs by using the BOOTP
network installation
This section uses the BOOTP process to install CoreOS. Red Hat OpenShift Container
Platform is installed with all the configurations to be passed to CoreOS installation with the
bootstrap ignition file.
Check that you downloaded the Red Hat OpenShift installation package and the Red Hat OpenShift
client, and that the binaries are on your path (you must decompress the packages that you download).
Also, confirm that you downloaded the CoreOS assets to build your BOOTP infrastructure
and that the files are correctly placed as directed. On this same page, you find the pull secret that
you need to configure your install-config.yaml file.
The pull secret is tied to your account and you can use your licenses. At the time of this
writing, we were tied to a 60-day trial. If you intend to maintain the cluster for more than 60
days, check that you have a valid subscription.
Complete the following steps to install a Power Systems cluster up to the point where your
YAML file is ready. Do not create the ignition files at this moment. Create the sample file as
shown at this web page.
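Example 3-6 is not reproduced here; the following is a minimal install-config.yaml sketch for a user-provisioned ppc64le installation that matches the ocp4.ibm.lab names that are used in this chapter, with the pull secret and SSH key left as placeholders (the network values are the documented defaults, not taken from the book's example):

apiVersion: v1
baseDomain: ibm.lab
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '<your pull secret>'
sshKey: '<your public SSH key>'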
If you need a proxy to access the internet, complete the process that is described at this web
page.
The manifests are a set of YAML files that are used to create the ignition files that configure
the CoreOS installation. The manifests are created by using the YAML file that you prepared
(see Example 3-6 on page 44) and the openshift-install command that was downloaded
from this web page.
Place the openshift-install binary and the install-config.yaml file into a single directory, and
create a backup of install-config.yaml because it is deleted after use. For any
installation, these three files are the only files in the directory. Our installation directory is
shown in Example 3-7.
After you prepare the install directory, run the manifest creation, as shown in Example 3-8.
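Example 3-8 is not reproduced here; the manifest creation, assuming the installation directory is the current directory, is essentially the following command:

./openshift-install create manifests --dir=.
# install-config.yaml is consumed (removed) and the manifests/ and openshift/ directories are created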
As shown in Example 3-8, a directory structure is created. Per the documentation, change the
manifests/cluster-scheduler-02-config.yml file to mark the masters as not schedulable, as
shown in Example 3-9 on page 46.
Example 3-9 Change cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
creationTimestamp: null
name: cluster
spec:
mastersSchedulable: false
policy:
name: ""
status: {}
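After the scheduler manifest is edited, the ignition files of Example 3-10 (not reproduced here) are generated from the same directory; a sketch:

./openshift-install create ignition-configs --dir=.
# produces bootstrap.ign, master.ign, worker.ign, the auth/ directory, and metadata.json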
Copy the ignition files (bootstrap.ign, master.ign, and worker.ign) to your httpd file server.
Note: You need the files inside the auth directory to access your cluster. Copy the file
auth/kubeconfig to /root/.kube/config. Find the password for the console inside the
kubeadmin-password file.
For more information about the Power Systems boot, see Appendix B, “Booting from System
Management Services” on page 85.
Use a network boot of your choice to install the servers. For more information, see 3.3, “Using
NIM as your BOOTP infrastructure” on page 50.
After you start all servers, wait for the bootstrap to complete. This section does not discuss
troubleshooting during the boot process. To get information about the bootstrap completion, use
the command that is shown in Example 3-11.
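Example 3-11 is not reproduced here; a sketch of the command:

./openshift-install wait-for bootstrap-complete --dir=. --log-level=info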
After completion, it is safe to shut down the bootstrap server because it is no longer needed in
the cluster lifecycle, and everything is done by using the control plane. Even adding nodes is
done without a bootstrap.
To get configurations ready for use, we show how to change CoreOS parameters.
The supported way of making configuration changes to CoreOS is through the machineconfig
objects (see the login message of CoreOS), as shown in Example 3-12.
https://docs.openshift.com/container-platform/4.3/architecture/architecture-rhcos.html
---
Last login: Sun May 31 11:58:27 2020 from 20.0.1.37
[core@worker1 ~]$
For more information about how to create machineconfig objects and how to add them, see
Appendix A, “Configuring Red Hat CoreOS” on page 73.
Important: The configuration that is shown in Appendix A, “Configuring Red Hat CoreOS”
on page 73 is important when VIOS Shared Ethernet Adapters (SEA) is used, and also for
IBM Cloud Pak for Data. It also simplifies SMT management across the cluster, which
makes it possible to run different levels of SMT to take advantage of the parallel threading
offered by the POWER processor. However, it also makes it possible for applications that
are sensitive to context switching to run at their best. If you do not apply this configuration,
you can end up with the authentication operator in an unknown state when SEA is used, as
happened in our case because of our network configuration. The oc apply -f <FILE> command
is used to apply the files.
With time, the cluster operators start and become available, as shown in Example 3-13.
The process to check when the cluster is ready can also be monitored by using the
openshift-install command, as shown in Example 3-14.
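Examples 3-13 and 3-14 are not reproduced here; sketches of the two checks:

# Watch the cluster operators become available
oc get clusteroperators

# Wait for the installation to finish; the command also prints the console URL and kubeadmin credentials
./openshift-install wait-for install-complete --dir=. --log-level=info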
Export the NFS server as shown in the documentation that is available at this web page.
Example 3-15 shows a sample YAML file that describes how to define a static persistent
volume in the NFS storage backend.
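Example 3-15 is not reproduced here; a minimal sketch of such a static NFS persistent volume, with the server, export path, and size as placeholders or assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv                 # hypothetical name
spec:
  capacity:
    storage: 100Gi                  # size is an assumption
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <NFS_SERVER>
    path: <NFS_EXPORT_PATH>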
Apply the file by issuing the oc apply -f <FILE> command, as shown in Example 3-16.
Now, enable the internal image registry by changing the managementState parameter from
Removed to Managed, as shown in Example 3-17. Issue the oc edit
configs.imageregistry.operator.openshift.io command to open a text editor and make
the change.
  resourceVersion: "7473646"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 913f5985-a50f-4878-b058-72eae57fc8a5
spec:
  defaultRoute: true
  disableRedirect: false
  httpSecret: 7ab51007dbad0462dfeb89f5f1d97edbfc42782c534bf0911ab77497bd285357055f92129469643cdf507d3f15ed1ab38e5cd4e0c8b4bf71aea9a4f3b531c39c
  logging: 2
  managementState: Managed
  proxy:
    http: ""
    https: ""
    noProxy: ""
  readOnly: false
  replicas: 1
  requests:
    read:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
    write:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
  storage:
    pvc:
      claim:
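As an alternative to editing the configuration interactively, the same change can be made with a single command; a sketch:
# oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'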
Tip: This step-by-step guide can also be used with Red Hat Enterprise Linux V7 and later versions.
Example 3-19 Download and install apache for AIX from aixtoolbox IBM site
# wget --no-check-certificate https://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/httpd/httpd-2.4.41-1.aix6.1.ppc.rpm
--2020-05-25 20:59:09-- https://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/httpd/httpd-2.4.41-1.aix6.1.ppc.rpm
Resolving public.dhe.ibm.com... 170.225.15.112
Connecting to public.dhe.ibm.com|170.225.15.112|:443... connected.
WARNING: cannot verify public.dhe.ibm.com's certificate, issued by 'CN=GeoTrust RSA CA
2018,OU=www.digicert.com,O=DigiCert Inc,C=US':
Self-signed certificate encountered.
HTTP request sent, awaiting response... 200 OK
Length: 4279776 (4.1M) [text/plain]
Saving to: 'httpd-2.4.41-1.aix6.1.ppc.rpm'
httpd-2.4.41-1.aix6.1.ppc.rpm
100%[=====================================================>] 4.08M 310KB/s in 27s
2020-05-25 20:59:37 (154 KB/s) - 'httpd-2.4.41-1.aix6.1.ppc.rpm' saved [4279776/4279776]
pcre >= 8.42 is needed by httpd-2.4.41-1.ppc
Example 3-19 on page 51 shows that the package dependencies must be installed before
the httpd package can be installed. Another option is to install the YUM package manager for
AIX and let YUM resolve and install the dependencies.
YUM installation on AIX: For more information, see this web page.
After YUM is installed, you can use it to install the Apache web server and all package
dependencies, as shown in Example 3-20.
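As a sketch (assuming that the AIX Toolbox YUM repositories are already configured; the apachectl path is an assumption for the AIX Toolbox build of Apache):
# yum install -y httpd
# /opt/freeware/sbin/apachectl start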
Copying ignition files and CoreOS boot file to the Apache root directory
After generating the ignition files as described in Example 3-10 on page 46, copy these files
to the Apache root directory in the NIM server.
Ignition files: Because no Red Hat OpenShift installer is available for AIX, the ignition files
must be created by using openshift-install on a Red Hat Linux or macOS operating
system.
Copy the bootstrap.ign, master.ign, and worker.ign files to the NIM server, as shown in
Example 3-22 on page 54.
Copy the rhcos-4.3.18-ppc64le-metal.ppc64le.raw.gz file to the Apache root directory, as
shown in Example 3-22.
Example 3-22 Copy the ignition files and CoreOS boot file to the NIM server
deploy01# scp /root/openshift-install/*.ign ocp43nimserver:/var/www/htdocs/
deploy01# scp rhcos-4.3.18-ppc64le-metal.ppc64le.raw.gz ocp43nimserver:/var/www/htdocs/
Verify that Apache is running and listening on port 8080, as shown in Example 3-25.
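A sketch of such a check, run on the NIM server:
ocp43nimserver# ps -ef | grep [h]ttpd
ocp43nimserver# netstat -an | grep LISTEN | grep 8080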
Note: After the installation is completed, the bootptab lines must be removed manually.
Copy all files from the powerpc-ieee1275 directory (including the core.elf file from the Red Hat
deploy01 LPAR) to the NIM server, as shown in Example 3-27.
Configuring tftp access
The last configuration file to change is /etc/tftpaccess.ctl, which restricts the tftp process to
specific directories on the server, as shown in Example 3-29.
cat /etc/tftpaccess.ctl
# NIM access for network boot
allow:/tftpboot/coreos
Tip: If more verbose debug is needed for the lpar_netboot command, the –x flag can be
specified.
This section describes how to add a bare metal server (also known as an OPAL or PowerNV
server) to your PowerVM cluster. This example can be the most complex case because you
are mixing different ways of provisioning nodes into your cluster. The entire cluster can also be
PowerNV only.
Petitboot is the operating system bootloader for scale-out PowerNV systems and is based on
Linux kexec. Petitboot can use PXE boot to simplify the boot process by using a simple
configuration file. It can load any operating system image that supports the Linux kexec
reboot mechanism, such as Linux and FreeBSD. Petitboot can load images from any device
that can be mounted by Linux, and can also load images from the network by using the HTTP,
HTTPS, NFS, SFTP, and TFTP protocols.
Example 3-31 dhcp.conf file for mixed PowerVM and PowerNV environment
default-lease-time 900;
max-lease-time 7200;
subnet 192.168.122.0 netmask 255.255.255.0 {
  option routers 192.168.122.1;
  option subnet-mask 255.255.255.0;
  option domain-search "ocp4.ibm.lab";
  option domain-name-servers 192.168.122.1;
  next-server 192.168.122.5;
  filename "boot/grub2/powerpc-ieee1275/core.elf";
}
allow bootp;
option conf-file code 209 = text;
.
.
.
host powervmworker {
  hardware ethernet 52:54:00:1a:fb:b6;
  fixed-address 192.168.122.30;
  option host-name "powervmworker.ocp4.ibm.lab";
  next-server 192.168.122.5;
  filename "boot/grub2/powerpc-ieee1275/core.elf";
  allow booting;
}
host powernvworker {
  hardware ethernet 98:be:94:73:cd:78;
  fixed-address 192.168.122.31;
  option host-name "powernvworker.ocp4.ibm.lab";
  next-server 192.168.122.5;
  option conf-file "pxelinux/pxelinux.cfg/98-be-94-73-cd-78.cfg";
  allow booting;
}
After changing the dhcp.conf file, restart the dhcpd service. The PXE boot does not point to
the grub.cfg file that was created for the PowerVM hosts. Instead, it points to the configuration
file that is specified in the dhcpd host entry.
Example 3-32 Configuration file example for PXE boot of a PowerNV worker node
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://192.168.122.5:8080/rhcos-4.3.18-ppc64le-installer-kernel-ppc64le
    APPEND initrd=http://192.168.122.5:8080/rhcos-4.3.18-ppc64le-installer-initramfs.ppc64le.img rd.neednet=1 ip=dhcp nameserver=192.168.122.1 console
Find a bastion host that can access the internet and that your cluster can reach. This server is
not used as a router or for any network service other than the registry for the Red Hat
OpenShift Platform. Therefore, it is not in the production path and is used for maintenance
only.
For more information about this concept, see the online documentation on the Red Hat
website.
Note: The most important part of this documentation is where it shows you how to
configure the mirror registry on the bastion node (see this web page).
These instructions assume that you are performing them on an x86 architecture. If you
are performing them on ppc64le, the only change is to use a registry container image
that is supported on ppc64le when performing Step 5 of the section at this web page.
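As a sketch of that substitution only (the image name is a placeholder for a ppc64le-capable registry image; the certificate, authentication, and storage volumes from the referenced procedure are omitted here for brevity):
# podman run -d --name mirror-registry -p 5000:5000 <ppc64le-registry-image>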
IBM Cloud Paks are enterprise-ready, containerized middleware and common software
solutions that are hosted on Kubernetes-based platforms and that give clients an open, faster,
and more secure way to move core business applications to any cloud. This full-stack,
converged infrastructure includes a virtualized cloud hosting environment that helps extend
applications to the cloud.
IBM Cloud Paks are certified by IBM with up-to-date software to provide full-stack support,
from hardware to applications.
Figure 4-1 IBM Cloud Paks help users looking to enable workloads
Transformation of a traditional application is one of the key features in this scope. Development,
testing, and redeployment are some of the phases where most of the effort and challenges
are experienced in the traditional development model. The Agile DevOps-based development
model is a potential solution to these challenges.
To complement the Agile development process, IBM Cloud Pak for Applications extends
Kubernetes features for a consistent and faster development cycle, which helps IBM clients
build cost-optimized, smarter applications.
Note: Although all IBM Cloud Paks are intended to be supported on the ppc64le
architecture, at the time of this writing, IBM Cloud Pak for Data was readily available for us
to test. Therefore, we worked more with this IBM Cloud Pak specifically.
For more information about IBM Cloud Pak for Data, see 4.4, “IBM Cloud Pak for Data” on
page 63.
Personalizing the customer experience is a primary business focus that needs an integrated view
of scattered data. IBM Cloud Pak for Integration facilitates rapid integration of data along
with security, compliance, and versioning capability. It features the following key capabilities:
API lifecycle management
Application and data integration
Enterprise messaging
Event streaming
High-speed data transfer
Secure gateway
IBM Cloud Pak for Automation helps to automate business operations with an integrated
platform. Kubernetes makes it easier to configure, deploy, and manage containerized
applications. It is compatible with projects of all sizes, small and large, to deliver better
end-to-end customer journeys with improved governance of content and processes.
The functionality of IBM Cloud Pak for Data as a fully integrated data and AI platform features the following capabilities:
Data collection
Data organization
Data analysis
Infuse AI into the business
Support multicloud architecture
These enhancements (see Figure 4-2) are classified into the following sections:
Platform:
– Modular services-based platform setup for efficient and optimized use of resources.
– Built-in dashboard with meaningful KPI for better monitoring, metering, and
serviceability.
– Open extendable platform with new age Platform and Service APIs.
Installation:
– Simplified installation
– Red Hat OpenShift V4.3
Service:
– Data processing and analytics are some of the key enhancements.
– Advanced integration with IBM DataStage® and IBM Db2®.
– Advanced predictive and analytical models that use the IBM SPSS® and Streams Watson
suites.
IBM Cloud Pak for Data emerged as an extensible and highly scalable data and AI platform. It
is built on a comprehensive foundation of unified code, data, and AI services that can use
multiple cloud-native services. It is also flexible enough to adopt customizations that address
specific business needs through an extended service catalog.
The minimum resource recommendations that are described in this publication are for
guidance only. Consult with your IBM Sales representative for recommendations that are
based on your specific needs.
To check that your persistent volume provider complies with latency and throughput
requirements for IBM Cloud Pak for Data, follow the storage test that is described at this web
page.
IBM Cloud Pak for Data includes many services to ensure that you have a complete solution
available to you. For more information about the services that are supported on the ppc64le
platform and their requirements, see this web page.
Note: The time on all of the nodes must be synchronized within 500 ms. Check that you
have the NTP correctly set. Use the machine configuration method that is described in
Appendix A, “Configuring Red Hat CoreOS” on page 73 to configure /etc/chrony.conf to
point to the correct NTP server.
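A minimal sketch of such a /etc/chrony.conf file, assuming a placeholder NTP server name:
server <your-ntp-server> iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync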
4.4.3 Installing IBM Cloud Pak for Data on Power Systems servers
For more information about the prerequisites that must be met to install IBM Cloud Pak for
Data on Red Hat OpenShift Container Platform V4.3, see IBM Knowledge Center.
Before installing, you need a registry to hold IBM Cloud Pak for Data images. For simplicity,
the internal registry of Red Hat OpenShift can be configured as described in 3.2.6,
“Configuring the internal Red Hat OpenShift registry” on page 49.
This case uses the repo.yaml file and the cpd-ppc64le client. Example 4-1 shows the
repo.yaml file.
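As a rough sketch of the general shape of a repo.yaml file (the URLs and the API key are placeholders that come from your IBM entitlement information and the IBM Cloud Pak for Data installation documentation):
fileservers:
  - url: <cloud-pak-file-server-url>
registry:
  - url: <entitled-registry-url>
    username: <entitled-registry-user>
    apikey: <entitlement-key>
    name: base-registry
    namespace: ""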
You need the entitlement key to install IBM Cloud Pak for Data.
The base service is the lite assembly that contains the infrastructure and the core part of IBM
Cloud Pak for Data. To download the assembly, configure your repo.yaml file and issue the
command as shown in Example 4-2.
After you download it, push the images to your registry, as shown in Example 4-3.
You have access to 82 projects, the list has been suppressed. You can list all
projects with 'oc projects'
*** Parsing assembly data and generating a list of charts and images for download ***
The next step is to apply the admin setup, as shown in Example 4-4.
* Parsing assembly data and generating a list of charts and images for download *
[INFO] [2020-06-03 05:59:37-0548] Assembly data validated
[INFO] [2020-06-03 05:59:38-0046] The category field of module 0010-infra is not specified in its manifest file, assuming default type 'helm-chart'
[INFO] [2020-06-03 05:59:38-0048] The category field of module 0015-setup is not specified in its manifest file, assuming default type 'helm-chart'
[INFO] [2020-06-03 05:59:38-0050] The category field of module 0020-core is not specified in its manifest file, assuming default type 'helm-chart'
.
.
.
[INFO] [2020-06-03 05:59:46-0201] securitycontextconstraints.security.openshift.io/cpd-zensys-scc added to: ["system:serviceaccount:zen:cpd-admin-sa"]
Finally, you are ready to install the assembly for the lite service, as shown in Example 4-5 on
page 69.
[INFO] [2020-06-03 06:01:33-0603] Version for assembly is not specified, using the latest version '3.0.1' for assembly lite
Your IBM Cloud Pak for Data is ready and accessible. Access the web console URL by using
a web browser, as the installer output in Example 4-5 shows. You can now log in to the
console and see the window that is shown in Figure 4-3 on page 71.
Note: The default user name is admin and the password is password.
You can repeat the steps from Example 4-2 on page 66 - Example 4-4 on page 69 to install
other available assemblies, including the following examples:
aiopenscale
cde
db2oltp
db2wh
dods
hadoop
hadoop-addon
lite
rstudio
spark
spss
spss-modeler
wml
wsl
Remember to replace lite with the service that you want to install.
If you use Spectrum Scale as storage for IBM Cloud Pak for Data persistent volumes that use
Cluster Export Services, you can use the snapshot feature to enable the backup.
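As a sketch, a Spectrum Scale snapshot can be created with the standard mmcrsnapshot command; the file system device and snapshot name here are placeholders:
# mmcrsnapshot <filesystem-device> <snapshot-name>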
---
Last login: Tue Jun 2 12:00:39 2020 from 10.17.201.99
[core@worker1 ~]$
Machineconfig objects can be created by using YAML files to enable systemd services or to
change files on disk. The basic YAML structure is shown in Example A-2.
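As a sketch of that structure (the name, role, target path, base64 payload, and service definition here are placeholders, not the content of Example A-2):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-example
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,<BASE64_CONTENT>
          filesystem: root
          mode: 0644
          path: /etc/example.conf
    systemd:
      units:
        - name: example.service
          enabled: true
          contents: |
            [Unit]
            Description=Example service
            [Service]
            ExecStart=/usr/local/bin/example
            [Install]
            WantedBy=multi-user.target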
We use base64-encoded content for text files because it avoids issues with special characters
and non-printable characters by reducing the file to a single encoded stream.
The crio changes are necessary because some workloads include busy containers that need
more pids and open files than are allocated by default. We do not show the full crio.conf
because we changed only one parameter from the default file (pids_limit = 1024) and added
a ulimit parameter, as shown in Example A-3.
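As a sketch of the only section that changes (the values that are shown here are illustrative, not necessarily the values that we used):
[crio.runtime]
default_ulimits = [
    "nofile=66560:66560",
]
pids_limit = 12288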
We create two files to control the SMT across the cluster, as shown in Example A-4. For more
information about how Kubernetes uses CPU, see 2.4.1, “Red Hat OpenShift sizing
guidelines” on page 22.
[Service]
ExecStart="/usr/local/bin/powersmt"
[Install]
WantedBy=multi-user.target
Example A-4 shows the reference to the file /usr/local/bin/powersmt. We need to create
that file to set the parameters correctly, as shown in Example A-6 on page 76.
You must transform the file to change any special characters and spaces (that are meaningful
in YAML files) to a simple string in a single line. The process is shown in Example A-5.
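A sketch of that transformation with the base64 command (the -w0 flag writes the output as a single unwrapped line):
# base64 -w0 crio.conf > crio.conf.b64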
Note: During the project, we attempted different ways to create the YAML file, and using
base64 was the easiest way to overcome all situations in the YAML file definition. For this
reason, we used this format, and we encourage users to use it as well.
Note: We do not show this process for other files, but if you want to use base64 for the
source of the file contents, repeat the process for any file you created.
while :
do
    ISNODEDEGRADED=$(/bin/oc get node $HOSTNAME -o yaml |/bin/grep machineconfiguration.openshift.io/reason |/bin/grep "unexpected on-disk state validating")
    SMTLABEL=$(/bin/oc get node $HOSTNAME -L SMT --no-headers |/bin/awk '{print $6}')
    if [[ -n $SMTLABEL ]]
    then
        case $SMTLABEL in
            1) TARGETSMT=1
               ;;
            2) TARGETSMT=2
               ;;
            4) TARGETSMT=4
               ;;
            8) TARGETSMT=8
               ;;
            *) TARGETSMT=$CURRENTSMT ; echo "SMT value must be 1, 2, 4, or 8 and smaller than Maximum SMT."
               ;;
        esac
    else
        TARGETSMT=$MAXSMT
    fi
    if [[ -n $ISNODEDEGRADED ]]
    then
        touch /run/machine-config-daemon-force
    fi
The machineconfig file that is used after including all of these alterations is shown in
Example A-7.
Important: The file that is shown in Example A-7 applies the configuration to all worker
nodes; it does not need to be applied to the master nodes. If you must apply the
configuration to the master nodes as well, change worker to master in both metadata
stanza occurrences and create a second YAML configuration file so that, between the two
files, all master and worker nodes are covered.
Do not use this example verbatim for your installation; instead, build your own file. At a
minimum, update the base64 strings with the ones that are generated from your own
crio.conf. Remember that configuration entries can change over time; for example, the
pause_image.
By default, the nodes automatically reboot in rolling fashion, one-by-one, after you apply
the configuration.
Check the contents of the base64 hash by using the base64 -d command to verify that the
configuration you are applying is the one that you intended. You can always create a hash
with a different configuration when you have a different requirement.
After you created the YAML file, apply the configuration as shown in Example A-8.
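A sketch of applying the file and watching the resulting rollout (the file name is a placeholder):
# oc apply -f 99-worker-custom.yaml
# oc get machineconfigpool worker --watch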
Note: By default, the nodes automatically reboot in rolling fashion, one-by-one, after you
apply the configuration.
These settings are required for all deployments. Example A-9 assumes that you have worker
nodes with 64 GB of RAM. If the worker nodes have 128 GB of RAM each, double the
kernel.shm* values.
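Because the target values depend on the memory size of the worker nodes, you can confirm what is active on a node after the configuration is applied; a sketch:
[core@worker1 ~]$ sysctl kernel.shmall kernel.shmmax kernel.shmmni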
The SMS services help you view information about your system or partition, and perform
tasks such as changing the boot list and setting the network parameters. These SMS menus
can be used for AIX or Linux logical partitions.
Alternatively, you can run the following command from the HMC CLI:
user@hmc:~> chsysstate -r lpar -m <managed-system> -o on -f <profile> -b sms -n <lpar_name>
Exit SMS. Confirm exiting SMS, as shown in Figure B-16. This action boots the operating
system from the network, installs the operating system, and reboots.
In SMS mode again, select option number 5, Select Boot Options, as shown in
Figure B-18.
Select option number 1, Select 1st Boot Device, to set the first boot device, as shown in
Figure B-20.
Select the device where you installed the operating system. Figure B-22 shows four hard disk
drives that are the same because of the multipath configuration. Select one of the hard drives.
Figure B-24 shows the current boot sequence. Enter x to exit SMS.
Search for REDP5599, select the title, and then click Additional materials to open the
directory that corresponds with the IBM Redpaper form number, REDP5599.
Related publications
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topic in this
document. Note that some publications that are referenced in this list might be available in
softcopy only:
Red Hat OpenShift and IBM Cloud Paks on IBM Power Systems: Volume 1, SG24-8459
NIM from A to Z in AIX 5L, SG24-7296
IBM Power System E950: Technical Overview and Introduction, REDP-5509
IBM Power Systems LC921 and LC922: Technical Overview and Introduction,
REDP-5495
IBM Power System IC922: Technical Overview and Introduction, REDP-5584
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft, and additional materials, at the following website:
ibm.com/redbooks
Online resources
The following websites are also relevant as further information sources:
IBM PowerVC:
https://www.ibm.com/us-en/marketplace/powervc
IBM PowerVC Standard Edition V1.4.4:
https://www.ibm.com/support/knowledgecenter/SSXK2N_1.4.4/com.ibm.powervc.standard.help.doc/powervc_whats_new_hmc.html
Network Installation Management:
https://www.ibm.com/support/knowledgecenter/ssw_aix_72/install/nim_intro.html
IBM Redbooks highlighting POWER9 processor-based technology:
http://www.redbooks.ibm.com/Redbooks.nsf/pages/power9?Open