How To Configure EMS 10.2 on Kubernetes (Files-Based Deployments)


TIBCO Enterprise Message Service™
Installation on Kubernetes Container Platforms

TIBCO Software, a Business Unit of Cloud Software Group
Santa Clara, CA
www.tibco.com
www.cloud.com

Version 10.2, Nov 2022. Updated for EMS 10.2 Files based deployments on Kubernetes.
Version 10.2.1, June 2023. Updated to provide Helm Charts for EMS installation.

TIBCO fuels digital business by enabling better decisions and faster, smarter actions through the TIBCO Connected Intelligence Cloud. From APIs and systems to devices and people, we interconnect everything, capture data in real time wherever it is, and augment the intelligence of your business through analytical insights. Thousands of customers around the globe rely on us to build compelling experiences, energize operations, and propel innovation. Learn how TIBCO makes digital smarter at www.tibco.com.
Copyright Notice
COPYRIGHT© 2023 Cloud Software Group. All rights reserved.

Trademarks
TIBCO, the TIBCO logo, and TIBCO Enterprise Message Service are either registered trademarks or
trademarks of Cloud Software Group in the United States and/or other countries. All other product and
company names and marks mentioned in this document are the property of their respective owners and are
mentioned for identification purposes only.

Content Warranty
The information in this document is subject to change without notice. THIS DOCUMENT IS PROVIDED "AS
IS" AND CSG MAKES NO WARRANTY, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT NOT
LIMITED TO ALL WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. TIBCO Software Inc. shall not be liable for errors contained herein or for incidental or
consequential damages in connection with the furnishing, performance or use of this material.

For more information, please contact:

TIBCO Software, a Business Unit of Cloud Software Group


Santa Clara, CA
USA



Table of Contents
1 Overview
1.1 Document Purpose
1.2 Supported Versions
1.3 Prerequisites
1.4 Prepare the Preliminary Environment
2 Fault Tolerance and Shared Folder Setup
2.1 Fault Tolerance
2.2 Control Access to NFS Shared Folders
2.3 Setting Up the Shared Folder
3 EMS Docker image
3.1 Creating the Base Docker Image
3.2 Running the Docker Image Create Script
3.3 Extending the Base Docker Image
3.3.1 Provisioning FTL Client Libraries to Use the Corresponding Transports
3.3.2 Provisioning Custom JAAS Authentication or JACI Authorization Modules
3.4 Hosting the Image
4 Kubernetes Setup
4.1 Sizing EMS in the Kubernetes Cluster
4.2 On Premise Kubernetes Configuration
4.2.1 Provisioning the NFS Shared Folder
4.2.2 EMS Server Template for NFS
4.2.3 Health Checks: Liveness and Readiness Probes
4.2.4 Creating a Deployment and Service
4.2.5 Stopping or Deleting an EMS Server
4.2.6 EMS Server Configuration
4.2.7 Connecting to the EMS Server pod
4.3 EMS Kubernetes Deployment in Cloud Platforms
4.3.1 Storage Class
4.3.2 EMS Server Template for Cloud Deployments
4.4 Using Helm to Deploy EMS in Kubernetes
4.4.1 Preparing the Environment to Use Helm
4.4.2 Using Helm to Deploy EMS
5 Accessing and Testing EMS on a Cloud Platform
5.1 Internal Access to the EMS Server
5.2 External Access to the EMS Server
5.3 Connection Factory Update
Appendix A: TLS Configuration



1 Overview

1.1 Document Purpose

TIBCO Enterprise Message Service (EMS) version 10.2 can run with a variety of persistent storage
options. This document outlines running EMS with persisted file storage in Kubernetes
environments. Running TIBCO Enterprise Message Service on different Kubernetes container
platforms with persisted file storage involves:
● Creating a Docker® image embedding EMS and hosting it in a Docker registry
● Preparing a shared folder on NFS for on premise storage configurations, if required
● Provisioning Persisted volume storage for cloud platforms, if required
● Configuring and creating EMS containers based on the EMS Docker image to run in
Kubernetes
● Optionally, EMS can be deployed in Kubernetes via Helm

1.2 Supported Versions

The steps described in this document are supported for the following versions of the products and
components involved:
● TIBCO EMS 10.2.1 and later
● Docker Community or Enterprise Edition; the most recent version is recommended.
● Kubernetes 1.2x or Red Hat OpenShift Container Platform 4.7; the latest versions of the
container platform are recommended.
● Helm 3.92 or later

1.3 Prerequisites

The reader of this document should be familiar with:


● Docker concepts
● Kubernetes Container Platform administration
● TIBCO EMS configuration
● NFSv4 (optional)
● Helm Chart configuration (optional)

1.4 Prepare the Preliminary Environment

The following infrastructure should already be in place:


● A machine equipped for building Docker images (Linux or MacOS)
● A Docker registry. Can be on premise, in AWS, Azure, or GCP
● The following software must already be downloaded to the Linux or macOS machine
equipped for building Docker images.



Note: All software must be for Linux!
● TIBCO Enterprise Message Service 10.2.1 installation package downloaded to a directory
● The ems_10.2_files_kubernetes.zip downloaded and unzipped to a directory. The directory
must be readable and writable.
● A Kubernetes Container Platform cluster. Kubernetes platform can be Azure Kubernetes
Service (AKS), Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine
(GKE), SUSE Rancher, Red Hat OpenShift (OC), or a generic on premise Kubernetes
cluster.
● A shared folder on an NFSv4 server for Files with on premise environments.
● A persistent volume (PV) is required for all other configurations. SSD-backed storage is
recommended.
● Install Docker on the workstation to build the TIBCO EMS image.
● Install the kubectl command-line tool on a workstation to manage and deploy applications
to Kubernetes.
● Install Helm on the workstation, if desired (optional).



2 Fault Tolerance and Shared Folder Setup

With on premise environments, NFSv4 is required to support EMS in containers. NFSv4 should be
used with Red Hat OpenShift, SUSE Rancher, or generic Kubernetes on premise environments.
However, in cloud Kubernetes deployments (AKS, EKS, GKE), NFSv4 is not recommended; the
provider's Kubernetes persisted storage solutions should be used. This document does not provide
information on setting up NFSv4 in cloud deployments, nor third-party persisted storage for on
premise deployments. A persistent volume (PV) is always required, no matter what the storage
option is.

Note: It is also possible to use the default provisioned storage class for the Kubernetes
environment. However, this is usually not suitable for EMS, as the data is not retained: every time
the EMS pod is stopped and started, all data will be lost. Check with your Kubernetes
administrator for other options for on premise environments.

2.1 Fault Tolerance

A traditional EMS server configured for fault tolerance relies on its state being shared by a primary
and a secondary instance, one being in the active state while the other is in standby, ready to take
over. The shared state relies on the server store and configuration files to be located on a shared
storage such as a SAN or a NAS using NFS.
By contrast, the fault tolerance model used by EMS in Kubernetes relies on the Kubernetes restart
mechanisms. Only one EMS server instance is running and, in case of a server failure, will be
restarted inside its container. In case of a failure of the container or the corresponding cluster node,
the cluster will recreate the container, possibly on a different node, and restart the EMS server
there.
Note: In cloud environments, cluster nodes may be in different zones, and the storage may not be
available. In these environments, the use of FTL is highly recommended to ensure data persistence
and replication.
Within the container, the health of the EMS server is monitored by a health check probe for:
● liveness
● readiness
For more information on the probes, see section 4.2.3.

2.2 Control Access to NFS Shared Folders

Note: If NFS is not being used for storage in on premise environments, this section and the following
section can be skipped.
You can control access to the NFS shared folders using User and Group IDs.
Depending on how your NFS server is configured, programs accessing shared folders may have to
run with a specific user ID (uid) and group ID (gid).
While you can control the uid of a container through a field called runAsUser, controlling its gid
is not possible in older versions of OpenShift, such as version 3.11. If your NFS setup requires
controlling the gid used by the EMS server, a workaround consists of creating a specific user and
group in the EMS Docker image (see section 3.1 below) and setting its uid and gid to the desired
values.
As a result, an EMS server running in a container started from that image will access its store, log,
and configuration files with the expected uid and gid.
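On current Kubernetes and OpenShift versions, the pod securityContext can also set the group
directly. The following is a minimal sketch, assuming a recent cluster; the 1000 values are
placeholders chosen to match the example in section 3.2, not required settings:

securityContext:
  runAsUser: 1000    # uid the EMS server process runs as
  runAsGroup: 1000   # primary gid of the process (placeholder value)
  fsGroup: 1000      # supplemental gid applied to mounted volumes such as the NFS share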

2.3 Setting Up the Shared Folder

● Log on to a machine that can access the NFS shared folder with the user account meant to
be used by the EMS server.
● Create the shared folder.
For example, ~/ems/shared.
● Modify the permissions to your requirements.
For example, 750 (rwxr-x---).
Example:
> mkdir -p ~/ems/shared
> chmod -R 750 ~/ems/shared
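For reference, a minimal NFSv4 export of that folder on the NFS server might look like the
following sketch; the export path matches the example in section 4.2.1, while the client range and
options are placeholders to be replaced by the settings your environment requires:

# Hypothetical /etc/exports entry on the NFS server
/vol/home/user/ems/shared 10.98.128.0/24(rw,sync,no_subtree_check)

> exportfs -ra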



3 EMS Docker image

3.1 Creating the Base Docker Image

The content of the container that will run in Kubernetes derives from a Docker image that first
needs to be created and then hosted in a Docker registry.
To create an EMS Docker image, use the tibemsfilescreateimage script on a machine
equipped for building Docker images.
Note: The tibemsfilescreateimage script provided with the ems_10.2_files_kubernetes.zip
must be used.
This script needs the location of the following software packages, which it installs into the image:
● EMS installation package
● EMS hotfixes (optional)
● The Java package (optional)

3.2 Running the Docker Image Create Script

Once the necessary EMS installation package and optional packages are available, the
tibemsfilescreateimage script can be run to create the Docker image.
This script also lets you choose whether to save the image as an archive, and creates a user and
group set to the required uid and gid values.
The following command creates a Docker image based on the EMS 10.2 Linux installation
package, adding a JVM and setting the uid and gid to 1000:
> tibemsfilescreateimage TIB_ems_10.2.1_linux_x86_64.zip \
-j <JRE installation package>.tar.gz \
-u 1000 \
-g 1000

The following example illustrates how you can experiment with that Docker image after it has been
built. This command creates a sample EMS server folder hierarchy and configuration in the
current directory and starts the corresponding server:
> docker run -p 7222:7222 -v `pwd`/test:/shared ems:10.2.1

You can modify the tibemsfilescreateimage script to suit your environment.



3.3 Extending the Base Docker Image

The base Docker image can be extended to include FTL client libraries and custom JAAS
authentication and JACI authorization modules.
3.3.1 Provisioning FTL Client Libraries to Use the Corresponding Transports
1. Copy the FTL client library files to a temporary folder.
2. From the temporary folder, use a Dockerfile based on the example given below to copy
these files into the base Docker image:
FROM ems:10.2.1
COPY --chown=tibuser:tibgroup . /opt/tibco/ems/docker/ftl

> docker build -t ems:10.2.1_ftl .

3. Upon customizing your EMS configuration, make sure to include /opt/tibco/ems/docker/ftl
in the Module Path property.

3.3.2 Provisioning Custom JAAS Authentication or JACI Authorization Modules

1. Copy your custom JAAS or JACI plugin files, including the static configuration files they
may rely on, to a temporary folder.
2. From the temporary folder, use a Dockerfile based on the example given below to copy
these files into the base Docker image:
FROM ems:10.2.1
COPY --chown=tibuser:tibgroup . /opt/tibco/ems/docker/security

> docker build -t ems:10.2.1_security .

3. Upon customizing your EMS configuration, make sure to include the relevant paths to those
files in the Security Classpath, JAAS Classpath, and JACI Classpath properties. Note: This
step can only be completed if a files-based Docker image is being created.
4. Note that the other required files are in their usual location:
/opt/tibco/ems/<version>/bin and /opt/tibco/ems/<version>/lib

For example:
/opt/tibco/ems/docker/security/user_jaas_plugin.jar:/opt/tibco/ems/
10.2/bin/tibemsd_jaas.jar:/opt/tibco/ems/10.2/lib/tibjmsadmin.jar,
etc.

3.4 Hosting the Image

Tag the image to suit your Docker registry location and push it there. Note: If the image is to be
hosted on AKS/EKS/GKE, there may be additional steps required to log in to the registry. See the
specific cloud provider's documentation for details on uploading the Docker image to the
respective registry.
Note: With AKS, it is now possible to attach the Azure Container Registry (ACR) directly to the
Azure Kubernetes cluster, making this a simple step that does not require a secret.



For example:

> docker tag ems:10.2.1 docker.company.com/path/ems:10.2.1


> docker push docker.company.com/path/ems:10.2.1
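When using AKS with ACR, attaching the registry to the cluster can be done with the Azure CLI.
The following is a sketch; the registry, cluster, and resource group names are placeholders:

> az acr login --name myregistry
> az aks update --name mycluster --resource-group myresourcegroup --attach-acr myregistry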



4 Kubernetes Setup

As previously mentioned, TIBCO Enterprise Message Service can run on virtually all Kubernetes
Container Platforms. These include generic on premise Kubernetes, on premise or cloud versions
of Red Hat OpenShift, Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service
(EKS), and Google Kubernetes Engine (GKE). Though not tested, EMS should also work with
SUSE Rancher and Tanzu.
The EMS deployment is similar across these container platforms, with differences mainly in the
persisted storage.
This section provides details on configuring TIBCO EMS with files-based storage for EMS in
Kubernetes, highlighting the differences between container platforms.

4.1 Sizing EMS in the Kubernetes Cluster

A new or existing Kubernetes cluster can be used to deploy EMS. In general, two (2) Kubernetes
nodes are required for EMS. For a small EMS configuration, each node requires a minimum of
2 cores and 8+ GB of RAM for EMS, depending on usage. The system resource requirements for
EMS in Kubernetes are similar to those of an EMS server running on bare metal. The storage
request for EMS is set to 5 GB.
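As an illustration only, that sizing could be expressed as container resource requests in the
deployment or statefulset template; the values below are examples taken from the guidance above,
not part of the shipped templates:

resources:
  requests:
    cpu: "2"       # minimum of 2 cores per node for a small configuration
    memory: "8Gi"  # 8+ GB of RAM depending on usage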

4.2 On Premise Kubernetes Configuration

On premise configurations are usually OpenShift, Rancher, Tanzu, or a generic Kubernetes cluster.
This section will outline the requirements for deploying EMS in an on premise cluster.

4.2.1 Provisioning the NFS Shared Folder


When an on premise Kubernetes is used, a storage resource is provisioned by the cluster
administrator through a Persistent Volume (PV), which should be of type NFSv4. There may be
other third party solutions, but these are not covered as part of this document. A project can then
claim that resource through a Persistent Volume Claim (PVC). That claim will eventually be
mounted as a volume inside containers.

We will create one PV and one PVC at the same time since these are meant to be bound together.
Modify the nfs-pv-pvc.yaml file for your setup:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-ems
  annotations:
    # Should be replaced by spec.mountOptions in the future
    volume.beta.kubernetes.io/mount-options: soft (1)
spec:
  capacity:
    storage: 5Gi (2)
  accessModes:
    - ReadWriteMany
  nfs:
    path: /vol/home/user/ems/shared (3)
    server: 10.98.128.50 (4)
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: claim-nfs-ems
    namespace: ems-project (5)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-nfs-ems
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi (2)
  volumeName: pv-nfs-ems

(1): Optional comma-separated list of NFS mount options used when the PV is mounted on a
cluster node. Note: soft mount must be retained.
(2): Storage capacity for EMS. Default is 5 Gi.
(3): The path that is exported by the NFS server. In this example, we want it to match the
~/ems/shared folder created in section 2.3.
(4): The host name or IP address of the NFS server.
(5): This needs to match the name of the namespace. It can be default if there is no specific
namespace. If a namespace is not required, the line can be removed.

Create the PV and PVC:


> kubectl apply -n <namespace name> -f nfs-pv-pvc.yaml

You can check the result this way:


> kubectl get pv,pvc

Note: the same PV/PVC can be used by multiple pods within the same project.

Creating the PV/PVC is done once for the lifetime of the project.

4.2.2 EMS Server Template for NFS


An EMS server container using NFSv4 is created in the Kubernetes cluster through the tibemsd-
nfs.yaml sample template. This template includes sections that define a deployment and a set of
services. The template is provided in the ems_10.2_files_kubernetes.zip.
4.2.2.1 Service Objects and EMS Client URLs
Two service objects are created that expose the EMS server listen port and the EMS service probe
port.


The services defined in tibemsd-nfs.yaml are of type NodePort, which means that the
corresponding port numbers will be accessible through all nodes of the cluster.

For example, if your cluster runs on two nodes called node1 and node2 that can be addressed by
those host names, and if you have exposed your EMS server through a service using port number
30722, EMS clients running outside the cluster will be able to access it either through the
tcp://node1:30722 or tcp://node2:30722 URL, regardless of the node where the container
is actually running. This works by virtue of each node proxying port 30722 into the service.

EMS clients running inside the cluster will be able to access the EMS server either in the fashion
described above or through its service name. Assuming the service name is emsserver and the
port still is 30722, that amounts to using the tcp://emsserver:30722 URL.

To ensure automated EMS client fault-tolerance failover, clients must connect with FT double
URLs. Using the example above: tcp://node1:30722,tcp://node1:30722 from outside the
cluster or tcp://emsserver:30722,tcp://emsserver:30722 from inside the cluster. For
the first form, since all nodes will proxy port 30722 into the service, repeating the same node name
twice fits our purpose. The connection factories in the sample EMS server configuration generated
by default upon creating a container illustrate that pattern. Should the EMS server or its container
fail, clients will automatically reconnect to the same URL once the server has been restarted.

The EMS probe port is also now exposed, and can be accessed in a similar manner as the listen
port, except it will use http, rather than tcp. So, using the above example with the probe port on
30726, the EMS server can be accessed at http://emsserver:30726/isReady to verify
availability from within the cluster.
You can use types of service other than NodePort if they fit your requirements.
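For illustration, a NodePort service exposing the EMS listen port could look like the following
sketch; refer to tibemsd-nfs.yaml for the actual definitions shipped in the package. The names and
port numbers follow the examples used in this section:

apiVersion: v1
kind: Service
metadata:
  name: emsserver
  labels:
    name: emsserver
spec:
  type: NodePort
  ports:
  - name: tibemsd-port
    nodePort: 30722     # EMS_PUBLIC_PORT
    port: 30722
    protocol: TCP
    targetPort: 7222    # EMS listen port inside the container
  selector:
    name: emsserver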
4.2.2.2 Deployment Object
A deployment includes the definition of a set of containers and the desired behavior in terms of
number of replicas (underlying ReplicaSet) and deployment strategy.
Note: Only the Docker image and registry location are required to be modified. All others are
optional.

Key items:

kind: Deployment

spec:
  replicas: 1 (1)

  strategy:
    type: Recreate

  template:

    spec:
      containers:
      - name: tibemsd-container
        image: <Name and location of the Docker registry> (2)
        imagePullPolicy: Always (3)
        env: (4)
        - name: EMS_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: EMS_PUBLIC_PORT
          value: "30722" (5)
        - name: EMS_SERVICE_NAME
          value: "emsserver" (6)
        - name: EMS_PROBE_PORT
          value: "7220" (7)
        - name: EMS_PUBLIC_PROBE_PORT
          value: "30726" (8)
        args:
        - 'file'
        livenessProbe:

        readinessProbe:

        ports:
        - containerPort: 7222 (9)
          name: tibemsd-tcp
          protocol: TCP

        securityContext:
          runAsUser: 1000 (10)

        volumeMounts:
        - mountPath: /shared (11)
          name: tibemsd-volume (12)

      restartPolicy: Always (13)

      volumes:
      - name: tibemsd-volume (12)
        persistentVolumeClaim:
          claimName: claim-nfs-ems (14)
(1): The number of replicated pods: 1, since we want a single instance of the EMS server. This
should not be changed.
(2): The location and name of the Docker registry. This must be updated
(3): Determines if the EMS Docker image should be pulled from the Docker registry prior to
starting the container.
(4): Environment variables that will be passed to the container.
(5): 30722 is the environment variable value for the EMS_PUBLIC_PORT. If this value is
changed, it must be changed throughout the file.
(6): emsserver is the environment variable value for the EMS_SERVICE_NAME. If this value is
changed, it must be changed throughout the file.
(7): 7220 is the environment variable value for the EMS_PROBE_PORT. If this value is
changed, it must be changed throughout the file.
(8): 30726 is the environment variable value for the EMS_PUBLIC_PROBE_PORT. If this
value is changed, it must be changed throughout the file.
(9): 7222 is the EMS container port value. If this value is changed, it must be changed
throughout the file.
(10): The uid the container will run as.
(11): The path where our NFS shared folder will be mounted inside of the container.
(12): The internal reference to the volume defined here.
(13): The pod restart policy: Set so that the kubelet will always try to restart an exited container. If
the EMS server stops or fails, its container will exit and be restarted.
(14): The name of the PVC created by the cluster administrator for NFS. Must be the same as used
in the nfs-pv-pvc.yaml

4.2.3 Health Checks: Liveness and Readiness Probes


Refer to the Kubernetes documentation for a description of health checks.
For an EMS server container, a liveness health check helps detect when an EMS server is not
running. When this health check fails a number of times in a row, the EMS server container is
restarted.
A readiness health check helps detect when an EMS server that is up and running is not in the
active state. When this health check fails a number of times in a row, the EMS server's endpoints
are removed from the corresponding service, making the server unreachable. As it may or may not
fit your operations, it is up to you to decide whether you need the readiness health check. If it is
not relevant to you, feel free to remove it from the template.
Note: If removed, the service object for the probe can also be removed.
The sample probes are configured in the deployment object (see previous section):

...
livenessProbe:
  httpGet:
    path: /isLive
    port: probe-tcp
  initialDelaySeconds: 1 (1)
  timeoutSeconds: 5 (2)
  periodSeconds: 6 (3)
readinessProbe:
  httpGet:
    path: /isReady
    port: probe-tcp
  initialDelaySeconds: 1 (1)
  timeoutSeconds: 5 (2)
  periodSeconds: 6 (3)
...

(1): Number of seconds after the container has started before the probe is initiated.
(2): Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
(3): How often (in seconds) to perform the probe.
Defaults to 10 seconds. Minimum value is 1.



4.2.4 Creating a Deployment and Service
1. Edit the tibemsd-nfs.yaml template and override the default values, if needed.
2. Create a deployment and service with an EMS server using the modified template.
For example:
> kubectl apply -f tibemsd-nfs.yaml

You can verify the results using the following commands:


> kubectl get --selector name=emsserver all
> kubectl describe deploy/emsserver
> kubectl describe svc/emsserver
> kubectl describe svc/emsprobe

4.2.5 Stopping or Deleting an EMS Server


To stop an EMS server without deleting it, use the kubectl scale operation to set its number of
replicas to 0.

For example:
> kubectl scale --replicas=0 deploy emsserver

To restart this EMS server, set its number of replicas back to 1:


> kubectl scale --replicas=1 deploy emsserver

To delete an EMS server deployment and service entirely, use the kubectl delete operation.
For example:
> kubectl delete -f tibemsd-nfs.yaml

The corresponding pod and ReplicaSet will also be deleted. The PVC and PV will not be deleted,
nor will the corresponding data. To delete the data, PV, and PVC, use the following:
> kubectl delete pvc,pv --all

4.2.6 EMS Server Configuration


As mentioned in section 3.1, running a container off of the EMS Docker image creates a default
EMS server folder hierarchy and configuration. In a Kubernetes cluster, the configuration will be
created under ems/config/emsserver.json in the NFS shared folder if absent.

This is handled by the tibems.sh script embedded in tibemsfilescreateimage and invoked through
the Docker image ENTRYPOINT. You can either alter tibems.sh or directly provision your own
configuration files to suit your needs.

4.2.7 Connecting to the EMS Server pod


The EMS server logs and configuration can be accessed directly through the kubectl exec command
using the EMS server pod name. The default name is emsserver-0. This can be useful for viewing
the logs, modifying the configuration file, and so on.
> kubectl exec -it emsserver-0 -- /bin/bash
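As a sketch, the server's configuration can also be copied out of (or back into) the pod; the path
below assumes the default /shared volume mount and the ems/config/emsserver.json layout
described in section 4.2.6:

> kubectl cp emsserver-0:/shared/ems/config/emsserver.json ./emsserver.json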



4.3 EMS Kubernetes Deployment in Cloud Platforms

Deploying EMS in AKS/EKS/GKE differs from on premise deployments: a PV/PVC provided by
the cloud provider is used, EMS runs as a Kubernetes statefulset rather than a Kubernetes
deployment, and LoadBalancer services are configured for external access.

4.3.1 Storage Class


A persisted volume is needed for EMS. The use of the persisted volume differs between a files-based
EMS server and an FTL/AS based EMS server. However, the creation of the PV is the same no
matter what persisted storage is used for EMS, and requires the creation of a new storageclass in
the Kubernetes cluster. This section outlines how to create a storageclass in AKS, EKS, and
GKE. Examples of the yaml files to create the storageclass in each of the different Kubernetes
deployments are provided in the ems_10.2_files_kubernetes.zip under the respective provider name
(AKS/EKS/GKE).
4.3.1.1 AKS Storage Class
The ems-aks-storage.yaml file creates a managed Premium_LRS storageclass by default. In
Azure, other types of disks are offered, such as Standard (HDD) or Ultra. See the Azure
documentation for details. No changes are required to the file unless you want to change the disk
type or other settings offered by Azure. Below is an example of ems-aks-storage.yaml. This can be
used with any of the EMS persisted storage types.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ems-ssd
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed

To deploy in AKS, use:


> kubectl apply -f ems-aks-storage.yaml
4.3.1.2 EKS Storage Class
In previous releases of EKS, special drivers for persisted volumes (PV) were not required. Current
releases (1.23 and newer) of EKS/Kubernetes do require an EBS CSI driver. This section describes
how to add the add-on driver.

• Before creating the aws-ebs-csi-driver add-on, the following two links are required for creating
the IAM role for service accounts. First, complete Steps 1 through 4. Once completed, there
should be a role named ${cluster_1}-AmazonEKS_EBS_CSI_DriverRole.

https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

• Step 5 is required for creating StorageClass, PV and PVC since Kubernetes 1.23.
https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html

• Below are the steps for creating the add-on for the EKS cluster (region_1/cluster_1). Remember
to change XXXX to your AWS account, YYYY to your cluster name, and ZZZZ to the AWS
region. The script AKD_Kubernetes/3.4/kubernetes/zookeeper/EKS/ebs-csi-setup.sh is also
provided to do the setup. Note: Modification may be necessary.

# Step 1 ~ 4

cluster_1="YYYY"
region_1="ZZZZ"

$ oidc_id=$(aws eks describe-cluster --region ${region_1} --name ${cluster_1} --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)

$ aws iam list-open-id-connect-providers --region ${region_1} | grep $oidc_id | cut -d "/" -f4
** If output is returned, then you already have an IAM OIDC provider for your cluster and you can skip the next step. If no output is returned, then you must create an IAM OIDC provider for your cluster.

$ eksctl utils associate-iam-oidc-provider --region ${region_1} --cluster ${cluster_1} --approve

$ eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --region ${region_1} --cluster ${cluster_1} --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --role-only --role-name ${cluster_1}-AmazonEKS_EBS_CSI_DriverRole

# Step 5, XXXX is the AWS Account

$ aws eks create-addon --region ${region_1} --cluster-name ${cluster_1} --addon-name aws-ebs-csi-driver --addon-version v1.16.0-eksbuild1 --service-account-role-arn arn:aws:iam::XXXX:role/${cluster_1}-AmazonEKS_EBS_CSI_DriverRole
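Once the add-on has been created, its status can be checked; a sketch using the same substitutions
(the add-on is typically reported as ACTIVE when ready):

$ aws eks describe-addon --region ${region_1} --cluster-name ${cluster_1} --addon-name aws-ebs-csi-driver --query "addon.status" --output text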

Once the above steps are completed, the storage class can be created. The ems-eks-storage.yaml
file creates the persisted storage using AWS's gp2 storage class. There are other options
available on AWS, such as gp3, io1, and io2. See the Amazon documentation for details. No
changes are required to the file unless you want to change the disk type or other settings offered by
AWS. Below is an example of ems-eks-storage.yaml. This can be used with any of the EMS
persisted storage types.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ems-ssd
  labels:
    k8s-addon: storage-aws.addons.k8s.io
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
parameters:
  type: gp2

To deploy in EKS, use:


> kubectl apply -f ems-eks-storage.yaml

4.3.1.3 GKE Storage Class


The ems-gke-storage.yaml file will create the persisted storage with pd-ssd (Zonal/Regional)
storage. Other storage options are available. See the Google Cloud documentation at
https://cloud.google.com/compute/docs/disks for more details. The default persisted storage will
encrypt the data at rest and provide replication within the region. No changes are required to the
file unless you want to change the disk type or other settings offered by Google.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ems-ssd
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
  type: pd-ssd

To deploy in GKE, use:


> kubectl apply -f ems-gke-storage.yaml

4.3.2 EMS Server Template for Cloud Deployments


As mentioned, the EMS server template for cloud deployments differs from the on premise EMS
server template. It uses a statefulset rather than a deployment and adds LoadBalancer services for
external access. The EMS server containers are created in a Kubernetes cluster through this
template. The template is provided as part of the ems_10.2_files_kubernetes.zip.
4.3.2.1 Service Objects and EMS Client URLs
Four services are created that expose the EMS server listen port and the EMS service probe port.
Two of these services are NodePorts, the same as documented in section 4.2.2.1, used for access
within the Kubernetes cluster. The two additional services are for external access from outside the
cloud platform, such as from an on premise EMS client to the EMS server.

apiVersion: v1
kind: Service
metadata:
  annotations:
    description: Exposes an EMS server listen port both inside and outside the cluster.
  labels:
    name: emsserverlb
  name: emsserverlb
spec:
  type: LoadBalancer
  ports:
  - name: tibemsdlb-port
    nodePort: 30724 (1)
    port: 30724 (1)
    protocol: TCP
    targetPort: 7222 (2)
  selector:
    name: emsserver
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  loadBalancerSourceRanges:
  - <your trusted IP range in the form of 0.0.0.0/0> (3)
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    description: Exposes an EMS server probe port both inside and outside the cluster.
  labels:
    name: emsprobelb
  name: emsprobelb
spec:
  type: LoadBalancer
  ports:
  - name: tibemsprobelb-port
    nodePort: 30728 (4)
    port: 30728 (4)
    protocol: TCP
    targetPort: 7220 (5)
  selector:
    name: emsserver
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  loadBalancerSourceRanges:
  - <your trusted IP range in the form of 0.0.0.0/0> (3)
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:

(1): The NodePort value for the external port to access EMS. If this value is changed, it must be
changed in all locations.
(2): The internal EMS port value. If this value is changed, it must be changed throughout the file
to match.
(3): The trusted source IP range to access EMS. If more than one, separate with commas. EX:
172.1.1.1/32,169.1.1.1/32
(4): The NodePort value for the external port to access the EMS probe port. If this value is
changed, it must be changed in all locations.
(5): The internal EMS probe port value. If this value is changed, it must be changed throughout
the file to match.

4.3.2.2 Statefulset Object


The statefulset for the EMS deployment in the different cloud platforms is similar to the
deployment for EMS with NFS. The main differences are the services defined above and the setup
for the persistent volumes. The tibemsd-cloud.yaml is used to deploy the EMS server on cloud
platforms. All changes are similar to the changes discussed in previous sections, except for the size
of the volumes used for EMS.

volumeMounts:
- mountPath: /shared
  name: tibemsd-volume
volumeClaimTemplates:
- metadata:
    name: tibemsd-volume
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: ems-ssd
    resources:
      requests:
        storage: <storage size for EMS> (1)

(1): The storage size for EMS. This will differ based on what is used for storage. For file based
persisted storage, this should be a value such as 5Gi

After all changes are made, the EMS statefulset can be deployed using the following example.

> kubectl apply -f tibemsd-cloud.yaml

You can verify the results using the following commands:


> kubectl get --selector name=emsserver all
> kubectl describe statefulset/emsserver
> kubectl describe svc/emsserver
> kubectl describe svc/emsprobe

Stopping and deleting the EMS server on a cloud platform is the same as shown in section 4.2.5.



4.4 Using Helm to Deploy EMS in Kubernetes

TIBCO Enterprise Message Service (EMS) can now be deployed on Kubernetes using Helm
Charts. This section describes how to deploy EMS using Helm Charts. All Helm charts are part of
the ems_10.2_files_kubernetes.zip and are located in the helm directory.

4.4.1 Preparing the Environment to Use Helm


Helm is an alternate way to deploy containers in Kubernetes. The Kubernetes environment must
exist and be sized appropriately for the deployment.

Follow sections 3 through 4.3 before continuing. The ems image must exist, and have been
tagged/pushed to the appropriate registry.

All Kubernetes platforms, whether private on premise or in a public cloud, have a different setup for
the persistent storage used by the storageclass. Use section 4.3 to configure the storage classes.
Note: On-premise persistent storage can vary. Please work with the Kubernetes administrator for
your environment to determine what pv and pvc should be configured and used.

4.4.2 Using Helm to Deploy EMS


Once the Kubernetes nodes are ready and the storage class exists, EMS can be deployed with
Helm.

Yaml files similar to those used for deployment in Kubernetes are used for Helm, with one
exception: almost all values have been turned into variables that Helm can change at deployment.
4.4.2.1 Helm Values
The yaml files used to deploy EMS have variables for most of the commonly set values. These
include variables for the Docker image registry, port values, service names, and so on. The following
example shows all variables available for modification for EMS.

# Copyright (c) 2023 Cloud Software Group, Inc. All Rights Reserved. Confidential and Proprietary.
#
# emserver:
logLevel: "info"
imageName: "ems:latest"
ems_public_port: "30722"
ems_public_probe_port: "30726"
ems_lb_port: "30724"
probe_lb_port: "30728"
ems_storage_size: "10Gi"
service_name: "emsservers"
listen_port: "7222"
probe_port: "7220"
probe_name: "emsprobe"
storageClass: "ems-ssd"
# lbsourcerange: "0.0.0.0/0"
Figure 1 - Values.yaml example



All values listed are the default values used by Helm in the different charts. All values can be
changed directly in the values.yaml file, or by setting the variable value when installing the chart.
4.4.2.2 Helm Charts
EMS uses a single chart to start EMS. If value changes need to be made on a temporary basis,
setting the values in a script can be helpful.
To make the install simple, a bash script, install-ems.sh, is provided. The following example shows
the script.

EMS_IMAGE_NAME="ems:latest" 1)
NAMESPACE=tibco 2)
settings="imageName=<docker registry>/$EMS_IMAGE_NAME,ems_storage_size=5Gi,lbsourcerange=<LB Source range in form of 0.0.0.0/0>" 3)
helm upgrade --install --namespace $NAMESPACE emsserver Charts/ems --set $settings
Figure 2 - Install-ems.sh script example

1) The name of the EMS Docker image


2) The Kubernetes namespace to use
3) All variables to be set from the defaults. In this example: the registry for the Docker image,
the lbsourcerange for external access, and the EMS storage size. This line can contain
any of the variables shown in the previous figure.

The script is not required, and all Helm Chart installs can be done with any of the supported Helm
commands.
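For example, an equivalent deployment could be performed directly with Helm, overriding
individual values on the command line; the registry path and namespace below are placeholders:

> helm upgrade --install --namespace tibco emsserver Charts/ems \
    --set imageName=docker.company.com/path/ems:10.2.1,ems_storage_size=5Gi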
4.4.2.3 Installing the Helm Charts
To install the Helm Charts, run the install-ems script. It should only take a few seconds to run.

./install-ems.sh
Release "emsserver" does not exist. Installing it now.
NAME: emsserver
LAST DEPLOYED: Tue Jun 27 11:52:31 2023
NAMESPACE: tibco
STATUS: deployed
REVISION: 1
TEST SUITE: None

Use "kubectl get pods,svc" to verify that all pods and the internal and external services are running
and ready.

NAME            READY   STATUS    RESTARTS   AGE
pod/emsserver-0   1/1     Running   0          2m37s

NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)               AGE
service/emsprobelb    LoadBalancer   10.0.130.216   20.246.184.39   30728:30728/TCP       2m37s
service/emsprobe      NodePort       10.0.123.160   <none>          30726:30726/TCP       2m37s
service/emsserverlb   LoadBalancer   10.0.11.96     20.84.3.202     30724:30724/TCP       2m37s
service/emsserver     NodePort       10.0.81.57     <none>          30722:30722/TCP       2m37s
service/emsservers    ClusterIP      None           <none>          30722/TCP,30726/TCP   2m37s

To access and test the new EMS deployment, follow Section 5.

To uninstall the chart use:

> helm -n <namespace> delete emsserver


release "emsserver" uninstalled

Note: The storageclass and persistent volumes need to be uninstalled manually.



5 Accessing and Testing EMS on a Cloud Platform

When tibemsd-cloud.yaml or the Helm chart is applied to the Kubernetes cluster on a cloud platform
(AKS/EKS/GKE), four Kubernetes services are created, as shown below:

emsserver     NodePort       10.4.12.85    <none>          30722:30722/TCP   4d18h
emsserverlb   LoadBalancer   10.4.8.147    20.120.78.64    30724:30724/TCP   4d17h
emsprobe      NodePort       10.4.12.190   <none>          30726:30726/TCP   4d18h
emsprobelb    LoadBalancer   10.4.167.10   20.120.79.212   30728:30728/TCP   4d17h

The emsserver service is a NodePort service allowing internal access, while the emsserverlb
service provides a LoadBalancer for external access to the EMS server. The emsprobe service is a
NodePort service allowing the probe to be accessed internally, while the emsprobelb service
provides a LoadBalancer for external access to the EMS probe port to check isLive and isReady.

5.1 Internal Access to the EMS Server

The EMS server running in a cloud platform can be accessed via the NodePort Kubernetes service
and port. The default is tcp://emsserver:30722. Any Kubernetes process running in the same cluster
and namespace can access the EMS server using this URL. The following example shows an EMS
client running in a cloud platform connecting to the EMS server running in the cloud platform via
the NodePort.

> ./tibemsadmin -server emsserver:30722

TIBCO Enterprise Message Service Administration Tool.


Copyright 1997-2022 by TIBCO Software Inc.
All rights reserved.

Version 10.2.1 V1 2022-11-03

Login name (admin):


Password:
Connected to: tcp://emsserver:30722
Type 'help' for commands help, 'exit' to exit:
tcp://emsserver:30722>

5.2 External Access to the EMS Server

The LoadBalancer Kubernetes service, emsserverlb, provides external access to the EMS server via
the LoadBalancer IP address and port. Access is also restricted to the trusted IP range defined in
section 4.3.2.1. The following example shows accessing the EMS server from an external source.

> tibemsadmin -server tcp://20.72.183.42:30724



TIBCO Enterprise Message Service Administration Tool.
Copyright 1997-2022 by TIBCO Software Inc.
All rights reserved.

Version 10.2.1 V1 2022-11-03

Login name (admin):


Password:
Connected to: tcp://20.72.183.42:30724
Type 'help' for commands help, 'exit' to exit:
tcp://20.72.183.42:30724>
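The external IP address used above is the one assigned to the emsserverlb LoadBalancer service.
As a sketch, it can be retrieved with kubectl; the jsonpath expression assumes an IP-based load
balancer, and some providers return a hostname instead:

> kubectl get svc emsserverlb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'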

5.3 Connection Factory Update

When the EMS server is created in the Kubernetes cluster, the default connection factories are all
created using emsserver as the hostname. While this works fine for internal EMS connections, these
connection factories will fail for external connections. One or more new EMS connection factories
should be created to allow connection factory/JNDI support from external sources. Using the
connection string from the example above, a new connection factory can be created.
> ./tibemsadmin -server tcp://20.120.78.64:30724

TIBCO Enterprise Message Service Administration Tool.


Copyright 1997-2022 by TIBCO Software Inc.
All rights reserved.

Version 10.2.1 V1 2022-11-03

Login name (admin):


Password:
Connected to: tcp://20.120.78.64:30724
Type 'help' for commands help, 'exit' to exit:
tcp://20.120.78.64:30724> create factory externalems generic
url=tcp://20.120.78.64:30724,tcp://10.120.78.64:30724 reconnect_attempt_count=120
reconnect_attempt_delay=5000
ConnectionFactory 'externalems' has been created
tcp://20.120.78.64:30724> show factory externalems
Factory = ConnectionFactory
JNDI Names = "externalems"
URL = tcp://20.120.78.64:30724,tcp://10.120.78.64:30724
ClientID =
Load Balanced = no
reconnect_attempt_count = 120
reconnect_attempt_delay = 5000
tcp://20.120.78.64:30724>

The connection factory can now be used from external EMS clients to connect to the EMS server
running on the cloud platform.

The EMS sample applications can be used to test either internal or external access.
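For example, a quick external test with one of the standard EMS Java samples might look like the
following sketch; the sample name and options come from the EMS distribution, and the URL is
the external LoadBalancer address used in the examples above:

> java tibjmsMsgProducer -server tcp://20.120.78.64:30724 -user admin -queue queue.sample hello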



Appendix A: TLS Configuration
The following topics describe how to modify the EMS server templates and the Docker image
build script so that EMS clients can connect to the server through TLS (formerly SSL).

Note: With EMS 10.X and later, EMS clients should be using TLS 1.3.

Whether an EMS listen port is configured for TCP or TLS makes no difference in terms of
exposing it through a service. However, you need to decide how to provision the corresponding
certificate files.

While these could be placed in the persisted volume or embedded in the EMS Docker image, the
standard practice in the Kubernetes world consists of using secret objects. These are meant to
decouple sensitive information from the pods and can be mounted into containers as volumes
populated with files to be accessed by programs.

In this example, we will assume we want the EMS server to be authenticated by EMS clients. This
involves providing the server with its certificate, private key and the corresponding password,
which we will store inside a secret. We will mount that secret into the container, point the EMS
server configuration to the certificate and private key files and pass the corresponding password to
the server through its -ssl_password command-line option.

Based on the sample certificates that ship with EMS, the files will eventually be made available
inside the container as follows:

/etc/secret/server.cert.pem
/etc/secret/server.key.pem
/etc/secret/ssl_password

A.1 Creating a Secret

To store the server certificate, private key and the corresponding password in a secret, based on the
sample certificates available in the EMS package under ems/<version>/samples/certs:
> cd …/ems/<version>/samples
> kubectl create secret generic tibemsd-secret \
--from-file=server.cert.pem=certs/server.cert.pem \
--from-file=server.key.pem=certs/server.key.pem \
--from-literal=ssl_password=password

Check the result using these commands:


> kubectl describe secret tibemsd-secret
> kubectl get -o yaml secret/tibemsd-secret



A.2 Modifying the Template

The tibemsd-nfs.yaml, the tibemsd-cloud.yaml, or the Helm ems.yaml template needs to be
adjusted to mount the secret as a volume. This involves adding one new entry to the volumes
section in tibemsd-nfs.yaml and another one to the volumeMounts section.

spec:

  template:

    spec:
      containers:
      - name: tibemsd-container

        volumeMounts:
        - mountPath: /shared
          name: tibemsd-volume
        - mountPath: /etc/secret
          name: tibemsd-secret-volume
          readOnly: true

      volumes:
      - name: tibemsd-volume
        persistentVolumeClaim:
          claimName: ${{EMS_PVC}}
      - name: tibemsd-secret-volume
        secret:
          secretName: tibemsd-secret

For the tibemsd-cloud.yaml or helm ems.yaml template, the volumes section must be added,
as shown in the following example.



volumes:
- name: tibemsd-secret-volume
  secret:
    secretName: tibemsd-secret
volumeClaimTemplates:
- metadata:
    name: tibemsd-volume
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: ems-ssd
    resources:
      requests:
A.3 Modifying the tibemsfilescreateimage EMS Docker Image Build Script

In the tibemsd-configbase.json section:

Modify the primary_listens section to use ssl:

"primary_listens":[
  {
    "url":"ssl://7222"
  }
],

Add an ssl section pointing to the certificate files:

"tibemsd":{

"ssl":{
"ssl_server_identity":"/etc/secret/server.cert.pem",
"ssl_server_key":"/etc/secret/server.key.pem"
},

In the tibems.sh section:

The tibemsd_run() function needs to be modified to launch the EMS server with the proper
value for its -ssl_password command-line option:


if [[ \$# -ge 1 ]]; then
    PARAMS=\$*
else
    tibemsd_seed
    PARAMS="-config /shared/ems/config/\$EMS_SERVICE_NAME.json -ssl_password \`cat /etc/secret/ssl_password\`"
fi



A.4 Applying the Modifications

● Regenerate the EMS Docker image, tag it and push it to the Registry (see section 3.4).
● Create a new deployment and service (see section 4.2.4).

You can check the result by connecting to the server with one of the EMS TLS sample clients:

Note: Ensure Java 8 has been updated to the most current version. If not, the following will fail
with an SSL context error.

> java tibjmsSSL -server ssl://emsserver:30722 \
    -ssl_trusted ../certs/server_root.cert.pem

