How To Configure EMS 10.2 On Kubernetes Files
Trademarks
TIBCO, the TIBCO logo, and TIBCO Enterprise Message Service are either registered trademarks or
trademarks of Cloud Software Group in the United States and/or other countries. All other product and
company names and marks mentioned in this document are the property of their respective owners and are
mentioned for identification purposes only.
Content Warranty
The information in this document is subject to change without notice. THIS DOCUMENT IS PROVIDED "AS
IS" AND CSG MAKES NO WARRANTY, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT NOT
LIMITED TO ALL WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. TIBCO Software Inc. shall not be liable for errors contained herein or for incidental or
consequential damages in connection with the furnishing, performance or use of this material.
TIBCO Enterprise Message Service (EMS) version 10.2 can run with a variety of persistent storage
options. This document outlines running EMS with persistent file storage in Kubernetes
environments. Running TIBCO Enterprise Message Service on different Kubernetes Container
Platforms with persistent file storage involves:
● Creating a Docker® image embedding EMS and hosting it in a Docker registry
● Preparing a shared folder on NFS for on-premises storage configurations, if required
● Provisioning Persisted volume storage for cloud platforms, if required
● Configuring and creating EMS containers based on the EMS Docker image to run in
Kubernetes
● Optionally, deploying EMS in Kubernetes via Helm
The steps described in this document are supported for the following versions of the products and
components involved:
● TIBCO EMS 10.2.1 and later
● Docker Community/Enterprise Edition; the most recent version is recommended
● Kubernetes 1.2x or Red Hat OpenShift Container Platform 4.7; the latest version of the
container platform is recommended
● Helm 3.9.2 or later
1.3 Prerequisites
In on-premises environments, NFSv4 is required to support EMS in containers. NFSv4 should be
used with Red Hat OpenShift, SUSE Rancher, or generic Kubernetes environments on premises.
However, in cloud Kubernetes deployments (AKS, EKS, GKE), NFSv4 is not
recommended; the provider's Kubernetes persistent storage solutions should be used instead. This
document does not cover setting up NFSv4 in cloud deployments, nor third-party
persistent storage for on-premises deployments. A persistent volume (PV) is always required no
matter what the storage option is.
Note: It is also possible to use the default provisioned storage class for the Kubernetes
environment. However, these are usually not suitable for EMS, as the data is not retained:
every time the EMS pod is stopped and started, all data is lost. Check with your Kubernetes
administrator for other options in on-premises environments.
A traditional EMS server configured for fault tolerance relies on its state being shared by a primary
and a secondary instance, one being in the active state while the other is in standby, ready to take
over. The shared state relies on the server store and configuration files to be located on a shared
storage such as a SAN or a NAS using NFS.
By contrast, the fault tolerance model used by EMS in Kubernetes relies on the Kubernetes restart
mechanisms. Only one EMS server instance is running and, in case of a server failure, will be
restarted inside its container. In case of a failure of the container or the corresponding cluster node,
the cluster will recreate the container, possibly on a different node, and restart the EMS server
there.
Note: In cloud environments, cluster nodes may be in different zones, and the storage may not be
available. In these environments, the use of FTL is highly recommended to ensure data persistence
and replication.
Within the container, the health of the EMS server is monitored by a health check probe for:
● liveness
● readiness
For more information on the probes, see section 4.2.3.
Note: If NFS is not being used for storage in on-premises environments, this section and the
following section can be skipped.
You can control access to the NFS shared folders using User and Group IDs.
Depending on how your NFS server is configured, programs accessing shared folders may have to
run with a specific user ID (uid) and group ID (gid).
While you can control the uid of a container through a field called runAsUser, controlling its gid
is not possible in older versions of OpenShift, such as version 3.11. If your NFS setup requires
controlling the gid used by the EMS server, a workaround consists of creating a specific user and
group in the EMS Docker image (see section 3.1 below) and setting its uid and gid to the desired
values.
As a result, an EMS server running in a container started from that image will access its store, log,
and configuration files with the expected uid and gid.
● Log on to a machine that can access the NFS shared folder with the user account meant to
be used by the EMS server.
● Create the shared folder.
For example, ~/ems/shared.
● Modify the permissions to your requirements.
For example, 750 (rwxr-x---).
Example:
> mkdir -p ~/ems/shared
> chmod -R 750 ~/ems/shared
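How the folder is exported depends on your NFS server; a minimal sketch for a Linux NFS server,
assuming the export path used later in this document and illustrative export options:
> echo "/vol/home/user/ems/shared *(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
> sudo exportfs -ra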
The content of the container that will run in Kubernetes derives from a Docker image that first
needs to be created and then hosted in a Docker registry.
To create an EMS Docker image, use the tibemsfilescreateimage script on a machine
equipped for building Docker images.
Note: The tibemsfilescreateimage script provided with the ems_10.2_files_kubernetes.zip
must be used.
This script needs the location of the following software packages to install:
● EMS installation package
● EMS hotfixes (optional)
● The Java package (optional)
Once the necessary EMS installation package and optional packages are available, the
tibemsfilescreateimage script can be run to create the Docker image.
This script also lets you choose whether to save the image as an archive and creates a user and
group set to the required uid and gid values.
The following command creates a Docker image based on the EMS 10.2 Linux installation
package, adding a JVM and setting the uid and gid to 1000:
> tibemsfilescreateimage TIB_ems_10.2.1_linux_x86_64.zip \
-j <JRE installation package>.tar.gz \
-u 1000 \
-g 1000
The following example illustrates how you can experiment with that Docker image after it has been
built. The command creates a sample EMS server folder hierarchy and configuration in the
current directory and starts the corresponding server:
> docker run -p 7222:7222 -v `pwd`/test:/shared ems:10.2.1
The base Docker image can be extended to include FTL client libraries and custom JAAS
authentication and JACI authorization modules.
3.3.1 Provisioning FTL Client Libraries to Use the Corresponding Transports
1. Copy the FTL client library files to a temporary folder.
2. From the temporary folder, use a Dockerfile based on the example given below to copy
these files into the base Docker image:
FROM ems:10.2.1
COPY --chown=tibuser:tibgroup . /opt/tibco/ems/docker/ftl
> docker build -t ems:10.2.1_ftl .
3.3.2 Provisioning Custom JAAS Authentication and JACI Authorization Modules
1. Copy your custom JAAS or JACI plugin files, including the static configuration files they
may rely on, to a temporary folder.
2. From the temporary folder, use a Dockerfile based on the example given below to copy
these files into the base Docker image, then rebuild the image (a build example follows this list):
FROM ems:10.2.1
COPY --chown=tibuser:tibgroup . /opt/tibco/ems/docker/security
3. Upon customizing your EMS configuration, make sure to include the relevant paths to those
files in the Security Classpath, JAAS Classpath, and JACI Classpath properties. Note: This
step can only be completed if a files-based Docker image is being created.
4. Note that the other required files are in their usual location:
/opt/tibco/ems/<version>/bin and /opt/tibco/ems/<version>/lib
For example:
/opt/tibco/ems/docker/security/user_jaas_plugin.jar:/opt/tibco/ems/
10.2/bin/tibemsd_jaas.jar:/opt/tibco/ems/10.2/lib/tibjmsadmin.jar,
etc.
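The extended image from step 2 can then be built in the same way as the FTL example; the tag name
below is illustrative:
> docker build -t ems:10.2.1_security .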
Tag the image to suit your Docker registry location and push it there. Note: If the image is to be
hosted on AKS/EKS/GKE, additional steps may be required to log in to the registry. See the
specific cloud provider's documentation for details on uploading the Docker image to the
respective registry.
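For example, with placeholder values for the registry host and repository:
> docker tag ems:10.2.1 <registry host>/<repository>/ems:10.2.1
> docker push <registry host>/<repository>/ems:10.2.1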
Note: With AKS, it is now possible to attach the Azure Container Registry (ACR) directly to the
Azure Kubernetes cluster, making this a simple step that does not require a secret.
As previously mentioned, TIBCO Enterprise Message Service can run on virtually all Kubernetes
Container Platforms. These include generic on premise Kubernetes, on premise or cloud versions
of Red Hat OpenShift, Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service
(EKS), and Google Kubernetes Engine (GKE). Though not tested, EMS should also work with
SUSE Rancher and Tanzu.
The EMS deployment on any of these container platforms is similar, with differences mainly in
the persistent storage.
This section provides details on configuring TIBCO EMS with file-based storage for EMS
in Kubernetes, highlighting the differences between container platforms.
A new or existing Kubernetes cluster can be used to deploy EMS. In general, EMS requires two (2)
Kubernetes nodes. For a small EMS configuration, each node requires a minimum of
2 cores and 8+ GB of RAM for EMS, depending on usage. The system resource requirements for
EMS in Kubernetes are similar to those of an EMS server running on bare metal. The storage
request for EMS is set to 5 Gi.
On-premises configurations are usually OpenShift, Rancher, Tanzu, or a generic Kubernetes cluster.
This section outlines the requirements for deploying EMS in an on-premises cluster.
We will create one PV and one PVC at the same time since these are meant to be bound together.
Modify the nfs-pv-pvc.yaml file for your setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-ems
  annotations:
    # Should be replaced by spec.mountOptions in the future
    volume.beta.kubernetes.io/mount-options: soft (1)
spec:
  capacity:
    storage: 5Gi (2)
  accessModes:
  - ReadWriteMany
  nfs:
    path: /vol/home/user/ems/shared (3)
    server: 10.98.128.50 (4)
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: claim-nfs-ems
    namespace: ems-project (5)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-nfs-ems
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi (2)
  volumeName: pv-nfs-ems
(1): Optional comma-separated list of NFS mount options used when the PV is mounted on a
cluster node. Note: soft mount must be retained.
(2): Storage capacity for EMS. Default is 5 Gi.
(3): The path that is exported by the NFS server. In this example, we want it to match the
~/ems/shared folder created in section 2.3.
(4): The host name or IP address of the NFS server.
(5): This needs to match the name of the namespace. It can be default if there is no specific
namespace. If a namespace is not required, the line can be removed.
Note: the same PV/PVC can be used by multiple pods within the same project.
Creating the PV/PVC is done once for the lifetime of the project.
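The PV and PVC can then be created and verified; for example:
> kubectl apply -f nfs-pv-pvc.yaml
> kubectl get pv,pvc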
For example, if your cluster runs on two nodes called node1 and node2 that can be addressed by
those host names, and if you have exposed your EMS server through a service using port number
30722, EMS clients running outside the cluster will be able to access it either through the
tcp://node1:30722 or tcp://node2:30722 URL, regardless of the node where the container
is actually running. This works by virtue of each node proxying port 30722 into the service.
EMS clients running inside the cluster will be able to access the EMS server either in the fashion
described above or through its service name. Assuming the service name is emsserver and the
port still is 30722, that amounts to using the tcp://emsserver:30722 URL.
To ensure automated fault-tolerance failover, EMS clients must connect with FT double
URLs. Using the example above: tcp://node1:30722,tcp://node1:30722 from outside the
cluster or tcp://emsserver:30722,tcp://emsserver:30722 from inside the cluster. For
the first form, since all nodes will proxy port 30722 into the service, repeating the same node name
twice fits our purpose. The connection factories in the sample EMS server configuration generated
by default upon creating a container illustrate that pattern. Should the EMS server or its container
fail, clients will automatically reconnect to the same URL once the server has been restarted.
The EMS probe port is also now exposed and can be accessed in a similar manner as the listen
port, except that it uses HTTP rather than TCP. So, using the above example with the probe port on
30726, the EMS server can be accessed at http://emsserver:30726/isReady to verify
availability from within the cluster.
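For example, from a pod inside the cluster that has curl available:
> curl http://emsserver:30726/isReady
From outside the cluster, a node address and the public probe port can be used instead, for example
http://node1:30726/isReady.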
You can use types of service other than NodePort if they fit your requirements.
4.2.2.2 Deployment Object
A deployment includes the definition of a set of containers and the desired behavior in terms of
number of replicas (underlying ReplicaSet) and deployment strategy.
Note: Only the Docker image and registry location are required to be modified. All others are
optional.
Key items:
kind: Deployment
…
spec:
  replicas: 1 (1)
  …
  strategy:
    type: Recreate
  …
  template:
    …
    spec:
      containers:
      - name: tibemsd-container
        image: <Name and location of the Docker registry> (2)
        imagePullPolicy: Always (3)
        env: (4)
        - name: EMS_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: EMS_PUBLIC_PORT
          value: "30722" (5)
        - name: EMS_SERVICE_NAME
          value: "emsserver" (6)
        - name: EMS_PROBE_PORT
          value: "7220" (7)
        - name: EMS_PUBLIC_PROBE_PORT
          value: "30726" (8)
        args:
        - 'file'
        livenessProbe:
          …
        readinessProbe:
          …
        ports:
        - containerPort: 7222 (9)
          name: tibemsd-tcp
          protocol: TCP
        …
        securityContext:
          runAsUser: 1000 (10)
        …
        volumeMounts:
        - mountPath: /shared (11)
          name: tibemsd-volume (12)
        …
      restartPolicy: Always (13)
      …
      volumes:
      - name: tibemsd-volume (12)
        persistentVolumeClaim:
          claimName: claim-nfs-ems (14)
(1): The number of replicated pods: 1, since we want a single instance of the EMS server. This
should not be changed.
(2): The location and name of the Docker registry. This must be updated.
(3): Determines if the EMS Docker image should be pulled from the Docker registry prior to
starting the container.
(4): Environment variables that will be passed to the container.
(5): 30722 is the environment variable value for the EMS_PUBLIC_PORT. If this value is
changed, it must be changed throughout the file.
(6): emsserver is the environment variable value for the EMS_SERVICE_NAME. If this value is
changed, it must be changed throughout the file.
(7): 7220 is the environment variable value for the EMS_PROBE_PORT. If this value is
changed, it must be changed throughout the file.
(8): 30726 is the environment variable value for the EMS_PUBLIC_PROBE_PORT. If this
value is changed, it must be changed throughout the file.
(9): 7222 is the EMS container port value. If this value is changed, it must be changed
throughout the file.
(10): The uid the container will run as.
(11): The path where our NFS shared folder will be mounted inside of the container.
(12): The internal reference to the volume defined here.
(13): The pod restart policy: Set so that the kubelet will always try to restart an exited container. If
the EMS server stops or fails, its container will exit and be restarted.
(14): The name of the PVC created by the cluster administrator for NFS. It must be the same as
used in the nfs-pv-pvc.yaml file.
...
livenessProbe:
  httpGet:
    path: /isLive
    port: probe-tcp
  initialDelaySeconds: 1 (1)
  timeoutSeconds: 5 (2)
  periodSeconds: 6 (3)
readinessProbe:
  httpGet:
    path: /isReady
    port: probe-tcp
  initialDelaySeconds: 1 (1)
  timeoutSeconds: 5 (2)
  periodSeconds: 6 (3)
...
(1): Number of seconds after the container has started before the probe is initiated.
(2): Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
(3): How often (in seconds) to perform the probe.
Defaults to 10 seconds. Minimum value is 1.
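Once the manifest has been updated, the deployment and service can be created and checked. A
minimal sketch, assuming the manifest is the tibemsd-nfs.yaml file referenced below:
> kubectl apply -f tibemsd-nfs.yaml
> kubectl get pods,svc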
To stop the EMS server without deleting its deployment, scale the deployment down to zero replicas.
For example:
> kubectl scale --replicas=0 deploy emsserver
To delete an EMS server deployment and service entirely, use the kubectl delete operation.
For example:
> kubectl delete -f tibemsd-nfs.yaml
The corresponding pod and ReplicaSet will also be deleted. The PVC and PV will not be deleted,
nor will the corresponding data. To delete the data, PV, and PVC, use the following:
> kubectl delete pvc,pv --all
Deploying EMS in AKS/EKS/GKE differs from on-premises deployments: a PV/PVC provided by
the cloud provider is used, EMS runs as a Kubernetes statefulset rather than a Kubernetes
deployment, and LoadBalancer services are configured for external access.
• Before creating the aws-ebs-csi-driver add-on, the following two links are required for creating
an IAM role for service accounts. First, complete Steps 1 through 4. Once completed, there
should be a role named ${cluster_1}-AmazonEKS_EBS_CSI_DriverRole.
https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-
• Step 5 is required for creating the StorageClass, PV, and PVC since Kubernetes 1.23.
https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html
• Below are the steps for creating the add-on for the EKS cluster (region_1/cluster_1). Remember to
change XXXX to your AWS account, YYYY to your cluster name, and ZZZZ to the
AWS region. The script AKD_Kubernetes/3.4/kubernetes/zookeeper/EKS/ebs-csi-setup.sh
is also provided to do the setup. Note: Modification may be necessary.
# Step 1 ~ 4
cluster_1="YYYY"
region_1="ZZZZ"
$ aws iam list-open-id-connect-providers --region ${region_1} | grep $oidc_id | cut -d "/" -f4
Note: If output is returned, you already have an IAM OIDC provider for your cluster and you can skip the next step. If no output is
returned, you must create an IAM OIDC provider for your cluster.
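The add-on itself can then be created with the AWS CLI; a sketch assuming the IAM role created in
Steps 1 through 4 (substitute XXXX, YYYY, and ZZZZ as described above):
$ aws eks create-addon --cluster-name ${cluster_1} \
    --addon-name aws-ebs-csi-driver \
    --service-account-role-arn arn:aws:iam::XXXX:role/${cluster_1}-AmazonEKS_EBS_CSI_DriverRole \
    --region ${region_1}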
Once the above steps are completed, the storage class can be created. The ems-eks-storage.yaml
file creates the persistent storage using AWS's GP2 storage class. There are other options
available on AWS, such as GP3, io1, and io2; see the Amazon documentation for details. No
changes are required to the file unless you want to change the disk type or other settings offered by
AWS. Below is an example of ems-eks-storage.yaml. This can be used with any of the EMS
persisted storage types.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ems-ssd
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
parameters:
  type: gp2
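The storage class can then be created and verified; for example:
> kubectl apply -f ems-eks-storage.yaml
> kubectl get storageclass ems-ssd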
apiVersion: v1
kind: Service
metadata:
annotations:
(1): The NodePort value for the external port to access EMS. If this value is changed, it must be
changed in all locations.
volumeMounts:
- mountPath: /shared
  name: tibemsd-volume
volumeClaimTemplates:
- metadata:
    name: tibemsd-volume
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: ems-ssd
    resources:
      requests:
        storage: <storage size for EMS> (1)
(1): The storage size for EMS. This will differ based on what is used for storage. For file based
persisted storage, this should be a value such as 5Gi
After all changes are made, the EMS statefulset can be deployed using the following example.
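A minimal sketch, assuming the statefulset manifest is the tibemsd-cloud.yaml file referenced later
in this document:
> kubectl apply -f tibemsd-cloud.yaml
> kubectl get pods,svc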
Stopping and deleting the EMS server on a cloud platform is the same as shown in section 4.2.5.
TIBCO Enterprise Message Service (EMS) can now be configured to use Helm
charts to deploy EMS on Kubernetes. This section can be used to deploy EMS using Helm charts.
All Helm charts are part of the ems_10.2_files_kubernetes.zip and are located in the helm directory.
Follow sections 3 through 4.3 before continuing. The EMS image must exist and have been
tagged and pushed to the appropriate registry.
All Kubernetes platforms, whether private on-premises or in a public cloud, have a different setup for
the persistent storage used by the storage class. Use section 4.3 to configure the storage classes.
Note: On-premises persistent storage can vary. Please work with the Kubernetes administrator for
your environment to determine which PV and PVC should be configured and used.
Yaml files similar to those used for deployment in Kubernetes are used for Helm, with one exception:
almost all values have been turned into variables that Helm can set at deployment.
4.4.2.1 Helm Values
The yaml files used to deploy EMS have variables for most of the commonly set values. These
include variables for the Docker image registry, port values, and service names, etc. The following
examples show all variables available for modification for EMS.
# Copyright (c) 2023 Cloud Software Group, Inc. All Rights Reserved.
# Confidential and Proprietary.
#
# emserver:
logLevel: "info"
imageName: "ems:latest"
ems_public_port: "30722"
ems_public_probe_port: "30726"
ems_lb_port: "30724"
probe_lb_port: "30728"
ems_storage_size: "10Gi"
service_name: "emsservers"
listen_port: "7222"
probe_port: "7220"
probe_name: "emsprobe"
storageClass: "ems-ssd"
# lbsourcerange: "0.0.0.0/0"
Figure 1 - Values.yaml example
EMS_IMAGE_NAME="ems:latest" 1)
NAMESPACE=tibco 2)
settings="imageName=<docker registry>/$EMS_IMAGE_NAME,ems_storage_size=5Gi,lbsourcerange=<LB source range in the form of 0.0.0.0/0>" 3)
helm upgrade --install --namespace $NAMESPACE emsserver Charts/ems --set $settings
Figure 2 - Install-ems.sh script example
The script is not required, and all Helm Chart installs can be done with any of the supported Helm
commands.
4.4.2.3 Installing the Helm Charts
To install the Helm Charts, run the install-ems script. It should only take a few seconds to run.
./install-ems.sh
Release "emsserver" does not exist. Installing it now.
NAME: emsserver
LAST DEPLOYED: Tue Jun 27 11:52:31 2023
NAMESPACE: tibco
STATUS: deployed
REVISION: 1
TEST SUITE: None
Use "kubectl get pods,svc" to verify that all pods and internal and external services are running and
ready.
When the tibemsd-cloud.yaml file or the Helm chart is applied to the Kubernetes cluster on a cloud
platform (AKS/EKS/GKE), four Kubernetes services are created, as described below:
The emsserver service is a NodePort service allowing access internally, while the emsserverlb
service provides a LoadBalancer which can provide external access to the EMS server. The
emsprobe service is a NodePort service allowing the probe to be accessed internally, while the
emsprobelb service provides a LoadBalancer which can provide external access to the EMS
probe port to check isLive and isReady.
The EMS server running in a cloud platform can be accessed via the NodePort Kubernetes service
and port. The default is tcp://emsserver:30722. Any Kubernetes process running in the same cluster
and namespace can access the EMS server using this URL. The following example shows an EMS
client running in the cloud platform connecting to the EMS server via the NodePort.
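For example, using one of the EMS Java samples from a pod in the same cluster and namespace (the
sample name, topic, and classpath setup are illustrative):
> java tibjmsMsgProducer -server tcp://emsserver:30722 -topic topic.sample "hello"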
The LoadBalancer Kubernetes service, emsserverlb, provides external access to the EMS server via
the LoadBalancer IP address and port. Access is also restricted to the trusted IP
range defined in section 4.3.2.1. The following example shows accessing the EMS server from an
external source.
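For example, from a machine outside the cluster, using the LoadBalancer address shown in the
tibemsadmin example below (the sample name is illustrative):
> java tibjmsMsgConsumer -server tcp://20.120.78.64:30724 -topic topic.sample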
When the EMS server is created in the Kubernetes cluster, the default connection factories are all
created using emsserver as the hostname. While this works fine for internal EMS connections, the
connection factories will fail for external connections. New EMS connection factories should
be created to allow connection factory/JNDI support from external sources. Using the connection
string from the above example, a new connection factory can be created:
> ./tibemsadmin -server tcp://20.120.78.64:30724
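Within the tibemsadmin session, a connection factory pointing at the external address can then be
created. A sketch (the factory name is illustrative; see the tibemsadmin documentation for the exact
syntax and any JNDI binding options):
> create factory ExternalFactory generic url=tcp://20.120.78.64:30724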
The connection factory can now be used from external EMS clients to connect to the EMS server
running on the cloud platform.
The EMS sample applications can be used to test either internal or external access.
Note: With EMS 10.X and later, EMS clients should be using TLS 1.3.
Whether an EMS listen port is configured for TCP or TLS makes no difference in terms of
exposing it through a service. However, you need to decide how to provision the corresponding
certificate files.
While these could be placed in the persisted volume or embedded in the EMS Docker image, the
standard practice in the Kubernetes world consists of using secret objects. These are meant to
decouple sensitive information from the pods and can be mounted into containers as volumes
populated with files to be accessed by programs.
In this example, we will assume we want the EMS server to be authenticated by EMS clients. This
involves providing the server with its certificate, private key and the corresponding password,
which we will store inside a secret. We will mount that secret into the container, point the EMS
server configuration to the certificate and private key files and pass the corresponding password to
the server through its -ssl_password command-line option.
Based on the sample certificates that ship with EMS, the files will eventually be made available
inside the container as follows:
/etc/secret/server.cert.pem
/etc/secret/server.key.pem
/etc/secret/ssl_password
To store the server certificate, private key and the corresponding password in a secret, based on the
sample certificates available in the EMS package under ems/<version>/samples/certs:
> cd …/ems/<version>/samples
> kubectl create secret generic tibemsd-secret \
--from-file=server.cert.pem=certs/server.cert.pem \
--from-file=server.key.pem=certs/server.key.pem \
--from-literal=ssl_password=password
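You can optionally confirm that the secret exists and lists the expected keys (values are not shown):
> kubectl describe secret tibemsd-secret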
The deployment must then mount the secret into the container, in addition to the shared volume:
spec:
  …
  template:
    …
    spec:
      containers:
      - name: tibemsd-container
        …
        volumeMounts:
        - mountPath: /shared
          name: tibemsd-volume
        - mountPath: /etc/secret
          name: tibemsd-secret-volume
          readOnly: true
        …
      volumes:
      - name: tibemsd-volume
        persistentVolumeClaim:
          claimName: ${{EMS_PVC}}
      - name: tibemsd-secret-volume
        secret:
          secretName: tibemsd-secret
For the tibemsd-cloud.yaml or Helm ems.yaml template, the same volumeMounts and volumes
additions must be made.
"primary_listens":[
{
"url":"ssl://7222"
}
],
"tibemsd":{
…
"ssl":{
"ssl_server_identity":"/etc/secret/server.cert.pem",
"ssl_server_key":"/etc/secret/server.key.pem"
},
The tibemsd_run() function needs to be modified to launch the EMS server with the proper
value for its -ssl_password command-line option:
…
if [[ \$# -ge 1 ]]; then
    PARAMS=\$*
else
    tibemsd_seed
    PARAMS="-config /shared/ems/config/\$EMS_SERVICE_NAME.json -ssl_password \`cat /etc/secret/ssl_password\`"
fi
…
● Regenerate the EMS Docker image, tag it and push it to the Registry (see section 3.4).
● Create a new deployment and service (see section 4.2.4).
You can check the result by connecting to the server with one of the EMS TLS sample clients:
Note: Ensure Java 8 has been updated to the most recent version. If not, the sample clients will fail
with an SSL context error.
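For example, tibemsadmin can connect over the exposed TLS listen, trusting the sample root
certificate (the port and certificate path follow the earlier examples; adjust to your setup):
> ./tibemsadmin -server ssl://node1:30722 -ssl_trusted certs/server_root.cert.pem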