ELK Cookbook


ELK CONFIGURATION RUNBOOK

ENVIRONMENT

This document describes the steps required to build an Elastic cluster with the following architecture:

- 3 nodes, each acting as a dedicated data and master-eligible node.
- 2 nodes dedicated to the coordinator role.
- Intercommunication among the Elastic nodes is over SSL.
- TLS is enabled on the HTTP layer for Elasticsearch.
- Two instances of Kibana, each on a coordinator node.
- Kibana is served over HTTPS, and Kibana connects to the Elastic nodes over SSL.
- Both Kibana instances sit behind a load balancer to provide resiliency and balance the load.
- Role-based access is enabled for end users.
- A single-node cluster acts as the remote monitoring node for all the production cluster nodes.

The deployment is standardized on the following software and OS versions:

- Elasticsearch v7.4.2
- Kibana 7.4.2
- Logstash 7.4.2
- RHEL 7.6

Following is the architectural diagram for the ELK cluster setup in each DC:

CONFIGURATION STEPS

PHASE 1: Installing and configuring Basic Elasticsearch cluster


Here we outline the steps required to configure a basic Elastic cluster and bootstrap it.

We have 3 data nodes and 2 coordinator nodes; perform the following steps on all 5 nodes:

1) Install Elasticsearch
 pwd
/etc/yum.repos.d

 cat elastic.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

 yum install elasticsearch

2) Disable auto-start of the Elasticsearch service for now:


 systemctl disable elasticsearch

3) Add the following to /etc/sysctl.conf to keep swapping to a minimum:


vm.swappiness = 1
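
To apply the change without waiting for a reboot, you can reload sysctl and verify the value (a quick
check, assuming the entry above has already been saved):

 sysctl -p
 cat /proc/sys/vm/swappiness

# Should print 1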

4) In /etc/security/limits.conf add the following entries:

* soft nofile 65535


* hard nofile 65535
* soft nproc 4096
* hard nproc 4096
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

5) In /etc/elasticsearch/jvm.options set the default heap size to 30g:


-Xms30g
-Xmx30g

6) /tmp must not be mounted with the noexec option.
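
You can check the current mount options for /tmp before proceeding (an optional check):

 findmnt -no OPTIONS /tmp

# If the output contains "noexec", remount /tmp or adjust /etc/fstab accordingly.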

7) To allow the elasticsearch user to lock the heap in memory, do the following:

 In /etc/sysconfig/elasticsearch:

ES_JAVA_OPTS="-Xms30g -Xmx30g"
MAX_LOCKED_MEMORY=unlimited

 Create an override.conf file using "systemctl edit elasticsearch" command with this content:

[Service]
LimitMEMLOCK=infinity

# File would be created as "/etc/systemd/system/elasticsearch.service.d/override.conf"
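
Before starting the service you can confirm that the override is picked up by systemd (an optional
check; the command below shows the effective limit for the unit):

 systemctl daemon-reload
 systemctl show elasticsearch | grep LimitMEMLOCK

LimitMEMLOCK=infinity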

8) Reboot the machine for the system settings to take effect.

9) Create the data directory and provide necessary ownership:


 mkdir /data/elasticsearch
 chown elasticsearch:elasticsearch /data/elasticsearch

10) Make the required configuration changes for the cluster setup.

We have three data/master-eligible nodes and two coordinator nodes; the configuration for
data and coordinator nodes differs slightly.

On all the three data/master-eligible nodes put the following configuration in the
“/etc/elasticsearch/elasticsearch.yml” file.

# sample configuration file from one of the data master nodes “noida-elk01-prod”:
cluster.name: noida-elk
node.name: noida-elk01-prod
network.host: _site_
http.port: 9200
bootstrap.memory_lock: true
node.master: true
node.data: true

discovery.seed_hosts:
- noida-elk01-prod
- noida-elk02-prod
- noida-elk03-prod

cluster.initial_master_nodes:
- noida-elk01-prod
- noida-elk02-prod
- noida-elk03-prod

path:
  logs: /var/log/elasticsearch
  data: /data/elasticsearch

On the other two nodes, which act as coordinator nodes, put the following configuration in the
“/etc/elasticsearch/elasticsearch.yml” file.

# sample configuration file from one of the coordinator nodes “noida-elk-cod01”:
cluster.name: noida-elk
node.name: noida-elk-cod01
network.host: _site_
http.port: 9200
bootstrap.memory_lock: true
node.master: false
node.voting_only: false
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
discovery.seed_hosts:
- noida-elk01-prod
- noida-elk02-prod
- noida-elk03-prod

cluster.initial_master_nodes:
- noida-elk01-prod
- noida-elk02-prod
- noida-elk03-prod

path:
  logs: /var/log/elasticsearch
  data: /data/elasticsearch

Note: the “discovery.seed_hosts” and “cluster.initial_master_nodes” parameters list only the
data/master nodes; do not include the coordinator nodes here.

11) Start the Elasticsearch service one by one on all the 5 nodes.

 systemctl start elasticsearch

# monitor the logs as the service starts

Once the service has started without any errors, you can check the cluster status by running the following
commands from any of the nodes:

Following is an example from one of the existing cluster nodes:

# check cluster health:

 curl 'http://noida-elk01-prod:9200/_cluster/health?pretty'

{
"cluster_name" : "noida-elk",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 5,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}

# View current master elected Node:

 curl -X GET "noida-elk01-prod:9200/_cat/master?v&pretty"

id host ip node
PsERiX8-SMWT138-gg8p3w 172.23.48.211 172.23.48.211 noida-elk02-prod

# check memory lock:

 curl http://noida-elk01-prod:9200/_nodes?filter_path=**.mlockall\&pretty

{
"nodes" : {
"EapWqMy1Sk6AgzDH7dGQWg" : {
"process" : {
"mlockall" : true
}
},
"PsERiX8-SMWT138-gg8p3w" : {
"process" : {
"mlockall" : true
}
},
"GvjDYPYbTjWfwB2V-REljQ" : {
"process" : {
"mlockall" : true
}
},
"gzKspovHTkuKHK7TPoz28A" : {
"process" : {
"mlockall" : true
}
},
"f1va0pExQoe8-1uO64JBrg" : {
"process" : {
"mlockall" : true
}
}
}
}

# Elasticsearch info for specific node:

 curl 'http://noida-elk-cod02:9200/'

{
"name" : "noida-elk-cod02",
"cluster_name" : "noida-elk",
"cluster_uuid" : "9Exjga50TGGvgMvwHYKMYg",
"version" : {
"number" : "7.4.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
"build_date" : "2019-10-28T20:40:44.881551Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}

By this time, you should have your elastic cluster up and running on "Basic" License.

On all the 5 nodes (data + coordinator) perform the following steps:

1) Remove the following configuration from "/etc/elasticsearch/elasticsearch.yml":

cluster.initial_master_nodes:
- noida-elk01-prod
- noida-elk02-prod
- noida-elk03-prod

# These are one-time settings used only when bootstrapping the cluster for the first time.

2) Enable the Elasticsearch service to start on boot on all the nodes (data + coordinator):

 systemctl enable elasticsearch


PHASE 2: Enable SSL/TLS communication between the elastic nodes

To secure the communication between the elastic nodes, follow the steps below:

1) Generate a node certificate signing request (CSR) on all the nodes (data + coordinator):

 /usr/share/elasticsearch/bin/elasticsearch-certutil csr --dns noida-elk01-prod.cadence.com,noida-elk01-prod --name noida-elk01-prod

WARNING: An illegal reflective access operation has occurred


WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG
(file:/usr/share/elasticsearch/lib/tools/security-cli/bcprov-jdk15on-1.61.jar) to constructor
sun.security.provider.Sun()
WARNING: Please consider reporting this to the maintainers of
org.bouncycastle.jcajce.provider.drbg.DRBG
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'csr' mode generates certificate signing requests that can be sent to
a trusted certificate authority
* By default, this generates a single CSR for a single instance.
* You can use the '-multiple' option to generate CSRs for multiple
instances, each with their own private key.
* The '-in' option allows for the CSR generation to be automated
by describing the details of each instance in a YAML file

* An instance is any piece of the Elastic Stack that requires an SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.

The 'csr' mode produces a single zip file which contains the certificate
signing requests and private keys for each instance.
* Each certificate signing request is provided as a standard PEM encoding of a PKCS#10 CSR.
* Each key is provided as a PEM encoding of an RSA private key

Please enter the desired output file [csr-bundle.zip]: noida-elk01-prod.zip

Certificate signing requests have been written to /usr/share/elasticsearch/noida-elk01-prod.zip

This file should be properly secured as it contains the private keys for all
instances.

After unzipping the file, there will be a directory for each instance containing
the certificate signing request and the private key. Provide the certificate
signing requests to your certificate authority. Once you have received the
signed certificate, copy the signed certificate, key, and CA certificate to the
configuration directory of the Elastic product that they will be used for and
follow the SSL configuration instructions in the product guide.

2) Unzip the archive "/usr/share/elasticsearch/<node name>.zip" on each node (data + coordinator) and send the
generated CSR to the internal CA for signing.

3) For now, the Cadence internal CA signs certificates in PKCS#7 format; we need to take the signed
certificates and convert each of them into PEM format using the following command:

eg:

 openssl pkcs7 -print_certs -in noida-elk01-prod.p7b -out noida-elk01-prod.cer

# Do this for every node's CA-signed certificate that is received.

In case you see the below error while converting the certificate:

unable to load PKCS7 object
139934578898760:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:703:Expecting: PKCS7

you may have received a DER-formatted certificate; follow the steps below for PEM conversion.
 openssl pkcs7 -inform der -in noida-elk-mon01.p7b -out noida-elk-mon01_temp.cer
 openssl pkcs7 -print_certs -in noida-elk-mon01_temp.cer -out noida-elk01-prod.cer

# “noida-elk01-prod.cer” will be the required PEM formatted certificate
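
Optionally, inspect the converted PEM certificate to confirm the subject and validity dates look
correct (an illustrative check):

 openssl x509 -in noida-elk01-prod.cer -noout -subject -dates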

4) Place the certificates on all the nodes (data + coordinator) as below:

 cd /etc/elasticsearch
 mkdir certs
 cd certs

# Copy the node's CA-signed certificate, the node's private key and the root CA certificate to this folder.
# The private key for the node can be found on the node itself, where the key-pair archive was saved
during CSR generation.
# You may also receive the root CA certificate in PKCS#7 format, which again needs to be
converted into PEM format.

On each Elasticsearch node you should now see three such files:

 ls certs

ca_cert.cer noida-elk01-prod.cer noida-elk01-prod.key
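
You may also want to tighten permissions on the certs directory so that the private key is not
world-readable (a suggested hardening step; adjust to your site's policy):

 chown -R root:elasticsearch /etc/elasticsearch/certs
 chmod 750 /etc/elasticsearch/certs
 chmod 640 /etc/elasticsearch/certs/*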

5) Now that the relevant certificates are placed locally on each node, we can proceed with the
required configuration in elasticsearch.yml.

On all the nodes (data + coordinator) edit "/etc/elasticsearch/elasticsearch.yml" to include the following
configuration:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/noida-elk01-prod.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/noida-elk01-prod.cer
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]

Note: the key and certificate parameters must point to the appropriate paths on each elastic node.

6) We also want to enable TLS on the HTTP layer for Elasticsearch. On all the nodes (data + coordinator)
edit "/etc/elasticsearch/elasticsearch.yml" to include the following configuration:

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/noida-elk01-prod.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/noida-elk01-prod.cer
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]

Note:

- The key and certificate parameters must point to the appropriate paths on each elastic node.
- These paths are the same as the ones used above in Step 5, as we are using the same
certificates.

7) Restart the Elasticsearch service on all the nodes (data + coordinator).

 systemctl restart elasticsearch

By now, the Elasticsearch cluster nodes should be communicating over SSL, and TLS should be enabled
over HTTP.

# To verify TLS over HTTP, run a plain HTTP query from any elastic node; it should fail:

 curl 'http://noida-elk01-prod:9200/_cluster/health?pretty'

curl: (52) Empty reply from server

# Now query over HTTPS, first disabling curl's certificate verification; this should succeed:

 curl -k 'https://noida-elk01-prod:9200/_cluster/health?pretty'

# To run the curl query with certificate verification (this may be required in case you did not create your
node's CSR with alternative DNS names for both the FQDN and the short name):

 curl --cacert /etc/elasticsearch/certs/ca_cert.cer 'https://noida-elk01-prod.cadence.com:9200/_cluster/health?pretty'

PHASE 3: Setup Kibana and configure it to communicate with elastic nodes over SSL

We will deploy Kibana on both the coordinator nodes, so perform the following steps on both the
nodes acting as coordinator.

1) Download and install Kibana v7.4.2:

 wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-x86_64.rpm
 rpm -ivh kibana-7.4.2-x86_64.rpm

2) Create a certificate directory:

 mkdir /etc/kibana/certs/

3) As we are installing Kibana on the existing coordinator nodes, we can copy the
certificates that were earlier used for Elasticsearch on these nodes.

 cd /etc/kibana/certs/
 cp /etc/elasticsearch/certs/* .
 chown kibana:kibana *

4) Now we can proceed with the Kibana configuration. Open the file "/etc/kibana/kibana.yml" and
make the following entries:

# sample configuration from node: noida-elk-cod01

server.port: 5602
server.ssl.redirectHttpFromPort: 5601
server.host: noida-elk-cod01.cadence.com
server.name: noida-elk-cod01
elasticsearch.hosts:
- https://noida-elk-cod01:9200
- https://noida-elk-cod02:9200

server.ssl.enabled: true
elasticsearch.ssl.verificationMode: certificate
server.ssl.key: /etc/kibana/certs/noida-elk-cod01.key
server.ssl.certificate: /etc/kibana/certs/noida-elk-cod01.cer
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca_cert.cer

NOTE:

- Here we have configured Kibana so that it accepts requests over HTTP and
redirects them to HTTPS. Port 5601 accepts HTTP requests and redirects to port 5602 over
HTTPS.
- For the elasticsearch.hosts parameter we list both the coordinator nodes in
order of their locality.
- Make sure to mention the certificate paths as applicable to the node.

5) Start Kibana service:


 systemctl start kibana

# Look for the logs in /var/log/messages

6) Enable Kibana service to start at boot time:


 systemctl enable kibana

7) On successful service startup, you can try to access the Kibana URL via a browser as:
http://noida-elk-cod01:5601/

# In the browser you can see the request being redirected to port 5602 over HTTPS.
# Check by launching the Kibana URL hosted on each of the coordinator nodes.
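
You can also verify the HTTPS endpoint from the command line with Kibana's status API (a quick
check; -k skips certificate verification):

 curl -k 'https://noida-elk-cod01:5602/api/status'

# A JSON document describing the Kibana status should be returned.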
Phase 4: Enable production license

At this point we are good to enable the production license on the cluster.

There are a couple of ways to do it:

- Through the Kibana UI: open the Kibana UI and browse to “Management -> License management”;
here you can upload the production license file.
- It can also be achieved by issuing a REST query from any of the elastic nodes as:
 curl -XPUT -u <user> 'https://<host>:<port>/_license' -H "Content-Type: application/json" -d @license.json

From any node run following command to verify the License:

 curl -k -X GET "https://noida-elk01-prod.cadence.com:9200/_license?pretty" -u elastic

Enter host password for user 'elastic':

{
"license" : {
"status" : "active",
"uid" : "cd4e9431-0900-4745-bb37-1d7188d27118",
"type" : "platinum",
"issue_date" : "2019-10-31T00:00:00.000Z",
"issue_date_in_millis" : 1572480000000,
"expiry_date" : "2022-10-30T23:59:59.999Z",
"expiry_date_in_millis" : 1667174399999,
"max_nodes" : 12,
"issued_to" : "Cadence Design Systems",
"issuer" : "API",
"start_date_in_millis" : 1572480000000
}
}
Phase 5: Setup User Authentication

Elastic provides several authentication mechanisms; for our purpose we will set up the Native
and LDAP authentication realms.

 Setting up native authentication:

1) Stop Kibana on the coordinator nodes and stop the Elasticsearch service on all the nodes:
 systemctl stop kibana
 systemctl stop elasticsearch

2) Add the line below to the configuration file (/etc/elasticsearch/elasticsearch.yml) of
each node (data + coordinator):

xpack.security.enabled: true

3) Start Elasticsearch service on all the nodes (data + coordinator)


 systemctl start elasticsearch

4) From any of the nodes try running the following command from a command prompt:

 curl -k 'https://noida-elk01-prod:9200/_cat/nodes?pretty'

You should get a security error because you are trying to access a secured cluster without
any credentials.

5) In order to connect to your cluster, you need to configure users and passwords. The first
step is to create passwords for the built-in users. Run the elasticsearch-setup-passwords
script with the interactive option to configure the passwords. From any of the nodes:

 /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive --url https://noida-elk01-prod.cadence.com:9200
Initiating the setup of passwords for reserved users
elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user
.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:


Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

6) Now that all the built-in users have a password, use the elastic user to run the command
that failed earlier due to lack of credentials:
 curl -k 'https://noida-elk01-prod:9200/_cat/nodes?pretty' -u elastic
Enter host password for user 'elastic':
172.23.49.85 16 10 0 0.00 0.02 0.05 - - noida-elk-cod02
172.23.48.211 2 9 0 0.00 0.01 0.05 dilm * noida-elk02-prod
172.23.48.197 5 9 0 0.01 0.02 0.05 dilm - noida-elk01-prod
172.23.48.219 3 9 0 0.00 0.01 0.05 dilm - noida-elk03-prod
172.23.49.24 8 15 0 0.00 0.01 0.05 - - noida-elk-cod01

At this point Elasticsearch security has been activated. Now we need to set up the user and
password used by Kibana to connect to Elasticsearch.

Perform the following steps on both the coordinator nodes, as we have Kibana running on both:

1) Create a Kibana keystore to store your security settings:


 /usr/share/kibana/bin/kibana-keystore create --allow-root
Created Kibana keystore in /var/lib/kibana/kibana.keystore

2) Add the username and password of the built-in kibana user to the keystore:
 /usr/share/kibana/bin/kibana-keystore add elasticsearch.username --allow-root
Enter value for elasticsearch.username: ******
 /usr/share/kibana/bin/kibana-keystore add elasticsearch.password --allow-root
Enter value for elasticsearch.password: ********

Use kibana as the username, and for the password use the one set up earlier while bootstrapping
the native users with the “elasticsearch-setup-passwords” command.

3) Provide required ownership to the keystore:


 chown kibana:kibana /var/lib/kibana/kibana.keystore

4) Start Kibana service:


 systemctl start kibana

Launch the Kibana UI and try to log in with the user “elastic” and its password. This user is
assigned the default superuser role.

 Setting up LDAP authentication:

Once native authentication has been enabled and tested successfully, we can
proceed to enable LDAP authentication for our environment.

Perform the following steps on all the nodes in the cluster (data + coordinator):

1) Edit “/etc/elasticsearch/elasticsearch.yml” to include the following configuration:
xpack.security.authc.realms:
  native.realm1:
    order: 0
  ldap.realm2:
    order: 1
    url: "ldap://itsdj-lb-noidc01.cadence.com:389"
    bind_dn: "uid=ldapbind,ou=Groups,o=cadence.com"
    bind_password: "ldapbind"
    user_search:
      base_dn: "ou=people,o=cadence.com"
      filter: "(uid={0})"
    group_search:
      base_dn: "ou=Groups,o=cadence.com"
    files:
      role_mapping: "/etc/elasticsearch/role_mapping.yml"
    unmapped_groups_as_roles: false

2) Restart the Elasticsearch service:

 systemctl restart elasticsearch

Open the Kibana UI and log in using an LDAP user. You may get a forbidden message in the
browser, as we have not yet assigned any role to the user logging in; however,
authentication passes.
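
You can also confirm from the command line that an LDAP user authenticates, using the
authenticate API (an example check; substitute a valid LDAP account):

 curl -k -u <ldap_user> 'https://noida-elk01-prod:9200/_security/_authenticate?pretty'

# The response should show the user's name and an "authentication_realm" of type "ldap"
(realm2 in our configuration).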

To assign the “superuser” role to an LDAP user so as to test logins, use the following API call
from the Kibana Dev Tools console (you can still log in to Kibana using the native user elastic):

PUT /_security/role_mapping/administrators
{
"roles" : [ "superuser" ],
"rules" : { "field" : {
"dn" : "uid=vsaurabh,ou=people,o=cadence.com"
} },
"enabled": true
}

This API call assigns the LDAP user “vsaurabh” the superuser role. Now try to log in using
this user and you should get through.

A few more examples of API calls for specific tasks are as follows:

# To add more users to the above role mapping:

PUT /_security/role_mapping/administrators
{
"roles" : [ "superuser" ],
"rules": {
"any": [
{
"field": {
"dn" : "uid=another_user1,ou=people,o=cadence.com"
}
},
{
"field": {
"dn" : "uid=another_user2,ou=people,o=cadence.com"
}
}
]
},
"enabled": true
}

# List the created role mappings:


GET /_security/role_mapping/

# To delete a role mapping:

DELETE /_security/role_mapping/<role_name>

# To update the roles assigned by a mapping:

PUT /_security/role_mapping/administrators
{
"roles" : [ "other_role" ],
"rules" : { "field" : {
"dn" : "uid=vsaurabh,ou=people,o=cadence.com"
} },
"enabled": true
}

Reference link for a detailed document around the role mapping API:

You can create different roles using the Kibana UI under “Management -> Security ->
Roles” as per the level of access to be granted to different sets of users/groups.

To assign specific roles to any LDAP-based user or group, we need to do so using
API calls, as shown below.
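
For LDAP groups, the same role mapping API can be used with the "groups" field instead of "dn".
A sketch, assuming a hypothetical group "elk-admins" (adjust the group DN and roles to your
environment):

PUT /_security/role_mapping/elk_admins
{
"roles" : [ "superuser" ],
"rules" : { "field" : {
"groups" : "cn=elk-admins,ou=Groups,o=cadence.com"
} },
"enabled": true
}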
Phase 6: Setting up remote Monitoring node

As per our architecture we have a single monitoring node that will collect monitoring data from
all the ELK clusters in the environment. This node is configured as a single-node elastic
cluster, with its own Kibana instance running for monitoring data visualization.

On the node which will act as the remote monitoring node, follow the steps below:
1) Follow PHASE1: Step 1 to Step 9 as described earlier in this document.

2) Make the following entries in the “/etc/elasticsearch/elasticsearch.yml” file:

cluster.name: elk-monitor
node.name: noida-elk-mon01
network.host: _site_
http.port: 9200
bootstrap.memory_lock: true
node.master: true
node.data: true
discovery.seed_hosts:
- noida-elk-mon01
cluster.initial_master_nodes:
- noida-elk-mon01
path:
  logs: /var/log/elasticsearch
  data: /data/elasticsearch

3) Start Elasticsearch service:


 systemctl start elasticsearch
# monitor the logs as the service starts
4) Enable the Elasticsearch service to start on boot:
 systemctl enable elasticsearch

5) Check the node status:


 curl 'http://noida-elk-mon01:9200/_cluster/health?pretty'
{
"cluster_name" : "elk-monitor",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}

6) Remove the following configuration from "/etc/elasticsearch/elasticsearch.yml":

cluster.initial_master_nodes:
- noida-elk-mon01

# At this point the node is up with basic Elasticsearch configuration enabled.

Now we will proceed with the configuration to enable SSL and TLS over HTTP.

7) Follow PHASE 2: Step 1 to Step 4 to generate the CSR and place the certificates
and key in the required location.

8) Make the following entries in the "/etc/elasticsearch/elasticsearch.yml" file to enable SSL/TLS
communication:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/noida-elk-mon01.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/noida-elk-mon01.cer
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/noida-elk-mon01.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/noida-elk-mon01.cer
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]

9) Restart the Elasticsearch service:

 systemctl restart elasticsearch

At this point the Elasticsearch monitoring node should be communicating over SSL, and TLS
should be enabled over HTTP.

# To verify TLS over HTTP, run a plain HTTP query; it should fail:

 curl 'http://noida-elk-mon01:9200/_cluster/health?pretty'

curl: (52) Empty reply from server

# Now query over HTTPS, first disabling curl's certificate verification; this should succeed:

 curl -k 'https://noida-elk-mon01:9200/_cluster/health?pretty'

We will now proceed with configuring Kibana for the monitoring node. We deploy Kibana
on the same Elasticsearch node that we configured as the monitoring node, so continue with the following
steps on the same node where Steps 1 to 9 were performed.

10) Follow PHASE 3: Step 1 to Step 3

11) Make the following entries in “/etc/kibana/kibana.yml”:


server.port: 5602
server.ssl.redirectHttpFromPort: 5601
server.host: noida-elk-mon01.cadence.com
server.name: noida-elk-mon01
elasticsearch.hosts:
- https://noida-elk-mon01:9200

server.ssl.enabled: true
elasticsearch.ssl.verificationMode: certificate
server.ssl.key: /etc/kibana/certs/noida-elk-mon01.key
server.ssl.certificate: /etc/kibana/certs/noida-elk-mon01.cer
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca_cert.cer

12) Start Kibana service:


 systemctl start kibana

# Look for the logs in /var/log/messages

13) On successful service startup, you can try to access the Kibana URL via a browser as:
http://noida-elk-mon01:5601

14) Enable Kibana service to start at boot time:


 systemctl enable kibana
# At this point Kibana is up, communicating with Elasticsearch over TLS.

We will now proceed with enabling native authentication for Elasticsearch.

15) On the monitoring node stop the following services:


 systemctl stop kibana
 systemctl stop elasticsearch

16) Add the below line to the configuration file “/etc/elasticsearch/elasticsearch.yml”:


xpack.security.enabled: true

17) Start the Elasticsearch service:


 systemctl start elasticsearch

18) Try running the following command; it should fail, as we have enabled security now:

 curl -k 'https://noida-elk-mon01:9200/_cat/nodes?pretty'

19) In order to connect to your node, you need to configure users and passwords. The first
step is to create passwords for the built-in users. Run the elasticsearch-setup-passwords
script with the interactive option to configure the passwords:

 /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive --url https://noida-elk-mon01:9200

Initiating the setup of passwords for reserved users


elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
20) Now try again to connect to the cluster by running the below command:
 curl -k 'https://noida-elk-mon01:9200/_cat/nodes?pretty' -u elastic
Enter host password for user 'elastic':
172.23.48.67 3 56 0 0.07 0.14 0.11 dilm * noida-elk-mon01

At this point we have enabled security on Elasticsearch; now we need to configure Kibana with the
required authentication credentials.

21) Perform the following steps on the monitoring node, which is also running the Kibana instance:
- create a Kibana keystore to store your security settings:
 /usr/share/kibana/bin/kibana-keystore create --allow-root
Created Kibana keystore in /var/lib/kibana/kibana.keystore

- add the username and password of the built-in kibana user to the keystore:
 /usr/share/kibana/bin/kibana-keystore add elasticsearch.username --allow-root
Enter value for elasticsearch.username: ******

 /usr/share/kibana/bin/kibana-keystore add elasticsearch.password --allow-root


Enter value for elasticsearch.password: ********

Note: Use kibana as the username, and for the password use the one set up earlier while bootstrapping the
native users with the “elasticsearch-setup-passwords” command.

- Provide required ownership to the keystore:


 chown kibana:kibana /var/lib/kibana/kibana.keystore

- Start Kibana service:


 systemctl start kibana

Open the Kibana UI and you will be prompted to enter credentials; log in with the native user
“elastic”, as it is the built-in superuser.

By now our monitoring node is completely set up as a single-node elastic cluster. We will now
proceed with the configuration steps required to enable this node to accept monitoring data, view
it in Kibana, and direct the remote cluster nodes to send their monitoring data.

On the monitoring node perform the following steps (22 to 25):

22) Stop the services:

 systemctl stop kibana
 systemctl stop elasticsearch

23) Make the following entries in the “/etc/elasticsearch/elasticsearch.yml” file:

xpack.monitoring.exporters.my_local_exporter:
  type: local
xpack.monitoring.collection.enabled: false
xpack.monitoring.history.duration: 3d

24) Make the following entry in the “/etc/kibana/kibana.yml” file:


xpack.monitoring.ui.enabled: true

25) Start the services:


 systemctl start elasticsearch
 systemctl start kibana

Now the monitoring node is ready to accept monitoring data from the remote production cluster nodes.
We will now configure the production Elastic cluster nodes to send their
monitoring data to the configured elastic monitoring node.

Perform the following steps on all the production elastic cluster nodes (data + coordinator):

26) Make the following entries in “/etc/elasticsearch/elasticsearch.yml”:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters:
  my_remote:
    type: http
    host: "https://noida-elk-mon01:9200"
    auth:
      username: remote_monitoring_user
      password: ********
    ssl:
      certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]

In the above configuration, “remote_monitoring_user” is the built-in Elasticsearch user which has the
required roles assigned to be able to write the monitoring indices on the monitoring node.

27) Restart the Elasticsearch service on the nodes in a rolling manner:
 systemctl restart elasticsearch

28) Visit the monitoring node's Kibana GUI under “Stack Monitoring”; you should see the remote
cluster's monitoring dashboard as shown below.
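
You can also confirm from the command line that monitoring indices are being written on the
monitoring node (an illustrative check):

 curl -k -u elastic 'https://noida-elk-mon01:9200/_cat/indices/.monitoring-*?v'

# Indices named .monitoring-es-7-* (and .monitoring-kibana-7-*) should appear and grow over time.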
Phase 7: Setting up Logstash to send data to Elasticsearch over SSL

Logstash needs to be able to manage index templates, create indices, and write and delete
documents in the indices it creates. As we have enabled security on our elastic cluster, we
need to configure Logstash so that it can authenticate to Elasticsearch and write index data.

To set up authentication credentials for Logstash:

1. Use the Management > Roles UI in Kibana or the role API to create
a logstash_writer role.
For cluster privileges, add manage_index_templates and monitor.
For indices privileges, add write, create, delete, create_index, manage and manage_ilm.

2. Create a logstash_ingest user and assign it the logstash_writer role. You can create
users from the Management > Users UI in Kibana.

3. As we will also be using TLS encryption to send the data to the Elastic nodes,
place the CA certificate in a suitable location on the Logstash server, e.g.
"/etc/logstash/certs/ca_cert.cer"

4. Give ownership to the logstash user:

 chown logstash:logstash /etc/logstash/certs/ca_cert.cer

5. We create a keystore for Logstash to store the password for logstash_ingest,
as we do not want it to be present as plain text in the Logstash pipeline configuration
files. On the node running the Logstash instance follow the below steps:

 cd /usr/share/logstash

 ln -s /etc/logstash config

 /usr/share/logstash/bin/logstash-keystore create
WARNING: The keystore password is not set. Please set the environment variable
`LOGSTASH_KEYSTORE_PASS`. Failure to do so will result in reduced security.
Continue without password protection on the keystore? [y/N] y
Created Logstash keystore at /etc/logstash/logstash.keystore
 /usr/share/logstash/bin/logstash-keystore add ES_PWD
Enter value for ES_PWD:
Added 'es_pwd' to the Logstash keystore.

# Above, provide the password that was set in Kibana while creating the
logstash_ingest user.

 /usr/share/logstash/bin/logstash-keystore list
es_pwd

6. Configure Logstash to authenticate as the logstash_ingest user you just created:

output {
  elasticsearch {
    hosts => ["https://noida-elk01-prod.cadence.com:9200","https://noida-elk02-prod.cadence.com:9200"]
    ssl_certificate_verification => true
    cacert => "/etc/logstash/certs/ca_cert.cer"
    user => "logstash_ingest"
    password => "${ES_PWD}"
    index => "network_logs-%{+YYYY.MM.dd}"
  }
}

- The above output block also contains the configuration required for Logstash to use TLS encryption
and authentication when writing to the elastic index. See the pipeline syntax check below.
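
Before restarting Logstash you can validate the pipeline configuration syntax (a quick sanity
check; the path to your pipeline file may differ):

 /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/network_logs.conf

# "Configuration OK" in the output indicates the pipeline parses cleanly.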
Phase 8: Send Logstash instance monitoring data to the remote Elastic monitoring Node

We may end up running multiple Logstash instances on multiple nodes in a given Elastic
cluster, so we need a mechanism to monitor all the Logstash instances that
we set up to send data to Elasticsearch.

The steps below describe how to configure a node running a Logstash instance to send monitoring
data to the remote monitoring node.
We use the external collector (Metricbeat) to send monitoring data to the monitoring
node, as future major releases of the Elastic Stack will remove internal collection entirely.
We create a user on the monitoring node that will be used by Metricbeat to connect
to the monitoring node and write the data into an index. This is a one-time task.

On the Kibana UI of the monitoring node perform the following steps:

1) Navigate to Management -> Security -> Roles and create the role “cluster_priv_read-ilm”
with the following privileges:

2) Now navigate to Management -> Security -> Users and create a user
“metricbeat_monitoring_user” with the following roles assigned:

Now we have put in place a user that will be used by Metricbeat to connect to the
monitoring Elastic node.

On the node(s) running a Logstash instance perform the following steps:

1) Download the Metricbeat RPM:

 curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.4.2-x86_64.rpm

2) Install the RPM:


 rpm -ivh metricbeat-7.4.2-x86_64.rpm

3) Disable the default collection of Logstash monitoring metrics.

The monitoring setting is in the Logstash configuration file (logstash.yml), but is
commented out:

xpack.monitoring.enabled: false

Remove the # at the beginning of the line so the setting takes effect (the value false disables
internal collection).

4) Enable the logstash-xpack module in Metricbeat


 metricbeat modules enable logstash-xpack

5) Disable the system module in Metricbeat:


 metricbeat modules disable system
6) Configure the logstash-xpack module in Metricbeat:

 cat /etc/metricbeat/modules.d/logstash-xpack.yml

- module: logstash
  metricsets:
    - node
    - node_stats
  period: 10s
  hosts: ["localhost:9600"]
  xpack.enabled: true

7) Specify the Elasticsearch output information in the Metricbeat configuration
(“/etc/metricbeat/metricbeat.yml”):

output.elasticsearch:
  username: metricbeat_monitoring_user
  password: ********
  protocol: https
  hosts: ["noida-elk-mon01.cadence.com:9200"]
  ssl.certificate_authorities:
    - /etc/logstash/certs/ca_cert.cer
  ssl.certificate: "/etc/logstash/certs/noida-elk-rsyslog01.cer"
  ssl.key: "/etc/logstash/certs/noida-elk-rsyslog01.key"

8) Generate the CSR and get the signed certificate:


- Create an answer file as below:
 cat csr_details.txt
[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn

[ dn ]
C=IN
ST=UP
L=Noida
O=Cadence
OU=IT
[email protected]
CN = noida-elk-rsyslog01.cadence.com
[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = noida-elk-rsyslog01
DNS.2 = noida-elk-rsyslog01.cadence.com

- Generate the CSR and the node key:

 openssl req -new -sha256 -nodes -out noida-elk-rsyslog01.csr -newkey rsa:2048 -keyout noida-elk-rsyslog01.key -config <( cat csr_details.txt )

- Get the CSR signed from internal CA.

- Convert the obtained signed certificate from PKCS#7 to PEM format:

 openssl pkcs7 -print_certs -in noida-elk-rsyslog01.p7b -out noida-elk-rsyslog01.cer

- Place the above PEM certificate, the node private key and the CA public certificate into the
"/etc/logstash/certs/" directory.

- Change the ownership as:


 chown -R logstash:logstash /etc/logstash/certs
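
Before starting the service you can ask Metricbeat to validate its configuration and test
connectivity to the monitoring node (optional checks):

 metricbeat test config
 metricbeat test output

# "Config OK" and a successful connection to the monitoring node indicate the setup is correct.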

9) Start the Metricbeat service:

 systemctl start metricbeat

At this point the node running the Logstash instance will start sending monitoring data to the
remote monitoring node.

Log in to the Kibana UI of the remote monitoring node and navigate to “Stack Monitoring” to view
the Logstash instance metrics in the cluster.
