ELK Cookbook
ENVIRONMENT
This document describes the steps required to build an Elastic cluster with the following architecture:
- Elasticsearch v7.4.2
- Kibana 7.4.2
- Logstash 7.4.2
- RHEL 7.6
Following is the architectural diagram for the ELK cluster setup in each DC:
CONFIGURATION STEPS
We have 3 data nodes and 2 coordinator nodes; perform the following steps on all 5 nodes:
1) Install Elasticsearch
Create the Elastic repository definition on each node:
pwd
/etc/yum.repos.d
cat elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Then install the package:
yum install -y elasticsearch-7.4.2
In /etc/sysconfig/elasticsearch:
ES_JAVA_OPTS="-Xms30g -Xmx30g"
MAX_LOCKED_MEMORY=unlimited
Create an override.conf file using the "systemctl edit elasticsearch" command with this content:
[Service]
LimitMEMLOCK=infinity
We have three data/master-eligible nodes and two coordinator nodes; their configurations differ
slightly. On the data/master nodes, put the following configuration in "/etc/elasticsearch/elasticsearch.yml".
# sample configuration file from one of the data master nodes “noida-elk01-prod”:
cluster.name: noida-elk
node.name: noida-elk01-prod
network.host: _site_
http.port: 9200
bootstrap.memory_lock: true
node.master: true
node.data: true
discovery.seed_hosts:
  - noida-elk01-prod
  - noida-elk02-prod
  - noida-elk03-prod
cluster.initial_master_nodes:
  - noida-elk01-prod
  - noida-elk02-prod
  - noida-elk03-prod
path:
  logs: /var/log/elasticsearch
  data: /data/elasticsearch
On the other two nodes, which will act as coordinator nodes, put the following
configuration in the “/etc/elasticsearch/elasticsearch.yml” file.
# sample configuration file from one of the coordinator nodes “noida-elk-cod01”:
cluster.name: noida-elk
node.name: noida-elk-cod01
network.host: _site_
http.port: 9200
bootstrap.memory_lock: true
node.master: false
node.voting_only: false
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
discovery.seed_hosts:
  - noida-elk01-prod
  - noida-elk02-prod
  - noida-elk03-prod
cluster.initial_master_nodes:
  - noida-elk01-prod
  - noida-elk02-prod
  - noida-elk03-prod
path:
  logs: /var/log/elasticsearch
  data: /data/elasticsearch
Once the service starts without errors, you can check the cluster status by running the following
commands from any of the nodes:
curl 'http://noida-elk01-prod:9200/_cluster/health?pretty'
{
"cluster_name" : "noida-elk",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 5,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
# The elected master can be checked via the _cat/master API:
curl 'http://noida-elk01-prod:9200/_cat/master?v'
id                     host          ip            node
PsERiX8-SMWT138-gg8p3w 172.23.48.211 172.23.48.211 noida-elk02-prod
curl http://noida-elk01-prod:9200/_nodes?filter_path=**.mlockall\&pretty
{
"nodes" : {
"EapWqMy1Sk6AgzDH7dGQWg" : {
"process" : {
"mlockall" : true
}
},
"PsERiX8-SMWT138-gg8p3w" : {
"process" : {
"mlockall" : true
}
},
"GvjDYPYbTjWfwB2V-REljQ" : {
"process" : {
"mlockall" : true
}
},
"gzKspovHTkuKHK7TPoz28A" : {
"process" : {
"mlockall" : true
}
},
"f1va0pExQoe8-1uO64JBrg" : {
"process" : {
"mlockall" : true
}
}
}
}
curl 'http://noida-elk-cod02:9200/'
{
"name" : "noida-elk-cod02",
"cluster_name" : "noida-elk",
"cluster_uuid" : "9Exjga50TGGvgMvwHYKMYg",
"version" : {
"number" : "7.4.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
"build_date" : "2019-10-28T20:40:44.881551Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
By this time, you should have your Elastic cluster up and running on the "Basic" license.
Once the cluster has formed successfully, the following setting can be removed from every node's
elasticsearch.yml, as it is only needed when bootstrapping the cluster for the first time:
cluster.initial_master_nodes:
  - noida-elk01-prod
  - noida-elk02-prod
  - noida-elk03-prod
2) Enable the Elasticsearch service to start on boot on all the nodes (data + coordinator):
systemctl enable elasticsearch
The CSRs for TLS are generated with the "elasticsearch-certutil" tool. From the tool's description
of its 'csr' mode:
The 'csr' mode generates certificate signing requests that can be sent to
a trusted certificate authority.
* By default, this generates a single CSR for a single instance.
* You can use the '-multiple' option to generate CSRs for multiple
instances, each with their own private key.
* The '-in' option allows for the CSR generation to be automated
by describing the details of each instance in a YAML file
* An instance is any piece of the Elastic Stack that requires an SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
The 'csr' mode produces a single zip file which contains the certificate
signing requests and private keys for each instance.
* Each certificate signing request is provided as a standard PEM encoding of a PKCS#10 CSR.
* Each key is provided as a PEM encoding of an RSA private key
This file should be properly secured as it contains the private keys for all
instances.
After unzipping the file, there will be a directory for each instance containing
the certificate signing request and the private key. Provide the certificate
signing requests to your certificate authority. Once you have received the
signed certificate, copy the signed certificate, key, and CA certificate to the
configuration directory of the Elastic product that they will be used for and
follow the SSL configuration instructions in the product guide.
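As a concrete sketch of the '-in' option described above: an instances file covering two of the nodes from this document might look like the following (the exact schema and the certutil invocation should be verified against the elasticsearch-certutil documentation for your version).

```yaml
# instances.yml - one entry per node that needs a certificate
instances:
  - name: "noida-elk01-prod"
    dns:
      - "noida-elk01-prod"
      - "noida-elk01-prod.cadence.com"
  - name: "noida-elk-cod01"
    dns:
      - "noida-elk-cod01"
      - "noida-elk-cod01.cadence.com"
```

This file would then be passed to the tool, e.g. "/usr/share/elasticsearch/bin/elasticsearch-certutil csr --in instances.yml", producing the zip archive of CSRs and private keys that is unzipped in the next step.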
2) Unzip the archive "/usr/share/elasticsearch/<node name>.zip" and send the generated CSRs from
all the nodes (data + coordinator) to the internal CA for signing.
3) For now, the Cadence internal CA signs certificates in PKCS#7 format, so we need to fetch the signed
certificates and convert each of them into PEM format using the following commands.
e.g., if you received a DER-formatted PKCS#7 certificate, convert it to PEM as follows:
openssl pkcs7 -inform der -in noida-elk01-prod.p7b -out noida-elk01-prod_temp.cer
openssl pkcs7 -print_certs -in noida-elk01-prod_temp.cer -out noida-elk01-prod.cer
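To sanity-check the two conversion commands without waiting on the CA, you can run them against a throwaway certificate. Everything below uses hypothetical demo filenames and only exercises the PKCS#7-to-PEM path.

```shell
# Create a throwaway self-signed certificate (stand-in for a CA-signed one)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-node" -keyout demo.key -out demo.pem
# Wrap it in a DER-encoded PKCS#7 bundle, as the internal CA would return it
openssl crl2pkcs7 -nocrl -certfile demo.pem -outform der -out demo.p7b
# The two conversion steps from the cookbook:
openssl pkcs7 -inform der -in demo.p7b -out demo_temp.cer   # DER PKCS#7 -> PEM PKCS#7
openssl pkcs7 -print_certs -in demo_temp.cer -out demo.cer  # PEM PKCS#7 -> plain PEM cert
# demo.cer now contains a single certificate block usable by Elasticsearch
grep -c "BEGIN CERTIFICATE" demo.cer
```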
4) On each node, place the certificates under /etc/elasticsearch/certs:
cd /etc/elasticsearch
mkdir certs
cd certs
# Copy the node's CA-signed certificate, the node's private key, and the root CA certificate into this folder.
# The node's private key can be found in the key-pair archive that was saved on each node when it was
generated.
# You may also receive the root CA certificate in PKCS#7 format, in which case it must likewise be
converted to PEM.
On any of the Elasticsearch nodes you should now see three such files:
ls certs
5) Now that the relevant certificates are placed locally on each node, we proceed with the
required configuration in elasticsearch.yml.
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/noida-elk01-prod.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/noida-elk01-prod.cer
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]
Note: the key and certificate parameters must point to the appropriate paths on each Elastic node.
6) We also want to enable TLS on the HTTP layer for Elasticsearch. On all the nodes (data + coordinator),
edit "/etc/elasticsearch/elasticsearch.yml" to include the following configuration:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/noida-elk01-prod.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/noida-elk01-prod.cer
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]
Note:
- The key and certificate parameters must point to the appropriate paths on each Elastic node.
- These paths are the same ones we used above in Step 5, as we are using the same
certificates.
By now, the Elasticsearch cluster nodes are communicating over SSL, and TLS is enabled
over HTTP.
# To verify TLS over HTTP, run a plain-HTTP query from any Elastic node; it should fail:
curl 'http://noida-elk01-prod:9200/_cluster/health?pretty'
# Now query over HTTPS, first with curl's certificate verification disabled; this should succeed:
curl -k 'https://noida-elk01-prod:9200/_cluster/health?pretty'
# To run the curl query with certificate verification, point curl at the CA certificate (this may be
required if you did not create your node's CSR with alternative DNS names for both the FQDN and the short name):
curl --cacert /etc/elasticsearch/certs/ca_cert.cer 'https://noida-elk01-prod.cadence.com:9200/_cluster/health?pretty'
Next, install Kibana (v7.4.2) on both the coordinator nodes:
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-x86_64.rpm
rpm -ivh kibana-7.4.2-x86_64.rpm
mkdir /etc/kibana/certs/
3) As we are installing Kibana on the existing coordinator nodes, we reuse the
certificates that were earlier placed for Elasticsearch on these nodes:
cd /etc/kibana/certs/
cp /etc/elasticsearch/certs/* .
chown kibana:kibana *
4) Now we proceed with the Kibana configuration. Open the file "/etc/kibana/kibana.yml" and
add the following entries:
server.port: 5602
server.ssl.redirectHttpFromPort: 5601
server.host: noida-elk-cod01.cadence.com
server.name: noida-elk-cod01
elasticsearch.hosts:
- https://noida-elk-cod01:9200
- https://noida-elk-cod02:9200
server.ssl.enabled: true
elasticsearch.ssl.verificationMode: certificate
server.ssl.key: /etc/kibana/certs/noida-elk-cod01.key
server.ssl.certificate: /etc/kibana/certs/noida-elk-cod01.cer
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca_cert.cer
NOTE:
- Here we have configured Kibana so that it accepts requests over HTTP and redirects them to
HTTPS: port 5601 accepts the HTTP request and redirects it to port 5602 over HTTPS.
- For the elasticsearch.hosts parameter, list both coordinator nodes in order of their
locality.
- Make sure to use the certificate paths applicable to each node.
7) On successful service startup, you can access the Kibana URL via a browser:
http://noida-elk-cod01:5601/
# In the browser you can see the request being redirected to port 5602 over HTTPS.
# Check by launching the Kibana URL hosted on both the coordinator nodes.
Phase 4: Enable production license
At this point we are good to enable the production license on the cluster.
- Through the Kibana UI: open the Kibana UI and browse to “Management -> License Management”,
where you can upload the production license file.
- It can also be achieved by issuing a REST query from any of the Elastic nodes:
curl -XPUT -u <user> 'https://<host>:<port>/_license' -H "Content-Type: application/json" -d @license.json
{
"license" : {
"status" : "active",
"uid" : "cd4e9431-0900-4745-bb37-1d7188d27118",
"type" : "platinum",
"issue_date" : "2019-10-31T00:00:00.000Z",
"issue_date_in_millis" : 1572480000000,
"expiry_date" : "2022-10-30T23:59:59.999Z",
"expiry_date_in_millis" : 1667174399999,
"max_nodes" : 12,
"issued_to" : "Cadence Design Systems",
"issuer" : "API",
"start_date_in_millis" : 1572480000000
}
}
Phase 5: Setup User Authentication
Elastic provides several authentication mechanisms; for our purposes we will set up the Native
and LDAP authentication realms.
1) Stop Kibana on the coordinator nodes and stop the Elasticsearch service on all the nodes:
systemctl stop kibana
systemctl stop elasticsearch
On all the nodes, add the following to "/etc/elasticsearch/elasticsearch.yml" and start the
Elasticsearch service again:
xpack.security.enabled: true
4) From any of the nodes, try running the following command from a command prompt:
curl -k 'https://noida-elk01-prod:9200/_cat/nodes?pretty'
You should get a security error, because you are trying to access a secured cluster without
any credentials.
5) In order to connect to your cluster, you need to configure users and passwords. The first
step is to create passwords for the built-in users. Run the elasticsearch-setup-passwords
script with the interactive option to configure the passwords:
From any of the nodes:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive --url https://noida-elk01-prod.cadence.com:9200
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
6) Now that all the built-in users have a password, use the elastic user to run the command
that failed earlier due to lack of credentials:
curl -k 'https://noida-elk01-prod:9200/_cat/nodes?pretty' -u elastic
Enter host password for user 'elastic':
172.23.49.85 16 10 0 0.00 0.02 0.05 - - noida-elk-cod02
172.23.48.211 2 9 0 0.00 0.01 0.05 dilm * noida-elk02-prod
172.23.48.197 5 9 0 0.01 0.02 0.05 dilm - noida-elk01-prod
172.23.48.219 3 9 0 0.00 0.01 0.05 dilm - noida-elk03-prod
172.23.49.24 8 15 0 0.00 0.01 0.05 - - noida-elk-cod01
# "dilm" lists the node roles (data, ingest, master, ml); "*" marks the elected master.
At this point, security has been activated on Elasticsearch. Now we need to set up the user and
password used by Kibana to connect to Elasticsearch.
Perform the following steps on both the coordinator nodes, as we have Kibana running on both:
1) Create a Kibana keystore to store your security settings:
/usr/share/kibana/bin/kibana-keystore create --allow-root
2) Add the username and password of the built-in kibana user to the keystore:
/usr/share/kibana/bin/kibana-keystore add elasticsearch.username --allow-root
Enter value for elasticsearch.username: ******
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password --allow-root
Enter value for elasticsearch.password: ********
Use kibana as the username, and for the password use the one set while bootstrapping
the native users with the “elasticsearch-setup-passwords” command previously.
Launch the Kibana UI and try to log in with the user “elastic” and its password. This user is
assigned the default superuser role.
Perform the following steps on all the nodes in the cluster (data + coordinator):
1) Add the authentication realms to "/etc/elasticsearch/elasticsearch.yml":
xpack.security.authc.realms:
  native.realm1:
    order: 0
  ldap.realm2:
    order: 1
    url: "ldap://itsdj-lb-noidc01.cadence.com:389"
    bind_dn: "uid=ldapbind,ou=Groups,o=cadence.com"
    bind_password: "ldapbind"
    user_search:
      base_dn: "ou=people,o=cadence.com"
      filter: "(uid={0})"
    group_search:
      base_dn: "ou=Groups,o=cadence.com"
    files:
      role_mapping: "/etc/elasticsearch/role_mapping.yml"
    unmapped_groups_as_roles: false
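The files.role_mapping setting above points at a role-mapping file on each node. A minimal sketch of "/etc/elasticsearch/role_mapping.yml", assuming the same LDAP user that appears later in this document, could be:

```yaml
# /etc/elasticsearch/role_mapping.yml
# Maps the Elasticsearch "superuser" role to an LDAP user DN (example user)
superuser:
  - "uid=vsaurabh,ou=people,o=cadence.com"
```

File-based mappings are read per node from disk; the API-based role mappings shown later in this phase achieve the same result and are stored cluster-wide.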
2) Restart the Elasticsearch service:
systemctl restart elasticsearch
Open the Kibana UI and log in using an LDAP user. You may get a "forbidden" message in the
browser, as we have not yet assigned any role to the user logging in; the authentication itself,
however, passes.
To assign the “superuser” role to an LDAP user so as to test the logins, use the following API call
from the Kibana Dev Tools (you can still log in to Kibana using the native user elastic):
PUT /_security/role_mapping/administrators
{
"roles" : [ "superuser" ],
"rules" : { "field" : {
"dn" : "uid=vsaurabh,ou=people,o=cadence.com"
} },
"enabled": true
}
This API call assigns the LDAP user “vsaurabh” the superuser role. Now try to log in using
this user and you should get through.
A few more examples of API calls for specific tasks follow:
PUT /_security/role_mapping/administrators
{
"roles" : [ "superuser" ],
"rules": {
"any": [
{
"field": {
"dn" : "uid=another_user1,ou=people,o=cadence.com"
}
},
{
"field": {
"dn" : "uid=another_user2,ou=people,o=cadence.com"
}
}
]
},
"enabled": true
}
# Delete a role mapping:
DELETE /_security/role_mapping/<role_name>
# Map a different role to the user:
PUT /_security/role_mapping/administrators
{
"roles" : [ "other_role" ],
"rules" : { "field" : {
"dn" : "uid=vsaurabh,ou=people,o=cadence.com"
} },
"enabled": true
}
You can create different roles in the Kibana UI under “Management -> Security ->
Roles”, per the level of access to be granted to different sets of users/groups.
To assign specific roles to any LDAP-based user or group, we need to use the
API calls.
Phase 6: Setting up remote Monitoring node
As per our architecture we have a single monitoring Node that will collect monitoring data from
all the ELK clusters in the environment. This node will be configured as a single-node Elastic
cluster, with its own Kibana instance running for monitoring-data visualization.
On the node which will act as the remote monitoring node, follow the steps below:
1) Follow PHASE1: Step 1 to Step 9 as described earlier in this document.
7) Follow Phase 2, Steps 1 to 4, to generate the CSR and place the certificates
and key in the required location.
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/noida-elk-mon01.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/noida-elk-mon01.cer
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/noida-elk-mon01.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/noida-elk-mon01.cer
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]
# To verify TLS over HTTP, run a plain-HTTP query from any Elastic node; it should fail:
# Now query over HTTPS, first with curl's certificate verification disabled; this should succeed:
We will now proceed with configuring Kibana for the monitoring node. We deploy Kibana
on the same Elasticsearch node which we configured as the monitoring node, so continue with the
following steps on the same node where Steps 1 to 9 were performed.
server.ssl.enabled: true
elasticsearch.ssl.verificationMode: certificate
server.ssl.key: /etc/kibana/certs/noida-elk-mon01.key
server.ssl.certificate: /etc/kibana/certs/noida-elk-mon01.cer
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca_cert.cer
13) On successful service startup, you can access the Kibana URL via a browser:
http://noida-elk-mon01:5601
18) Try running the following command; it should fail, as we have enabled security now:
curl -k 'https://noida-elk-mon01:9200/_cat/nodes?pretty'
19) In order to connect to your node, you need to configure users and passwords. The first
step is to create passwords for the built-in users. Run the elasticsearch-setup-passwords
script with the interactive option to configure the passwords:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive --url https://noida-elk-mon01:9200
At this point we have enabled security on Elasticsearch; now we need to configure Kibana with the
required authentication credentials:
21) Perform the following steps on the monitoring node, which is also running the Kibana instance:
- create a Kibana keystore to store your security settings:
/usr/share/kibana/bin/kibana-keystore create --allow-root
Created Kibana keystore in /var/lib/kibana/kibana.keystore
- add the username and password of the built-in kibana user to the keystore:
/usr/share/kibana/bin/kibana-keystore add elasticsearch.username --allow-root
Enter value for elasticsearch.username: ******
Note: Use kibana as the username, and for the password use the one set while bootstrapping the
native users with the “elasticsearch-setup-passwords” command previously.
Open the Kibana UI and you will be prompted to enter credentials; use the native user
“elastic” to log in, as it is the built-in superuser.
By now our monitoring node is completely set up as a single-node Elastic cluster. We now
proceed with the configuration required to enable this node to accept monitoring data, view
it in Kibana, and direct the remote clusters' nodes to send their monitoring data.
Add the following to "/etc/elasticsearch/elasticsearch.yml" on the monitoring node and restart the service:
xpack.monitoring.exporters.my_local_exporter:
  type: local
xpack.monitoring.collection.enabled: false
xpack.monitoring.history.duration: 3d
Now the monitoring node is ready to accept monitoring data from remote production cluster nodes.
We would proceed with configuring the production Elastic cluster nodes to be able to send their
monitoring data to the configured elastic monitoring node.
Perform the following steps on all the production Elastic cluster nodes (data + coordinator).
Add the following to "/etc/elasticsearch/elasticsearch.yml" and restart the service:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters:
  my_remote:
    type: http
    host: "https://noida-elk-mon01:9200"
    auth:
      username: remote_monitoring_user
      password: ********
    ssl:
      certificate_authorities: [ "/etc/elasticsearch/certs/ca_cert.cer" ]
28) Visit the monitoring node's Kibana GUI under “Stack Monitoring”; you will see the remote
nodes' monitoring dashboards.
Phase 7: Setting Logstash to send data to Elasticsearch over SSL
Logstash needs to be able to manage index templates, create indices, and write and delete
documents in the indices it creates. As we have enabled security on our Elastic cluster, we
need the required configuration for Logstash to be able to authenticate to Elasticsearch
and write index data.
To set up authentication credentials for Logstash:
1. Use the Management > Roles UI in Kibana or the role API to create
a logstash_writer role.
For cluster privileges, add manage_index_templates and monitor.
For indices privileges, add write, create, delete, create_index, manage, and manage_ilm.
2. Create a logstash_ingest user and assign it the logstash_writer role. You can create
users from the Management > Users UI in Kibana.
3. As we will also be using TLS encryption to send the data to the Elastic nodes,
place the CA certificate in a location on the Logstash server, e.g.:
"/etc/logstash/certs/ca_cert.cer"
5. We create a keystore for Logstash to store the password for logstash_ingest,
as we do not want it present as plain text in the Logstash pipeline configuration
files. On the node running the Logstash instance, follow the steps below:
cd /usr/share/logstash
ln -s /etc/logstash config
/usr/share/logstash/bin/logstash-keystore create
WARNING: The keystore password is not set. Please set the environment variable
`LOGSTASH_KEYSTORE_PASS`. Failure to do so will result in reduced security.
Continue without password protection on the keystore? [y/N] y
Created Logstash keystore at /etc/logstash/logstash.keystore
/usr/share/logstash/bin/logstash-keystore add ES_PWD
Enter value for ES_PWD:
Added 'es_pwd' to the Logstash keystore.
# Above, provide the password that was set in Kibana while creating the
logstash_ingest user.
/usr/share/logstash/bin/logstash-keystore list
es_pwd
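The pipeline's Elasticsearch output can then reference the keystore entry as ${ES_PWD}. A sketch of such an output block follows; the host, index name, and option values are assumptions to adapt, while the user and CA path come from the steps above.

```text
output {
  elasticsearch {
    hosts    => ["https://noida-elk01-prod:9200"]
    user     => "logstash_ingest"
    password => "${ES_PWD}"          # resolved from the Logstash keystore
    ssl      => true
    cacert   => "/etc/logstash/certs/ca_cert.cer"
    index    => "syslog-%{+YYYY.MM.dd}"
  }
}
```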
- The pipeline configuration must reference this keystore entry and the CA certificate so that
Logstash uses TLS encryption and authentication when writing to the Elastic index.
Phase 8: Send Logstash instance monitoring data to the remote Elastic monitoring
node.
We may end up with multiple Logstash instances running on multiple nodes in a given Elastic
cluster. We need a mechanism to put monitoring in place for all the Logstash instances that
we set up to send data to Elasticsearch.
The steps below describe how to configure the node running the Logstash instance to send
monitoring data to the remote monitoring node.
We will use the external collector (Metricbeat) to send monitoring data to the monitoring
node, as future major-version releases of the Elastic Stack will remove internal collection entirely.
We create a user on the monitoring node that will be used by Metricbeat to connect to the
monitoring node and write the data into the index. This is a one-time task.
Now we have in place a user that Metricbeat will use to connect to the monitoring Elastic node.
# OpenSSL request configuration sections used to generate the node's CSR:
[ dn ]
C=IN
ST=UP
L=Noida
O=Cadence
OU=IT
[email protected]
CN = noida-elk-rsyslog01.cadence.com
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = noida-elk-rsyslog01
DNS.2 = noida-elk-rsyslog01.cadence.com
- Place the above PEM certificate, the node's private key, and the CA public certificate into
the "/etc/logstash/certs/" directory.
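The [ dn ] and [ alt_names ] sections above are fragments of an OpenSSL request configuration. A complete, minimal version, together with the command that produces the key and CSR, might look like this (the surrounding [ req ] section and file names are assumptions; the DN fields mirror the ones above).

```shell
# Write a complete request config around the sections shown above
cat > openssl-csr.cnf <<'EOF'
[ req ]
default_bits       = 2048
prompt             = no
distinguished_name = dn
req_extensions     = req_ext

[ dn ]
C  = IN
ST = UP
L  = Noida
O  = Cadence
OU = IT
emailAddress = [email protected]   # as redacted in the source
CN = noida-elk-rsyslog01.cadence.com

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = noida-elk-rsyslog01
DNS.2 = noida-elk-rsyslog01.cadence.com
EOF

# Generate the private key and CSR in one step
openssl req -new -newkey rsa:2048 -nodes \
  -keyout noida-elk-rsyslog01.key \
  -out noida-elk-rsyslog01.csr \
  -config openssl-csr.cnf

# Confirm both SAN entries made it into the CSR
openssl req -in noida-elk-rsyslog01.csr -noout -text | grep "DNS:"
```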
At this point the node running the Logstash instance will start sending monitoring data to the
remote monitoring node.
Login to Kibana UI of the remote Monitoring Node and navigate to “Stack Monitoring” to view
the Logstash instance metrics in the cluster.