In this blog post we will learn how to install the ELK stack on Rocky Linux 8. ELK is an acronym for three popular open-source projects: Elasticsearch, Logstash, and Kibana. Also known as the Elastic Stack, it gives you the ability to aggregate logs from all your systems and applications, analyze them, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.
Let us briefly look at what each of these three components does:
- E = Elasticsearch – A distributed search and analytics engine built on Apache Lucene. Support for many languages, high performance, and schema-free JSON documents makes it an ideal choice for a wide range of log analytics and search use cases.
- L = Logstash – An open-source data ingestion tool that processes and transforms logs so that they can be indexed by Elasticsearch.
- K = Kibana – A data visualization and exploration tool that serves as the front end of the stack. It provides dashboards where we can visualize the metrics indexed by the Elasticsearch engine.
Prerequisites
- Rocky Linux 8 server – you can follow our previous blog post for the installation of Rocky Linux
- OpenJDK
- 2-core CPU
- 4 GB RAM
- Root-level access to the Linux host
Installation of Elasticsearch
Installing the OpenJDK packages
Start by installing the Java Development Kit on the Rocky Linux host:
sudo dnf -y install java-openjdk-devel java-openjdk
Install Elasticsearch 7.x
Let us configure the required repository
cat <<EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
Import the GPG Key
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Update the system cache
sudo dnf clean all
sudo dnf makecache
Now we are ready to install Elasticsearch:
[admin@elk ~]$ sudo dnf -y install elasticsearch
Configure the Elasticsearch
Start by modifying the cluster name for Elasticsearch, then uncomment the network.host option.
Make sure you set the IP of the Elasticsearch host for port binding if you intend to build a multi-node cluster.
$ sudo vim /etc/elasticsearch/elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: my-elk-cluster
# ---------------------------------- Network -----------------------------------
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
network.host: 192.168.255.125
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#http.port: 9200
# For more information, consult the network module documentation.
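One addition worth making here, assuming a single-node setup: once network.host points at a non-loopback address, Elasticsearch 7.x applies its production bootstrap checks and will refuse to start unless discovery is configured. Appending the following line to the same elasticsearch.yml satisfies that check; for a real multi-node cluster you would configure discovery.seed_hosts and cluster.initial_master_nodes instead.

```yaml
# Single-node assumption: tells Elasticsearch not to look for other nodes.
discovery.type: single-node
```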
Start and enable the services
sudo systemctl enable --now elasticsearch.service
You can also verify that the service started successfully:
[vic@rocky ~]$ systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2023-12-12 21:38:27 EAT; 7s ago
Docs: https://www.elastic.co
Main PID: 52063 (java)
Tasks: 66 (limit: 6001)
Memory: 642.5M
CGroup: /system.slice/elasticsearch.service
├─52063 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encodi>
└─52228 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
Dec 12 21:37:59 rocky systemd[1]: Starting Elasticsearch...
Dec 12 21:38:27 rocky systemd[1]: Started Elasticsearch.
Verify that the service is up using cURL or an HTTP GET request:
[admin@elk ~]$ curl http://127.0.0.1:9200
[admin@elk ~]$ curl -X GET "localhost:9200"
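If you prefer a scripted check, the sketch below pulls the status field out of the _cluster/health response. The helper name es_health_status is my own, not part of Elasticsearch; on a live node you would pipe the real curl output into it.

```shell
# Extract the "status" field (green/yellow/red) from a _cluster/health
# response read on stdin. Assumes the compact JSON the API returns by
# default (without "?pretty").
es_health_status() {
  grep -o '"status":"[a-z]*"' | cut -d'"' -f4
}

# Example with a captured response; on a live node you would run:
#   curl -s http://localhost:9200/_cluster/health | es_health_status
sample='{"cluster_name":"my-elk-cluster","status":"green","number_of_nodes":1}'
echo "$sample" | es_health_status
```

For the sample response above this prints green; anything other than green or yellow usually means the node is still starting or misconfigured.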
Install & Configure Logstash
Let us download and install it using the dnf package manager:
[admin@elk ~]$ sudo dnf install logstash -y
Edit the configuration file to add the input and output sections. We also need to point Logstash at the Elasticsearch engine, which is running on the same host on port 9200.
$ sudo vi /etc/logstash/conf.d/logstash.conf
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
}
}
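To make the index naming concrete: the %{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd} pattern resolves per event from metadata the Beats shippers attach. A rough shell illustration, with hypothetical values standing in for that metadata:

```shell
# Hypothetical metadata values, standing in for what a Filebeat event carries.
beat="filebeat"
version="7.14.1"

# Logstash's %{+YYYY.MM.dd} formats the event timestamp; date +%Y.%m.%d is
# the shell equivalent for "today".
printf '%s-%s-%s\n' "$beat" "$version" "$(date +%Y.%m.%d)"
```

So events land in daily indices such as filebeat-7.14.1-2023.12.12, which keeps retention and cleanup of old indices straightforward.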
Start and enable the logstash service
sudo systemctl enable --now logstash
Verify the status; you should see output similar to the below:
[admin@elk ~]$ sudo systemctl status logstash
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2023-12-12 21:45:35 EAT; 22s ago
Main PID: 52516 (java)
Tasks: 14 (limit: 6001)
Memory: 384.9M
CGroup: /system.slice/logstash.service
└─52516 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 >
Dec 12 21:45:35 rocky systemd[1]: Started logstash.
Dec 12 21:45:35 rocky logstash[52516]: Using bundled JDK: /usr/share/logstash/jdk
Dec 12 21:45:36 rocky logstash[52516]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Dec 12 21:45:47 rocky logstash[52516]: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/rubygems_integration.rb:200: warning: constant Gem::ConfigMap is deprecated
Install & Configure Kibana
Now, to complete the stack, the third major step is the installation of the Kibana dashboard:
$ sudo dnf -y install kibana
Configure the port binding for Kibana to use any IP or a specific IP. In this blog I will configure Kibana to listen on any IP:
$ sudo vim /etc/kibana/kibana.yml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
Start & Enable Kibana
sudo systemctl enable --now kibana
Verify the status with the below output
[admin@elk ~]$ systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2023-12-12 22:09:11 EAT; 1min 31s ago
Docs: https://www.elastic.co
Main PID: 52892 (node)
Tasks: 18 (limit: 6001)
Memory: 189.0M
CGroup: /system.slice/kibana.service
├─52892 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist --logging.dest=/var/log/kibana/kibana.log --pid.file=/run/kibana/kibana.pid
└─52904 /usr/share/kibana/node/bin/node --preserve-symlinks-main --preserve-symlinks /usr/share/kibana/src/cli/dist --logging.dest=/var/log/kibana/kibana.log --pid.file=/run/kibana/kibana.pid
Dec 12 22:09:11 rocky systemd[1]: Started Kibana.
Allow firewall access to the Kibana port
sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --reload
Shipping logs to ELK using Filebeat
Now we will ship logs to the ELK stack using Filebeat, and for that we will install the Filebeat package on the host:
sudo dnf install filebeat
Filebeat makes use of different modules to collect and ship logs from common services on the Linux system. We can list these modules and enable them as per our requirements:
sudo filebeat modules list
You can enable a module using the below command:
sudo filebeat modules enable <module>
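The list command prints an Enabled: section followed by a Disabled: section. A small sketch that filters out just the enabled module names, e.g. for use in scripts; the helper name enabled_modules is mine, not a Filebeat feature:

```shell
# Print only the module names under "Enabled:" from `filebeat modules list`
# output read on stdin. Assumes the two-section Enabled:/Disabled: layout.
enabled_modules() {
  sed -n '/^Enabled:/,/^Disabled:/p' | sed '1d;$d' | sed '/^$/d'
}

# Example with captured output; on a live host you would run:
#   sudo filebeat modules list | enabled_modules
printf 'Enabled:\nsystem\n\nDisabled:\napache\nnginx\n' | enabled_modules
```

For the sample above this prints system, matching the module we enable in the next step.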
Now we can initialize Filebeat:
[admin@elk ~]$ sudo filebeat modules enable system
Enabled system
[admin@elk ~]$ sudo filebeat setup
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
Loaded machine learning job configurations
Loaded Ingest pipelines
Run the below command to run Filebeat in the foreground and confirm that it connects to the Elasticsearch host:
[admin@elk ~]$ filebeat -e
2023-12-12T22:44:15.393+0300 INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2023-12-12T22:44:15.395+0300 INFO instance/beat.go:673 Beat ID: bfa28747-655b-4753-87d2-ef5adcb5d7ab
2023-12-12T22:44:15.402+0300 INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
....
2023-12-12T22:44:18.917+0300 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.14.1
2023-12-12T22:44:19.090+0300 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.14.1
2023-12-12T22:44:19.202+0300 INFO [index-management] idxmgmt/std.go:261 Auto ILM enable success.
2023-12-12T22:44:19.603+0300 INFO [index-management.ilm] ilm/std.go:160 ILM policy filebeat exists already.
2023-12-12T22:44:19.617+0300 INFO [index-management] idxmgmt/std.go:401 Set setup.template.name to '{filebeat-7.14.1 {now/d}-000001}' as ILM is enabled.
2023-12-12T22:44:19.703+0300 INFO template/load.go:111 Template "filebeat-7.14.1" already exists and will not be overwritten.
2023-12-12T22:44:19.703+0300 INFO [index-management] idxmgmt/std.go:297 Loaded index template.
2023-12-12T22:44:19.798+0300 INFO [index-management.ilm] ilm/std.go:121 Index Alias filebeat-7.14.1 exists already.
2023-12-12T22:44:19.811+0300 INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(http://localhost:9200)) established
Stop the foreground process with Ctrl+C, then start the Filebeat service:
sudo systemctl start filebeat
Access the ELK dashboard through Kibana
You can access it via any browser of your choice using the URL: http://server-IP:5601
You can now visualize system logs for the modules enabled using the Filebeat.
For more information you can also refer to the official ELK guide.