How To Set Up ELK Stack v7.15 On Ubuntu

In this tutorial I will show you how to install and configure the ELK stack v7.15 (the latest version at the time of writing) on Ubuntu, which will be used as a centralized log management system.

ELK stands for Elasticsearch, Logstash and Kibana.

Before proceeding further, refer to this article to create an EC2 instance using the AWS Console.

ELASTICSEARCH

Elasticsearch is an open-source, RESTful search engine built on top of Apache Lucene and released under an Apache license. It is Java-based and can search and index documents in diverse formats.

Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files.

Data is stored, queried and retrieved as JSON documents.

LOGSTASH

Logstash is a tool based on the pipes-and-filters pattern for gathering, processing and forwarding logs and events. It helps in centralizing logs and events from different sources and analyzing them in real time.

Logstash is written in JRuby and runs on the JVM.

Logstash can collect data from different sources and send it to multiple destinations.

KIBANA

Kibana is a visualization tool that reads the data indexed in Elasticsearch and displays it to the user in the form of line graphs, bar graphs and pie charts.

The flow of ELK stack is shown in the image below.

The input data source can be application or Nginx logs, which are collected by Filebeat and sent to Logstash.

Logstash then parses and transforms the data and sends it to Elasticsearch as an index.

The indices stored in Elasticsearch are used by Kibana, which presents them to the user in the form of bar charts, graphs and so on.
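This flow can be summarized as a minimal Logstash pipeline sketch (illustrative only; the actual configuration files are created step by step later in this tutorial):

```
input  { beats { port => 5443 } }                          # Filebeat ships logs here
filter { grok { ... } }                                    # parse and transform events
output { elasticsearch { hosts => ["localhost:9200"] } }   # indexed, then visualized in Kibana
```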

INSTALLING ELASTICSEARCH:

In this setup we will install and configure Elasticsearch on the Ubuntu server.

Before installing Elasticsearch, we need to install Java on the system, as Elasticsearch requires it.

To install Java 11, use the below command.

sudo apt install openjdk-11-jdk openjdk-11-jre

Now that we have installed Java on the system, verify the version and installation of Java using the below command.

java --version
openjdk 11.0.11 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)

Then we need to install dependencies such as the ‘software-properties-common’ and ‘apt-transport-https’ packages.

sudo apt install software-properties-common apt-transport-https -y

After installing Java and the dependent packages on the system, we will install Elasticsearch following the below procedure.

Add the Elastic GPG key and the Elastic repository to the system.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

Once the repository is added to the system, update the package lists and install the elasticsearch package.

sudo apt-get update
sudo apt-get install elasticsearch -y

Once the installation is completed, go to the /etc/elasticsearch folder and configure Elasticsearch in elasticsearch.yml.

Uncomment network.host and change its value to localhost. By doing this, Elasticsearch will listen only on localhost.

Also uncomment http.port, the port on which Elasticsearch will run.

Hence Elasticsearch, running on port 9200, will listen only on localhost.

network.host: localhost
http.port: 9200

Save the configuration and exit the file.
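If you prefer to make these changes non-interactively, sed can uncomment and set both values. The snippet below demonstrates the idea on a local sample copy of the file; in practice you would run the same sed commands with sudo against /etc/elasticsearch/elasticsearch.yml:

```shell
# Create a local sample with the default commented-out settings
printf '#network.host: 192.168.0.1\n#http.port: 9200\n' > elasticsearch.yml

# Uncomment and set network.host and http.port
sed -i 's/^#network.host:.*/network.host: localhost/' elasticsearch.yml
sed -i 's/^#http.port:.*/http.port: 9200/' elasticsearch.yml

cat elasticsearch.yml
```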

Now start the elasticsearch service and enable it to run on system boot.

sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch

To check the status of the elasticsearch , use the below command.

sudo systemctl status elasticsearch

Now the elasticsearch service should be UP and running.

To check the version of the running Elasticsearch instance, use the below command.

curl http://localhost:9200/?pretty
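You should see a JSON response similar to the following (the name, UUID and exact version values will differ on your system):

```json
{
  "name" : "elk-master",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "...",
  "version" : {
    "number" : "7.15.0",
    ...
  },
  "tagline" : "You Know, for Search"
}
```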

INSTALLING KIBANA

In this step we will install and configure Kibana.

It is recommended to install Kibana after setting up Elasticsearch, to ensure that all the required components are already in place.

As we have already added the Elastic repository, it also provides the kibana package.

To install Kibana, use the below command.

sudo apt-get install kibana

To edit the Kibana configuration file, go to the /etc/kibana folder and open the kibana.yml file.

Uncomment the following lines.

server.port: 5601

5601 is the port on which Kibana runs.

server.host: 127.0.0.1
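By default Kibana connects to Elasticsearch at http://localhost:9200, which matches our Elasticsearch configuration. If your Elasticsearch listens elsewhere, also uncomment and adjust the following line (shown here with its default value):

```
elasticsearch.hosts: ["http://localhost:9200"]
```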

Once done, save and exit.

To start the kibana service,

sudo systemctl start kibana

To enable the service to start on system boot,

sudo systemctl enable kibana

To check the status of the kibana service

sudo systemctl status kibana

We have now successfully installed and configured the Kibana dashboard, which is running on the default port 5601 and listening only on localhost (127.0.0.1).

INSTALL AND CONFIGURE NGINX FOR KIBANA

In this step we will configure the Nginx web server as a reverse proxy for the Kibana dashboard.

Let's install the Nginx package on the system.

sudo apt-get install nginx

After Nginx is installed, go to the /etc/nginx/sites-available folder and create a virtual host file named “kibana”.

cd /etc/nginx/sites-available
sudo vi kibana

Paste the below Nginx configuration into the kibana virtual host file.

server {
    listen 80;

    server_name fitdevops.in;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Replace the server_name value with your own domain name. Save and exit the file.

Next we have to set up basic authentication for the Kibana dashboard.

The htpasswd utility is provided by the apache2-utils package, so install it first, then create the password file for the elastic user.

sudo apt-get install apache2-utils
sudo htpasswd -c /etc/nginx/.kibana-user elastic

Provide the password for the elastic user and confirm it when prompted.

We should enable the virtual host configuration and test the Nginx configuration.

sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
sudo nginx -t

You should get the below response.

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Start the nginx service and enable the service to start on the system boot.

sudo systemctl start nginx
sudo systemctl enable nginx

Now we have successfully installed and configured Kibana and an Nginx reverse proxy for the Kibana dashboard.

INSTALLING LOGSTASH

We will install and configure Logstash, which will collect the logs from Filebeat, transform them and send them to Elasticsearch.

Install logstash using the below command.

sudo apt-get install logstash

Then we will generate an SSL certificate and key to secure the log transfer from Filebeat to Logstash.

By doing so, the data will be secure in transit.

Once the logstash package is installed on the system, go to the /etc/logstash folder and create a folder named ssl.

cd /etc/logstash
sudo mkdir ssl

Before generating the SSL certificate, edit the /etc/hosts file and add the below entry (replace the IP with your server's private IP address).

192.168.10.3                        elk-master

We will generate the SSL certificate using the openssl command.

sudo openssl req -subj '/CN=elk-master/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt

The SSL certificate will be created and stored in the ssl directory.
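To confirm the certificate was generated correctly, you can inspect its subject and validity period with openssl. The snippet below repeats the generation step against a local ./ssl directory for illustration and then inspects the result; on the server you would point the second command at /etc/logstash/ssl/logstash-forwarder.crt:

```shell
# Generate the key pair into a local ./ssl directory (same command as above)
mkdir -p ssl
openssl req -subj '/CN=elk-master/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt

# Inspect the subject (CN) and validity window of the certificate
openssl x509 -in ssl/logstash-forwarder.crt -noout -subject -dates
```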

We have to create new configuration files for receiving input from Filebeat, processing syslog messages and then sending them to Elasticsearch.

Go to /etc/logstash/conf.d folder and then create the below mentioned files.

The first file filebeat-input.conf file is to receive inputs from the incoming sources such as filebeat.

sudo vi filebeat-input.conf

input {
  beats {
    port => 5443
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}

Save and close the file.
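On the client side, Filebeat would point at this Beats input with matching SSL settings. Below is a sketch of the relevant filebeat.yml fragment (Filebeat runs on the machines shipping logs and its full setup is outside the scope of this article; the elk-master hostname matches the /etc/hosts entry added earlier, and logstash-forwarder.crt must be copied to the client and referenced at its path there):

```
output.logstash:
  hosts: ["elk-master:5443"]
  ssl.certificate_authorities: ["/etc/logstash/ssl/logstash-forwarder.crt"]
```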

Then, to parse and process the syslog messages, we will use a grok expression, which is a pattern-matching filter.

sudo vi syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Save and exit the file.
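As an illustration of what this filter does, a typical syslog line such as the one below would be split by the grok pattern into separate fields (the field values shown are an assumption based on the pattern, not captured output):

```
# Input line:
Oct  1 12:34:56 elk-master sshd[1234]: Accepted publickey for ubuntu from 10.0.0.5

# Resulting fields (approximately):
syslog_timestamp => "Oct  1 12:34:56"
syslog_hostname  => "elk-master"
syslog_program   => "sshd"
syslog_pid       => "1234"
syslog_message   => "Accepted publickey for ubuntu from 10.0.0.5"
```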

Finally, for the Elasticsearch output, we will create a configuration file named output-elasticsearch.conf.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}

Save and close the file.

Once all the required configurations are in place, we will start the logstash service.

sudo systemctl start logstash
sudo systemctl enable logstash

Now the Logstash service is running, listening on port 5443 as configured in the Beats input.

The ELK stack setup is completed.

CONCLUSION

We have installed and configured a centralized log management system using the latest version of Elasticsearch, Logstash and Kibana.

Hope you find it helpful.

Thanks for reading this article. Please do check out my other publications.

Monitor AWS Resources using Cloudwatch Alarms

Attach Elastic IP Address to EC2 Instance

Scan EC2 Instances using ClamAV in AWS

Create AMI from existing EC2 Instances

How to Create and Restore RDS snapshots