How to: Set up your Elastic Stack (Ubuntu) - Part I

Created by Michelle Ritzema at 01-04-2018 15:56:50 +0200

This is part 1 of our Elastic Stack tutorials. In this tutorial, we will show you how to set up a basic installation on your Ubuntu container. No prior knowledge of the Elastic Stack is required; however, some basic Ubuntu skills are recommended!

Content

  • Create an Ubuntu container
  • Install software
    • 1. Install Java
    • 2. Install Elasticsearch
    • 3. Install Kibana
    • 4. Install Logstash
    • 5. Install Filebeat
  • Configure the stack
    • 1. Basic configuration
    • 2. Logstash data processing
    • 3. Filebeat data sending
    • 4. Index pattern


Create an Ubuntu container

First of all, create a container on www.cloudcontainers.net to run the Elastic Stack software. Because the stack requires a lot of resources to keep running, we recommend a container of at least size 'Large' (4 GB RAM, 60 GB DISK). Once your container has been created successfully, make sure it is up to date before continuing.

apt update && apt upgrade

Install software

1. Install Java

Your first step is to install Java. Add a repository that contains your preferred Java version, update your container, and install Java. We will be using Java 8, since later versions are not compatible with Logstash at the time of writing this tutorial. Make sure to also configure the Oracle JDK as the default on your container by installing the default package (or, optionally, by setting the JAVA_HOME environment variable manually).

add-apt-repository -y ppa:webupd8team/java
apt update
apt -y install oracle-java8-installer
apt -y install oracle-java8-set-default

2. Install Elasticsearch

The first Elastic Stack component to install is Elasticsearch, the search and analytics engine. Add the signing key, update the repositories and install Elasticsearch. For this tutorial we will be installing version 6.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
apt update && apt -y install elasticsearch

To make sure that the Elasticsearch service starts automatically when the system boots, we enable the service. Then we start it and check that it is running correctly.

systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
systemctl status elasticsearch.service

3. Install Kibana

Next up is Kibana, the analytics and visualization user interface. Here we follow the same steps as with Elasticsearch.

apt -y install kibana
systemctl daemon-reload
systemctl enable kibana.service
systemctl start kibana.service
systemctl status kibana.service

4. Install Logstash

Third is Logstash, the data collection engine that we use to parse logs and send data to Elasticsearch. Again, we complete the same steps as before.

apt -y install logstash
systemctl daemon-reload
systemctl enable logstash.service
systemctl start logstash.service
systemctl status logstash.service

5. Install Filebeat

Now we want to actually send data to our Elastic Stack. This is done using Beats, which come in many types for different purposes. For our starting setup, however, we will only use Filebeat to collect data from log files. Filebeat needs to be installed on each container that we want to collect data from, so the repository is added again here.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
apt update && apt -y install filebeat
systemctl daemon-reload
systemctl enable filebeat.service
systemctl restart filebeat.service
systemctl status filebeat.service

Configure the stack

We have installed everything we need to get started. Now we can start configuring the stack to our liking.

Important: Every time you change something in a configuration file, don't forget to restart the service and check that it keeps running!


1. Basic configuration

To set up a simple Elastic Stack, you won't have to change much configuration. You can find the configuration files for each of the three services in their respective directories.

Elasticsearch --> /etc/elasticsearch/elasticsearch.yml
Kibana --> /etc/kibana/kibana.yml
Logstash --> /etc/logstash/logstash.yml

Filebeat will also have its own directory.

Filebeat --> /etc/filebeat/filebeat.yml

Important: YAML (.yml) files are allergic to tabs; always use spaces instead!


The Kibana web UI will be available at your IP address on port 5601.

http://your_ip_address:5601/app/kibana

If you don't see the Kibana dashboard, you might have to update the 'kibana.yml' file. Set 'server.host' to the container's IP address (the default is localhost, which only accepts local connections; fill in your actual IP address).

vim /etc/kibana/kibana.yml
systemctl restart kibana.service
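
For example, to make Kibana listen on all interfaces (a minimal sketch; binding only your container's actual IP address is also fine), the relevant line in 'kibana.yml' would be:

kibana.yml (fragment)

server.host: "0.0.0.0"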

You might also want to allow Logstash to reload the configuration automatically each time you make a change.

vim /etc/logstash/logstash.yml
systemctl restart logstash.service
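
A minimal sketch of the relevant lines in 'logstash.yml' (the 3 second reload interval is just our suggestion, tune it as you like):

logstash.yml (fragment)

config.reload.automatic: true
config.reload.interval: 3s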

2. Logstash data processing

Now we need to configure Logstash to process our data. We have to create three files in the Logstash configuration directory. The settings below are how we like to start collecting data; you can alter them and add to them as you please.

cd /etc/logstash/conf.d/
vim input_standard.conf
vim filter_standard.conf
vim output_standard.conf
systemctl restart logstash.service

input_standard.conf

input {
  beats {
    port => 5044
    include_codec_tag => false
  }
}

filter_standard.conf

filter {
  if "json" in [tags] {
    json {
      source => "message"
    }
    mutate {
      remove_field => ["message"]
    }
  } else {
    grok {
      match => [
        "message", "%{GREEDYDATA:message}"
      ]
      overwrite => ["message"]
      add_field => {
        "type" => "filter_greedy"
      }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
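
The 'json' branch above assumes that tagged lines really are valid JSON; if parsing fails, the json filter tags the event with '_jsonparsefailure' instead. As a quick sanity check (a sketch assuming Python 3 is available on the shipping container), you can pipe a sample log line through Python's json.tool:

```shell
# Pretty-prints the line if it is valid JSON; exits non-zero otherwise.
echo '{"message":"user logged in","level":"info"}' | python3 -m json.tool
```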


output_standard.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}



3. Filebeat data sending

Now we need to configure Filebeat to send data to our stack container. Update the hosts file so that the name 'logstash' points to the correct container (we use the placeholder IP address '1.2.3.4'; fill in your actual stack container's IP address).

vim /etc/hosts
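
The added line would look something like this (replace '1.2.3.4' with your stack container's real IP address):

/etc/hosts (fragment)

1.2.3.4    logstash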

Next we update the Filebeat configuration file. For this example we only collect data from syslog; you can create additional prospectors to collect data from other files. After the restart, this container should start sending data to our Elastic Stack container.

vim /etc/filebeat/filebeat.yml
systemctl restart filebeat.service

filebeat.yml

#=========================== Filebeat prospectors =============================
filebeat.prospectors:

- tags: ["syslog"]
  input_type: log
  paths:
    - /var/log/syslog
#================================ General =====================================
name: test
tags: ["test"]
fields:
  env: development
#================================ Outputs =====================================
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["logstash:5044"]
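
If one of your applications writes JSON logs, you could add a second prospector tagged 'json', which the 'filter_standard.conf' from earlier will then parse as JSON (the log path here is a hypothetical example):

filebeat.yml (fragment)

- tags: ["json"]
  input_type: log
  paths:
    - /var/log/myapp/app.log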


4. Index pattern

We can now start visualising data in Kibana. To do this, we add an index pattern. Because of the settings in 'output_standard.conf', our index pattern will be "filebeat-*".

Select the default timestamp field and click the 'Create index pattern' button.

Conclusion

Congratulations, you've set up your very own Elastic Stack! In the next tutorial we will be diving deeper into custom configurations and visualisations.
