<div class="col-md-12 ibox-content">
=Centralized Logs - Elasticsearch, Logstash and Kibana=
{{KB|{{Unsupported}}|{{ZCS 8.8}}|{{ZCS 8.7}}|{{ZCS 8.6}}|}}


The goal is to install, on a dedicated server or VM, all the components of a Centralized Log Server, plus a powerful dashboard to build all the reports.
A newer version of this guide, using RSyslog, is available at https://github.com/Zimbra/elastic-stack
 
[[File:Zimbra-kibana-logstash-Diagram.png|800px]]
 
Logstash, Elasticsearch, and Kibana will be installed on this dedicated VM; on the Zimbra server, or servers, we will install the agent.
 
 
==Hardware and Software Requirements==
On the server, or VM, we will install a fresh Ubuntu Server 14.04 LTS.
The hardware sizing depends on how many Zimbra servers you have and how detailed the logs are. For a regular environment, the following resources are enough:
<ul>
<li><strong>OS:</strong> Ubuntu 14.04 LTS</li>
<li><strong>vRAM:</strong> 4GB</li>
<li><strong>vCPU:</strong> 2</li>
<li><strong>vDisk:</strong> 100GB (SAS 10K or even better 15K)</li>
</ul>
==Install the Centralized Log Server==
===Installing Java===
Elasticsearch and Logstash need Java 7 to work. To install it, we need to add the webupd8team PPA (which provides Oracle Java) to apt:
<pre>root@logstashkibana01:/home/oper# sudo add-apt-repository -y ppa:webupd8team/java
gpg: keyring `/tmp/tmptjs1zwc5/secring.gpg' created
gpg: keyring `/tmp/tmptjs1zwc5/pubring.gpg' created
gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmptjs1zwc5/trustdb.gpg: trustdb created
gpg: key EEA14886: public key "Launchpad VLC" imported
gpg: Total number processed: 1
gpg:              imported: 1  (RSA: 1)
OK</pre>
 
Once the Oracle PPA is added, it is time to run an apt-get update to refresh the package list:
<pre>root@logstashkibana01:/home/oper# apt-get update</pre>
 
Great! Now, install the latest stable Java 7 version:
<pre>root@logstashkibana01:/home/oper# sudo apt-get -y install oracle-java7-installer</pre>
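To confirm the installation, check which Java version is now on the PATH; the output should name the Oracle JDK 7 build that was just installed:
<pre>root@logstashkibana01:/home/oper# java -version</pre>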
 
==Installing Elasticsearch==
To install Elasticsearch, we need to add its public GPG key to apt:
<pre>root@logstashkibana01:/home/oper# wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -</pre>
 
Now, we need to add the repository with the following command:
<pre>root@logstashkibana01:/home/oper# echo 'deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list</pre>
 
And run an apt-get update to refresh the package list:
<pre>root@logstashkibana01:/home/oper# apt-get update</pre>
 
With all these previous steps done, it is time to finally install Elasticsearch:
<pre>root@logstashkibana01:/home/oper# sudo apt-get -y install elasticsearch=1.1.1</pre>
 
Once installed, we need to edit a few parameters to improve the security of our environment:
<pre>root@logstashkibana01:/home/oper# sudo vi /etc/elasticsearch/elasticsearch.yml</pre>
 
At the end of the file, add the following line to disable dynamic scripts:
<pre>script.disable_dynamic: true</pre>
 
Also, to keep the HTTP API from being reached from outside, we need to edit the network.host line so Elasticsearch listens only on localhost:
<pre>network.host: localhost</pre>
 
Once we have tuned our Elasticsearch, it is time to restart the service:
<pre>root@logstashkibana01:/home/oper# sudo service elasticsearch restart
* Starting Elasticsearch Server
...done.</pre>
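To verify that Elasticsearch is up and reachable only from the machine itself, query it on its default port 9200 (assuming curl is installed); it should answer with a short JSON document that includes the version. From any other host the same request should now fail, since we bound the service to localhost:
<pre>root@logstashkibana01:/home/oper# curl http://localhost:9200</pre>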
 
To add the Elasticsearch service to the boot sequence, run the following command:
<pre>root@logstashkibana01:/home/oper# sudo update-rc.d elasticsearch defaults 95 10
Adding system startup for /etc/init.d/elasticsearch ...
/etc/rc0.d/K10elasticsearch -&gt; ../init.d/elasticsearch
/etc/rc1.d/K10elasticsearch -&gt; ../init.d/elasticsearch
/etc/rc6.d/K10elasticsearch -&gt; ../init.d/elasticsearch
/etc/rc2.d/S95elasticsearch -&gt; ../init.d/elasticsearch
/etc/rc3.d/S95elasticsearch -&gt; ../init.d/elasticsearch
/etc/rc4.d/S95elasticsearch -&gt; ../init.d/elasticsearch
/etc/rc5.d/S95elasticsearch -&gt; ../init.d/elasticsearch</pre>
 
==Installing Kibana==
At the time of this wiki, we will install Kibana 3.1.2; please check the official Kibana website for the latest release.
 
Download the Kibana release with the next command:
<pre>root@logstashkibana01:/home/oper# wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz</pre>
 
Extract the Kibana package:
<pre>root@logstashkibana01:/home/oper# tar xvf kibana-3.1.2.tar.gz</pre>
 
Move into the Kibana directory and edit the config file:
<pre>root@logstashkibana01:/home/oper# cd kibana-3.1.2
root@logstashkibana01:/home/oper/kibana-3.1.2# vi config.js</pre>
 
Once inside the file, search for the elasticsearch: line and change the port number (default 9200) to port 80; later we will connect to the Kibana server the easy way, through HTTP port 80:
<pre>elasticsearch: "http://"+window.location.hostname+":80",</pre>
 
Also, we will use Nginx to serve our app, Kibana, so first we create the folder in the /var/www directory:
<pre>root@logstashkibana01:/home/oper/kibana-3.1.2# sudo mkdir -p /var/www/kibana3</pre>
 
Now, copy all the Kibana folder inside the new path:
<pre>root@logstashkibana01:/home/oper# sudo cp -R ~/kibana-3.1.2/* /var/www/kibana3/</pre>
 
Like I said, we will use Nginx to serve our Kibana app.
 
==Installing Nginx==
We will install nginx from the official apt repositories:
<pre>root@logstashkibana01:/home/oper# sudo apt-get install nginx</pre>
 
Kibana and Elasticsearch work in a particular way: the user's browser needs to reach Elasticsearch directly, so we need to configure Nginx to proxy requests arriving on port 80 through to Elasticsearch on port 9200.
But no worries, Kibana provides an example configuration that we can use for this.
 
We will download the Nginx configuration from GitHub into our home folder:
<pre>cd ~; wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf</pre>
 
Edit the Config file:
<pre>root@logstashkibana01:~# vi nginx.conf</pre>
 
Find the line called server_name and set our own FQDN (or localhost if we don't use a particular FQDN). We also need to set the path to our Kibana installation:
<pre>server_name FQDN;
root /var/www/kibana3;</pre>
 
Save the file and copy it over Nginx's default site configuration:
<pre>root@logstashkibana01:~# sudo cp nginx.conf /etc/nginx/sites-available/default</pre>
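Before restarting Nginx it is a good habit to validate the new configuration; nginx -t checks the syntax without touching the running service:
<pre>root@logstashkibana01:~# sudo nginx -t</pre>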
 
To allow users to log in to Kibana, we need to install apache2-utils:
<pre>root@logstashkibana01:~# sudo apt-get install apache2-utils</pre>
 
It is time to create a username for Kibana, which is also needed to save the dashboards:
<pre>root@logstashkibana01:~# sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd <span style="color: #ff0000;">admin</span>
New password:
Re-type new password:
Adding password for user admin</pre>
 
We are almost done, just restart the nginx service:
<pre>root@logstashkibana01:~# sudo service nginx restart
* Restarting nginx nginx
...done.</pre>
 
==Installing Logstash==
This is the last package we will install on the server or VM: Logstash. We will install it from the Elasticsearch package repository, whose GPG key we added earlier, so just run the following commands:
<pre>root@logstashkibana01:~# echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
deb http://packages.elasticsearch.org/logstash/1.4/debian stable main</pre>
 
Run an apt-get update to refresh the packages list:
<pre>root@logstashkibana01:~# apt-get update</pre>
 
Run the next command to install Logstash:
<pre>root@logstashkibana01:~# sudo apt-get install logstash=1.4.2-1-2c0f5a1</pre>
 
Logstash is now installed, but we need to complete the next step before continuing.
 
===Generate the SSL Certificates to use in the server/client connection ===
We will use Logstash Forwarder on the Zimbra servers to send the logs to the Centralized Log Server, and we want to do it in a secure way. We need to generate an SSL certificate and key pair; the certificate will be used by the client to verify the server's identity.
 
The first step is to create the paths where we will save the certificate and the private key:
<pre>root@logstashkibana01:~# sudo mkdir -p /etc/pki/tls/certs
root@logstashkibana01:~# sudo mkdir /etc/pki/tls/private</pre>
 
Generate the SSL certificate and the private key:
<pre>root@logstashkibana01:~# cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt</pre>
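You can inspect the certificate we just generated, for example to confirm its subject and validity dates, with openssl itself:
<pre>root@logstashkibana01:/etc/pki/tls# openssl x509 -in certs/logstash-forwarder.crt -noout -subject -dates</pre>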
 
==Configuring the Logstash Server==
All the Logstash configuration files use a JSON-style format, and the path where they live is /etc/logstash/conf.d. The configuration is organized in three sections: inputs, filters, and outputs.
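In skeleton form, every pipeline definition follows this shape (an illustrative sketch, not a working file):
<pre>input {
  # where events enter Logstash (files, network listeners, ...)
}
filter {
  # how events are parsed and enriched
}
output {
  # where processed events are sent (Elasticsearch, stdout, ...)
}</pre>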
 
Let's create a configuration file called <strong>01-lumberjack-input.conf</strong>, where we will configure our "lumberjack" input:
<pre>root@logstashkibana01:/etc/pki/tls# sudo vi /etc/logstash/conf.d/01-lumberjack-input.conf</pre>
 
And fill the file with the next configuration:
<pre>input {
  lumberjack {
    port =&gt; 5000
    type =&gt; "logs"
    ssl_certificate =&gt; "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key =&gt; "/etc/pki/tls/private/logstash-forwarder.key"
  }
}</pre>
 
Save the file. With this step we defined a "lumberjack" input that listens on TCP port 5000 and uses the SSL certificate and the private key.
 
Now it is time to create the file called <strong>10-syslog.conf</strong>, where we will add the filter for our syslog messages:
<pre>root@logstashkibana01:/etc/pki/tls# sudo vi /etc/logstash/conf.d/10-syslog.conf</pre>
 
We will add the next content to the file to define our <strong>filter</strong>:
 
<pre>filter {
  if [type] == "syslog" {
    grok {
      match =&gt; { "message" =&gt; "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field =&gt; [ "received_at", "%{@timestamp}" ]
      add_field =&gt; [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match =&gt; [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}</pre>
 
Save it. This filter looks for events of type "syslog" and parses them using grok, splitting each raw line into structured fields that are much easier to search.
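For illustration, a hypothetical syslog line like the following (not taken from a real server):
<pre>Feb  3 12:00:01 zimbra-sn-u14-01 postfix/smtpd[2153]: connect from unknown[203.0.113.5]</pre>
would come out of the grok filter split into fields roughly like these:
<pre>syslog_timestamp =&gt; "Feb  3 12:00:01"
syslog_hostname  =&gt; "zimbra-sn-u14-01"
syslog_program   =&gt; "postfix/smtpd"
syslog_pid       =&gt; "2153"
syslog_message   =&gt; "connect from unknown[203.0.113.5]"</pre>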
 
The last file that we need to create is '''30-lumberjack-output.conf''':
<pre>root@logstashkibana01:/etc/pki/tls# sudo vi /etc/logstash/conf.d/30-lumberjack-output.conf</pre>
 
And its content needs to be the following:
<pre>output {
  elasticsearch { host =&gt; localhost }
  stdout { codec =&gt; rubydebug }
}</pre>
 
Basically, this output saves the logs from Logstash into Elasticsearch.
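Before restarting, it is worth sanity-checking the three files together. Logstash has a --configtest flag for this; the path below assumes the 1.4 Debian package's default layout under /opt/logstash. If the files parse cleanly, the command exits without complaints:
<pre>root@logstashkibana01:/etc/pki/tls# /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/</pre>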
 
Restart the services:
<pre>root@logstashkibana01:/etc/pki/tls# sudo service logstash restart</pre>
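Once restarted, you can confirm that the lumberjack input is listening on TCP port 5000:
<pre>root@logstashkibana01:/etc/pki/tls# netstat -tlnp | grep 5000</pre>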
 
We have our Server or VM 100% ready.
 
==Configuring the Zimbra Servers==
Now it is time to configure the Zimbra servers to send their logs to our Centralized Log Server.
 
The next steps are for Ubuntu 14.04 LTS.
 
===Copy the SSL certificate from the Logstash Server to Zimbra Servers===
On the Logstash server, run the following command to copy the certificate to our Zimbra server:
<pre>root@logstashkibana01:/etc/pki/tls# scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp/</pre>
 
===Installing the Logstash Forwarder Package===
On the Zimbra servers, we need to create the package list entry for Logstash Forwarder:
<pre>root@zimbra-sn-u14-01:/home/oper# echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list</pre>
 
Once we've added the repository, install the Logstash Forwarder package:
<pre>root@zimbra-sn-u14-01:/home/oper# sudo apt-get update
root@zimbra-sn-u14-01:/home/oper# sudo apt-get install logstash-forwarder</pre>
 
Add the Logstash Forwarder to the boot sequence:
<pre>root@zimbra-sn-u14-01:/home/oper# cd /etc/init.d/; sudo wget https://raw.github.com/elasticsearch/logstash-forwarder/master/logstash-forwarder.init -O logstash-forwarder
root@zimbra-sn-u14-01:/home/oper# sudo chmod +x logstash-forwarder
root@zimbra-sn-u14-01:/home/oper# sudo update-rc.d logstash-forwarder defaults</pre>
 
Copy the SSL certificate to the proper path:
<pre>root@zimbra-sn-u14-01:/home/oper# sudo mkdir -p /etc/pki/tls/certs
root@zimbra-sn-u14-01:/home/oper# sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/</pre>
 
===Configuring Logstash Forwarder===
We are close to the finish. On the Zimbra server, we need to think about which logs we want to send to the Centralized Log Server.
 
Create a configuration file for Logstash Forwarder in JSON format:
<pre>root@zimbra-sn-u14-01:/home/oper# sudo vi /etc/logstash-forwarder</pre>
 
Now we will fill in the configuration file; change LOGSTASH_SERVER_IP to your own Centralized Log Server IP. In this example I will send the following logs to the Centralized Log Server: syslog, auth.log, mailbox.log, nginx.access.log, nginx.log, zimbra.log, and mail.log, but you can add whatever logs you want:
<pre>{
  "network": {
    "servers": [ "LOGSTASH_SERVER_IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log",
        "/opt/zimbra/log/mailbox.log",
        "/opt/zimbra/log/nginx.access.log",
        "/opt/zimbra/log/nginx.log",
        "/var/log/zimbra.log",
        "/var/log/mail.log"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}</pre>
 
Save the configuration file.
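Since the file is plain JSON, a quick syntax check (here using Python's built-in json.tool, assuming Python is available) catches stray commas before we restart the service; it pretty-prints the file if it is valid and prints an error otherwise:
<pre>root@zimbra-sn-u14-01:/home/oper# python -m json.tool /etc/logstash-forwarder</pre>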
 
The last step here is to restart the Logstash Forwarder service on the Zimbra server:
<pre>root@zimbra-sn-u14-01:/home/oper#  sudo service logstash-forwarder restart</pre>
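Back on the Centralized Log Server, you can confirm that events are arriving by querying Elasticsearch directly (a quick check, assuming curl is installed there):
<pre>root@logstashkibana01:~# curl 'http://localhost:9200/_search?q=type:syslog&amp;pretty'</pre>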
 
We need to repeat these steps on each Zimbra server whose logs we want centralized.
 
==Connecting to Kibana==
Now it is time to play, and to play in HTML5! Open a web browser and type the IP or FQDN of your Centralized Log Server.
The first thing we will see is an overview of Kibana.
Select the option <strong>Sample Dashboard</strong>.
 
[[File:Zimbra-logstashkibana-001.png|800px]]
 
I really like Kibana combined with a Centralized Log Server: it is especially useful because we can search inside the logs using checkboxes, filtering our way to the answer quickly.
We can also combine searches and sort by field type. Awesome!
 
[[File:Zimbra-logstashkibana-002.png|800px]]
 
Also, we can arrange the dashboard however we want, share a public URL with customers, or between the IT department and other departments, etc.
 
[[File:Zimbra-logstashkibana-003.png|800px]]
 
This is a real overview, where I can see the total logs received during a period of time.
 
[[File:Zimbra-logstashkibana-004.png|800px]]
 
Here we can see an example of how a log file looks, perfectly parsed to consume the information in an easier, more human way.
 
[[File:Zimbra-logstashkibana-005.png|800px]]
 
That's it folks!
 
This Wiki is based on [https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04 DigitalOcean ELK Tutorial]
 
=CentOS 6 Installation of ELK=
 
==YUM Configuration==
 
===Install the ElasticSearch Signing Key===
<pre>rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch</pre>
 
===Configure the repos===
 
; Create /etc/yum.repos.d/elasticsearch.repo (for the server)
 
<pre>[elasticsearch-1.4]
name=Elasticsearch repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1</pre>
 
; Create /etc/yum.repos.d/logstash.repo (for the server)
 
<pre>[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1</pre>
 
; Create /etc/yum.repos.d/logstash-forwarder.repo (for the client)
 
<pre>[logstash-forwarder]
name=logstash repository packages
baseurl=http://packages.elasticsearch.org/logstashforwarder/centos/
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1</pre>
 
==Installation==
 
===ElasticSearch===
 
<pre>yum install elasticsearch
chkconfig --add elasticsearch</pre>
 
===Logstash===
 
<pre>yum install logstash
chkconfig --add logstash</pre>
 
===Kibana===
 
<pre>mkdir /data/kibana4    # locate in your data location of choice
wget https://download.elasticsearch.org/kibana/kibana/kibana-4.0.1-linux-x64.tar.gz
tar xvfz kibana-4.0.1-linux-x64.tar.gz
cd kibana-4.0.1-linux-x64
cp -R * /data/kibana4/</pre>
 
====Kibana init.d start/stop script ====
 
Note: Kibana 4 has an integrated web server and does not require a separate one. The following script starts that integrated web server.
 
; Create /etc/init.d/kibana
 
<pre>
#!/bin/sh
#
# /etc/init.d/kibana -- startup script for kibana
# Wolfyxvf 2015-04-16; used httpd init script as template
#
### BEGIN INIT INFO
# Provides:          kibana
# Default-Start:    2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Starts kibana
# Description:      Starts kibana using daemon
### END INIT INFO
 
#configure this with wherever you unpacked kibana:
KIBANA_BIN=/data/kibana4/bin
KIBANA_LOG="/var/log/kibana.log"
 
NAME=kibana
DESC="Kibana"
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/kibana
RETVAL=0
 
if [ `id -u` -ne 0 ]; then
        echo "You need root privileges to run this script"
        exit 1
fi
 
# Source function library.
. /etc/rc.d/init.d/functions
 
if [ -f /etc/sysconfig/kibana ]; then
        . /etc/sysconfig/kibana
fi
 
start() {
        echo "Starting $DESC : "
 
        pid=`pidofproc -p $PID_FILE kibana`
        if [ -n "$pid" ] ; then
                echo "Already running."
                exit 0
        else
        # Start Daemon
                if [ ! -d "$PID_FOLDER" ] ; then
                        mkdir $PID_FOLDER
                fi
 
                daemon $DAEMON >> $KIBANA_LOG 2>&1 &
                sleep 2
                pidofproc node > $PID_FILE
                echo
                RETVAL=$?
                [ $RETVAL = 0 ] && touch $LOCK_FILE
                return $RETVAL
        fi
}
 
stop() {
        echo -n $"Stopping $DESC : "
        killproc -p $PID_FILE $DAEMON
        RETVAL=$?
        echo
        [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}
 
 
# See how we were called.
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status -p $PID_FILE $DAEMON
        RETVAL=$?
        ;;
  restart)
        stop
        start
        ;;
  *)
        echo $"Usage: $prog {start|stop|restart|status}"
        RETVAL=2
esac
 
exit $RETVAL
</pre>
 
; make the init file executable
<pre>chmod +x /etc/init.d/kibana</pre>
 
; Configure to start at boot
<pre>chkconfig --add kibana</pre>
 
; Create certificate files to put the Kibana web interface on HTTPS
<pre>cd /etc/pki/tls;  openssl req -subj '/CN=FQDN_OF_KIBANA_SERVER/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/kibana.key -out certs/kibana.crt</pre>
 
; Edit the Kibana configuration file /data/kibana4/config/kibana.yml to apply our changes
<pre>
# Change the port so the integrated web server listens on the HTTPS port:
port: 443
...
# SSL for outgoing requests from the Kibana Server (PEM formatted)
ssl_key_file: /etc/pki/tls/private/kibana.key
ssl_cert_file: /etc/pki/tls/certs/kibana.crt
</pre>
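After starting the service, you can check that the integrated web server answers over HTTPS; the -k flag tells curl to accept our self-signed certificate (a quick sanity check, assuming Kibana is bound to port 443 as configured above):
<pre>service kibana start
curl -k https://localhost/</pre>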
 
 
{{Article Footer|Zimbra Collaboration Suite 8.6, 8.5|02/11/2015}}
[[Category: Logger]]
[[Category: Logging]]
