For those who are not familiar with an ELK stack, it is a centralized log management system made up of three components: Elasticsearch, Logstash and Kibana. In this article, we will learn how to set up such an environment inside a Docker container and then how to push the logs from an Endian Firewall to it using Filebeat.
For the container, I have selected a base image derived from Alpine Linux with Oracle Server JRE 8 added to it, something we need because the stack is based on Java. As we learned in the previous articles, we need to create a Dockerfile, which contains the instructions for building the image; in order to understand what is actually going on inside it, we will split it into sections and briefly describe each one.
In the first section, we specify the image on top of which we want to build our new image using the FROM statement, and we provide some maintainer information.
FROM joshdev/alpine-oraclejre8
MAINTAINER purplesrl, https://github.com
In the next section, we add the elk user; this is very important because since version 5 Elasticsearch can no longer run as root. We also configure the version of the stack we want to install, in our case 5.2.0; note that it is recommended to install the same version of each component in order to obtain the best compatibility. Another important modification that needs to be mentioned here is changing /dev/random to /dev/urandom in the java.security file, which provides a significant speed boost to the stack; /dev/urandom is not as safe as /dev/random, but the trade-off between security and performance is worth it here.
# adding elk user because elasticsearch can not run as root anymore
RUN addgroup -g 1000 elk
RUN adduser -G elk elk -D -h /opt

# set env variables
ENV ELK_VERSION 5.2.0
ENV ELASTIC_URL https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ELK_VERSION.tar.gz
ENV LOGSTASH_URL https://artifacts.elastic.co/downloads/logstash/logstash-$ELK_VERSION.tar.gz
ENV KIBANA_URL https://artifacts.elastic.co/downloads/kibana/kibana-$ELK_VERSION-linux-x86_64.tar.gz

RUN apk update \
    && apk upgrade \
    && apk add nano curl bash openssl libstdc++ \
    && sed -i 's/dev\/random/dev\/urandom/g' /usr/lib/jvm/jre/jre/lib/security/java.security
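To see what that sed actually does: the JRE's java.security file contains a securerandom.source entry pointing at /dev/random (the line shown below is the JRE 8 default; check your own image), and the substitution simply rewrites it to point at /dev/urandom. A quick way to sanity-check the expression on its own:

```shell
# the default entropy source line shipped in JRE 8's java.security
line='securerandom.source=file:/dev/random'

# the same substitution the Dockerfile runs against the whole file
echo "$line" | sed 's/dev\/random/dev\/urandom/g'
# prints: securerandom.source=file:/dev/urandom
```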
In the following section we switch from the root user to the elk user, and we start setting up the stack in the newly created image. Notice that we change the listening host from localhost to 0.0.0.0, so that Kibana and Elasticsearch will be reachable from outside the container if needed.
USER elk
WORKDIR /opt

RUN curl -o /tmp/elastic.tgz $ELASTIC_URL \
    && tar -xzf /tmp/elastic.tgz -C /opt \
    && curl -o /tmp/logstash.tgz $LOGSTASH_URL \
    && tar -xzf /tmp/logstash.tgz -C /opt \
    && curl -o /tmp/kibana.tgz $KIBANA_URL \
    && tar -xzf /tmp/kibana.tgz -C /opt \
    && ln -s kibana-* kibana \
    && ln -s logstash-* logstash \
    && ln -s elasticsearch-* elasticsearch \
    && echo 'network.host: 0.0.0.0' >> /opt/elasticsearch/config/elasticsearch.yml \
    && echo 'server.host: "0.0.0.0"' >> /opt/kibana/config/kibana.yml
Finally, we create the directories we need under /opt, we add the sample configuration files and the certificate, and we expose the ports the stack needs to talk to the outside world.
RUN mkdir -p /opt/ssl /opt/logstash/config/conf.d
ADD files/startup.sh /
ADD files/*.conf /opt/logstash/config/conf.d/
ADD files/logstash-forwarder* /opt/ssl/
EXPOSE 9200 9300 5601 5044
VOLUME /opt/elasticsearch/data
ENTRYPOINT "/startup.sh"
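The ENTRYPOINT points at files/startup.sh, which the repository provides but which is not reproduced in this article. As a rough illustration only, a minimal entrypoint for this layout might launch Elasticsearch and Logstash in the background and keep Kibana in the foreground so the container stays alive; the real script is in the repository and may differ:

```shell
#!/bin/bash
# hypothetical sketch of files/startup.sh -- the actual script lives in the
# docker-alpine-elk repository; paths match the symlinks created earlier

echo "Starting elasticsearch"
/opt/elasticsearch/bin/elasticsearch -d

echo "Starting logstash"
/opt/logstash/bin/logstash -f /opt/logstash/config/conf.d &

echo "Starting kibana"
exec /opt/kibana/bin/kibana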
In the next step we have to build the image, and fortunately there is a GitHub repository that can be used to accomplish this task:
purple@docker ~/elk $ git clone http://github.com/purplesrl/docker-alpine-elk
Cloning into 'docker-alpine-elk'...
remote: Counting objects: 35, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 35 (delta 7), reused 33 (delta 5), pack-reused 0
Unpacking objects: 100% (35/35), done.
Checking connectivity... done.
purple@docker ~/elk $ cd docker-alpine-elk/
purple@docker ~/elk/docker-alpine-elk $ docker build -t docker-alpine-elk .
Sending build context to Docker daemon 113.2 kB
At this point we should have a Docker image from which we can create the container that will gather the logs from the Endian Firewall. We publish ports 5601 and 5044: 5601 is where we will access Kibana, and 5044 is where the beats will send the logs. After we start the container, we use the docker logs command to see what is going on inside it.
purple@docker ~ $ mkdir data
purple@docker ~ $ chmod 777 data
purple@docker ~ $ docker run -d --name elk -ti -p 5601:5601 -p 5044:5044 -v /home/purple/alpine-elk/data:/opt/elasticsearch/data docker-alpine-elk
a1f51c206dd02a4af27bbe35bffe076ff60f6bf3a1d9360a8b5e9f945ee71e17
purple@docker ~ $ docker logs -f elk
Starting elasticsearch
[2017-02-16T15:08:35,018][INFO ][o.e.n.Node ]  initializing ...
Moving on to the Endian Appliance, first we need to download and install the custom RPM for Filebeat and then add the certificate we need in the /etc/filebeat directory:
root@beats:~ # curl -L --remote-name https://github.com/purplesrl/docker-alpine-elk/raw/master/files/filebeat-5.2.1-1.x86_64.rpm
[...]
root@beats:~ # rpm -ivh filebeat-5.2.1-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:filebeat               ########################################### [100%]
root@beats:~ # filebeat -version
filebeat version 5.2.1 (amd64), libbeat 5.2.1
root@beats:~ # cd /etc/filebeat
root@beats:/etc/filebeat # curl -L --remote-name https://raw.githubusercontent.com/purplesrl/docker-alpine-elk/master/files/logstash-forwarder.crt
In the next step we will create a new filebeat.yml file and place it in the same directory:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
    - /var/log/maillog
    - /var/log/endian/*
    - /var/log/en/*
    - /var/log/squid/access*
    - /var/log/snort/*

output.logstash:
  hosts: ["FQDN_OR_HOSTNAME:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]
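On the receiving side, the files/*.conf added to /opt/logstash/config/conf.d earlier define the Logstash pipeline that accepts these events. The exact contents are in the repository, but a minimal beats-to-Elasticsearch pipeline matching this setup (beats input on 5044 with the certificate under /opt/ssl, and an index that yields the filebeat-* pattern used later in Kibana) could look like the following sketch; the key filename is an assumption based on the logstash-forwarder* files added to the image:

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/opt/ssl/logstash-forwarder.crt"
    ssl_key => "/opt/ssl/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # e.g. filebeat-2017.02.24, so Kibana can use the filebeat-* pattern
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```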
Finally, we add a startup script in /var/efw/inithooks so that filebeat will be run in case the appliance is restarted, and we start the filebeat daemon:
root@beats:~ # echo '#!/bin/bash' > /var/efw/inithooks/start.local
root@beats:~ # echo '/etc/init.d/filebeat start' >> /var/efw/inithooks/start.local
root@beats:~ # chmod 0755 /var/efw/inithooks/start.local
root@beats:~ # /etc/init.d/filebeat start
Starting filebeat: 2017/02/24 10:12:21.949188 logp.go:219: INFO Metrics logging every 30s
2017/02/24 10:12:21.949101 beat.go:267: INFO Home path: [/var/filebeat] Config path: [/etc/filebeat] Data path: [/var/filebeat] Logs path: [/var/log/filebeat]
2017/02/24 10:12:21.949338 beat.go:177: INFO Setup Beat: filebeat; Version: 5.2.1
2017/02/24 10:12:21.949930 logstash.go:90: INFO Max Retries set to: 3
2017/02/24 10:12:21.950008 outputs.go:106: INFO Activated logstash as output plugin.
2017/02/24 10:12:21.950276 publish.go:291: INFO Publisher name: beats.endian
2017/02/24 10:12:21.950917 async.go:63: INFO Flush Interval set to: 1s
2017/02/24 10:12:21.950956 async.go:64: INFO Max Bulk Size set to: 2048
Config OK
[ OK ]
root@beats:~ #
Now that the stack is working and the logs are sent from the appliance to it, the last thing we need to do is configure Kibana to visualize the logs correctly, by changing the index pattern from logstash-* to filebeat-*:
In the end, if everything was correctly set up, we should be able to visualize the logs from the appliance in the Discover section: