Sending Kubernetes logs to Kibana and S3

In the previous article, we set up a Kubernetes cluster on AWS using Terraform and EKS.

Now, we are going to configure the ELK stack inside the Kubernetes cluster to export and visualize the logs of the applications running in it.

All the resources needed to follow this tutorial are on my GitHub: https://github.com/renandeandradevaz/terraform-kubernetes

If you want to know how to configure the cluster using Terraform and EKS, please read the previous article first.

After the cluster is created, execute the following commands.

First, we need to install Elasticsearch in our cluster. The command is:

kubectl apply -f elasticsearch-ss.yaml
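
Before moving on, it is worth checking that the Elasticsearch pods are running. The resources in this tutorial live in the kube-system namespace; the StatefulSet name below is only an assumption, so use whatever name your elasticsearch-ss.yaml defines:

kubectl get pods -n kube-system

kubectl rollout status statefulset/elasticsearch-logging -n kube-system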

Then, we are going to install Logstash. Logstash is responsible for sending the container logs to Elasticsearch and S3. This file defines the names of the Kubernetes indexes as well as the bucket and directories on S3. Before executing the command, you must change the bucket name and the region name inside the file. Then execute:

kubectl apply -f logstash-deployment.yaml
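
To give an idea of what you are editing, the output section of the Logstash pipeline typically looks something like the sketch below. The exact option names and values in the repository may differ; the Elasticsearch host, bucket name and region here are placeholders:

output {
  elasticsearch {
    hosts => ["http://elasticsearch-logging:9200"]
    index => "%{app}-%{+YYYY.MM.dd}"
  }
  s3 {
    bucket => "my-log-bucket"   # replace with your bucket name
    region => "us-east-1"       # replace with your region
    prefix => "%{app}/"         # one directory per application
    time_file => 10             # rotate objects every 10 minutes
    codec => "json_lines"
  }
}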

Now, install Filebeat. Filebeat is the component responsible for collecting the container logs and forwarding them to Logstash.

kubectl apply -f filebeat-ds.yaml
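
For reference, the core of the Filebeat configuration inside filebeat-ds.yaml is roughly like the following sketch (the input type, log paths and Logstash host are assumptions; check the file in the repository for the exact values):

filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
output.logstash:
  hosts: ["logstash:5044"]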

(Optional) If you also want to have Kubernetes metrics in Kibana, you can install Metricbeat.

kubectl apply -f metricbeat-ds.yaml
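
A minimal sketch of what the Kubernetes module section of a Metricbeat DaemonSet configuration usually looks like (the kubelet endpoint and TLS settings are assumptions; the manifest in the repository may differ):

metricbeat.modules:
- module: kubernetes
  metricsets: ["node", "pod", "container"]
  hosts: ["https://${NODE_NAME}:10250"]
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.verification_mode: "none"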

Install Kibana:

kubectl apply -f kibana-deployment.yaml

Install a cron job to delete old logs from Elasticsearch. (You can configure the retention period inside this file; it is currently set to 30 days.)

kubectl apply -f curator-cronjob.yaml
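
Internally, the cron job runs Elasticsearch Curator with an action file. A minimal sketch of a 30-day delete rule (assuming the indices are filtered by creation date) looks like this:

actions:
  1:
    action: delete_indices
    description: "Delete indices older than 30 days"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 30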

Finally, we are going to install a simple application to verify that everything was configured successfully:

kubectl apply -f json-log-generator.yaml

Notice that if you run the command “kubectl logs json-log-generator-app -n kube-system”, you will see that the application writes its logs in JSON format, and that each log entry has a field named “app”. This field is very important, because Logstash uses it to decide which Elasticsearch index and which S3 directory to write to. For example, if you deploy your own application inside this cluster, log in JSON format, and set the “app” field to “my-app” or “my-awesome-app”, an index with that name will be created in Elasticsearch and a folder with that name will be created on S3.
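
For example, a log line like the one below (an illustrative line, not the generator's exact output) would be routed to an Elasticsearch index and an S3 directory named after “my-awesome-app”:

{"app": "my-awesome-app", "level": "INFO", "message": "order created", "timestamp": "2020-01-01T12:00:00Z"}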

To verify that the logs are reaching S3, check your bucket directly. (The rotation period configured in the file is 10 minutes.)
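
If you have the AWS CLI configured, you can also list the bucket from your terminal (replace my-log-bucket with the bucket you set in logstash-deployment.yaml):

aws s3 ls s3://my-log-bucket/ --recursive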

To verify that the logs are reaching Kibana, do the following.

Get the name of the Kibana pod:

kubectl get pods -n kube-system

Then open a tunnel to access Kibana on port 5601:

kubectl port-forward -n kube-system {KIBANA_LOGGING_POD_NAME_HERE} 5601:5601

Then access “localhost:5601” in your browser, configure the index pattern in Kibana as “my-app*”, and you will see the logs there.
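
If the index does not show up, you can also list the indexes directly in Elasticsearch, for example from inside the Elasticsearch pod (use the pod name shown by “kubectl get pods -n kube-system”):

kubectl exec -n kube-system {ELASTICSEARCH_POD_NAME_HERE} -- curl -s localhost:9200/_cat/indices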

In the end, remember to destroy the resources so they don't increase your AWS bill:

terraform destroy
