Fluentd + Kibana: A Working Log Pipeline That Doesn’t Get in the Way
In setups with multiple servers, logs tend to fragment — each system doing its own thing. That’s fine until troubleshooting becomes a scavenger hunt. Fluentd and Kibana, when paired properly, offer a way to collect and view logs across systems — without turning it into a full-scale platform deployment.
It’s not magic. But it’s clean and maintainable.
What Each Component Actually Does
| Component | What It Does | Notes from Use |
| --- | --- | --- |
| Fluentd | Watches logs, tags them, pushes them to the next destination | Accepts input from files, sockets, journald, or other processes |
| Kibana | Lets users browse and search logs | Requires Elasticsearch underneath; good for exploring patterns |
Minimal Setup (no containers, single server, tested on Ubuntu)
Fluentd
Install Ruby tools:
sudo apt install ruby ruby-dev make g++
Install Fluentd and the necessary plugin:
gem install fluentd
fluent-gem install fluent-plugin-elasticsearch
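A quick sanity check that both installed cleanly (both commands come with the fluentd gem):
fluentd --version
fluent-gem list fluent-plugin-elasticsearch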
Write a config that listens to a log — say /var/log/auth.log — and forwards it:
<source>
  @type tail
  path /var/log/auth.log
  # remember the read position across restarts (any writable path works)
  pos_file /var/lib/fluentd/auth.log.pos
  tag auth
  <parse>
    @type none
  </parse>
</source>

<match **>
  @type elasticsearch
  host localhost
  port 9200
  # write daily logstash-* indices so the Kibana index pattern below matches
  logstash_format true
</match>
Start Fluentd (reading /var/log/auth.log usually requires root or membership in the adm group):
sudo fluentd -c path/to/fluent.conf
If it errors out — it will say so. That’s usually enough to debug.
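Before starting it for real, the config can also be validated with Fluentd's built-in dry run, which catches most syntax and plugin mistakes:
sudo fluentd --dry-run -c path/to/fluent.conf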
Elasticsearch + Kibana
Install them both (details skipped — Elastic provides .deb packages). Start services:
sudo systemctl start elasticsearch
sudo systemctl start kibana
Check both are alive. Default ports: 9200 (ES), 5601 (Kibana). If Elasticsearch isn’t happy, Kibana won’t do much either.
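Quick liveness checks from the command line (assuming default ports and no security enabled):
# cluster health: green or yellow is normal on a single node
curl -s 'http://localhost:9200/_cluster/health?pretty'
# Kibana status endpoint: expect HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:5601/api/status'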
Once it’s up:
– Open http://localhost:5601
– Go to Index Patterns (under Stack Management; called Data Views in newer versions) and create one (e.g. logstash-*)
– Use the Discover tab to check incoming logs
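If nothing shows up in Discover, two checks help. First, confirm that Fluentd actually created an index:
curl -s 'http://localhost:9200/_cat/indices?v'
Second, try a simple KQL filter in the Discover search bar. For auth.log, failed SSH logins land in the message field (the default field name when no parser is configured), so something like this narrows them down:
message : "Failed password"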
When This Stack Makes Sense
– Logs are coming from different apps and need to land in one place
– SSH-ing into each box to check logs isn’t working anymore
– Need to search logs from the last hour/day/week without stress
– Alerts aren’t the goal, just visibility
– Installing something massive like Graylog feels overkill
Observations from Use
Positives:
– Doesn’t lock anything in — config is yours, routing is up to you
– Lightweight by default (at least until Elasticsearch needs tuning for volume)
– Works with both structured (JSON) and raw logs
– Fluentd buffers and retries if things break downstream (see the buffer sketch after this list)
– Kibana is intuitive once you know the basic filters
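On the buffering point: a minimal sketch of a file buffer with retries, added inside the elasticsearch match block (the path and intervals here are illustrative, not required values):
<match **>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
  # keep chunks on disk and retry until Elasticsearch comes back
  <buffer>
    @type file
    path /var/lib/fluentd/buffer/es
    flush_interval 10s
    retry_max_interval 60s
    retry_forever true
  </buffer>
</match>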
Drawbacks:
– Config errors in Fluentd don’t always fail gracefully
– Elasticsearch eats memory if left unbounded (see the heap note after this list)
– Kibana visualizations take some time to click with
– Authentication and TLS have to be configured manually
– Docs can feel scattered unless following a narrow use case
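On the memory point: the standard way to bound Elasticsearch is to pin the JVM heap, e.g. via a file under /etc/elasticsearch/jvm.options.d/ with the .deb packages (exact location varies by version):
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms1g
-Xmx1g
Restart Elasticsearch afterwards for the setting to take effect.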
Summary
This stack isn’t trying to be fancy. It collects logs, and it shows logs. But when that’s all that’s needed — it’s more than enough. There’s setup time, yes. But once it runs, it stays reliable. Which, in production, counts for more than flash.
Once running, it tends to stay out of the way… until something goes wrong. And then it becomes indispensable.