K8S + EFK Simple Logging
Let’s say you are running all your workloads on Kubernetes, across different languages and frameworks, and you don’t want to maintain a separate logging configuration for each one. In this article, I’ll show you how to achieve simple, uniform logging with Elasticsearch, Fluent Bit, Kibana and Kubernetes.
Why not Logstash
The answer is simple: Logstash is heavy on resources. Fluent Bit does the same log-forwarding job with a far smaller CPU and memory footprint.
Requirements
💡
If you already have Elasticsearch + Kibana, you may want to use it. If so, please skip to the Install Fluent Bit section.
Elasticsearch + Kibana Setup
I’m going to install Elasticsearch with an AWS Marketplace AMI in a private VPC, so the instance is not exposed to the public internet and I don’t have to deal with extra security concerns.
aws ec2 run-instances \
--image-id ami-0d1ddd83282187d18 \
--instance-type t3a.large \
--count 1 \
--subnet-id subnet-08fc749671b2d077c \
--security-group-ids sg-0b0384b66d7d692f9 \
--key-name KeyPair
💡
- ami-0d1ddd83282187d18 : ELK packaged by Bitnami (8.6.2-4-r05) on Debian 11. AMI IDs vary by region, so look up the current one in the AWS Marketplace.
- t3a.large : 2 vCPU, 8 GB memory
- subnet-08fc749671b2d077c : my private subnet
- sg-0b0384b66d7d692f9 : one of my pre-defined security groups
- KeyPair : my previously created key pair name
- You can pass more parameters if you wish; see the aws ec2 run-instances documentation for details.
Don’t forget to write down your private IP address and auto-generated password after successful installation.
Connect to the ELK Instance
You can connect to your instance with the previously provided KeyPair, using the default username “bitnami”.
ssh -i "my-key-pair.pem" bitnami@10.10.10.10
You can find your default ELK password in the application log, or by executing the following command:
sudo cat /home/bitnami/bitnami_credentials
If you want to change the default “user” password, execute the following lines:
sudo /opt/bitnami/apache/bin/htpasswd -c /opt/bitnami/kibana/config/password user
sudo /opt/bitnami/ctlscript.sh restart apache
If you also want to stop Logstash (we don’t need it anyway), execute the following command:
sudo /opt/bitnami/ctlscript.sh stop logstash
Connect From Different Machine
If you want to connect to your new ELK instance from a different machine or application (and we definitely do), you need to make some changes.
Edit /opt/bitnami/elasticsearch/config/elasticsearch.yml file:
- network.host: the hostname or IP address the server listens on. Set it to 0.0.0.0 to listen on every interface.
- network.publish_host: the hostname or IP address the node publishes to other nodes for communication.
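By default, Elasticsearch only listens on localhost. A minimal elasticsearch.yml change might look like the sketch below (assuming a single-node Bitnami install; the IP is a placeholder for your instance’s private address):

```yaml
# /opt/bitnami/elasticsearch/config/elasticsearch.yml
network.host: 0.0.0.0              # listen on every interface
network.publish_host: 10.10.10.10  # this node's private IP (placeholder)
```

After editing, restart Elasticsearch (on Bitnami images, something like sudo /opt/bitnami/ctlscript.sh restart elasticsearch) and verify from another machine with curl http://10.10.10.10:9200.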
Install Fluent Bit
This is the easy part; you only need to apply the following YAML manifests.
Create a namespace
kubectl create namespace logging
Create ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
Create ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs: ["get", "list", "watch"]
Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
  - kind: ServiceAccount
    name: fluent-bit
    namespace: logging
Create Configmap
💡
If you want daily indices with a custom prefix, set Logstash_Format to On and change the Logstash_Prefix value. With Logstash_Format Off (as below), logs go to the default fluent-bit index.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
  output-elasticsearch.conf: |
    [OUTPUT]
        Name                es
        Match               *
        Host                ${FLUENT_ELASTICSEARCH_HOST}
        Port                ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Prefix     my-k8s-cluster
        Logstash_Format     Off
        Retry_Limit         False
        Type                _doc
        Time_Key            @timestamp
        Suppress_Type_Name  On
        Replace_Dots        On
  parsers.conf: |
    [PARSER]
        Name        apache
        Format      regex
        Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        apache2
        Format      regex
        Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        apache_error
        Format      regex
        Regex       ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

    [PARSER]
        Name        nginx
        Format      regex
        Regex       ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        json
        Format      json
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On

    [PARSER]
        # http://rubular.com/r/tjUt3Awgg4
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S
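Before shipping a custom parser, it helps to sanity-check its regex locally. Here is a minimal sketch in Python exercising the cri parser’s pattern against a sample containerd log line (note that Fluent Bit’s named groups `(?<name>...)` become `(?P<name>...)` in Python; the sample line is made up for illustration):

```python
import re

# The "cri" parser regex from parsers.conf, translated to Python named-group syntax
CRI_REGEX = re.compile(
    r"^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) (?P<message>.*)$"
)

# A sample containerd/CRI-O formatted log line (hypothetical)
line = "2023-03-01T12:00:00.123456789Z stdout F hello from my app"

match = CRI_REGEX.match(line)
fields = match.groupdict()
print(fields["stream"])   # stdout
print(fields["message"])  # hello from my app
```

If the match fails or the fields come out wrong, fix the regex before it ever reaches the cluster; a broken parser silently drops structure from your logs.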
Create DaemonSet
💡
Please change FLUENT_ELASTICSEARCH_HOST and FLUENT_ELASTICSEARCH_PORT accordingly.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluent-bit-logging
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:1.9.10
          imagePullPolicy: Always
          ports:
            - containerPort: 2020
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "10.10.10.10"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        - operator: Exists
          effect: NoExecute
        - operator: Exists
          effect: NoSchedule
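Putting it all together: assuming you saved the manifests above to files with the names below (the filenames are my own choice, not prescribed), you can apply and verify them like this:

```shell
kubectl apply -f fluent-bit-service-account.yaml
kubectl apply -f fluent-bit-cluster-role.yaml
kubectl apply -f fluent-bit-cluster-role-binding.yaml
kubectl apply -f fluent-bit-configmap.yaml
kubectl apply -f fluent-bit-daemonset.yaml

# One Fluent Bit pod should be scheduled per node
kubectl -n logging get daemonset fluent-bit
kubectl -n logging get pods -o wide
```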
When you look at the DaemonSets, you should see one entry with N pods, where N is the number of nodes in your cluster.
In Kibana, you should see a new index named fluent-bit.
Result
With Fluent Bit we capture all container logs and ship them to Elasticsearch, without making adjustments for each kind of application.
Thanks to Fluent Bit, all logs are collected with minimal resource consumption.
See you next week.
As always, you can find related files in my GitHub Repository