A Kubernetes cluster does not ship with a log collection solution of its own. Generally there are three main approaches to collecting logs:

  • Run an agent on each node that collects the logs
  • Include a sidecar container in the Pod that collects the application logs
  • Push log messages directly from the application to the collection backend

This article uses the following pipeline:

fluentd-->kafka-->logstash-->elasticsearch-->kibana

Setting up the EFK logging system

Elasticsearch is installed outside the cluster:

192.168.1.122:9200

Kafka is also installed outside the cluster:

192.168.1.122:9092
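
Before deploying anything, it is worth confirming that the cluster nodes can actually reach these external backends. A minimal check, assuming curl and nc are available on the node:

curl http://192.168.1.122:9200
# should return the Elasticsearch cluster info as JSON

nc -vz 192.168.1.122 9092
# verifies that the Kafka broker port accepts TCP connections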

Create a namespace in the Kubernetes cluster:

kubectl create namespace logging

 

Installing fluentd

The default image does not include a Kafka plugin.

Using the official image, install the plugin with fluent-gem install fluent-plugin-kafka, then commit the container as a new image (see the sketch below).
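
A rough sketch of that install-and-commit flow; the image tags below are examples only, so adjust them to the fluentd image your DaemonSet actually uses:

# start a throwaway container from the official fluentd image (tag is an example)
docker run -d --name fluentd-plugin fluent/fluentd:v1.14-1
# install the Kafka output plugin as root inside the running container
docker exec -u root fluentd-plugin fluent-gem install fluent-plugin-kafka
# commit the result as a new image, then remove the build container
docker commit fluentd-plugin fluentd-kafka:v1.14-1
docker rm -f fluentd-plugin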

Create a ConfigMap with the fluentd configuration file:

[root@k8s-master ~]# cat fluentd-configmap.yaml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  namespace: logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      localtime
      tag raw.kubernetes.*
      format json
      read_from_head true
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
  system.input.conf: |-
    # Logs from systemd-journal for interesting services.
    <source>
      @id journald-docker
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag docker
    </source>
    <source>
      @id journald-kubelet
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag kubelet
    </source>
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @type forward
    </source>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host 192.168.1.122
      port 9200
      logstash_format true
      request_timeout    30s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
[root@k8s-master ~]# 
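
Apply the ConfigMap and confirm that it exists in the logging namespace:

kubectl apply -f fluentd-configmap.yaml
kubectl -n logging get configmap fluentd-config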
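Note that output.conf above sends records straight to Elasticsearch. To route through Kafka first, as in the fluentd --> kafka --> logstash --> elasticsearch pipeline described at the top of this article, the <match **> block could instead use the kafka2 output provided by fluent-plugin-kafka. The snippet below is only an illustrative sketch; the topic name k8s-logs is an example and not taken from the original configuration:

<match **>
  @id kafka
  @type kafka2
  @log_level info
  # broker address of the external Kafka environment above
  brokers 192.168.1.122:9092
  # example topic name; create it in Kafka and point logstash at it
  default_topic k8s-logs
  <format>
    @type json
  </format>
  <buffer topic>
    @type file
    path /var/log/fluentd-buffers/kubernetes.kafka.buffer
    flush_interval 5s
    chunk_limit_size 2M
  </buffer>
</match>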
