Monitoring Key Flume Metrics with Zabbix Server 4.0

                                          Author: Yin Zhengjie (尹正杰)

Copyright notice: this is an original work. Reproduction without permission is prohibited and will be pursued legally.

 

   Flume ships with built-in HTTP and Ganglia reporting, and it can also expose metrics over JMX, so any monitoring system that can consume JMX should be able to monitor Flume indirectly. Some people suggest patching Flume's source code to add native Zabbix support, but that requires operations staff with solid Java skills, and a botched patch causes more problems than it solves. Ganglia is certainly convenient, but our company's monitoring system is Zabbix (some teams use Open-Falcon instead). We recommend standardizing on a single monitoring system, although keeping a second one around as a backup is also reasonable.

  This post walks you step by step through monitoring Flume's key metrics with Zabbix Server, using nothing more than Flume's built-in HTTP interface. I will not cover deploying Zabbix or Flume here; I assume your Zabbix server and your flume-ng process are already up and running. Let's get to work.
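Flume's HTTP metrics reporting is switched on with two JVM system properties at agent startup; no code changes are needed. A minimal sketch of the startup command, assuming the config file path used later in this post and an arbitrarily chosen report port of 34545 (pick any free port on your host):

```shell
# Start the agent with Flume's built-in JSON metrics server enabled.
# --name must match the component prefix in the properties file ("agent" here).
flume-ng agent \
  --conf /soft/flume/conf \
  --conf-file /soft/flume/conf/job/flume-conf-p2p01.properties \
  --name agent \
  -Dflume.monitoring.type=http \
  -Dflume.monitoring.port=34545
```

Once the agent is up, per-component counters are served as JSON at http://&lt;agent-host&gt;:34545/metrics, which is the endpoint Zabbix will poll.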

 

1. Enabling Flume's Built-in HTTP Monitoring

1>. Review the Flume agent's job configuration file

[root@flume112 ~]# cat /soft/flume/conf/job/flume-conf-p2p01.properties 
#Define component names (note: the channel is also named kafkaSource here)
agent.sources = kafkaSource
agent.channels = kafkaSource
agent.sinks = hdfsSink

#Bind the source and sink to the channel
agent.sources.kafkaSource.channels = kafkaSource
agent.sinks.hdfsSink.channel = kafkaSource


#Use a Kafka source
agent.sources.kafkaSource.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.kafkaSource.kafka.bootstrap.servers = 10.1.2.114:9092,10.1.2.115:9092,10.1.2.116:9092,10.1.2.117:9092,10.1.2.118:9092
agent.sources.kafkaSource.topic = account-check
agent.sources.kafkaSource.kafka.consumer.group.id  = 20190507-account-check
agent.sources.kafkaSource.kafka.consumer.max.partition.fetch.bytes = 20485760
agent.sources.kafkaSource.kafka.consumer.heartbeat.interval.ms = 120000
agent.sources.kafkaSource.kafka.consumer.rebalance.timeout.ms = 300000
agent.sources.kafkaSource.kafka.consumer.fetch.min.bytes = 10000
agent.sources.kafkaSource.kafka.consumer.session.timeout.ms = 180000
agent.sources.kafkaSource.kafka.consumer.request.timeout.ms = 300000
agent.sources.kafkaSource.interceptors = i1
agent.sources.kafkaSource.interceptors.i1.useIP = true
agent.sources.kafkaSource.interceptors.i1.type = host

#Use a Kafka channel
agent.channels.kafkaSource.type = org.apache.flume.channel.kafka.KafkaChannel
agent.channels.kafkaSource.kafka.bootstrap.servers = 10.1.2.114:9092,10.1.2.115:9092,10.1.2.116:9092,10.1.2.117:9092,10.1.2.118:9092
agent.channels.kafkaSource.kafka.topic = channel.account-check-20190507-01
agent.channels.kafkaSource.kafka.consumer.group.id = 20190507-channel.account-check-20190507-01
agent.channels.kafkaSource.kafka.consumer.heartbeat.interval.ms = 120000
agent.channels.kafkaSource.kafka.consumer.rebalance.timeout.ms = 300000
agent.channels.kafkaSource.kafka.consumer.fetch.min.bytes = 10000
agent.channels.kafkaSource.kafka.consumer.session.timeout.ms = 180000
agent.channels.kafkaSource.kafka.consumer.request.timeout.ms = 300000


#Use an HDFS sink
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.path = hdfs://hdfs-ha/user/p2p_kafka/%Y%m%d
agent.sinks.hdfsSink.hdfs.filePrefix = 10-1-2-112_p2p01_%Y%m%d_%H
agent.sinks.hdfsSink.hdfs.fileSuffix = .txt
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent.sinks.hdfsSink.hdfs.writeFormat = Text
agent.sinks.hdfsSink.hdfs.fileType=DataStream
agent.sinks.hdfsSink.hdfs.rollCount = 0
agent.sinks.hdfsSink.hdfs.rollSize = 0
agent.sinks.hdfsSink.hdfs.rollInterval = 300
agent.sinks.hdfsSink.hdfs.batchSize = 1000
agent.sinks.hdfsSink.hdfs.threadsPoolSize = 25
agent.sinks.hdfsSink.hdfs.idleTimeout = 0
agent.sinks.hdfsSink.hdfs.minBlockReplicas = 1
agent.sinks.hdfsSink.hdfs.callTimeout=100000
agent.sinks.hdfsSink.hdfs.request-timeout=100000
agent.sinks.hdfsSink.hdfs.connect-timeout=80000
[root@flume112 ~]# 
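With HTTP monitoring enabled, the /metrics endpoint returns a flat JSON document of per-component counters that a Zabbix item can poll. A minimal parsing sketch: the component keys follow Flume's `TYPE.name` convention and match the config above, and the counter names (EventReceivedCount, ChannelFillPercentage, EventDrainSuccessCount, etc.) are Flume's standard monitoring counters, but the numbers in the sample payload are invented for illustration.

```python
import json

# Example payload in the shape returned by Flume's /metrics endpoint.
# Flume reports every counter value as a JSON string, hence the float() cast below.
sample = '''
{
  "SOURCE.kafkaSource":  {"EventReceivedCount": "124956", "EventAcceptedCount": "124956"},
  "CHANNEL.kafkaSource": {"ChannelFillPercentage": "0.0", "EventPutSuccessCount": "124956",
                          "EventTakeSuccessCount": "124000"},
  "SINK.hdfsSink":       {"EventDrainAttemptCount": "124000", "EventDrainSuccessCount": "124000",
                          "ConnectionFailedCount": "0"}
}
'''

def get_metric(payload, component, counter):
    """Extract one counter from a Flume /metrics JSON payload as a float."""
    return float(json.loads(payload)[component][counter])

if __name__ == "__main__":
    # A Zabbix UserParameter script would fetch the payload with curl/urllib
    # and print exactly one number for one item key, e.g.:
    print(get_metric(sample, "SINK.hdfsSink", "EventDrainSuccessCount"))
```

In a real setup the payload would come from http://&lt;agent-host&gt;:&lt;monitoring-port&gt;/metrics rather than a literal string; the point is that one Zabbix item maps to one (component, counter) pair.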
