Read this first:
==============================================
This chapter integrates ELK with Spring Boot to build a logging system.
That is, we use ELK to collect, aggregate, analyze, and search the logs of a distributed Spring Cloud service cluster.
The bridge between the Spring Boot services' logs and ELK is Logstash.
So how do we get the Spring Boot logs into Logstash?
That is what this chapter covers.
Spring Boot uses logback for logging by default. Like other logging frameworks such as log4j, logback can ship log data to a remote host over a socket, given an IP address and port. In Spring Boot, logback is usually configured through logback-spring.xml.
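The socket-based shipping described above can be sketched in plain Java. This is a minimal illustration of the mechanism, not the appender's actual implementation: one log event is serialized as a single JSON line and written to a TCP socket. The "Logstash" side is simulated here with a local ServerSocket so the demo is self-contained.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch of what a TCP log appender does: serialize a log event
// as one JSON line and send it over a socket. The receiving end (normally
// Logstash's tcp input) is simulated locally so the demo runs standalone.
public class TcpLogShipperDemo {

    static String shipAndReceive() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // stand-in for Logstash's TCP input
            int port = server.getLocalPort();

            // Client side: what the appender would do for one log event.
            Thread sender = new Thread(() -> {
                try (Socket socket = new Socket("127.0.0.1", port);
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    // One JSON object per line -- the json_lines framing.
                    out.println("{\"level\":\"INFO\",\"message\":\"user created\"}");
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            sender.start();

            // Server side: read the newline-delimited JSON event back.
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                String line = in.readLine();
                sender.join();
                return line;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("received: " + shipAndReceive());
    }
}
```

In production the real appender (LogstashTcpSocketAppender, configured below) additionally handles reconnects and asynchronous buffering, which is why you configure it rather than writing sockets by hand.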
==============================================
1. pom.xml dependency
To integrate logback with Logstash, you must add the logstash-logback-encoder dependency:
<!-- logback: push logs to logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.1</version>
</dependency>
2. Create a logback-spring.xml file under resources
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <!-- destination is the Logstash IP and exposed port; logback sends the logs to this address -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.92.130:5044</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
3. Add the configuration to application.properties
# logging configuration file for the logback-to-logstash integration
logging.config=classpath:logback-spring.xml
That completes the configuration on the Spring Boot side.
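For reference, LogstashEncoder serializes every log event as a single JSON line, which is what travels to port 5044. The sketch below hand-builds a simplified event to show that shape; real events carry more fields (@version, thread_name, level_value, ...) and the exact set varies by encoder version, so treat the field list as illustrative only.

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Hand-built approximation of the JSON line LogstashEncoder emits per event.
// Simplified: real events include additional fields and proper JSON escaping.
public class LogEventShape {

    static String toJsonLine(OffsetDateTime ts, String level, String logger, String message) {
        return String.format(
            "{\"@timestamp\":\"%s\",\"level\":\"%s\",\"logger_name\":\"%s\",\"message\":\"%s\"}",
            ts.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME), level, logger, message);
    }

    public static void main(String[] args) {
        OffsetDateTime ts = OffsetDateTime.of(2019, 2, 27, 2, 30, 0, 0, ZoneOffset.UTC);
        System.out.println(toJsonLine(ts, "INFO", "com.example.UserService", "user created"));
    }
}
```

Because each event is one JSON object per line, the matching Logstash input (configured below) uses the json_lines codec.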
II. Logstash configuration
1. This assumes you already have a working ELK environment; for the Docker-based ELK setup steps, see the link at the top of this article.
2. Building on that, enter the Logstash container:
docker exec -it logstash /bin/bash
3. Change into the pipeline directory:
cd /usr/share/logstash/pipeline
4. Edit the logstash.conf file:
vi logstash.conf
Replace the contents of the file with:
input {
    tcp {
        port => 5044
        codec => json_lines
    }
}
output {
    elasticsearch {
        hosts => ["http://192.168.92.130:9200"]
        index => "user-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
}
When you are done, save and quit with :wq.
Note 1:
input
logback sends the log content to Logstash over TCP, so Logstash's input is whatever arrives on port 5044, the port Logstash exposes.
Because Logstash runs on this same server, no IP is specified; the input simply listens on the Logstash server itself (0.0.0.0, as the startup log below confirms).
Note 2:
output
hosts indicates that the output is stored in ES, at http://<ES server IP>:<port>.
index sets the name of the index created in ES; with the pattern above, logs land in a daily index such as "user-2019.02.27", and a new index is created each day.
stdout means the Spring Boot logs are not only written to ES but also printed to Logstash's console, which is helpful for inspection.
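The `%{+YYYY.MM.dd}` part of the index setting is a Logstash date pattern evaluated per event. The equivalent daily-index name computation can be sketched as follows (assuming, as Logstash's Joda-style patterns imply, that YYYY.MM.dd means calendar year, month, and day):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Computes the daily Elasticsearch index name that the pipeline's
// index => "user-%{+YYYY.MM.dd}" setting would produce for a given day.
public class DailyIndexName {

    static String indexFor(LocalDate day) {
        // Logstash date patterns are Joda-style; yyyy.MM.dd is the
        // java.time equivalent for a calendar date.
        return "user-" + day.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        System.out.println(indexFor(LocalDate.of(2019, 2, 27))); // user-2019.02.27
    }
}
```

Daily indices keep each index small and make retention easy: old days can be deleted or archived index by index.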
5. Exit the container, restart the Logstash service, and follow its logs:
exit
docker restart logstash
docker logs -f logstash
You should now see Logstash's startup log:
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-02-27T02:27:18,327][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-02-27T02:27:19,454][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-02-27T02:27:21,138][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.92.130:9200/]}}
[2019-02-27T02:27:21,564][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://192.168.92.130:9200/"}
[2019-02-27T02:27:21,789][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2019-02-27T02:27:21,804][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-02-27T02:27:22,074][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-02-27T02:27:22,075][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-02-27T02:27:26,124][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-02-27T02:27:26,300][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.92.130:9200/]}}
[2019-02-27T02:27:26,346][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.92.130:9200/"}
[2019-02-27T02:27:26,384][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-02-27T02:27:26,384][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-02-27T02:27:26,471][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-02-27T02:27:26,481][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.92.130:9200"]}
[2019-02-27T02:27:26,521][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-02-27T02:27:26,613][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5044", :ssl_enable=>"false"}
[2019-02-27T02:27:27,111][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3f9c0d16 run>"}
[2019-02-27T02:27:27,205][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-02-27T02:27:28,253][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://192.168.92.130:9200], sniffing=>false, manage_template=>false, id=>"e9ec7881a954a6e97c29d4cbeb03d4e51feaacda627de4612a8fa1b48627670e", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_c3abebc4-4dff-40b7-834d-517734b43506", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-02-27T02:27:28,274][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2019-02-27T02:27:28,324][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.92.130:9200/]}}
[2019-02-27T02:27:28,345][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.92.130:9200/"}
[2019-02-27T02:27:28,378][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-02-27T02:27:28,379][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-02-27T02:27:28,420][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.92.130:9200"]}
[2019-02-27T02:27:28,476][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x53a8137b run>"}
[2019-02-27T02:27:28,484][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2019-02-27T02:27:29,098][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}