Deploying EFK with docker-compose

I've spent the last couple of days setting up EFK (Elasticsearch + Fluentd + Kibana).

What is EFK?

EFK is not a single piece of software but a solution: three open-source tools that complement each other, cover a wide range of use cases efficiently, and together form one of today's mainstream logging stacks. EFK stands for Elasticsearch, Fluentd, and Kibana: Elasticsearch stores and searches the logs, Fluentd collects them, and Kibana provides the web UI. Combined, the three make a very complete solution.

Prerequisites

1. docker-compose

Almost everyone has this installed by now, so I won't go over the installation here.

Directory layout

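Based on the volumes and build paths in the docker-compose.yaml below, the project directory looks roughly like this (the exact file names under fluentd/ are assumed):

```
.
├── docker-compose.yaml
├── fluentd/
│   ├── Dockerfile
│   └── conf/
│       └── fluent.conf
└── storage/
    └── logs/          # Laravel log files, mounted into the fluentd container
```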

Writing docker-compose.yaml

version: "2.2"
services:

  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
    container_name: elasticsearch
    restart: unless-stopped
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
      - 9300:9300

  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.1
    container_name: kibana
    restart: unless-stopped
    environment:
      - ELASTICSEARCH_HOSTS=http://es:9200
      - I18N_LOCALE=zh-CN
    links:
      - "es"
    ports:
      - "5601:5601"

  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf/:/fluentd/etc/
      - ./storage/logs/:/etc/logs/  # mount the Laravel log directory
    links:
      - "es"
    ports:
      - "24224:24224"
      - "24224:24224/udp"

Writing the Dockerfile

FROM fluent/fluentd
# switch to root to install the plugin, then drop back to the fluent user
USER root
# RubyGems 3 removed --no-rdoc/--no-ri; --no-document is the replacement
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document"]
USER fluent

Writing fluent.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<source>
  @type tail
  @label @saveData                 # route events from this source to the @saveData label
  path /etc/logs/saveData/**/**/*  # log files to watch
  tag saveData.*                   # tag attached to each event
  refresh_interval 5s              # how often the watch list is refreshed
  read_from_head true              # read files from the beginning
  limit_recently_modified 86400    # only watch files modified within this window (seconds)
  follow_inodes true
  pos_file /etc/logs/message_30.log  # fluentd records the last read position here so it doesn't re-read from the start
  <parse>
    @type none
  </parse>
</source>

<source>
  @type tail
  @label @condition
  path /etc/logs/condition/**/**/*
  tag condition.*
  refresh_interval 10s
  read_from_head true
  limit_recently_modified 86400
  follow_inodes true
  pos_file /etc/logs/message_29.log
  <parse>
    @type none
  </parse>
</source>
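The path patterns above use fluentd's glob syntax. Python's recursive glob behaves similarly (not identically, since fluentd expands each `**` itself), so here is a rough sketch of which files a pattern like `/etc/logs/saveData/**/**/*` would pick up, assuming Laravel logs are laid out in nested date directories:

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Hypothetical Laravel log layout: saveData/<year>/<month>/<day>.log
log = root / "saveData" / "2021" / "10" / "30.log"
log.parent.mkdir(parents=True)
log.write_text('[2021-10-29 03:39:35] saveData.INFO: saveData {...}\n')

# Rough Python equivalent of the fluentd pattern /etc/logs/saveData/**/**/*
matches = [p for p in (root / "saveData").glob("**/*") if p.is_file()]
print(matches == [log])  # True
```

Any regular file nested under the watched directory is matched; directories themselves are not tailed.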



<label @saveData>
  <match>
    @type copy
    <store>
      @type elasticsearch        # ship events to Elasticsearch
      host 172.27.0.2            # the es container's IP on the compose network (the service name "es" also resolves)
      port 9200
      logstash_format true
      logstash_prefix saveData   # index name prefix in Elasticsearch
      logstash_dateformat %Y%m%d # date suffix appended to the index name
      include_tag_key true
      type_name access_log
      tag_key @log_name
      suppress_type_name true
      include_timestamp true
      ssl_verify false
      <buffer>
        flush_interval 2s
      </buffer>
    </store>
    <store>
      @type stdout
    </store>
  </match>
</label>


<label @condition>
  <match>
    @type copy
    <store>
      @type elasticsearch
      host 172.27.0.2
      port 9200
      logstash_format true
      logstash_prefix condition
      logstash_dateformat %Y%m%d
      include_tag_key true
      type_name access_log
      tag_key @log_name
      suppress_type_name true
      include_timestamp true
      ssl_verify false
      <buffer>
        flush_interval 5s
      </buffer>
    </store>
    <store>
      @type stdout
    </store>
  </match>
</label>
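With logstash_format enabled, fluent-plugin-elasticsearch names each index `<logstash_prefix><separator><date>`, where the separator defaults to `-` and the date is formatted with logstash_dateformat. A minimal sketch of the resulting names, assuming the default separator:

```python
from datetime import datetime

def logstash_index(prefix: str, dateformat: str, ts: datetime, sep: str = "-") -> str:
    # Index name as built by fluent-plugin-elasticsearch when logstash_format is true
    return f"{prefix}{sep}{ts.strftime(dateformat)}"

print(logstash_index("saveData", "%Y%m%d", datetime(2021, 10, 29)))
# saveData-20211029
```

So in Kibana you would look for daily indices like the one above. Note that Elasticsearch index names must be lowercase, so a mixed-case prefix such as saveData may need care in practice.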

With the configuration files in place, let's bring it all up.

docker-compose up -d
Once all three services are running, check each with docker logs -f <container>. If one failed to start, this command shows why. In practice the one most likely to fail is fluentd, usually because it cannot connect to the es service. First check that fluent.conf is correct, then find the es container's actual address with docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <containerId> (note this takes a container ID or name, not an image ID). If fluentd cannot connect, the host address in fluent.conf is most likely wrong.

Check that Elasticsearch started: open http://127.0.0.1:9200 in a browser (or run curl http://127.0.0.1:9200). A JSON response with the cluster name and version means it is up.

Next, open http://127.0.0.1:5601 to reach the Kibana admin UI. Under index management you should see index records for the two Laravel log indices we configured in fluent.conf.
To verify that new log lines are stored promptly, append to a log file from the command line. (Why the command line? fluentd's tail input watches the files under path for appended content; most editors rewrite the whole file on save, so edits made that way may not be detected.)
Here is the line I appended:

echo '[2021-10-29 03:39:35] saveData.INFO: saveData {"params":{"index":"test","id":888sasa,"body":{"id":5941107,"shippingMethodId":null,"shippingMethodName":null,"pluginId":null,"shipToName":"tan2","shipToPhone":null,"shipToSuburb":"FRASER RISE","shipToState":"VIC","shipToPostcode":"3336","shipToCountry":"AU","shipToAddress1":"Second St","shipToAddress2":null,"shipToCompanyName":"eiz","shipToEmail":null,"fromAddress1":"tet-1","fromAddress2":null,"fromSuburb":"Moorabbin","fromState":"VIC","fromCountry":"AU","fromPostcode":"3189","fromCompany_name":"eiz","fromName":"jin2","fromPhone":"47658975","fromEmail":null,"carrierName":null,"labelNumber":[],"fulfillmentStatus":1,"consignments":[],"products":[{"id":4,"account_id":1,"product_id":4,"sku":"124","title":"dsadasds","weight":1,"length":11,"width":11,"height":11,"quantity":0,"location":null,"insured_amount":null,"status":0,"custom_label":null,"custom_label2":null,"custom_label3":null,"img_url":null,"barcode":null,"wms_stock":0,"pivot":{"fulfillment_id":5941107,"product_id":4,"qty":1,"note":null,"sku":"124"}}],"consignmentStatus":0,"picklistStatus":0,"createdAt":"2021-10-26 13:33:03","updatedAt":"2021-10-29 14:39:35","package_info":[{"packObj":[],"qty":"3","weight":"9","length":"6","width":"7","height":"8","package_id":null}],"price":null,"note":null,"tags":[{"id":95,"account_id":1,"parent_id":null,"name":"test","description":"{\"name\":\"test\",\"color\":\"#eb2f96\"}"}],"errors":null,"tracking_status":0,"packageNum":3,"productNum":1,"autoQuoteResult":[],"orders":[],"log":[],"shipToRef":"test_two_test"}}} []' >> 30.log

(Note the single quotes around the whole string: they keep the shell from stripping the double quotes inside the JSON.)
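The sources above use `@type none`, so fluentd ships each line as a single raw string. If you later want structured fields, a Laravel (Monolog) line like the one above follows the pattern `[datetime] channel.LEVEL: message`, which a regexp could split. A hypothetical sketch in Python (the pattern and field names are my own, not part of this setup):

```python
import re

# Monolog's default line format: [datetime] channel.LEVEL: message
LARAVEL_LINE = re.compile(
    r"^\[(?P<time>[^\]]+)\] (?P<channel>\w+)\.(?P<level>\w+): (?P<message>.*)$"
)

line = '[2021-10-29 03:39:35] saveData.INFO: saveData {"params":{"index":"test"}} []'
m = LARAVEL_LINE.match(line)
print(m.group("time"), m.group("channel"), m.group("level"))
# 2021-10-29 03:39:35 saveData INFO
```

The same idea could be applied inside fluentd itself by swapping `@type none` for a `@type regexp` parser, at the cost of discarding lines that don't match.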

Now run docker logs -f <fluentd container name or ID> and you should see one forwarded record in the output.
Back in Kibana, refresh the index corresponding to the file we just appended to, and a new log entry appears.
That completes the EFK deployment. To shut the services down:

docker-compose down

Writing this up wasn't easy; if you found it useful, please give it a like 🙏
