An ELK logging stack is normally built on three or more server machines.
Kafka and ZooKeeper are typically installed on every server node; one Logstash instance usually serves the whole ELK stack, as do Elasticsearch, Kibana and Filebeat (the server-side Filebeat is optional).
Filebeat, however, is installed on every client whose logs are collected.

A Kafka cluster is built on the 192.168.146.129, .130 and .131 machines, and Filebeat is deployed on all three of them to collect logs
and ship them to the Kafka cluster. Logstash on the .129 machine consumes the Kafka messages, filtering and routing them, and sends the log data to Elasticsearch on .129;
finally Kibana on .129 visualizes the logs. Elasticsearch, Kibana and Logstash are not required to share a host; they only need to be able to reach one another.
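The resulting data flow, in short:

nginx access.log → Filebeat (on each of .129/.130/.131) → Kafka topic sxqiu_topic → Logstash (filter/route, on .129) → Elasticsearch (.129) → Kibana (.129)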

Kafka characteristics (it is generally used to absorb high-concurrency load):
High throughput, low latency: each topic can be split into multiple partitions, and consumer groups consume those partitions in parallel.
Scalability: a Kafka cluster supports hot expansion.
Durability and reliability: messages are persisted to local disk, and replication protects against data loss.
Fault tolerance: node failures are tolerated (with n replicas, up to n-1 nodes may fail).
High concurrency: thousands of clients can read and write at the same time.

ZooKeeper is the tool that manages and coordinates the cluster.

Contents

I. Install and deploy Elasticsearch

II. Install and deploy Kibana

III. Install and deploy Logstash

IV. Install ZooKeeper (on 129, 130 and 131; these VMs act as servers)

V. Install Kafka (on 129, 130 and 131; these VMs act as servers)

VI. Install Filebeat (on 129, 130 and 131; these VMs act as clients)


I. Install and deploy Elasticsearch

1. mkdir /usr/local/elasticsearch
2. wget https://mirrors.huaweicloud.com/elasticsearch/7.8.0/elasticsearch-7.8.0-linux-x86_64.tar.gz
3. tar -zxvf elasticsearch-7.8.0-linux-x86_64.tar.gz -C /usr/local/elasticsearch
4. cd /usr/local/elasticsearch/elasticsearch-7.8.0/
5. cd config
vi elasticsearch.yml

# Change the following entries:
node.name: node-1 # node name
path.data: /usr/local/elasticsearch/elasticsearch-7.8.0/data
path.logs: /usr/local/elasticsearch/elasticsearch-7.8.0/logs
network.host: 192.168.146.129     # bind address; hosts that can reach 192.168.146.129 may connect
http.port: 9200         # HTTP port
cluster.initial_master_nodes: ["node-1"] # initial master-eligible nodes for bootstrapping the cluster

6. adduser es
passwd es
chown -R es /usr/local/elasticsearch
7. su es
cd ..
cd bin
./elasticsearch -d               # start Elasticsearch (-d = run in the background)

Error:
your Java version from [/usr/local/jdk/jdk1.8.0_281/jre] does not meet this requirement
Cause: JDK 1.7/1.8 does not satisfy the Java version Elasticsearch 7.8 requires.
Fix: use JDK 11 (then point the Elasticsearch config at it) or use the JDK bundled with ES.
[Option 1] Download JDK 11: wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz
Detailed steps: https://blog.csdn.net/weixin_39643007/article/details/110393759
[Option 2] Edit the elasticsearch-env script
vi elasticsearch-env

Make the following change (comment out the original block first, then add the new one):
# now set the path to java
# --- original block, commented out ---
#if [ ! -z "$JAVA_HOME" ]; then
#  JAVA="$JAVA_HOME/bin/java"
#  JAVA_TYPE="JAVA_HOME"
#else
#  if [ "$(uname -s)" = "Darwin" ]; then
#    # macOS has a different structure
#    JAVA="$ES_HOME/jdk.app/Contents/Home/bin/java"
#  else
#    JAVA="$ES_HOME/jdk/bin/java"
#  fi
#  JAVA_TYPE="bundled jdk"
#fi
# --- new block: always use the bundled JDK ---
if [ ! -z "$JAVA_HOME" ]; then
  JAVA="$ES_HOME/jdk/bin/java"
  JAVA_TYPE="JAVA_HOME"
else
  if [ "$(uname -s)" = "Darwin" ]; then
    # macOS has a different structure
    JAVA="$ES_HOME/jdk.app/Contents/Home/bin/java"
  else
    JAVA="$ES_HOME/jdk/bin/java"
  fi  
  JAVA_TYPE="bundled jdk"
fi

8. After opting for the bundled JDK to resolve the "your Java version from [/usr/local/jdk/jdk1.8.0_281/jre] does not meet this requirement" error,
run ./elasticsearch -d to start Elasticsearch in the background again.

报错1:
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max number of threads [3795] for user [es] is too low, increase to at least [4096]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Fix:
su root
vi /etc/security/limits.conf        # fixes [1] and [2]
Add the following (the soft nproc value must reach 4096 to satisfy [2]):
* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096

vi /etc/sysctl.conf        # fixes [3]
Add the following:
vm.max_map_count=655360
Then run sysctl -p
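To confirm the new limits took effect (re-log-in as es first, since limits.conf is applied at login):

ulimit -n                     # should now print 65536
ulimit -u                     # should now print 4096
sysctl vm.max_map_count       # should now print 655360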


Error 2:
java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/local/elasticsearch/elasticsearch-7.8.0/data]] with lock id [0]; 
maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
Fix (first make sure no other Elasticsearch process is still running):
cd /usr/local/elasticsearch/elasticsearch-7.8.0/data/nodes/0
rm -rf node.lock

Error 3:
ElasticsearchException[failed to load metadata]; nested: LockObtainFailedException
[Lock held by another program: /usr/local/elasticsearch/elasticsearch-7.8.0/data/nodes/0/_state/write.lock];
Likely root cause: org.apache.lucene.store.LockObtainFailedException:
 Lock held by another program: /usr/local/elasticsearch/elasticsearch-7.8.0/data/nodes/0/_state/write.lock
Fix:
cd /usr/local/elasticsearch/elasticsearch-7.8.0/data/nodes/0/_state
rm -rf write.lock

9. su es
./elasticsearch -d

10. su root
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --reload

11. Open http://192.168.146.129:9200/ in a browser; you should see:
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "MQEnPYDxSfy3L5py9-E4hA",
  "version" : {
    "number" : "7.8.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
    "build_date" : "2020-06-14T19:35:50.234439Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
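The same check can be done from the shell with the standard REST endpoints:

curl http://192.168.146.129:9200
curl http://192.168.146.129:9200/_cluster/health?pretty      # status green or yellow means the node is usable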


II. Install and deploy Kibana

1. mkdir /usr/local/kibana-7.8.0
2. Download the Kibana version that matches the Elasticsearch version.
wget https://mirrors.huaweicloud.com/kibana/7.8.0/kibana-7.8.0-linux-x86_64.tar.gz
3. tar -zxvf kibana-7.8.0-linux-x86_64.tar.gz -C /usr/local/kibana-7.8.0
4. cd /usr/local/kibana-7.8.0/kibana-7.8.0-linux-x86_64
5. Edit the config file
cd config
vi kibana.yml

Change the following:
# Kibana port
server.port: 5601
# server IP
server.host: "192.168.146.129"
# Elasticsearch address
elasticsearch.hosts: ["http://192.168.146.129:9200"]
# set the UI language to Chinese
i18n.locale: "zh-CN"

6. chown -R es /usr/local/kibana-7.8.0      # give the es user ownership of the kibana directory
7. firewall-cmd --zone=public --add-port=5601/tcp --permanent
firewall-cmd --reload
8. Start Kibana
su es
cd ..
cd bin
(1) foreground:
./kibana
(2) background:
./kibana &
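A job started with a bare & ends when the shell session closes; to keep Kibana alive after logout, one option is nohup:

nohup ./kibana > /dev/null 2>&1 &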

III. Install and deploy Logstash

1. mkdir /usr/local/logstash-7.8.0
2. wget https://mirrors.huaweicloud.com/logstash/7.8.0/logstash-7.8.0.tar.gz
3. tar -zxvf logstash-7.8.0.tar.gz -C /usr/local/logstash-7.8.0
4. cd /usr/local/logstash-7.8.0/logstash-7.8.0
5. Create a new config file
Copy the bundled logstash-sample.conf to a new file and edit that.
cd config
cp logstash-sample.conf logstash-es.conf

6. Edit the config file
vi logstash-es.conf

Change the contents to:

input {                                  # input sources
  kafka {
    bootstrap_servers => "192.168.146.129:9092,192.168.146.130:9092,192.168.146.131:9092"
    topics => "sxqiu_topic"
    codec => json
  }
}
output {                                 # output destinations
  if [fields][systemname] == "nginx129" {
    elasticsearch {                      # send to Elasticsearch
      hosts => "http://192.168.146.129:9200"     # cluster addresses, comma-separated if several
      index => "nginx129-%{+YYYY-MM-dd}"
    }
  }
  else if [fields][systemname] == "nginx130" {
    elasticsearch {
      hosts => "http://192.168.146.129:9200"
      index => "nginx130-%{+YYYY-MM-dd}"
    }
  }
  else if [fields][systemname] == "nginx131" {
    elasticsearch {
      hosts => "http://192.168.146.129:9200"
      index => "nginx131-%{+YYYY-MM-dd}"
    }
  }
}
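Before starting for real, Logstash can validate the pipeline syntax (run from the bin directory; --config.test_and_exit parses the config and exits):

cd ../bin
./logstash -f /usr/local/logstash-7.8.0/logstash-7.8.0/config/logstash-es.conf --config.test_and_exit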


7. chown -R es /usr/local/logstash-7.8.0

8. Start Logstash in the background (from the bin directory)
./logstash -f /usr/local/logstash-7.8.0/logstash-7.8.0/config/logstash-es.conf &
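Once Filebeat (section VI) starts shipping data, verify that Logstash is writing the expected indices:

curl http://192.168.146.129:9200/_cat/indices?v      # nginx129-*, nginx130-*, nginx131-* should appear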


Error 1:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/logstash-7.8.0/logstash-7.8.0/bin/hs_err_pid64295.log
Fix:
Logstash allocates a 1 GB JVM heap by default; this server does not have that much free memory, so shrink the heap:
cd ..
cd config
vi jvm.options
Change
-Xms1g
-Xmx1g
to
-Xms128m
-Xmx128m
[If the error still occurs, make the same change in config/jvm.options under the Elasticsearch install directory as well.]

Error 2:
Sending Logstash logs to /usr/local/logstash-7.8.0/logstash-7.8.0/logs which is now configured via log4j2.properties
[2022-08-17T22:59:02,479][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-08-17T22:59:02,878][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory.  If you wish to run multiple instances, you must change the "path.data" setting.
[2022-08-17T22:59:02,898][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Cause: a previous instance left a .lock file behind in path.data; deleting it fixes this.
Fix:
Find the data path setting in logstash.yml (it defaults to the data directory under the install dir):
cd /usr/local/logstash-7.8.0/logstash-7.8.0/data
ls -la
rm -rf .lock

IV. Install ZooKeeper (on 129, 130 and 131; these VMs act as servers)

1. su root
mkdir /usr/local/zookeeper
cd /usr/local/zookeeper
mkdir -p /home/zookeeper/data /home/zookeeper/logs      # the data/log dirs referenced in zoo.cfg below
chown -R es /usr/local/zookeeper
chown -R es /home/zookeeper
chown -R es /tmp/zookeeper
su es
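If the ZooKeeper tarball is not on the machine yet, it can be downloaded first, e.g. from the Apache archive (URL assumed; substitute a mirror if preferred):

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz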
tar -zxvf apache-zookeeper-3.8.0-bin.tar.gz
2. Set an environment variable
export ZOOK=/usr/local/zookeeper/apache-zookeeper-3.8.0-bin
3. cd apache-zookeeper-3.8.0-bin
cd conf
4. cp zoo_sample.cfg zoo.cfg
5. vi zoo.cfg

Edit the config file:
# data directory
dataDir=/home/zookeeper/data
# transaction log directory
dataLogDir=/home/zookeeper/logs
# tick interval in ms, ZooKeeper's basic time unit; a heartbeat every 2 seconds
tickTime=2000
# how long followers may take to connect and sync with the leader, in ticks (10 heartbeats)
initLimit=10
# max lag between leader and follower heartbeats, in ticks (5 heartbeats)
syncLimit=5
# client connection port
clientPort=2181
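Note that this zoo.cfg describes a standalone instance. A real three-node ensemble additionally needs the server list in every node's zoo.cfg, plus a myid file in dataDir matching that node's number, along these lines (the numbering is arbitrary as long as it is consistent with each node's myid):

server.1=192.168.146.129:2888:3888
server.2=192.168.146.130:2888:3888
server.3=192.168.146.131:2888:3888
# on the .129 machine:
echo 1 > /home/zookeeper/data/myid

(In this tutorial the standalone install is later shut down in favor of Kafka's bundled ZooKeeper, configured in section V.)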

6. cd ..
cd bin
./zkServer.sh start          # zkServer.sh start already runs in the background; no & needed
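To verify the node is up (and whether it is running standalone, as leader, or as follower):

./zkServer.sh status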


V. Install Kafka (on 129, 130 and 131; these VMs act as servers)

1. su root
mkdir /usr/local/kafka
cd /usr/local/kafka
chown -R es /usr/local/kafka
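If the Kafka tarball is not present yet, it can be fetched first, e.g. from the Apache archive (URL assumed; substitute a mirror if preferred):

wget https://archive.apache.org/dist/kafka/2.5.0/kafka_2.12-2.5.0.tgz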
tar -zxvf kafka_2.12-2.5.0.tgz
cd kafka_2.12-2.5.0
2. mkdir /usr/local/kafka/log
mkdir /usr/local/kafka/log/kafka      # kafka data/log directory
3. cd config
vi server.properties      # configure kafka
Edit the config file:
# broker.id must be unique per broker; see the per-node note below
broker.id=0
listeners=PLAINTEXT://192.168.146.129:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
# data/log path, the directory created above (a trailing # comment on the same line
# would become part of the value in a .properties file, so keep comments on their own line)
log.dirs=/usr/local/kafka/log/kafka
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# single-machine alternative: zookeeper.connect=192.168.146.129:2181
zookeeper.connect=192.168.146.129:2181,192.168.146.130:2181,192.168.146.131:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
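The same file on the other two machines differs only in the per-node lines; for example on the .130 machine (assuming broker ids 0/1/2 for .129/.130/.131):

broker.id=1
listeners=PLAINTEXT://192.168.146.130:9092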

4. mkdir /usr/local/kafka/zookeeper      # zookeeper data directory
5. vi zookeeper.properties
Edit the relevant settings:
# zookeeper data directory (again: no trailing comment after the value)
dataDir=/usr/local/kafka/zookeeper
clientPort=2181
initLimit=20
syncLimit=20
server.3=192.168.146.129:2888:3888
server.2=192.168.146.130:2888:3888
server.1=192.168.146.131:2888:3888
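Each node also needs a myid file in dataDir whose number matches its own server.N line above; for example on the .129 machine (server.3 here):

echo 3 > /usr/local/kafka/zookeeper/myid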

6. cd ..
cd bin
./zookeeper-server-start.sh /usr/local/kafka/kafka_2.12-2.5.0/config/zookeeper.properties &     # kill the standalone zookeeper started in section IV first
./kafka-server-start.sh /usr/local/kafka/kafka_2.12-2.5.0/config/server.properties &

Error 1: ERROR Error while creating ephemeral at /brokers/ids/1, node already exists
Fix: change broker.id to 1 (a value unique in the cluster) in both server.properties and the meta.properties file under /usr/local/kafka/log/kafka.

7. Create a topic named test
./kafka-topics.sh --zookeeper 192.168.146.129:2181 --create --topic test --partitions 30 --replication-factor 1
Note: --partitions sets the topic's partition count, --replication-factor the number of replicas per partition.
8. List all topics
./kafka-topics.sh --zookeeper 192.168.146.129:2181 --list
Describe a topic
./kafka-topics.sh --zookeeper 192.168.146.129:2181 --describe --topic test
Produce to a topic from the console
./kafka-console-producer.sh --broker-list 192.168.146.129:9092 --topic test
Consume a topic from the console (in Kafka 2.5 the console consumer takes --bootstrap-server; the old --zookeeper option was removed)
./kafka-console-consumer.sh --bootstrap-server 192.168.146.129:9092 --topic test --from-beginning
Show the max (or min) offset of a topic partition
./kafka-run-class.sh kafka.tools.GetOffsetShell --topic test --time -1 --broker-list 192.168.146.129:9092 --partitions 0
Note: --time -1 means the latest offset, --time -2 the earliest.

Raise a topic's partition count, e.g. set test to 10 partitions (the count can only grow, so this applies to topics that currently have fewer)
./kafka-topics.sh --zookeeper 192.168.146.129:2181 --alter --topic test --partitions 10
Delete a topic (use with care)
./kafka-topics.sh --zookeeper 192.168.146.129:2181 --delete --topic test
Check consumer progress (offsets and lag per consumer group; --group is required; the old ConsumerOffsetChecker class no longer exists in Kafka 2.5)
./kafka-consumer-groups.sh --bootstrap-server 192.168.146.129:9092 --describe --group <group-name>

9. su root
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --reload

VI. Install Filebeat (on 129, 130 and 131; these VMs act as clients)

1. mkdir /usr/local/filebeat-7.8.0
chown -R es /usr/local/filebeat-7.8.0
cd /usr/local/filebeat-7.8.0
2. wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.8.0-linux-x86_64.tar.gz
3. tar -zxvf filebeat-7.8.0-linux-x86_64.tar.gz -C /usr/local/filebeat-7.8.0
4. cd /usr/local/filebeat-7.8.0/filebeat-7.8.0-linux-x86_64
5. vi filebeat.yml
Change the following:
filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /usr/local/webserver/nginx/logs/access.log
# The es user needs read access to the log file and traverse permission on every
# directory in its path. E.g. (777 is the blunt fix used here; narrower permissions also work):
# chmod 777 /usr/local/webserver/nginx/logs/access.log
# chmod 777 /usr/local/webserver/nginx/logs
# chmod 777 /usr/local/webserver/nginx
# chmod 777 /usr/local/webserver
# chmod 777 /usr/local
# chmod 777 /usr
  fields:
    systemname: nginx131          # nginx129 / nginx130 on the other two machines
  # Do not enable fields_under_root here: the Logstash conditionals match on
  # [fields][systemname], which requires the custom field to stay nested under fields.

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.kafka:
  enabled: true
  hosts: ["192.168.146.129:9092","192.168.146.130:9092","192.168.146.131:9092"]
  topic: "sxqiu_topic"               #配置topic
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
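Before starting, Filebeat's built-in checks can confirm the config parses and the Kafka output is reachable:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml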

6. ./filebeat modules enable kafka          # enable the kafka module (only needed to collect Kafka's own logs; not required for sending output to Kafka)
7. Start Filebeat
./filebeat setup          # loads index templates/dashboards; note it expects an Elasticsearch output and may fail while output.kafka is configured
(1) foreground:
./filebeat -e
(2) background, pointing at the config file with -c:
./filebeat -c filebeat.yml  &

Error 1: ERROR   instance/beat.go:958    Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Fix:
cd data
ls -la
rm -rf xxx.lock      # remove the .lock file left by the previous instance

Error 2: Exiting: error loading config file: config file ("filebeat.yml") can only be writable by the owner
but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/local/filebeat-7.8.0/filebeat-7.8.0-linux-x86_64/filebeat.yml')
Fix:
su root
chmod go-w /usr/local/filebeat-7.8.0/filebeat-7.8.0-linux-x86_64/filebeat.yml
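With Filebeat running on all three machines, the whole pipeline can be smoke-tested end to end using commands from the earlier sections (run on .129): generate some nginx traffic, watch the messages arrive on the Kafka topic, then confirm the per-host indices exist and add matching index patterns (e.g. nginx129-*) in Kibana.

./kafka-console-consumer.sh --bootstrap-server 192.168.146.129:9092 --topic sxqiu_topic
curl http://192.168.146.129:9200/_cat/indices?v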
