1. Install Elasticsearch
1.1 Prerequisites
1.1.1 Nodes
192.168.1.21, 192.168.1.22, 192.168.1.23
1.1.2 Pull the image (run on all three VMs)

docker pull elasticsearch:6.7.0

1.2. Host configuration (run on all three VMs)
1.2.1 Create directories

mkdir -p /data/server/elasticsearch/config
mkdir -p /data/server/elasticsearch/data
mkdir -p /data/server/elasticsearch/plugins/ik
chmod 777 /data/server/elasticsearch/plugins/ik
chmod 777 /data/server/elasticsearch/data
chmod 777 /data/server/elasticsearch/config

1.2.2 Update sysctl.conf

vi /etc/sysctl.conf
Add the following line:
vm.max_map_count=655360
Then apply it:
sysctl -p
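A quick check that the new limit is active (assumes a Linux host):

```shell
# Value currently seen by the kernel; should read 655360 after sysctl -p
cat /proc/sys/vm/max_map_count
```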

1.2.3 Update limits.conf

vi /etc/security/limits.conf
Add the following lines:
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096

1.2.4 Update 20-nproc.conf

vi /etc/security/limits.d/20-nproc.conf
Add the following line:
* soft nproc 4096
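The limits.conf changes apply at the next login. To sanity-check the current session:

```shell
# Open-file limit; should report 65536 after re-login
ulimit -n
# Max user processes; should report 4096 after re-login
ulimit -u
```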

1.3. Node configuration files
1.3.1 Node 1 (192.168.1.21)
vi /data/server/elasticsearch/config/elasticsearch.yml

# Cluster name
cluster.name: ESCluster
# Node name
node.name: node-1
# Bind address (IPv4 or IPv6); the default 0.0.0.0
# binds every interface on this host
network.bind_host: 0.0.0.0
# Address published to the other nodes; auto-detected if unset,
# but must be a real, reachable IP
network.publish_host: 192.168.1.21
# HTTP port for client traffic (default 9200)
http.port: 9200
# TCP port for inter-node transport (default 9300)
transport.tcp.port: 9300
# Allow cross-origin REST requests (needed by the head plugin)
http.cors.enabled: true
# Origins allowed to make REST requests
http.cors.allow-origin: "*"
# Node roles
node.master: true
node.data: true
# Master-eligible nodes to contact for discovery
discovery.zen.ping.unicast.hosts: ["192.168.1.21:9300","192.168.1.22:9300","192.168.1.23:9300"]
# Minimum number of master-eligible nodes that must be visible to elect
# a master (default 1): (total master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
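The value 2 follows from the quorum formula in the comment above; for three master-eligible nodes:

```shell
# (total master-eligible nodes / 2) + 1, using integer division
NODES=3
echo $(( NODES / 2 + 1 ))   # prints 2
```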

1.3.2 Node 2 (192.168.1.22)
vi /data/server/elasticsearch/config/elasticsearch.yml

# Cluster name
cluster.name: ESCluster
# Node name
node.name: node-2
# Bind address (IPv4 or IPv6); the default 0.0.0.0
# binds every interface on this host
network.bind_host: 0.0.0.0
# Address published to the other nodes; auto-detected if unset,
# but must be a real, reachable IP
network.publish_host: 192.168.1.22
# HTTP port for client traffic (default 9200)
http.port: 9200
# TCP port for inter-node transport (default 9300)
transport.tcp.port: 9300
# Allow cross-origin REST requests (needed by the head plugin)
http.cors.enabled: true
# Origins allowed to make REST requests
http.cors.allow-origin: "*"
# Node roles
node.master: true
node.data: true
# Master-eligible nodes to contact for discovery
discovery.zen.ping.unicast.hosts: ["192.168.1.21:9300","192.168.1.22:9300","192.168.1.23:9300"]
# Minimum number of master-eligible nodes that must be visible to elect
# a master (default 1): (total master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2

1.3.3 Node 3 (192.168.1.23)
vi /data/server/elasticsearch/config/elasticsearch.yml

# Cluster name
cluster.name: ESCluster
# Node name
node.name: node-3
# Bind address (IPv4 or IPv6); the default 0.0.0.0
# binds every interface on this host
network.bind_host: 0.0.0.0
# Address published to the other nodes; auto-detected if unset,
# but must be a real, reachable IP
network.publish_host: 192.168.1.23
# HTTP port for client traffic (default 9200)
http.port: 9200
# TCP port for inter-node transport (default 9300)
transport.tcp.port: 9300
# Allow cross-origin REST requests (needed by the head plugin)
http.cors.enabled: true
# Origins allowed to make REST requests
http.cors.allow-origin: "*"
# Node roles
node.master: true
node.data: true
# Master-eligible nodes to contact for discovery
discovery.zen.ping.unicast.hosts: ["192.168.1.21:9300","192.168.1.22:9300","192.168.1.23:9300"]
# Minimum number of master-eligible nodes that must be visible to elect
# a master (default 1): (total master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2

1.4. Install the ik Chinese analyzer (run on all three VMs)
Download the ik release zip matching your Elasticsearch version from GitHub: https://github.com/medcl/elasticsearch-analysis-ik/releases
Copy the zip to /data/server/elasticsearch/plugins/ on each Linux host, then unpack it:

cd /data/server/elasticsearch/plugins/
unzip -d /data/server/elasticsearch/plugins/ik/ elasticsearch-analysis-ik-6.7.0.zip
Finally, delete the zip:
rm -rf /data/server/elasticsearch/plugins/elasticsearch-analysis-ik-6.7.0.zip

1.5. Create and start the container (run on all three VMs)

docker run -m 8G --cpus 3 -d --name es --restart=always \
  -v /etc/localtime:/etc/localtime:ro \
  -p 9200:9200 -p 9300:9300 \
  -v /data/server/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /data/server/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /data/server/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  --privileged=true elasticsearch:6.7.0

Check that the ik plugin loaded: docker logs es
Verify that Elasticsearch started by opening http://ip:9200 (replace ip with the VM's own IP)
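Once a container reports started, a quick smoke test from any shell can confirm both the cluster and the ik plugin (a sketch; 192.168.1.21 is one of this guide's example nodes, and the trailing echo only keeps the command from aborting while the cluster is still down):

```shell
ES=http://192.168.1.21:9200

# Cluster health: "status" should be green once all three nodes have joined
curl -s --max-time 5 "$ES/_cluster/health?pretty" || echo "cluster not reachable"

# Exercise the ik analyzer installed in step 1.4
curl -s --max-time 5 -H 'Content-Type: application/json' \
  "$ES/_analyze" -d '{"analyzer":"ik_max_word","text":"中华人民共和国"}' \
  || echo "analyze request failed"
```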

1.6. Install the head plugin

docker pull mobz/elasticsearch-head:5
docker run -m 8G --cpus 3 --name eshead -p 9100:9100 -d docker.io/mobz/elasticsearch-head:5
head web UI: http://ip:9100/

2. Install Kibana
2.1 Pull the image

docker pull kibana:6.7.0

2.2 Start the container

docker run -m 8G --cpus 3 --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.1.21:9200 -p 5601:5601 -d kibana:6.7.0
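Kibana takes a little while to start; its status endpoint is a convenient readiness check (same hedge as before: the IP is an example node, and the echo is only a fallback):

```shell
curl -s --max-time 5 http://192.168.1.21:5601/api/status || echo "kibana not reachable"
```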

2.3 Localize the UI to Chinese (optional)

cd /root
wget https://github.com/anbai-inc/Kibana_Hanization/archive/master.zip
unzip master.zip
mv Kibana_Hanization-master/ master
docker cp master kibana:/
rm -rf master*
docker exec -it kibana /bin/bash
cd /master
cp -r translations /opt/kibana/src/legacy/core_plugins/kibana/
chown -R kibana:kibana /opt/kibana/src/legacy/core_plugins/kibana/
vi /usr/share/kibana/config/kibana.yml and add: i18n.locale: "zh-CN"
Press Ctrl+P+Q to detach from the container, then run docker restart kibana; the UI will now be in Chinese.

3. Install Logstash
3.1 Pull the image

docker pull logstash:6.7.0

3.2 Create directories

mkdir -p /data/server/logstash/config
chmod 777 /data/server/logstash/config
mkdir -p /data/server/logstash/plugin
chmod 777 /data/server/logstash/plugin
mkdir -p /data/server/logstash/pipeline
chmod 777 /data/server/logstash/pipeline

3.3 Create logstash.yml
vi /data/server/logstash/config/logstash.yml

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: http://192.168.1.21:9200

3.4 Download GeoLite2-City.mmdb

cd /data/server/logstash/config/
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
gunzip GeoLite2-City.mmdb.gz
chmod -R 777 /data/server/logstash
Note: MaxMind has since put GeoLite2 downloads behind a free account and license key, so this direct URL may no longer work; if it fails, download GeoLite2-City.mmdb from maxmind.com and place it in this directory.

3.5 Create logstash.conf
vi /data/server/logstash/pipeline/logstash.conf

input {
    file {
        path => "/data/server/logs/coep-rest/behavior.log"
        start_position => "beginning"
        type => "restbehavior"
        codec => json {
            charset => "UTF-8"
        }
        add_field => { "serverIp" => "192.168.1.21" }
    }
    file {
        path => "/data/server/logs/coep-web/behavior.log"
        start_position => "beginning"
        type => "adminbehavior"
        codec => json {
            charset => "UTF-8"
        }
        add_field => { "serverIp" => "192.168.1.21" }
    }
    file {
        path => "/data/server/logs/coew/behavior.log"
        start_position => "beginning"
        type => "webbehavior"
        codec => json {
            charset => "UTF-8"
        }
        add_field => { "serverIp" => "192.168.1.21" }
    }
}
filter {
    if [type] == "restbehavior" {
        geoip {
            source => "sourceIp"
            target => "geoip"
            database => "/data/server/logstash/config/GeoLite2-City.mmdb"
            add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
            add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
        }
        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
        }
    }
    if [type] == "adminbehavior" {
        geoip {
            source => "sourceIp"
            target => "geoip"
            database => "/data/server/logstash/config/GeoLite2-City.mmdb"
            add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
            add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
        }
        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
        }
    }
    if [type] == "webbehavior" {
        geoip {
            source => "sourceIp"
            target => "geoip"
            database => "/data/server/logstash/config/GeoLite2-City.mmdb"
            add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
            add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
        }
        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
        }
    }
}
output {
    if [type] == "restbehavior" {
        elasticsearch {
            hosts => "192.168.1.21:9200"
            index => "logstash-restbehavior-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "adminbehavior" {
        elasticsearch {
            hosts => "192.168.1.21:9200"
            index => "logstash-adminbehavior-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "webbehavior" {
        elasticsearch {
            hosts => "192.168.1.21:9200"
            index => "logstash-webbehavior-%{+YYYY.MM.dd}"
        }
    }
}
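The %{+YYYY.MM.dd} sprintf reference in each index setting is resolved from the event's @timestamp (in UTC), so each type gets one index per day. Rendered with the system date for illustration:

```shell
# Illustration only: Logstash substitutes the event timestamp, not `date`
echo "logstash-restbehavior-$(date +%Y.%m.%d)"
```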

3.6 Start the container

docker run -m 8G --cpus 3 --name logstash \
  -v /data/server/logs:/data/server/logs \
  -v /data/server/logstash/pipeline:/usr/share/logstash/pipeline \
  -v /data/server/logstash/config/GeoLite2-City.mmdb:/data/server/logstash/config/GeoLite2-City.mmdb \
  -v /data/server/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml \
  -v /data/server/logstash/plugin:/plugin \
  -p 5000:5000 -p 5044:5044 -p 9600:9600 \
  --privileged=true -d logstash:6.7.0 \
  -f /usr/share/logstash/pipeline/logstash.conf

References:
Logstash 6.x Elasticsearch index template JSON:
https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/master/lib/logstash/outputs/elasticsearch/elasticsearch-template-es6x.json
Official GeoIP guide for the Elastic Stack:
https://www.elastic.co/cn/blog/geoip-in-the-elastic-stack
