Druid Cluster Installation and Usage V0.1
1. Version and Environment
Druid has both an open-source version and a commercial version.
The commercial version comes from a company founded by Druid's principal creators, with the goal of driving rapid development of the Druid community; as it stands, the community is very active.
This article installs the commercial distribution of Druid, i.e., the package released by Imply.
To keep the article short, we assume you have three physical machines (or VMs) and have completed the following preparation:
- CentOS 7 installed; network configured; passwordless SSH between nodes; firewalls disabled on all nodes
- JDK 8 and ZooKeeper installed on every node
- MySQL installed
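The preparation above can be spot-checked with a small script. This is a rough sketch that only verifies each prerequisite binary is on PATH; it does not check versions or whether the ZooKeeper/MySQL services are actually running, and `zkServer.sh` assumes ZooKeeper's bin directory is on PATH:

```shell
# Report whether each prerequisite binary is on PATH.
check_prereqs() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "OK   $bin"
    else
      echo "MISS $bin"
    fi
  done
}

check_prereqs java mysql ssh zkServer.sh
```

Run it on every node; any `MISS` line needs attention before continuing.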
2. Cluster Planning
Because Druid has a distributed design in which each node type plays its own role, a real deployment should begin by planning the node types as a whole. Functionally, they fall into three groups:
- Master: management nodes, comprising the coordinator and the overlord; responsible for managing ingestion tasks and fault handling.
- Data: data nodes, comprising historicals and middle managers; responsible for loading and serving historical data and for handling data ingestion.
- Query: query nodes, comprising brokers and the Pivot web UI; responsible for the query API and interactive web queries.
The concrete plan is as follows:
Server name | Server IP | Services | Configuration |
---|---|---|---|
node01 | 192.168.196.128 | overlord, coordinator | RAM 6GB |
node02 | 192.168.196.129 | middleManager, historical | RAM 6GB |
node03 | 192.168.196.130 | broker, router, pivot | RAM 6GB |
3. Configuration Overview
A Druid cluster runs many services, and a complete cluster requires editing quite a few configuration files. The commonly used ones are summarized below.
Description | File path (root dir is {IMPLY_HOME}) | Changes |
---|---|---|
Common config | conf/druid/_common/common.runtime.properties | 1. Add the required extensions 2. Configure the ZooKeeper cluster 3. Set the metadata storage (MySQL recommended) 4. Set the deep storage (HDFS recommended) |
coordinator | conf/druid/coordinator/runtime.properties | Set host (optional) |
overlord | conf/druid/overlord/runtime.properties | Set host (optional) |
historical | conf/druid/historical/runtime.properties | Set host (optional) |
middleManager | conf/druid/middleManager/runtime.properties | Set host (optional) |
broker | conf/druid/broker/runtime.properties | Set host (optional) |
router | conf/druid/router/runtime.properties | Set host (optional) |
pivot | conf/pivot/config.yaml | Store Pivot state in MySQL; remember to enable remote connections on the MySQL server |
Under normal circumstances, if you are on real servers, editing the files above is enough to start the cluster. On virtual machines, however, you also need to lower each service's JVM memory parameters, mainly reducing large values to 1g; settings already at 1g or below can stay unchanged.
-Xms1g
-Xmx1g
-XX:MaxDirectMemorySize=1g
The files involved:
- conf/druid/overlord/jvm.config
- conf/druid/coordinator/jvm.config
- conf/druid/historical/jvm.config
- conf/druid/middleManager/jvm.config
- conf/druid/broker/jvm.config
- conf/druid/router/jvm.config
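The edits above can be scripted. The sketch below assumes the imply layout shown in the file list and that each jvm.config already contains `-Xms`, `-Xmx`, and `-XX:MaxDirectMemorySize` lines:

```shell
# Shrink heap and direct-memory settings to 1g in every service's jvm.config.
shrink_jvm_configs() {
  local home="$1" svc f
  for svc in overlord coordinator historical middleManager broker router; do
    f="$home/conf/druid/$svc/jvm.config"
    [ -f "$f" ] || continue   # skip services not present on this node
    sed -i -e 's/^-Xms.*/-Xms1g/' \
           -e 's/^-Xmx.*/-Xmx1g/' \
           -e 's/^-XX:MaxDirectMemorySize=.*/-XX:MaxDirectMemorySize=1g/' "$f"
  done
}

shrink_jvm_configs "${IMPLY_HOME:-/export/servers/imply-2.8.19}"
```

Run it once per node, or point it at each node's install directory over SSH.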
In Druid, to speed up queries, the broker caches a large amount of data in memory; it is no exaggeration to say that the more memory the broker has, the faster real-time queries run. If you deploy on VMs, then in addition to the JVM settings above you should also shrink some of the broker's cache settings; see the configuration files below.
4. Configuration Files
4.1 common.runtime.properties
#
# Extensions
#
druid.extensions.directory=dist/druid/extensions
druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
druid.extensions.loadList=["druid-lookups-cached-global","druid-histogram","druid-datasketches","mysql-metadata-storage","druid-hdfs-storage","druid-kafka-indexing-service"]
#
# Logging
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=node01:2181,node02:2181,node03:2181
druid.zk.paths.base=/druid
#
# Metadata storage
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://node01:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=root
#
# Deep storage
# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://node01:9000/druid/segments
#
# Indexing service logs
# For HDFS:
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=debug
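The mysql-metadata-storage extension expects the database named in `connectURI` to exist already. A minimal statement to create it, run against node01's MySQL with the credentials above (utf8mb4 follows the Druid metadata-storage recommendation):

```sql
CREATE DATABASE IF NOT EXISTS druid DEFAULT CHARACTER SET utf8mb4;
```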
4.2 coordinator/runtime.properties
druid.service=druid/coordinator
druid.host=node01
druid.port=8081
druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S
4.3 overlord/runtime.properties
druid.service=druid/overlord
druid.host=node01
druid.port=8090
druid.indexer.queue.startDelay=PT30S
druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata
4.4 historical/runtime.properties
druid.service=druid/historical
druid.host=node02
druid.port=8083
# HTTP server threads
druid.server.http.numThreads=40
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing
# Segment storage
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.server.maxSize=130000000000
# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000
4.5 middleManager/runtime.properties
druid.service=druid/middlemanager
druid.host=node02
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true
# HTTP server threads
druid.server.http.numThreads=40
# Processing threads and buffers
druid.processing.buffer.sizeBytes=100000000
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.8.3", "org.apache.hadoop:hadoop-aws:2.8.3"]
4.6 broker/runtime.properties
druid.service=druid/broker
druid.host=node03
druid.port=8082
# HTTP server settings
druid.server.http.numThreads=60
# HTTP client settings
druid.broker.http.numConnections=10
druid.broker.http.maxQueuedBytes=50000000
# Processing threads and buffers
druid.processing.buffer.sizeBytes=1048576
druid.processing.numMergeBuffers=2
druid.processing.numThreads=1
druid.processing.tmpDir=var/druid/processing
# Query cache disabled -- push down caching and merging instead
druid.broker.cache.useCache=false
druid.broker.cache.populateCache=false
# SQL
druid.sql.enable=true
4.7 router/runtime.properties
druid.service=druid/router
druid.host=node03
druid.port=8888
druid.processing.numThreads=1
druid.processing.buffer.sizeBytes=1000000
druid.router.defaultBrokerServiceName=druid/broker
druid.router.coordinatorServiceName=druid/coordinator
druid.router.http.numConnections=50
druid.router.http.readTimeout=PT5M
druid.router.http.numMaxThreads=100
druid.server.http.numThreads=100
druid.router.managementProxy.enabled=true
4.8 conf/pivot/config.yaml
# The port on which the imply-ui server will listen.
port: 9095
# runtime directory
varDir: var/pivot
initialSettings:
  connections:
    - name: druid
      type: druid
      title: My Druid
      host: node03:8888
      coordinatorHosts: ["node01"]
      overlordHosts: ["node01"]

settingsLocation:
  location: mysql
  # the MySQL server must allow remote connections from this host
  uri: 'mysql://root:root@node01:3306/druid'
  table: 'pivot_state'
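Since Pivot on node03 stores its state in MySQL on node01, the account in the URI must be allowed to connect remotely. One way to grant that (MySQL 5.x syntax; the root/root pair matches the config above and should be replaced in production):

```sql
GRANT ALL PRIVILEGES ON druid.* TO 'root'@'%' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;
```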
5. Cluster Startup
Start ZooKeeper and Hadoop HDFS before starting the cluster.
5.1 Start Commands
- Start the coordinator and overlord on node01:
/export/servers/imply-2.8.19/bin/supervise -c /export/servers/imply-2.8.19/conf/supervise/master-no-zk.conf -daemon
- Start the middleManager and historical on node02:
/export/servers/imply-2.8.19/bin/supervise -c /export/servers/imply-2.8.19/conf/supervise/data.conf -daemon
- Start the broker, router, and pivot on node03:
/export/servers/imply-2.8.19/bin/supervise -c /export/servers/imply-2.8.19/conf/supervise/query.conf -daemon
Note: for the first startup, run in the foreground; tasks that fail to start are retried repeatedly. If a service does not come up, check its logs.
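A small helper for inspecting a service's log. The `var/sv/<service>/current` path is an assumption based on the supervise layout; adjust it if your build differs:

```shell
IMPLY_HOME=${IMPLY_HOME:-/export/servers/imply-2.8.19}

# Print the tail of one service's supervise log, or a note if it is missing.
show_log() {
  local f="$IMPLY_HOME/var/sv/$1/current"
  if [ -f "$f" ]; then
    tail -n 50 "$f"
  else
    echo "no log at $f"
  fi
}

show_log coordinator
```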
5.2 One-Command Startup
Environment variables:
export DRUID_HOME=/export/servers/imply-2.8.19
export PATH=${DRUID_HOME}/bin:$PATH
# apply with: source /etc/profile
Startup script:
# path: /export/servers/imply-2.8.19/bin/start-druid.sh
# make executable: chmod +x start-druid.sh
# content:
nohup ssh node01 "source /etc/profile; /export/servers/imply-2.8.19/bin/supervise -c /export/servers/imply-2.8.19/conf/supervise/master-no-zk.conf -daemon" &
nohup ssh node02 "source /etc/profile; /export/servers/imply-2.8.19/bin/supervise -c /export/servers/imply-2.8.19/conf/supervise/data.conf -daemon" &
nohup ssh node03 "source /etc/profile; /export/servers/imply-2.8.19/bin/supervise -c /export/servers/imply-2.8.19/conf/supervise/query.conf -daemon" &
5.3 Cluster Ports
Once the services are up, the following three pages are available.
- Cluster management: http://node01:8081/#/
- Task management: http://node01:8090/console.html
- Pivot visualization: http://node03:9095/pivot/home
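Beyond the web pages, every Druid process exposes a /status endpoint that returns JSON when it is healthy. A quick probe over the ports planned above (assumes curl is installed and node01-03 resolve):

```shell
# Each host:port pair is one service from the cluster plan.
status_urls() {
  for hp in node01:8081 node01:8090 node02:8083 node02:8091 node03:8082 node03:8888; do
    echo "http://$hp/status"
  done
}

# Probe every service and report reachability.
for u in $(status_urls); do
  curl -fsS --max-time 3 "$u" >/dev/null && echo "OK   $u" || echo "FAIL $u"
done
```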
5.4 Management UIs
(Screenshots of the management pages are omitted.)
6. Kafka Integration Test
Once the cluster is running, import data to test it; the official quickstart documents the procedure (link).
The main steps:
- Create the topic
- Create the DataSource schema
- Submit the DataSource schema
- Produce data
6.1 Create the Topic
### create the topic
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wikipedia
6.2 Create the DataSource Schema
### DataSource schema
### shipped with the install package, so there is no need to write it from scratch; just update the Kafka address inside
vi /export/servers/imply-2.8.19/quickstart/wikipedia-kafka-supervisor.json
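For reference, the Kafka address lives in the supervisor spec's ioConfig. The field names below follow the Druid Kafka indexing service; the exact contents of the shipped file may differ:

```json
"ioConfig": {
  "topic": "wikipedia",
  "consumerProperties": {
    "bootstrap.servers": "node01:9092"
  }
}
```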
6.3 Submit the DataSource Schema
### submit the DataSource schema
curl -XPOST -H'Content-Type: application/json' -d @/export/servers/imply-2.8.19/quickstart/wikipedia-kafka-supervisor.json http://node01:8090/druid/indexer/v1/supervisor
After submission, the management page shows the newly created DataSource.
URL: http://node03:9095/pivot/home
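You can also confirm the submission through the overlord's supervisor API (this is the standard Druid endpoint; the new supervisor id should appear in the returned JSON array):

```shell
# List supervisor ids registered on an overlord, or note if it is unreachable.
list_supervisors() {
  curl -fsS --max-time 5 "http://$1/druid/indexer/v1/supervisor" 2>/dev/null \
    || echo "overlord at $1 unreachable"
}

list_supervisors node01:8090
```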
6.4 Produce Data
Produce data with the Kafka console producer:
kafka-console-producer.sh --broker-list node01:9092 --topic wikipedia < /export/servers/imply-2.8.19/quickstart/wikipedia-2016-06-27-sampled.json
6.5 Query the Data
SELECT FLOOR(__time TO DAY) AS "Day", count(*) AS Edits FROM "wikipedia-kafka" GROUP BY FLOOR(__time TO DAY);
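The query can be run from Pivot's SQL view, or posted straight to the broker's SQL endpoint (/druid/v2/sql is the standard Druid SQL API; node03:8082 is the broker from the plan above):

```shell
# POST a Druid SQL query to the broker; prints rows as JSON, or a note on failure.
run_sql() {
  curl -fsS --max-time 5 -XPOST -H'Content-Type: application/json' \
    -d "{\"query\": \"$1\"}" "http://node03:8082/druid/v2/sql" 2>/dev/null \
    || echo "broker unreachable"
}

run_sql 'SELECT COUNT(*) AS Edits FROM \"wikipedia-kafka\"'
```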
7. Hadoop Integration Test
7.1 Create the DataSource Schema
### DataSource schema
### shipped with the install package, so there is no need to write it from scratch
/export/servers/imply-2.8.19/quickstart/wikipedia-index-hadoop.json
7.2 Submit the DataSource Schema
cd /export/servers/imply-2.8.19
bin/post-index-task --file quickstart/wikipedia-index-hadoop.json
7.3 Query the Data
SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2016-06-27 00:00:00' AND TIMESTAMP '2016-06-28 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 5
Versions and Errata
For new releases or errata, please email Mao Xiangyi at maoxiangyi@aliyun.com.