Reading and writing ClickHouse from Spark
Official documentation: https://clickhouse.tech/docs/zh/
Reading ClickHouse data with Spark
Option 1: a plain JDBC connection. This loads the full table, with the .where filter applied on the Spark side.
val prop = new java.util.Properties
prop.setProperty("user", "default")
prop.setProperty("password", "123456")
prop.setProperty("driver", "ru.yandex.clickhouse.ClickHouseDriver")
val readDataDf = sparkSession
.read
.jdbc("jdbc:clickhouse://hadoop102:8123",
"table_op",
prop)
.where("LocationTime>='2021-09-21 09:00:00' AND LocationTime<='2021-09-21 18:00:00'")
Option 2: add the filter condition to the query itself so only part of the data is loaded (recommended: if your table is very large, the Spark driver in option 1 may fail to come up).
// push the filter into a subquery so ClickHouse only returns the matching rows
val tablename = s"(select * from table_op where LocationTime between '$start_time' and '$end_time') temp"
val readDataDf = sparkSession.read
.format("jdbc")
.option("url", "jdbc:clickhouse://hadoop05:8123")
.option("fetchsize", "500000")
.option("driver", "ru.yandex.clickhouse.ClickHouseDriver")
.option("user", "default")
.option("password", "123456")
.option("dbtable", tablename)
.load()
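On top of the subquery filter, Spark's JDBC source can also read in parallel by splitting the range of a numeric column across several connections. A minimal sketch, assuming the Day column and the bounds shown here (both are illustrative, not from the original post):

```scala
// Hypothetical sketch: parallel JDBC read split on a numeric column.
// "Day" and the 1..31 bounds are assumptions; use a numeric column of your table.
val parallelReadDf = sparkSession.read
  .format("jdbc")
  .option("url", "jdbc:clickhouse://hadoop05:8123")
  .option("driver", "ru.yandex.clickhouse.ClickHouseDriver")
  .option("user", "default")
  .option("password", "123456")
  .option("dbtable", tablename)
  .option("partitionColumn", "Day") // column whose range is split across tasks
  .option("lowerBound", "1")
  .option("upperBound", "31")
  .option("numPartitions", "4")     // 4 concurrent JDBC connections
  .load()
```

Each of the 4 partitions issues its own query with a generated WHERE range on Day, so the load is spread across executors instead of a single connection.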
Writing to ClickHouse from Spark
val prop = new java.util.Properties
prop.setProperty("user", CommonConfig.CLICK_USER)
prop.setProperty("password", CommonConfig.CLICK_PASSWORD)
prop.setProperty("driver", CommonConfig.CLICK_DRIVER)
resultDataDf
.write
.mode(SaveMode.Append)
.option("batchsize", "500000")
.option("isolationLevel", "NONE")
.option("numPartitions", "1")
.jdbc("jdbc:clickhouse://hadoop102:8123", //CommonConfig.CLICK_URL
"table_op_2020", prop)
Creating ClickHouse tables
Start the ClickHouse client
clickhouse-client --port 9020 -h hadoop05 -d default -m -u default --password 123456
Create the tables (local table, then distributed table)
Create the local table on every machine in the cluster in one shot (because of ON CLUSTER, you only need to run the CREATE statement once on any one node and the local table is created on every node in the cluster; without ON CLUSTER you would have to run the statement on each machine separately).
CREATE TABLE `table_2021` ON CLUSTER mycluster (
`Id` String,
`Udid` String,
`LocationTime` String,
`Gcode` String,
`IP` String,
`IMEI` String,
`IDFA` String,
`UserId` String,
`Year` UInt16,
`Month` UInt8,
`Day` UInt8
)ENGINE = MergeTree()
PARTITION BY (Year,Month,Day)
ORDER BY (Gcode,Udid)
SETTINGS index_granularity=8192, enable_mixed_granularity_parts=1;
By default the primary key is the same as the sorting key (specified by the ORDER BY clause),
so in most cases there is no need for a separate PRIMARY KEY clause.
enable_mixed_granularity_parts=1 allows data parts with byte-based (adaptive) index granularity to coexist with fixed-row granularity, which can help query performance on tables with large rows.
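If you do want a primary key that differs from the sorting key, ClickHouse requires it to be a prefix of the ORDER BY columns. A minimal sketch (the table and column names below are illustrative, not from the original post):

```sql
CREATE TABLE table_pk_demo (
  `Gcode` String,
  `Udid` String,
  `LocationTime` String
) ENGINE = MergeTree()
ORDER BY (Gcode, Udid, LocationTime)
PRIMARY KEY (Gcode, Udid);  -- must be a prefix of the ORDER BY columns
```

This keeps the sparse index small (only Gcode and Udid are stored in it) while rows on disk are still fully sorted by all three columns.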
Create the distributed table on every machine in the cluster in one shot
CREATE TABLE table_op_2021 ON CLUSTER mycluster AS table_2021 ENGINE = Distributed(mycluster, default, table_2021, rand());
Create the distributed table on a single machine only
CREATE TABLE table_op_2021 AS table_2021 ENGINE = Distributed(mycluster, default, table_2021, rand());
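Once the distributed table exists, reads and writes against it fan out to the local tables on the shards. A minimal sketch of using it (the sample values are illustrative):

```sql
-- Writes through the distributed table are routed to a shard by rand()
INSERT INTO table_op_2021
  (Id, Udid, LocationTime, Gcode, IP, IMEI, IDFA, UserId, Year, Month, Day)
VALUES
  ('1', 'u-001', '2021-09-21 09:00:00', 'g01', '10.0.0.1', '', '', 'user1', 2021, 9, 21);

-- Reads query the local tables on all shards and merge the results
SELECT count() FROM table_op_2021 WHERE Year = 2021 AND Month = 9;
```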