Honestly, this one really had me stumped. I'd bet anyone running a ClickHouse cluster on Alibaba Cloud has hit this problem.

First, let's look at what happens when we try to connect after startup:

[root@hadoop101 ~]# clickhouse-client -m 
ClickHouse client version 20.4.5.36 (official build).
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000)

From this output alone you can't tell what's wrong. Don't believe what people online say about the port being occupied or a stale process still running.

Go straight to the logs:

[root@hadoop101 clickhouse-server]# tail -f clickhouse-server.log

Here's the excerpt of the log up through the ERROR:

2022.02.14 17:32:43.642826 [ 2396 ] {} <Information> Application: starting up
2022.02.14 17:32:43.646257 [ 2396 ] {} <Debug> Application: rlimit on number of file descriptors is 500000
2022.02.14 17:32:43.646274 [ 2396 ] {} <Debug> Application: Initializing DateLUT.
2022.02.14 17:32:43.646282 [ 2396 ] {} <Trace> Application: Initialized DateLUT with time zone 'Asia/Shanghai'.
2022.02.14 17:32:43.646304 [ 2396 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2022.02.14 17:32:43.646484 [ 2396 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'hadoop101' as replica host.
2022.02.14 17:32:43.648321 [ 2396 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2022.02.14 17:32:43.649468 [ 2396 ] {} <Information> Application: Uncompressed cache size was lowered to 3.75 GiB because the system has low amount of memory
2022.02.14 17:32:43.649689 [ 2396 ] {} <Information> Application: Mark cache size was lowered to 3.75 GiB because the system has low amount of memory
2022.02.14 17:32:43.649726 [ 2396 ] {} <Information> Application: Setting max_server_memory_usage was set to 6.75 GiB
2022.02.14 17:32:43.649734 [ 2396 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2022.02.14 17:32:43.651423 [ 2396 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
2022.02.14 17:32:43.656837 [ 2401 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2022.02.14 17:32:43.657703 [ 2401 ] {} <Debug> system.metric_log: Loading data parts
2022.02.14 17:32:43.663843 [ 2401 ] {} <Debug> system.metric_log: Loaded data parts (2 items)
2022.02.14 17:32:43.664195 [ 2401 ] {} <Debug> system.trace_log: Loading data parts
2022.02.14 17:32:43.664937 [ 2401 ] {} <Debug> system.trace_log: Loaded data parts (3 items)
2022.02.14 17:32:43.665026 [ 2396 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2022.02.14 17:32:43.665941 [ 2396 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2022.02.14 17:32:43.665958 [ 2396 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2022.02.14 17:32:43.666002 [ 2396 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2022.02.14 17:32:43.666800 [ 2396 ] {} <Debug> Application: Loaded metadata.
2022.02.14 17:32:43.666835 [ 2396 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2022.02.14 17:32:43.668407 [ 2396 ] {} <Information> Application: Shutting down storages.
2022.02.14 17:32:43.668460 [ 2420 ] {} <Trace> SystemLog (system.trace_log): Flushing system log
2022.02.14 17:32:43.668545 [ 2420 ] {} <Debug> SystemLog (system.trace_log): Will use existing table system.trace_log for TraceLog
2022.02.14 17:32:43.668793 [ 2420 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 26.30 GiB.
2022.02.14 17:32:43.669562 [ 2420 ] {} <Trace> system.trace_log: Renaming temporary part tmp_insert_202202_1_1_0 to 202202_19_19_0.
2022.02.14 17:32:44.665383 [ 2421 ] {} <Trace> SystemLog (system.metric_log): Flushing system log
2022.02.14 17:32:44.665754 [ 2421 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2022.02.14 17:32:44.667693 [ 2421 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 26.30 GiB.
2022.02.14 17:32:44.668659 [ 2421 ] {} <Trace> system.metric_log: Renaming temporary part tmp_insert_202202_1_1_0 to 202202_71_71_0.
2022.02.14 17:32:44.669585 [ 2396 ] {} <Trace> BackgroundSchedulePool/BgSchPool: Waiting for threads to finish.
2022.02.14 17:32:44.669831 [ 2396 ] {} <Debug> Application: Shut down storages.
2022.02.14 17:32:44.670359 [ 2396 ] {} <Debug> Application: Destroyed global context.
2022.02.14 17:32:44.670824 [ 2396 ] {} <Error> Application: DB::Exception: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.4.5.36 (official build))
2022.02.14 17:32:44.670847 [ 2396 ] {} <Information> Application: shutting down
2022.02.14 17:32:44.670853 [ 2396 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2022.02.14 17:32:44.670904 [ 2399 ] {} <Trace> BaseDaemon: Received signal -2
2022.02.14 17:32:44.670925 [ 2399 ] {} <Information> BaseDaemon: Stop SignalListener thread

The key line:

2022.02.14 17:32:44.670824 [ 2396 ] {} <Error> Application: DB::Exception: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.4.5.36 (official build))
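Rather than eyeballing the whole log, you can pull out just the error lines. This sketch assumes the default package-install log path; if yours differs, the <log> setting in config.xml is authoritative:

```shell
# Show the most recent <Error> entries from the server log
# (default log path for an rpm/deb install; adjust if customized)
grep -F '<Error>' /var/log/clickhouse-server/clickhouse-server.log | tail -n 5
```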

It's saying the server failed to listen on that address and port. Normally this is where you'd go check the port, but then netstat shows nothing is occupying 8123, and you're left scratching your head.
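For completeness, here's the quick way to rule out a genuine port conflict (ss is the modern replacement for netstat; which one is installed varies by distro, and -p needs root to show process names):

```shell
# List listeners on the ClickHouse ports; prints the fallback message if they're free
ss -tlnp | grep -E ':(8123|9000)\s' || echo "ports 8123/9000 are free"
```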

The key: look at the config file, /etc/clickhouse-server/config.xml:

    <!-- Listen specified host. use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere. -->

    <!-- <listen_host>::</listen_host> -->

    <!-- Same for hosts with disabled ipv6: -->
    <listen_host>0.0.0.0</listen_host>

    <!-- Default values - try listen localhost on ipv4 and ipv6: -->
    <!--
    <listen_host>::1</listen_host>
    <listen_host>127.0.0.1</listen_host>
    -->

As the comments explain, :: is the IPv6 wildcard address: it accepts both IPv4 and IPv6 connections, but it needs a working IPv6 stack. On a host with IPv6 disabled you cannot use

<listen_host>::</listen_host>

and must instead use

<listen_host>0.0.0.0</listen_host>
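Before committing to the change, you can confirm whether the host actually lacks a usable IPv6 stack. This is a standard-Linux check (an assumption on my part): /proc/net/if_inet6 only exists when the kernel has IPv6 addresses configured.

```shell
# If this file is missing, the kernel has no IPv6 stack up,
# and <listen_host>::</listen_host> cannot be bound
if [ -e /proc/net/if_inet6 ]; then
    echo "IPv6 available: <listen_host>::</listen_host> should bind"
else
    echo "IPv6 unavailable: use <listen_host>0.0.0.0</listen_host>"
fi
```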

After making that change, restart the server and it comes up fine. The root cause is that the cloud server doesn't have IPv6 enabled, so ClickHouse can't bind the IPv6 wildcard address. At least that's my read on it.
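If you want to script the fix, here is a minimal sketch. The paths and service name assume a standard rpm/deb install of clickhouse-server with an active <listen_host>::</listen_host> line; back up the config first:

```shell
# Back up the config, flip listen_host from the IPv6 wildcard to the
# IPv4 wildcard, then restart the server
CONF=/etc/clickhouse-server/config.xml
cp "$CONF" "$CONF.bak"
sed -i 's|<listen_host>::</listen_host>|<listen_host>0.0.0.0</listen_host>|' "$CONF"
systemctl restart clickhouse-server   # or: service clickhouse-server restart
```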
