In an earlier post the author already covered how to deploy Kafka. Kafka has only one deployment mode, cluster mode: its architecture is inherently a cluster architecture, so deploying a single node and deploying multiple nodes are essentially the same.

Cluster Nodes

Node        Address
ZooKeeper   192.168.1.5
Kafka0      192.168.1.4
Kafka1      192.168.1.6
Kafka2      192.168.1.199

For simplicity, a ZooKeeper cluster is not used here; a standalone ZooKeeper instance is enough for this setup.

All four nodes need a Java environment, and the three Kafka cluster nodes also need the Kafka distribution downloaded and uploaded; see the following blog post for details:

ZooKeeper

The ZooKeeper configuration file:
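A minimal sketch of what zoo.cfg might look like for this standalone instance; only the client port 9000 is confirmed by the status output below, and the dataDir path is illustrative:

# zoo.cfg: minimal standalone configuration (sketch)
# Basic time unit in milliseconds
tickTime=2000
# Directory where snapshots are stored (illustrative path)
dataDir=/tmp/zookeeper
# Port clients connect to; matches the "Client port found: 9000" reported below
clientPort=9000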

Start ZooKeeper

[root@192 ~]# cd /usr/local/apache-zookeeper-3.6.3-bin/bin/
[root@192 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@192 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Client port found: 9000. Client address: localhost. Client SSL: false.
Mode: standalone

ZooKeeper has started successfully.

Kafka Cluster

[root@192 ~]# cd /usr/local/kafka_2.13-3.0.0
[root@192 kafka_2.13-3.0.0]# vim config/server.properties 
Edit config/server.properties on each node; apart from broker.id and the listener address, the nodes are configured identically (a sketch of the relevant entries follows this list):
  • Kafka0: broker.id=0
  • Kafka1: broker.id=1
  • Kafka2: broker.id=2
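The following is a sketch of the entries changed in server.properties, using Kafka0 as the example; the broker.id and listener IP differ per node, the ZooKeeper address and log directory follow from the outputs shown later, and all other settings keep their defaults:

# server.properties on Kafka0 (sketch); on Kafka1/Kafka2 change broker.id and the listener IP accordingly
broker.id=0
# Address this broker listens on and advertises
listeners=PLAINTEXT://192.168.1.4:9092
# Directory for the partition logs (seen later as /tmp/kafka-logs)
log.dirs=/tmp/kafka-logs
# Standalone ZooKeeper on 192.168.1.5, client port 9000
zookeeper.connect=192.168.1.5:9000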

Start each Kafka node:

[root@192 kafka_2.13-3.0.0]# bin/kafka-server-start.sh config/server.properties &

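If you prefer not to keep the broker in the foreground with a trailing &, the standard startup script also accepts a -daemon option:

[root@192 kafka_2.13-3.0.0]# bin/kafka-server-start.sh -daemon config/server.properties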
On the Kafka nodes, nothing cluster-related was changed apart from broker.id, which only distinguishes the nodes; in particular, the addresses of the other Kafka cluster nodes were never configured anywhere. By contrast, when building a ZooKeeper cluster, the addresses of all ZooKeeper nodes must be configured on every ZooKeeper node. So how does a Kafka node discover the other nodes in the cluster? It relies on ZooKeeper, which maintains the Kafka cluster information.

Query the data in ZooKeeper:

[root@192 bin]# ./zkCli.sh -timeout 40000 -server 127.0.0.1:9000
...
[zk: 127.0.0.1:9000(CONNECTED) 4] ls -R /brokers 
/brokers
/brokers/ids
/brokers/seqid
/brokers/topics
/brokers/ids/0
/brokers/ids/1
/brokers/ids/2
[zk: 127.0.0.1:9000(CONNECTED) 5] get /brokers/ids/0
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.4:9092"],"jmx_port":-1,"port":9092,"host":"192.168.1.4","version":5,"timestamp":"1642677446098"}
[zk: 127.0.0.1:9000(CONNECTED) 6] get /brokers/ids/1
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.6:9092"],"jmx_port":-1,"port":9092,"host":"192.168.1.6","version":5,"timestamp":"1642677466209"}
[zk: 127.0.0.1:9000(CONNECTED) 7] get /brokers/ids/2
{"features":{},"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://192.168.1.199:9092"],"jmx_port":-1,"port":9092,"host":"192.168.1.199","version":5,"timestamp":"1642677476386"}

Clearly, ZooKeeper maintains the Kafka cluster information. Now stop the Kafka service on the Kafka1 node.
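For example, using the stop script shipped with Kafka, on the Kafka1 node:

[root@192 kafka_2.13-3.0.0]# bin/kafka-server-stop.sh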
ZooKeeper also updates the Kafka cluster information dynamically; this behavior comes from ZooKeeper's ephemeral nodes.

[zk: 127.0.0.1:9000(CONNECTED) 8] ls -R /brokers 
/brokers
/brokers/ids
/brokers/seqid
/brokers/topics
/brokers/ids/0
/brokers/ids/2

Create a topic on the Kafka0 node:

[root@192 kafka_2.13-3.0.0]# bin/kafka-topics.sh --create --bootstrap-server 192.168.1.4:9092 --replication-factor 2 --partitions 2 --topic kaven-topic
[2022-01-20 19:43:51,408] INFO Creating topic kaven-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(0, 2), 1 -> ArrayBuffer(2, 0)) (kafka.zk.AdminZkClient)
[2022-01-20 19:43:51,530] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(kaven-topic-0) (kafka.server.ReplicaFetcherManager)
[2022-01-20 19:43:51,601] INFO [LogLoader partition=kaven-topic-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log$)
[2022-01-20 19:43:51,611] INFO Created log for partition kaven-topic-0 in /tmp/kafka-logs/kaven-topic-0 with properties {} (kafka.log.LogManager)
[2022-01-20 19:43:51,612] INFO [Partition kaven-topic-0 broker=0] No checkpointed highwatermark is found for partition kaven-topic-0 (kafka.cluster.Partition)
[2022-01-20 19:43:51,613] INFO [Partition kaven-topic-0 broker=0] Log loaded for partition kaven-topic-0 with initial high watermark 0 (kafka.cluster.Partition)
[2022-01-20 19:43:51,703] INFO [LogLoader partition=kaven-topic-1, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log$)
[2022-01-20 19:43:51,704] INFO Created log for partition kaven-topic-1 in /tmp/kafka-logs/kaven-topic-1 with properties {} (kafka.log.LogManager)
[2022-01-20 19:43:51,704] INFO [Partition kaven-topic-1 broker=0] No checkpointed highwatermark is found for partition kaven-topic-1 (kafka.cluster.Partition)
[2022-01-20 19:43:51,704] INFO [Partition kaven-topic-1 broker=0] Log loaded for partition kaven-topic-1 with initial high watermark 0 (kafka.cluster.Partition)
[2022-01-20 19:43:51,704] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions HashSet(kaven-topic-1) (kafka.server.ReplicaFetcherManager)
[2022-01-20 19:43:51,730] INFO [ReplicaFetcherManager on broker 0] Added fetcher to broker 2 for partitions Map(kaven-topic-1 -> InitialFetchState(BrokerEndPoint(id=2, host=192.168.1.199:9092),0,0)) (kafka.server.ReplicaFetcherManager)
[2022-01-20 19:43:51,751] INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2022-01-20 19:43:51,757] INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Truncating partition kaven-topic-1 to local high watermark 0 (kafka.server.ReplicaFetcherThread)
[2022-01-20 19:43:51,758] INFO [Log partition=kaven-topic-1, dir=/tmp/kafka-logs] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
Created topic kaven-topic.

Query the created topics on the Kafka2 node:

[root@192 kafka_2.13-3.0.0]# bin/kafka-topics.sh --list --bootstrap-server 192.168.1.199:9092
kaven-topic

Clearly, the data has been synchronized across the Kafka cluster, and ZooKeeper also holds this information:

[zk: 127.0.0.1:9000(CONNECTED) 9] ls -R /brokers 
/brokers
/brokers/ids
/brokers/seqid
/brokers/topics
/brokers/ids/0
/brokers/ids/2
/brokers/topics/kaven-topic
/brokers/topics/kaven-topic/partitions
/brokers/topics/kaven-topic/partitions/0
/brokers/topics/kaven-topic/partitions/1
/brokers/topics/kaven-topic/partitions/0/state
/brokers/topics/kaven-topic/partitions/1/state

The created topic has two partitions, and each partition has two replicas (--replication-factor 2 --partitions 2). The state of partition 0 is shown below: its two replicas are on the Kafka0 and Kafka2 nodes, and the leader is on the Kafka0 node.

[zk: 127.0.0.1:9000(CONNECTED) 11] get /brokers/topics/kaven-topic/partitions/0/state
{"controller_epoch":3,"leader":0,"version":1,"leader_epoch":0,"isr":[0,2]}

1partition的信息如下所示,可见两个replication分布在Kafka0Kafka2节点,而其中的leaderKafka1节点上。

[zk: 127.0.0.1:9000(CONNECTED) 12] get /brokers/topics/kaven-topic/partitions/1/state
{"controller_epoch":3,"leader":2,"version":1,"leader_epoch":0,"isr":[2,0]}

不了解Kafkatopic,可以阅读下面这篇博客。

That wraps up setting up a Kafka cluster. If the author has gotten anything wrong, or if you see things differently, feel free to comment and add your thoughts.
