Spring Kafka cannot commit offsets: Group coordinator not available

When using Spring Kafka, offset commits fail and the consumer log repeatedly reports:

2022-05-28 17:24:32.078  INFO 14584 --- [umer_numb-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-consumer_numb-1, groupId=consumer_numb] Group coordinator xxxx:9092 (id: 2147483646 rack: null) is unavailable or invalid due to cause: error response COORDINATOR_NOT_AVAILABLE.isDisconnected: false. Rediscovery will be attempted.
2022-05-28 17:24:32.222  INFO 14584 --- [umer_numb-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-consumer_numb-1, groupId=consumer_numb] Discovered group coordinator xxxx:9092 (id: 2147483646 rack: null)
2022-05-28 17:24:32.222  INFO 14584 --- [umer_numb-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-consumer_numb-1, groupId=consumer_numb] Group coordinator xxxx:9092 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable.isDisconnected: false. Rediscovery will be attempted.
Cause:

In my test setup, a 3-node Kafka pseudo-cluster was deployed on a single machine. Kafka stores committed offsets in the internal topic __consumer_offsets, and with 3 nodes that topic should have 3 replicas. You can inspect the topic with kafka-topics.sh:

bin/kafka-topics.sh --zookeeper zk1:2181,zk2:2182,zk3:2183 --topic __consumer_offsets --describe

If Replicas and Isr each list only a single broker, the replication is misconfigured:

Topic: __consumer_offsets       Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 3,1,2
        Topic: __consumer_offsets       Partition: 1    Leader: 1       Replicas: 1 Isr: 1
        Topic: __consumer_offsets       Partition: 2    Leader: 2       Replicas: 2 Isr: 2
        Topic: __consumer_offsets       Partition: 3    Leader: 3       Replicas: 3 Isr: 3
        Topic: __consumer_offsets       Partition: 4    Leader: 1       Replicas: 1 Isr: 1
        ....
        Topic: __consumer_offsets       Partition: 49    Leader: 1       Replicas: 1 Isr: 1
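With 50 partitions the describe output is long, so it can help to count the mis-configured partitions mechanically. A small awk sketch (the sample file below stands in for real describe output; in practice, pipe the actual kafka-topics.sh --describe output into the awk command):

```shell
# Sample describe output, inlined for illustration only.
cat <<'EOF' > describe.txt
Topic: __consumer_offsets       Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 3,1,2
Topic: __consumer_offsets       Partition: 1    Leader: 1       Replicas: 1     Isr: 1
Topic: __consumer_offsets       Partition: 2    Leader: 2       Replicas: 2     Isr: 2
EOF
# A partition is mis-configured if its Replicas field contains no comma,
# i.e. only a single broker id.
awk '/Partition:/ { for (i = 1; i <= NF; i++) if ($i == "Replicas:") r = $(i+1);
     if (r !~ /,/) n++ }
     END { print n " partition(s) with a single replica" }' describe.txt
```

For the sample above this prints "2 partition(s) with a single replica"; on a healthy 3-replica setup it should print 0.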

Check the following broker configuration items in server.properties:

# Replication factor for Kafka's internal topics (including __consumer_offsets)
offsets.topic.replication.factor=3
# Default replication factor for automatically created topics
default.replication.factor=3
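A quick way to confirm what each broker was actually started with is to grep its properties file. A sketch, assuming one config file per node of the pseudo-cluster (the file name and its contents below are illustrative):

```shell
# Sample server.properties for one node of the pseudo-cluster (assumed content
# showing the misconfiguration).
cat <<'EOF' > server-1.properties
broker.id=1
offsets.topic.replication.factor=1
default.replication.factor=1
EOF
# Print the two replication settings; any value below the broker count (3 here)
# reproduces the COORDINATOR_NOT_AVAILABLE symptom described above.
grep -E '^(offsets\.topic|default)\.replication\.factor' server-1.properties
```

Repeat for every node's properties file, since each broker of a pseudo-cluster has its own copy.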

If these values are wrong, fix the configuration, delete every __consumer_offsets-* directory under each broker's log directory (the log.dirs setting), and delete the node /brokers/topics/__consumer_offsets in ZooKeeper. This wipes the recorded consumer offsets, but since this problem normally shows up during initial setup, little is lost. Then restart the Kafka cluster:

bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties

If you are worried about losing the committed offsets, you can also try a plain restart first, without deleting the log files or the ZooKeeper path.
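The full remediation can be collected into one sequence. A sketch written as a dry run, so it only records each step to a plan file instead of executing anything (LOG_DIR and ZK below are assumptions; substitute your own log.dirs value and ZooKeeper connect string, and change run() to execute once you have reviewed the plan):

```shell
LOG_DIR=/tmp/kafka-logs          # value of log.dirs in server.properties (assumed)
ZK=zk1:2181,zk2:2182,zk3:2183    # ZooKeeper connect string (assumed)

# Dry run: record each step instead of executing it.
# Replace the body of run() with "$@" once the plan looks right.
run() { echo "$@" >> remediation-plan.txt; }
: > remediation-plan.txt

# 1. Stop every broker in the cluster
run bin/kafka-server-stop.sh
# 2. On every broker, remove the on-disk segments of the internal offsets topic
run rm -rf "$LOG_DIR/__consumer_offsets-*"
# 3. Drop the topic metadata in ZooKeeper so the topic is recreated with the
#    corrected replication factor (older ZooKeeper CLIs use rmr, not deleteall)
run bin/zookeeper-shell.sh "$ZK" deleteall /brokers/topics/__consumer_offsets
# 4. Restart the brokers
run bin/kafka-server-start.sh -daemon config/server.properties

cat remediation-plan.txt
```

Recording the plan first is deliberate: steps 2 and 3 are destructive and must run on every broker while the whole cluster is down, so it is worth reading the exact commands before letting them execute.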
