How to send and receive large messages in Kafka - Spring Cloud Stream + Kafka
By default, Kafka limits messages to 1 MB. To send or receive messages larger than 1 MB, the following settings must be changed:
Broker: message.max.bytes and replica.fetch.max.bytes
Producer: max.request.size
Consumer: max.partition.fetch.bytes
Note: message.max.bytes must be less than or equal to replica.fetch.max.bytes, otherwise a message accepted by the leader may be too large for followers to replicate.
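All three settings below use the same value, 10485760 bytes. A quick sanity check of the arithmetic (this is just illustration, not part of any Kafka API):

```java
// Verify that the configured value equals 10 MB expressed in bytes.
public class SizeMath {
    public static void main(String[] args) {
        int tenMb = 10 * 1024 * 1024;
        System.out.println(tenMb); // 10485760, the value used for all three settings
    }
}
```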
Example:
server.properties
broker.id=1
message.max.bytes=10485760
replica.fetch.max.bytes=10485760
Spring Cloud Stream
spring:
  application:
    name: spring-kafka
  cloud:
    stream:
      kafka:
        binder:
          brokers: kafka-host:9092
          auto-create-topics: true
          auto-add-partitions: true
          min-partition-count: 1
          producer-properties:
            acks: -1
            retries: 1
            batch.size: 16384 # bytes, i.e. 16 KB
            linger.ms: 10 # 10 ms linger delay
            buffer.memory: 33554432 # 32 MB
            max.request.size: 10485760 # 10 MB
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
          consumer-properties:
            max.partition.fetch.bytes: 10485760 # 10 MB
            allow.auto.create.topics: true
            auto.commit.interval.ms: 1000 # ms
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
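Even with the limits raised, it can be useful to reject oversized payloads in application code before handing them to the binder, rather than waiting for the producer to throw a RecordTooLargeException. A minimal sketch below; MessageSizeGuard is a hypothetical helper (not part of Spring Cloud Stream or the Kafka client), and the headroom value is an assumption:

```java
// Hypothetical pre-send check mirroring the max.request.size value above (10 MB).
// Not part of any Kafka API; shown only to illustrate guarding payload size.
public class MessageSizeGuard {
    static final int MAX_REQUEST_SIZE = 10 * 1024 * 1024; // matches max.request.size

    static boolean fitsInRequest(byte[] payload) {
        // Leave some headroom for record headers and protocol overhead;
        // the exact overhead depends on the record batch format.
        int headroom = 1024;
        return payload.length <= MAX_REQUEST_SIZE - headroom;
    }

    public static void main(String[] args) {
        System.out.println(fitsInRequest(new byte[1024]));             // true
        System.out.println(fitsInRequest(new byte[11 * 1024 * 1024])); // false
    }
}
```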