Resolving the librdkafka error "Local: Queue full"

Analysis: this error occurs when the producer generates messages faster than the Kafka broker can accept and persist them, so the producer's local buffer queue fills up. By default, once that local queue is full, produce() fails with the error "Local: Queue full" (RdKafka::ERR__QUEUE_FULL).
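The size of this local queue is controlled by the producer configuration properties `queue.buffering.max.messages` (message count) and `queue.buffering.max.kbytes` (total size). A sketch of raising them through the C++ Conf API before creating the producer; the broker address and the numeric values here are illustrative, not tuned recommendations:

```cpp
#include <iostream>
#include <memory>
#include <string>

#include <librdkafka/rdkafkacpp.h>

// Sketch: enlarge the producer's local buffer queue before creating
// the producer. The values below are illustrative only.
RdKafka::Producer *create_producer(const std::string &brokers) {
  std::string errstr;
  std::unique_ptr<RdKafka::Conf> conf(
      RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL));

  conf->set("bootstrap.servers", brokers, errstr);
  // Maximum number of messages allowed on the local queue.
  conf->set("queue.buffering.max.messages", "500000", errstr);
  // Maximum total size of the local queue, in kilobytes.
  conf->set("queue.buffering.max.kbytes", "1048576", errstr);

  RdKafka::Producer *producer =
      RdKafka::Producer::create(conf.get(), errstr);
  if (!producer)
    std::cerr << "Failed to create producer: " << errstr << std::endl;
  return producer;
}
```

Raising the limits only buys headroom, though; if the producer is persistently faster than the broker, the queue will eventually fill again, so the retry logic below is still needed.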

How do we handle this error?
As noted above, the root cause is that the producer is working a bit too fast. So we need to call poll() to let the local queue drain for a while.
Here is the code:

  retry:
    RdKafka::ErrorCode err = producer->produce(
        /* Topic name */
        topic,
        /* Any Partition: the builtin partitioner will be
         * used to assign the message to a topic based
         * on the message key, or random partition if
         * the key is not set. */
        RdKafka::Topic::PARTITION_UA,
        /* Make a copy of the value */
        RdKafka::Producer::RK_MSG_COPY /* Copy payload */,
        /* Value */
        const_cast<char *>(line.c_str()), line.size(),
        /* Key */
        NULL, 0,
        /* Timestamp (defaults to current time) */
        0,
        /* Message headers, if any */
        NULL,
        /* Per-message opaque value passed to
         * delivery report */
        NULL);

    if (err != RdKafka::ERR_NO_ERROR) {
      std::cerr << "% Failed to produce to topic " << topic << ": "
                << RdKafka::err2str(err) << std::endl;

      if (err == RdKafka::ERR__QUEUE_FULL) {
        /* If the internal queue is full, wait for
         * messages to be delivered and then retry.
         * The internal queue represents both
         * messages to be sent and messages that have
         * been sent or failed, awaiting their
         * delivery report callback to be called.
         *
         * The internal queue is limited by the
         * configuration property
         * queue.buffering.max.messages */
        producer->poll(1000 /*block for max 1000ms*/);
        goto retry;
      }

    } else {
      std::cerr << "% Enqueued message (" << line.size() << " bytes) "
                << "for topic " << topic << std::endl;
    }

Note: after catching the ERR__QUEUE_FULL error we call poll(1000), which blocks for up to 1000 ms while queued messages are sent to Kafka and delivery reports are serviced, freeing space in the local queue before the retry.
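Stripped of the Kafka specifics, the retry logic above is just backpressure against a bounded buffer: try to enqueue, and if the buffer is full, drain some of it and try again. A minimal self-contained sketch of that pattern; the class and function names are illustrative, not librdkafka APIs:

```cpp
#include <cassert>
#include <cstddef>
#include <queue>
#include <string>
#include <vector>

// Hypothetical stand-in for the producer's bounded local queue
// (librdkafka's real limit is queue.buffering.max.messages).
class BoundedBuffer {
public:
  explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}

  // Mirrors produce(): fails instead of blocking when the queue is full.
  bool try_push(const std::string &msg) {
    if (queue_.size() >= capacity_)
      return false;                  // "Local: Queue full"
    queue_.push(msg);
    return true;
  }

  // Mirrors poll(): deliver up to `max` queued messages, freeing space.
  std::size_t drain(std::size_t max, std::vector<std::string> &delivered) {
    std::size_t n = 0;
    while (n < max && !queue_.empty()) {
      delivered.push_back(queue_.front());
      queue_.pop();
      ++n;
    }
    return n;
  }

private:
  std::size_t capacity_;
  std::queue<std::string> queue_;
};

// Produce with the same retry-on-full pattern as the code above.
void produce_all(BoundedBuffer &buf, const std::vector<std::string> &msgs,
                 std::vector<std::string> &delivered) {
  for (const auto &m : msgs) {
    while (!buf.try_push(m)) {       // queue full: make room, then retry
      buf.drain(1, delivered);       // stands in for producer->poll(1000)
    }
  }
  // Final drain, analogous to flushing the producer before shutdown.
  while (buf.drain(16, delivered) > 0) {}
}
```

The same reasoning also explains why a real producer should call poll(0) regularly (and flush() before exit) even when no error occurs: delivery reports must be serviced for queue slots to be reclaimed.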
