
How can I send large messages with Kafka (over 15 MB)?

lottogame 2020. 8. 31. 08:22



I send String messages to Kafka v0.8 using the Java Producer API. With messages of about 15 MB in size I get a MessageSizeTooLargeException. I tried setting message.max.bytes to 40 MB, but I still get the exception. Small messages worked without problems.

(The exception appears in the producer; this application does not have a consumer.)

What can I do to get rid of this exception?

My example producer config:

private ProducerConfig kafkaConfig() {
    Properties props = new Properties();
    props.put("metadata.broker.list", BROKERS);
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks", "1");
    props.put("message.max.bytes", "" + 1024 * 1024 * 40);
    return new ProducerConfig(props);
}

Error log:

4709 [main] WARN  kafka.producer.async.DefaultEventHandler  - Produce request with correlation id 214 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
4869 [main] WARN  kafka.producer.async.DefaultEventHandler  - Produce request with    correlation id 217 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
5035 [main] WARN  kafka.producer.async.DefaultEventHandler  - Produce request with   correlation id 220 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
5198 [main] WARN  kafka.producer.async.DefaultEventHandler  - Produce request with correlation id 223 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
5305 [main] ERROR kafka.producer.async.DefaultEventHandler  - Failed to send requests for topics datasift with correlation ids in [213,224]

kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
at kafka.producer.Producer.send(Unknown Source)
at kafka.javaapi.producer.Producer.send(Unknown Source)

You need to adjust three (or four) properties:

  • Consumer side: fetch.message.max.bytes — determines the maximum size of a message the consumer can fetch.
  • Broker side: replica.fetch.max.bytes — lets the replicas on the brokers send messages within the cluster so that messages are replicated correctly. If this is too small, a message will never be replicated and therefore never committed (fully replicated), so the consumer will never see it.
  • Broker side: message.max.bytes — the maximum size of a message the broker will accept from a producer.
  • Broker side (per topic): max.message.bytes — the maximum size of a message the broker will allow to be appended to the topic. This size is validated pre-compression. (Defaults to the broker's message.max.bytes.)

I found out about number 2 the hard way: you don't get any exception, message, or warning from Kafka at all, so be sure to keep it in mind when sending large messages.
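The settings above can be sketched as config fragments; the 20971520 value (20 MB) is illustrative, so pick a limit that fits your payloads plus headroom:

```properties
# Broker side ($KAFKA_HOME/config/server.properties)

# Maximum message size the broker accepts from producers
message.max.bytes=20971520
# Must be >= message.max.bytes, otherwise large messages are never
# replicated, therefore never committed, and never seen by consumers
replica.fetch.max.bytes=20971520

# Consumer side: maximum message size the consumer will fetch
fetch.message.max.bytes=20971520
```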


Compared to laughing_man's answer, the minor changes required for Kafka 0.10 and the new consumer:

  • Broker: no changes; you still need the properties message.max.bytes and replica.fetch.max.bytes. message.max.bytes has to be equal to or smaller (*) than replica.fetch.max.bytes.
  • Producer: increase max.request.size to send larger messages.
  • Consumer: increase max.partition.fetch.bytes to receive larger messages.

(*) Read the comments to learn more about message.max.bytes <= replica.fetch.max.bytes.
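For the 0.10-era clients, the changes above amount to two client settings; a minimal sketch using only java.util.Properties (the broker address and the 20 MB limit are placeholders, and the broker must of course allow at least the same size):

```java
import java.util.Properties;

public class LargeMessageConfigs {
    // Illustrative limit: 20 MB
    static final int MAX_BYTES = 20 * 1024 * 1024;

    // Producer side: raise max.request.size to allow large produce requests
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder address
        props.put("max.request.size", Integer.toString(MAX_BYTES));
        return props;
    }

    // Consumer side: raise max.partition.fetch.bytes to allow large fetches
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder address
        props.put("max.partition.fetch.bytes", Integer.toString(MAX_BYTES));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("max.request.size"));
    }
}
```

These Properties objects would be passed to KafkaProducer and KafkaConsumer constructors along with the usual serializer/deserializer settings.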


You need to override the following properties:

Broker configs ($KAFKA_HOME/config/server.properties)

  • replica.fetch.max.bytes
  • message.max.bytes

Consumer configs ($KAFKA_HOME/config/consumer.properties)

This step didn't work for me; I added the property to the consumer application instead, and it worked fine.

  • fetch.message.max.bytes

Restart the server.

Look at this documentation for more info: http://kafka.apache.org/08/configuration.html


The idea is to have the same maximum message size all the way from the Kafka producer through the Kafka broker to the Kafka consumer, i.e.

Kafka producer --> Kafka Broker --> Kafka Consumer

Suppose the requirement is to send messages of 15 MB; then the producer, the broker, and the consumer all need to be in sync.

Kafka Producer sends 15 MB --> Kafka Broker Allows/Stores 15 MB --> Kafka Consumer receives 15 MB

The setting therefore should be:

a) on Broker:

message.max.bytes=15728640 
replica.fetch.max.bytes=15728640

b) on Consumer:

fetch.message.max.bytes=15728640
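The value 15728640 used above is just 15 MB expressed in bytes; a quick sanity check:

```java
public class MaxBytes {
    public static void main(String[] args) {
        // 15 MB in bytes, as used in the broker and consumer settings above
        int fifteenMb = 15 * 1024 * 1024;
        System.out.println(fifteenMb); // prints 15728640
    }
}
```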

One key thing to remember is that the message.max.bytes attribute must be in sync with the consumer's fetch.message.max.bytes property. The fetch size must be at least as large as the maximum message size; otherwise producers could send messages larger than the consumer can consume or fetch. It might be worth taking a look at that.

Which version of Kafka are you using? Also provide some more details of the trace you are getting. Is there something like ... payload size of xxxx larger than 1000000 coming up in the log?


The answer from @laughing_man is quite accurate. Still, I want to add a recommendation I learned from Kafka expert Stephane Maarek on Quora.

Kafka isn’t meant to handle large messages.

Your API should use cloud storage (e.g. AWS S3) and push only a reference to the S3 object to Kafka, or to any message broker. You must find somewhere to persist your data; maybe it's a network drive, maybe it's something else entirely, but it shouldn't be the message broker.

Now, if you don't want to go with the above solution:

The maximum message size in Apache Kafka is 1 MB by default (the setting in your brokers is called message.max.bytes). If you really need larger messages, you can increase that size; make sure to also increase the network buffers for your producers and consumers.

And if you really care about splitting your message, make sure each part of the split has the exact same key so that it gets pushed to the same partition, and have the message content report a "part id" so that your consumer can fully reconstruct the message.
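A minimal sketch of the splitting step in plain Java (no Kafka dependency; the chunk size and the reassembly scheme in the comments are assumptions, not a standard protocol):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MessageSplitter {
    /** Split a payload into chunks of at most chunkSize bytes each. */
    static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> parts = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, payload.length);
            parts.add(Arrays.copyOfRange(payload, offset, end));
        }
        return parts;
    }

    public static void main(String[] args) {
        byte[] big = new byte[2_500_000];               // ~2.5 MB dummy payload
        List<byte[]> parts = split(big, 1_000_000);     // stay under the 1 MB default
        // Every part would then be produced with the SAME Kafka key (so all
        // parts land on the same partition, in order) and a part id in the
        // payload or headers (e.g. "msg42:0", "msg42:1", ...) so the consumer
        // can reassemble the original message.
        System.out.println(parts.size()); // 3 parts: 1 MB + 1 MB + 0.5 MB
    }
}
```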

You can also explore compression if your message is text-based (gzip, snappy, or lz4 compression), which may reduce the data size, but not magically.

Again, you have to use an external system to store that data and just push an external reference to Kafka. That is a very common and widely accepted architecture, and the one you should go with.

Keep in mind that Kafka works best when messages are huge in number, not in size.

Source: https://www.quora.com/How-do-I-send-Large-messages-80-MB-in-Kafka

Source: https://stackoverflow.com/questions/21020347/how-can-i-send-large-messages-with-kafka-over-15mb
