Hi all,

I'm using Flink 1.1.3 with the Kafka 0.9 consumer. Reading its code, it says that the Kafka consumer will turn on auto offset committing if checkpointing is not enabled. I've turned off checkpointing, but the Kafka client does not seem to be committing offsets to Kafka. The offsets are important for our monitoring. Has anyone encountered this before?

--
Liu, Renjie
Software Engineer, MVAD
I'm not a Kafka expert but maybe Gordon (in CC) knows more.
Timo

On 09/01/17 at 11:51, Renjie Liu wrote: > [quoted original message]
Hi,

Not sure what might be going on here. I'm pretty certain that for FlinkKafkaConsumer09, when checkpointing is turned off, the internally used KafkaConsumer client will auto commit offsets back to Kafka at a default interval of 5000ms (the default value of "auto.commit.interval.ms").

Could you perhaps provide the logs of your job (you can send them to me privately if you prefer)? From the logs we should be able to see whether the internal KafkaConsumer client is correctly configured to auto commit, and also check if anything strange is going on.

Also, how are you reading the committed offsets in Kafka? I recall there was a problem with the 08 consumer that resulted in the Kafka CLI not correctly showing committed offsets of consumer groups. However, the 08 consumer had this problem only because we had to implement the auto offset committing ourselves. I don't think this should be an issue for the 09 consumer, since we rely solely on the Kafka client's own implementation to do the auto offset committing.

Cheers,
Gordon

On January 9, 2017 at 7:55:33 PM, Timo Walther ([hidden email]) wrote:
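(For anyone following along: the behavior Gordon describes is driven by the standard Kafka client settings passed to the consumer. Below is a minimal sketch of the relevant properties; the bootstrap address, group id, and the `autoCommitProps` helper are illustrative, not from the thread.)

```java
import java.util.Properties;

public class KafkaAutoCommitProps {

    // Builds the Properties that would be handed to FlinkKafkaConsumer09;
    // Flink forwards them to the internal KafkaConsumer client.
    static Properties autoCommitProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.setProperty("group.id", "my-monitoring-group");     // hypothetical group id

        // With Flink checkpointing disabled, offset committing is left to the
        // Kafka client itself, governed by these two settings:
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "5000"); // 5000 ms is the Kafka default

        return props;
    }

    public static void main(String[] args) {
        Properties props = autoCommitProps();
        System.out.println("enable.auto.commit = " + props.getProperty("enable.auto.commit"));
        System.out.println("auto.commit.interval.ms = " + props.getProperty("auto.commit.interval.ms"));
    }
}
```

If auto committing appears off in the logs, checking whether these keys were accidentally overridden in the job's consumer Properties is a good first step.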
Hi,

We had the same problem when running the 0.9 consumer against a 0.10 Kafka cluster. Upgrading the Flink Kafka connector to 0.10 fixed our issue.

Br,
Henkka

On Mon, Jan 9, 2017 at 5:39 PM, Tzu-Li (Gordon) Tai <[hidden email]> wrote:
Hi all,

I switched to the Kafka connector 0.10 and the problem is fixed. I think it may have been caused by an incompatibility between the 0.9 consumer and the 0.10 broker. Thanks Henri and Gordon.

On Tue, Jan 10, 2017 at 4:46 AM Henri Heiskanen <[hidden email]> wrote:
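(Note for readers applying the same fix: the dedicated 0.10 connector, `flink-connector-kafka-0.10`, first shipped with Flink 1.2, so switching off the 0.9 connector generally also means updating the Flink dependency versions. A sketch of the Maven change, with versions left as placeholder properties since the thread does not state them:)

```xml
<!-- before: the 0.9 connector -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.9_${scala.binary.version}</artifactId>
  <version>${flink.version}</version>
</dependency>

<!-- after: the 0.10 connector (available from Flink 1.2 onward) -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
  <version>${flink.version}</version>
</dependency>
```

The job code then uses FlinkKafkaConsumer010 in place of FlinkKafkaConsumer09.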
--
Liu, Renjie
Software Engineer, MVAD
Good to know! On January 10, 2017 at 1:06:29 PM, Renjie Liu ([hidden email]) wrote: