I've noticed that FLINK-11501 was implemented in flink-connector-kafka-0.10 [1], but it isn't present in the current version of flink-connector-kafka. Is there any reason for this, and what would be the best way to implement rate-limiting functionality in the current Kafka consumer?
Thanks,
David

[1] https://github.com/lyft/flink/blob/release-1.11-lyft/flink-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer010.java
My two cents here,
- A Flink job already has backpressure, so in some use cases rate limiting can be achieved by setting the parallelism to a proper number. There is an open issue around checkpoint reliability under backpressure; the community seems to be working on it.
- Rate limiting can easily be abused and cause a lot of confusion. Think about a use case where you have two streams doing a simple interval join. Unless you are able to rate-limit both with proper values dynamically, you might see the timestamp and watermark gaps keep increasing, causing checkpoint failures. So the question might be: instead of looking at rate limiting one source, how do you slow down all sources without ever-increasing event-time and watermark gaps? That already sounds complicated.

With that said, if you really want to have rate limiting on your own, you can try the following approach :) It works well for us: extend FlinkKafkaConsumer and override the relevant methods to throttle consumption.

public class SynchronousKafkaConsumer<T> extends FlinkKafkaConsumer<T> {
    @Override
    ...
    @Override
    ...
}

Thanks,
Chen
Pinterest Data
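As a sketch of the throttling idea behind such a consumer (this is illustrative only, not Chen's actual code — the Lyft connector uses Guava's RateLimiter; the class and rate below are hypothetical): before emitting each record, the fetch loop acquires a permit from a blocking limiter like this one.

```java
import java.util.concurrent.locks.LockSupport;

// Hypothetical minimal blocking throttle, similar in spirit to Guava's
// RateLimiter: each acquire() blocks until the next permit time, spacing
// permits evenly at the configured rate.
public class SimpleThrottle {
    private final long intervalNanos;   // time between consecutive permits
    private long nextFreeTimeNanos;     // earliest instant the next permit is free

    public SimpleThrottle(double permitsPerSecond) {
        this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
        this.nextFreeTimeNanos = System.nanoTime();
    }

    /** Blocks until one permit is available, enforcing the configured rate. */
    public synchronized void acquire() {
        long now = System.nanoTime();
        while (now < nextFreeTimeNanos) {          // parkNanos may wake early
            LockSupport.parkNanos(nextFreeTimeNanos - now);
            now = System.nanoTime();
        }
        nextFreeTimeNanos = Math.max(nextFreeTimeNanos, now) + intervalNanos;
    }
}
```

In a real consumer subclass, the overridden fetch/emit path would call `acquire()` once per record (or per batch), which caps the read rate regardless of how fast Kafka can serve data.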
Thanks for the reply, Chen. My use case is a "simple" copy from Kafka into S3. The job can read very quickly from Kafka, and S3 has some issues keeping up. Backpressure doesn't have enough time to take effect in this case, and when checkpoint time arrives, errors like heartbeat timeouts or a task manager not replying start to happen. I will investigate further and try this example.

On Mon, Jul 6, 2020 at 5:45 PM Chen Qin <[hidden email]> wrote:
Two quick comments: With unaligned checkpoints, which were released with Flink 1.11.0, the problem of slow checkpoints under backpressure has been resolved/mitigated to a good extent. Moreover, the community wants to work on event-time alignment for sources in the next release. This should prevent different sources from diverging too much with respect to event time.

Cheers,
Till

On Tue, Jul 7, 2020 at 2:48 AM David Magalhães <[hidden email]> wrote:
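For reference, unaligned checkpoints are enabled on the job's checkpoint config — a sketch assuming Flink 1.11+ (the checkpoint interval here is illustrative; verify the API against your version):

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000);                        // checkpoint every 60 s
env.getCheckpointConfig().enableUnalignedCheckpoints(); // allow checkpoints to overtake in-flight data
```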