Hi all,
we are struggling with RateLimitExceededExceptions in the Kinesis Producer. The Flink documentation states that the FlinkKinesisProducer overrides the KPL RateLimit setting from Amazon's default of 150 to 100. I am wondering whether we would actually need 100/($sink_parallelism) for this to work correctly: since the shard partitioner works on a provided key, every parallel Flink sink may write to all shards, right? And since the parallel Flink sinks cannot coordinate this among themselves, every sink will try to saturate every shard, thereby overestimating the available capacity by a factor of $sink_parallelism.

Does anyone else have experience with or knowledge about this?

Best regards,
Urs

--
Urs Schönenberger - [hidden email]
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Dr. Robert Dahlke, Gerhard Müller
Sitz: Unterföhring * Amtsgericht München * HRB 135082
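For reference, a minimal sketch of the idea being asked about: lowering the KPL "RateLimit" per parallel sink instance, assuming a FlinkKinesisProducer set up roughly as in the documentation example. The region, stream name, partition, and parallelism values below are placeholders, and dividing by the parallelism is the hypothesis under discussion, not a confirmed fix.

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

// The producer config is passed through to the KPL, so the KPL "RateLimit"
// key (a percentage of the per-shard write limit) can be set here.
Properties producerConfig = new Properties();
producerConfig.put(AWSConfigConstants.AWS_REGION, "eu-central-1"); // placeholder region

// Divide the 100% budget by the sink parallelism so that all parallel
// sink instances together stay within the per-shard limits.
int sinkParallelism = 4; // placeholder, match the sink's actual parallelism
producerConfig.put("RateLimit", String.valueOf(100 / sinkParallelism));

FlinkKinesisProducer<String> producer =
        new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
producer.setDefaultStream("my-stream"); // placeholder stream name
producer.setDefaultPartition("0");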
Are you sure the rate limit is coming from the KinesisProducer? If yes: Kinesis supports 1,000 record writes per second per shard, so if you are hitting the limit, just increase your shard count.
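If the stream itself is simply under-provisioned, the shard count can also be raised programmatically. A minimal sketch using the AWS SDK for Java; the stream name and target shard count are placeholders.

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.ScalingType;
import com.amazonaws.services.kinesis.model.UpdateShardCountRequest;

public class ReshardStream {
    public static void main(String[] args) {
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

        // Each shard adds roughly 1,000 records/sec and 1 MB/sec of write
        // capacity, so doubling the shard count doubles the write throughput.
        kinesis.updateShardCount(new UpdateShardCountRequest()
                .withStreamName("my-stream")   // placeholder stream name
                .withTargetShardCount(8)       // placeholder target shard count
                .withScalingType(ScalingType.UNIFORM_SCALING));
    }
}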
Hi
I am encountering the same problem with the Kinesis producer and will try playing around with the config settings and looking into the code base. If I figure it out, I'll let you know; please do the same if you figure it out before me.
Best regards,
Lasse Nedergaard