Hi folks!
I have a couple of questions about Flink's behaviour when writing to more than one sink, and in general about things to look out for when operating 500+ sinks. I am just starting my Flink journey, so I'd like to get the community's input on two questions.
Challenge: I need to route incoming events to 500 different Kafka topics: Main Kafka Stream -> Flink -> Kafka Topic 1 | Kafka Topic 2 | ... , i.e. 500 Kafka sinks. A rough sketch of what I have in mind is below.
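This is only a sketch of the topology I'm considering, not working code: it assumes the newer KafkaSource/KafkaSink API, and destinationTopics(), routeOf(), the broker addresses and topic names are all placeholders for my actual config and routing logic.

import java.util.List;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MultiSinkRouterJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Single source: the main Kafka stream all events arrive on.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("main-kafka:9092")           // placeholder
                .setTopics("main-topic")                          // placeholder
                .setGroupId("event-router")
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "main-topic");

        // One KafkaSink per destination topic, all inside the same job graph.
        for (String topic : destinationTopics()) {
            KafkaSink<String> sink = KafkaSink.<String>builder()
                    .setBootstrapServers("sink-kafka:9092")       // may differ per sink/cluster
                    .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                            .setTopic(topic)
                            .setValueSerializationSchema(new SimpleStringSchema())
                            .build())
                    .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                    .build();

            events.filter(e -> routeOf(e).equals(topic)).sinkTo(sink);
        }

        env.execute("event-router");
    }

    // Placeholder: the 500 destination topic names, e.g. loaded from config.
    private static List<String> destinationTopics() {
        return List.of("topic-001", "topic-002" /* ... up to topic-500 */);
    }

    // Placeholder: how an event is mapped to its destination topic.
    private static String routeOf(String event) {
        return event.split(",")[0];
    }
}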
Questions:
1- Assuming Flink can't deliver to one of the Kafka sinks due to a Kafka cluster issue on that sink's side, what is Flink's behavior with respect to the other 499 Kafka sinks?
2- Assuming one of the Kafka sinks is not properly dimensioned, how will Flink behave when that one sink becomes slow? Can I expect Flink to apply backpressure? And is the only way to prevent one slow sink from impacting all the others through backpressure (assuming enough memory for any needed buffering) to run 500 individual jobs, each consuming from the main Kafka topic and writing to a single sink, as in the sketch below?
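For reference, this is the per-topic alternative I mean, again only a sketch with the same placeholder names (routeOf(), broker addresses) as above; each job instance would be started with its own destination topic as an argument.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SingleTopicRouterJob {

    public static void main(String[] args) throws Exception {
        // One job per destination topic: the topic name is passed at submission time.
        String destinationTopic = args[0];
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("main-kafka:9092")           // placeholder
                .setTopics("main-topic")                          // placeholder
                .setGroupId("router-" + destinationTopic)
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("sink-kafka:9092")           // placeholder
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic(destinationTopic)
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        // Each job reads the full main stream, keeps only its own events, writes to one sink.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "main-topic")
           .filter(e -> routeOf(e).equals(destinationTopic))
           .sinkTo(sink);

        env.execute("router-" + destinationTopic);
    }

    // Placeholder: how an event is mapped to its destination topic.
    private static String routeOf(String event) {
        return event.split(",")[0];
    }
}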
Thank you,
-MS