Hi Theo,
no, sorry, the Kafka partitions assigned to each subtask are determined only by the index of the subtask.
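To illustrate what "determined by the index of the subtask" means in practice, here is a minimal sketch of a modulo-style assignment. The class and method names are hypothetical, and the formula is simplified: Flink's actual Kafka consumer also offsets the assignment by a hash of the topic name, but broker location never enters into it.

```java
// Hypothetical sketch (not Flink's exact code): partitions are spread
// across subtasks purely by arithmetic on the partition and subtask
// indices, with no awareness of which broker hosts a partition.
public class PartitionAssignmentSketch {

    // Simplified modulo assignment: partition p goes to subtask
    // (p % parallelism), regardless of broker placement.
    static int assignedSubtask(int partition, int parallelism) {
        return partition % parallelism;
    }

    public static void main(String[] args) {
        int parallelism = 3;
        for (int p = 0; p < 6; p++) {
            System.out.println(
                "partition " + p + " -> subtask " + assignedSubtask(p, parallelism));
        }
    }
}
```

Because the mapping depends only on indices, there is no hook comparable to Spark's PreferBrokers strategy for steering a partition to a subtask co-located with its broker.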
Best,
Konstantin
On Mon, Jun 17, 2019 at 2:57 PM Theo Diefenthal <[hidden email]> wrote:

> Hi,
>
> We have a Hadoop/YARN cluster with Kafka and Flink/YARN running on the same machines. In Spark (Streaming), there is a PreferBrokers location strategy, so that the executors consume the Kafka partitions that are served by the Kafka broker on the same machine ( https://spark.apache.org/docs/2.4.0/streaming-kafka-0-10-integration.html#locationstrategies ).
>
> I wonder if there is such a thing in Flink as well? I didn't find anything yet.
>
> Best regards
> Theo Diefenthal
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Planned Absences: 20. - 21.06.2019, 10.08.2019 - 31.08.2019, 05.09. - 06.09.2019
--
Data Artisans GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Dr. Kostas Tzoumas, Dr. Stephan Ewen