Is there any way to tell Flink not to allocate all parallel tasks on one node? We have a stateless Flink job that reads from a 10-partition topic and has a parallelism of 6. The Flink job manager allocates all 6 parallel operators to one machine, so all traffic from Kafka goes to only one machine. We have a cluster of 6 nodes, and ideally each parallel operator would run on a separate machine. Is there a way to do that in Flink?
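For reference, here is a minimal sketch of the kind of job being described (a stateless pipeline reading a 10-partition topic with parallelism 6), assuming Flink 1.4 with the Kafka 0.10 connector; the topic name, broker address, and the map step are placeholders, and import paths can differ slightly between Flink versions:

    import java.util.Properties;

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

    public class StatelessKafkaJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Parallelism 6 against a 10-partition topic: each source subtask
            // reads one or two partitions.
            env.setParallelism(6);

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder broker address
            props.setProperty("group.id", "stateless-job");       // placeholder consumer group

            env.addSource(new FlinkKafkaConsumer010<>("events", new SimpleStringSchema(), props))
               .map(new MapFunction<String, String>() {
                   @Override
                   public String map(String value) {
                       return value; // stand-in for the actual stateless processing
                   }
               })
               .print();

            env.execute("Stateless Kafka job");
        }
    }

Whether those 6 subtasks end up on one machine or six is decided by the scheduler and the slot configuration, not by anything in the job code itself.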
Hi,
How are your task managers deployed? If your cluster has only one task manager with one slot on each node, then the job should be spread evenly.

Regards,
Kien
The Flink cluster is standalone with an HA configuration. It has 6 task managers, each with 8 slots, so 48 slots in total.

> If your cluster has only one task manager with one slot on each node, then the job should be spread evenly.

Agreed, that would solve the issue. However, the cluster is running other jobs, and in that case there wouldn't be enough hardware resources left for them.
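For reference, the one-task-manager-with-one-slot-per-node layout Kien suggests is a flink-conf.yaml setting on each node; a minimal sketch, assuming a plain standalone setup:

    # flink-conf.yaml on each of the 6 nodes: run one TaskManager per node
    # with a single slot, so each of the 6 parallel subtasks has to land on
    # a different machine
    taskmanager.numberOfTaskSlots: 1

With the current configuration of 8 slots per task manager, all 6 subtasks of the job fit into the slots of a single task manager, which is why they can all end up on one machine.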
Thanks for the advice, Kien. Could you please share more details on why it's best to allocate a separate cluster for each job?
Hi all, the Flink version is 1.4.2. Thanks for your assistance.
Hi,

There are a couple of reasons:

- Easier resource allocation and isolation: one faulty job doesn't affect another.
- Mix and match of Flink versions: you can leave the old, stable jobs running on the old Flink version and use the latest Flink version for new jobs.
- Faster metrics collection: Flink generates a lot of metrics; by keeping each cluster small, our Prometheus instance can scrape them much faster.

Regards,
Kien
Hi Sayat,

At the moment it is not possible to control Flink's scheduling behaviour. In the future, we plan to add some kind of hint that controls whether the tasks of a job get spread out or packed onto as few nodes as possible.

Cheers,
Till