Re: Task-manager kubernetes pods take a long time to terminate

Posted by Li Peng-2 on
URL: http://deprecated-apache-flink-user-mailing-list-archive.369.s1.nabble.com/Task-manager-kubernetes-pods-take-a-long-time-to-terminate-tp32479p32505.html

Hi Yun,

I'm currently specifying that specific RPC address in my Kubernetes charts for convenience; should I be generating a new one for every deployment?

And yes, I am deleting the pods using those commands. I'm just noticing that the task-manager termination process is short-circuited by the registration timeout check: instead of terminating quickly, the task-manager waits five minutes for the timeout before terminating. I'm expecting it to just terminate without going through that registration timeout; is there a way to configure that?

Thanks,
Li


On Thu, Jan 30, 2020 at 8:53 AM Yun Tang <[hidden email]> wrote:
Hi Li

Why do you still use 'job-manager' as the jobmanager.rpc.address for the second new cluster? If you use another RPC address, the previous task managers would not try to register with the old one.
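For example, you could give each deployment its own job-manager service name and point the config at it. A minimal sketch of the relevant flink-conf.yaml lines, where the service name flink-jobmanager-v2 is a hypothetical per-deployment name:

```yaml
# flink-conf.yaml fragment (flink-jobmanager-v2 is a hypothetical
# per-deployment service name; substitute your own)
jobmanager.rpc.address: flink-jobmanager-v2
jobmanager.rpc.port: 6123
```

With a distinct address per deployment, leftover task managers from the old cluster cannot accidentally reach the new job manager.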

Take the Flink documentation [1] for k8s as an example. You can list/delete all the pods like:
kubectl get/delete pods -l app=flink

By the way, the default registration timeout is 5 minutes [2]; task managers that cannot register with the JM will kill themselves after 5 minutes.
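If you want orphaned task managers to give up sooner, that timeout is configurable. A sketch of the flink-conf.yaml setting, assuming a Flink version recent enough to use the taskmanager.registration.timeout key:

```yaml
# Lower the registration timeout so a task manager that cannot reach
# any resource manager shuts itself down sooner (default: 5 min).
taskmanager.registration.timeout: 30 s
```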


Best
Yun Tang


From: Li Peng <[hidden email]>
Sent: Thursday, January 30, 2020 9:24
To: user <[hidden email]>
Subject: Task-manager kubernetes pods take a long time to terminate
 
Hey folks, I'm deploying a Flink cluster via kubernetes, and starting each task manager with taskmanager.sh. I noticed that when I tell kubectl to delete the deployment, the job-manager pod usually terminates very quickly, but any task-manager that doesn't get terminated before the job-manager, usually gets stuck in this loop:

2020-01-29 09:18:47,867 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor            - Could not resolve ResourceManager address akka.tcp://flink@job-manager:6123/user/resourcemanager, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@job-manager:6123/user/resourcemanager

It then does this for about 10 minutes(?) and then shuts down. If I'm deploying a new cluster, this pod will try to register itself with the new job manager before terminating later. This isn't a troubling issue as far as I can tell, but I find it annoying that I sometimes have to force delete the pods.

Any easy ways to just have the task managers terminate gracefully and quickly?

Thanks,
Li