Hi,
First-time user here; I'm just evaluating Flink at the moment. I was reading https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html#deploy-job-cluster and I don't fully understand whether a Job Cluster will auto-terminate after the job completes (for a batch job). The examples look to me like the task manager pods will keep running, since they are configured as a Deployment. So is there any way to achieve "auto-termination", or am I supposed to monitor the job status externally (e.g. from Airflow) and delete the JobManager and TaskManager Kubernetes resources from there?
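Roughly, I imagine the external cleanup step would look something like this (the resource names below are only placeholders modelled on the example manifests in the docs):

    # run by the orchestrator (e.g. an Airflow task) once it observes the job has finished
    kubectl delete job/flink-jobmanager
    kubectl delete deployment/flink-taskmanager
    kubectl delete service/flink-jobmanager

-- 
/Rubén Laguna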
Hi Ruben,

thanks for reaching out to us. Flink's native Kubernetes Application mode [1] might be what you're looking for.
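For illustration, launching a job in Application mode could look roughly like this (the cluster id, image name, and jar path are placeholders, not values from the docs):

    ./bin/flink run-application \
        --target kubernetes-application \
        -Dkubernetes.cluster-id=my-batch-job \
        -Dkubernetes.container.image=my-flink-image \
        local:///opt/flink/usrlib/my-job.jar

The idea is that the cluster tears itself down once the job reaches a terminal state, so there is nothing left to clean up externally.

Best,
Matthias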
I second Matthias's suggestion. If you are using standalone Flink on K8s, then you need some external tooling (e.g. a K8s operator [1][2]) to help with lifecycle management. With the native Kubernetes integration, on the other hand, all the K8s resources are cleaned up automatically when the Flink job finishes, fails, or is cancelled.
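For example, cancelling a job that was deployed with the native integration should also remove all of its K8s resources; something along these lines (the cluster id and job id are placeholders):

    ./bin/flink cancel \
        --target kubernetes-application \
        -Dkubernetes.cluster-id=my-batch-job \
        <jobId>

Best,
Yang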
|