running flink job cluster on kubernetes with HA

running flink job cluster on kubernetes with HA

aviad
Hi,

I want to run several jobs on Kubernetes using a "Flink job cluster" (see
https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html#flink-job-cluster-on-kubernetes),
meaning each job runs on its own Flink cluster.

I want to configure the clusters with HA, meaning working with ZooKeeper.

Do I need a separate ZooKeeper cluster for each job cluster, or can I use the
same ZooKeeper cluster for all jobs?

I saw that there is a parameter called "high-availability.zookeeper.path.root".
If I configure each Flink cluster with a different path, is that enough?
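For illustration, a per-cluster flink-conf.yaml along those lines could look like the sketch below (the quorum addresses, bucket, and cluster names are placeholders, not values from this thread). Every job cluster points at the same ZooKeeper quorum, while a distinct root znode keeps their state apart; "high-availability.cluster-id" is a related option that serves the same separation purpose:

```yaml
# flink-conf.yaml for one job cluster -- all values here are hypothetical
high-availability: zookeeper
# The same shared ZooKeeper quorum can serve every job cluster
high-availability.zookeeper.quorum: zk-0:2181,zk-1:2181,zk-2:2181
# A distinct root znode per Flink cluster isolates its HA state
high-availability.zookeeper.path.root: /flink-job-a
# HA metadata itself must live on a distributed FS (HDFS, S3, ...)
high-availability.storageDir: s3://my-bucket/flink/job-a/ha
```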

thanks, Aviad






Re: running flink job cluster on kubernetes with HA

miki haiat
It looks like in the next version (1.7) you can achieve HA on Kubernetes without ZooKeeper.
Anyway, for now you can use a single ZooKeeper cluster for all jobs and configure one path to save the HA data; that path should be on a distributed FS such as HDFS or S3.
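Concretely, the path Miki refers to is the HA storage directory, which holds the recoverable metadata (submitted JobGraphs, checkpoint pointers) while ZooKeeper itself only keeps pointers to it. A minimal fragment, with a hypothetical path:

```yaml
# HA metadata goes to a distributed file system; the path below is a placeholder
high-availability.storageDir: hdfs:///flink/recovery
# or an S3 location, e.g.: s3://my-bucket/flink/recovery
```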

Thanks,

Miki
   

On Tue, Nov 13, 2018 at 10:24 AM aviad <[hidden email]> wrote: