Hi,
I have a question regarding whether the currently running jobs will restart if I stop and start the Flink cluster:

1. Let's say I have a standalone, single-node cluster.
2. I have several Flink jobs already running on the cluster.
3. If I do a bin/stop-cluster.sh and then a bin/start-cluster.sh, will the previously running jobs restart again?

Or, before I do bin/stop-cluster.sh, do I have to trigger a savepoint for each job, and then, after bin/start-cluster.sh is finished, start each job I want to resume from the savepoint triggered before?

Many thanks in advance :)

Best regards/祝好,

Chang Liu 刘畅
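(For reference, the savepoint-based workflow described above can be sketched with the standard Flink CLI. The savepoint directory and the job ID / jar names below are placeholders, not values from this thread.)

```shell
# List running jobs to obtain their job IDs
bin/flink list

# For each job: cancel it while atomically taking a savepoint;
# the CLI prints the resulting savepoint path
bin/flink cancel -s hdfs:///flink/savepoints <jobId>

# Now it is safe to stop and later restart the cluster
bin/stop-cluster.sh
bin/start-cluster.sh

# Resume each job from the savepoint path printed earlier
bin/flink run -s hdfs:///flink/savepoints/<savepointPath> <yourJob>.jar
```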
Thanks!
If I have a cluster with more than one node (standalone or YARN), can I stop and restart any single node among them while keeping the jobs running?

Best regards/祝好,

Chang Liu 刘畅
Or, to put it another way: how can I keep the jobs running through system patching, server restarts, etc.? Is this related to standalone vs. YARN? Or is it related to whether ZooKeeper is used?
Many thanks!
Best regards/祝好, Chang Liu 刘畅
Hi,
by default, all the metadata is lost when the JobManager shuts down in a non-highly-available setup. Flink uses ZooKeeper together with a distributed filesystem to store the required metadata [1] in a persistent and distributed manner. A single-node setup is rather uncommon, but you can also start ZooKeeper locally, as is done in our end-to-end tests [2].

I hope this helps.

Regards,
Timo

[1] https://ci.apache.org/projects/flink/flink-docs-master/ops/jobmanager_high_availability.html
[2] https://github.com/apache/flink/blob/master/flink-end-to-end-tests/test-scripts/test_ha_datastream.sh

On 08.11.18 at 14:15, Chang Liu wrote:
> Or to say, how can I keep the jobs for system patching, server restart, etc. Is it related to Standalone vs YARN? Or is it related to whether to use Zookeeper?
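(As a rough sketch of the ZooKeeper-based HA setup Timo describes: the configuration keys below come from the JobManager HA documentation linked in [1]; the quorum address, storage directory, and cluster ID are illustrative placeholder values, not values from this thread.)

```yaml
# flink-conf.yaml -- HA-related entries (values are illustrative)
high-availability: zookeeper
high-availability.zookeeper.quorum: localhost:2181
high-availability.storageDir: hdfs:///flink/ha/
high-availability.cluster-id: /my-flink-cluster
```

With settings like these, the pointers to job metadata live in ZooKeeper and the metadata itself in the storage directory, so a restarted JobManager can recover the running jobs instead of losing them.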