Hi Alexandru,
1. You can either create a Flink cluster per job (preferred), or use one big session cluster to run all your jobs. This depends a bit on the resource manager you are using and the workloads you plan to process. If you are using Kubernetes, it makes sense to deploy each job as its own cluster (see the first sketch below).
2. That is something you can decide yourself (see the previous answer).
3. I don't think you can start a job directly from S3 as you describe; the job jar needs to be available to the cluster, for example bundled into the Docker image (see the first sketch below).
4. To upgrade a Flink job jar while preserving your state, you first stop the job with a savepoint, then start the new jar from that savepoint (see the second sketch below).
If you need to upgrade the Flink version in your cluster, the procedure is the same: stop with a savepoint, upgrade Flink, and then restore from the savepoint.
Rolling updates of TaskManagers are not supported yet.
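
Regarding 1 and 3, here is a rough sketch of what a per-job deployment could look like with Flink's native Kubernetes application mode (available since Flink 1.11). The cluster id, image name, and jar path below are placeholders; note that in application mode the job jar has to be bundled into the Docker image and referenced with the local:// scheme, which is also why submitting a jar straight from S3 does not work the way you describe:

    # submit the job as its own application cluster on Kubernetes
    ./bin/flink run-application \
        --target kubernetes-application \
        -Dkubernetes.cluster-id=my-job-cluster \
        -Dkubernetes.container.image=my-registry/my-flink-job:1.0 \
        local:///opt/flink/usrlib/my-job.jar

Each job then gets its own JobManager and TaskManagers, so jobs are isolated from each other and can be upgraded independently.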
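Regarding 4, with the CLI the stop/upgrade/restore cycle looks roughly like this (the S3 paths and the job and savepoint ids are placeholders; `flink stop` prints the path of the savepoint it created):

    # stop the job and take a savepoint in one step
    ./bin/flink stop --savepointPath s3://my-bucket/savepoints <jobId>

    # ... deploy the new jar (or the new Flink version), then resume from the savepoint
    ./bin/flink run -s s3://my-bucket/savepoints/savepoint-xxxx path/to/new-job.jar

If the new jar dropped some stateful operators, you can additionally pass --allowNonRestoredState to the run command.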
On Mon, Nov 2, 2020 at 5:20 PM Alexandru Vasiu <[hidden email]> wrote:
Hi,
I have some questions:
1. How can you manage multiple jars (jobs) easily using Flink?
2. Should all jobs run on the same task manager, or do we need a separate one for each job?
3. Can we store the jars in some persistent storage (such as S3) and start a job for each jar from that storage?
4. Also, how can we upgrade a jar (job)? Do we need to stop all jobs and update the task manager for that? Or, if we use that persistent storage for jars, is it enough to update the jar in the storage and restart just the corresponding job?
For now, we have Flink deployed in Kubernetes, but with only one job running. We upgrade it by stopping the job, upgrading the task manager and job manager with a new Docker image containing the updated jar, and then starting the job again from the latest savepoint/checkpoint.
Thank you,
Alex