Hi,
the answer to this question depends on how you are starting the jobs. Do you have a Java program that submits jobs in a loop by repeatedly calling StreamExecutionEnvironment.execute(), or a shell script that submits jobs through the CLI? In both cases, the call blocks (either on StreamExecutionEnvironment.execute() or on the CLI) until the job has terminated, so your loop can simply continue and submit the next job.
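
To illustrate the Java case, here is a minimal sketch. The class name, the parameter values, and the placeholder pipeline in buildJob are made up for the example; put your own topology there:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SequentialJobs {

    public static void main(String[] args) throws Exception {
        // Hypothetical parameter sets; replace with your own values.
        int[] parameters = {5, 10, 20};

        for (int parameter : parameters) {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            buildJob(env, parameter);

            // execute() blocks until this job has terminated, so the
            // next loop iteration only submits after the job is done.
            env.execute("streaming-job-parameter-" + parameter);
        }
    }

    // Placeholder topology; define your real sources, operators,
    // and sinks here.
    private static void buildJob(StreamExecutionEnvironment env, final int parameter) {
        env.fromElements(1, 2, 3)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer value) {
                   return value * parameter;
               }
           })
           .print();
    }
}

The CLI case works analogously: "bin/flink run" blocks until the job finishes (unless you submit in detached mode), so a shell loop over your parameter sets submits the jobs one after another.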
Best,
Stefan
> Am 20.07.2016 um 23:15 schrieb Biplob Biswas <
[hidden email]>:
>
> Hi,
>
> I want to test my Flink streaming code, and thus I want to run Flink
> streaming jobs with different parameters one by one.
>
> So, when one job finishes because it hasn't received new data points for
> some time, the next job with a different set of parameters should start.
>
> For this, I am already stopping my iteration automatically if it doesn't
> receive data points for 5 seconds, but how can I know when the job has
> terminated so that the next job can start automatically?
>
> Can this be done?
>
> Thanks
> Biplob