Is there a way to connect two workflows such that one triggers the other when a certain condition is met? A workaround might be to insert a notification into a topic that triggers the other workflow. The problem is that addSink ends the flow, so if we need to add a trigger after addSink there doesn't seem to be any good way of sending a notification to a queue that the batch processing is complete. Any suggestions? One option could be to track the progress of a job and, on successful completion, add a notification. Is there such a mechanism available?
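The pattern suggested above can be sketched outside the Flink API: run the batch job, and only if it completes without error, publish a trigger message. This is a minimal plain-Java sketch; `JobChainer` and `runAndNotify` are made-up names, and the `Consumer` stands in for a real producer (e.g. a Kafka producer writing to the trigger topic).

```java
import java.util.function.Consumer;

public class JobChainer {
    // Runs one batch "job" and, only on successful completion,
    // sends a notification that can trigger the next workflow.
    static boolean runAndNotify(Runnable batchJob, Consumer<String> notifier) {
        try {
            batchJob.run();                // blocks until the batch job finishes
        } catch (RuntimeException e) {
            return false;                  // job failed: do not trigger downstream
        }
        notifier.accept("batch-complete"); // e.g. produce to a topic instead
        return true;
    }
}
```

In real use, `batchJob.run()` would wrap the blocking `env.execute()` call of the first job, so the notification is only sent once that call returns successfully.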
|
Hi Mohit, I'm afraid there is nothing like this in Flink yet. As you mentioned, you probably have to manually track the completion of one job and then trigger execution of the next one. Best, Aljoscha On Fri, 24 Feb 2017 at 19:16 Mohit Anchlia <[hidden email]> wrote:
|
What's the best way to track the progress of the job? On Mon, Feb 27, 2017 at 7:56 AM, Aljoscha Krettek <[hidden email]> wrote:
|
I think right now the best option is the JobManager REST interface: https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/rest_api.html
You would have to know the ID of your job and then you can poll the status of your running jobs. On Mon, 27 Feb 2017 at 18:15 Mohit Anchlia <[hidden email]> wrote:
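The polling loop against the REST interface can be sketched like this. The sketch abstracts the HTTP call behind a `Supplier<String>` (in practice that would be a GET on `/jobs/<jobId>` returning the job's state); the class and method names are made up, and the state strings assume the terminal states the JobManager reports.

```java
import java.util.Set;
import java.util.function.Supplier;

public class JobStatusPoller {
    // Terminal job states as reported by the JobManager REST API (assumption).
    static final Set<String> TERMINAL = Set.of("FINISHED", "FAILED", "CANCELED");

    // Polls a status source until the job reaches a terminal state,
    // giving up after maxPolls attempts.
    static String pollUntilDone(Supplier<String> statusSource, int maxPolls)
            throws InterruptedException {
        for (int i = 0; i < maxPolls; i++) {
            String status = statusSource.get();
            if (TERMINAL.contains(status)) {
                return status;
            }
            Thread.sleep(10); // back off between polls; tune for real use
        }
        return "UNKNOWN";
    }
}
```

Once `pollUntilDone` returns `FINISHED`, the caller can publish the notification that triggers the next workflow.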
|
It looks like JobExecutionResult could be used here, via its accumulators? On Wed, Mar 1, 2017 at 8:37 AM, Aljoscha Krettek <[hidden email]> wrote:
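For a batch job, `env.execute()` returns a `JobExecutionResult` whose accumulator map can carry completion information the job registered itself. A stand-in sketch of the check (the `Map` here substitutes for the real `JobExecutionResult.getAllAccumulatorResults()`, and the accumulator name `records-processed` is hypothetical):

```java
import java.util.Map;

public class AccumulatorCheck {
    // Decide whether the batch run looks complete based on an
    // accumulator the job itself would have registered and incremented.
    static boolean batchLooksComplete(Map<String, ?> accumulators) {
        Object processed = accumulators.get("records-processed"); // hypothetical name
        return processed instanceof Long && (Long) processed > 0L;
    }
}
```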
|
Hi Mohit,
Cheers, On Thu, Mar 2, 2017 at 2:33 AM, Mohit Anchlia <[hidden email]> wrote:
|
Does it mean that for streaming jobs it never returns? On Thu, Mar 2, 2017 at 6:21 AM, Till Rohrmann <[hidden email]> wrote:
|
Yes, right now that call never returns for a long-running streaming job. We will (in the future) provide a way for that call to return so that the result can be used for checking aggregators and other things.
On Thu, Mar 2, 2017, at 19:14, Mohit Anchlia wrote:
|
It may not return for batch jobs, either. See my post http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Job-completion-or-failure-callback-td12123.html
In short, if Flink returned an OptimizerPlanEnvironment from your call to ExecutionEnvironment.getExecutionEnvironment, then calling execute() only generates the job plan (the job hasn't been submitted and isn't executing yet). If no exceptions are thrown while the plan is being created, a ProgramAbortException is always thrown, so none of your code after execute() runs. As a result you're definitely not able to use any JobExecutionResult in your main method, even though the code makes it look like you will.
-Shannon
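The control flow Shannon describes can be illustrated with a stand-in: in plan-generation mode, execute() throws instead of returning, so code after the call never runs. All names here are made up; this only mimics the described behavior of OptimizerPlanEnvironment, not Flink's actual implementation.

```java
public class PlanOnlyDemo {
    // Stand-in for Flink's ProgramAbortException.
    static class ProgramAbort extends RuntimeException {}

    // In plan-only mode, execute() aborts the user's main() by throwing;
    // in normal mode it returns a result.
    static String execute(boolean planOnlyMode) {
        if (planOnlyMode) {
            throw new ProgramAbort(); // plan extracted; abort user code
        }
        return "job-result";
    }

    static String runUserMain(boolean planOnlyMode) {
        try {
            String result = execute(planOnlyMode);
            return "post-execute code ran: " + result; // unreachable in plan mode
        } catch (ProgramAbort e) {
            return "post-execute code skipped";
        }
    }
}
```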
From: Aljoscha Krettek <[hidden email]>
Date: Friday, March 3, 2017 at 9:36 AM
To: <[hidden email]>
Subject: Re: Connecting workflows in batch
|