Hi Flinksters, as far as I know, there is still no support for nested iterations planned. Am I right?
Hi Max, you are right, there is no support for nested iterations yet, and as far as I know there are no concrete plans to add it. So it is up for debate what support for resuming from intermediate results would look like. Intermediate results are not produced within the iteration cycles (see the small sketch below), and the same would be true for nested iterations, so the behavior for resuming from intermediate results should be alike. On Fri, Jul 17, 2015 at 4:26 PM, Maximilian Alber <[hidden email]> wrote:
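(For reference, a minimal hypothetical sketch of a native Flink bulk iteration; the data and the step function are just placeholders. The point is that per-superstep data lives only inside the iteration cycle and is not materialized as a standalone intermediate result.)

import org.apache.flink.api.scala._

object BulkIterationSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val initial = env.fromCollection(1 to 10)

    // Native (non-nested) bulk iteration: the step function runs 100 times
    // inside one Flink job; per-superstep data stays within the iteration
    // cycle and is not exposed as a separate intermediate result.
    val result = initial.iterate(100) { current =>
      current.map(_ + 1)
    }

    result.print()
  }
}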
Thanks for the answer! But I need some clarification: "So it is up for debate what support for resuming from intermediate results would look like." -> What's the current state of that debate? "Intermediate results are not produced within the iteration cycles." -> Ok, if there are none, what does it have to do with that debate? :-) Cheers, On Mon, Jul 20, 2015 at 10:50 AM, Maximilian Michels <[hidden email]> wrote:
"So
it is up to debate how the support for resuming from intermediate
results will look like." -> What's the current state of that debate? Since
there is no support for nested iterations that I know of, the debate
how intermediate results are integrated has not started yet.
I
was referring to the existing support for intermediate results within
iterations. If we were to implement nested iterations, this could
(possibly) change. This is all very theoretical because there are no plans
to support nested iterations. Hope this clarifies. Otherwise, please restate your question because I might have misunderstood.On Mon, Jul 20, 2015 at 12:11 PM, Maximilian Alber <[hidden email]> wrote:
Oh sorry, my fault. When I wrote it, I had iterations in mind. What I actually wanted to ask is: how will "resuming from intermediate results" work with (non-nested) "non-Flink" iterations? By iterations I mean something like the sketch below.
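(A minimal hypothetical sketch of what I mean, assuming the iteration is just a loop in the driver program that submits one Flink job per pass; the data and the update step are placeholders.)

import org.apache.flink.api.scala._

object DriverLoopSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // State kept in the driver program between passes.
    var current: Seq[Int] = Seq(1, 2, 3)

    // "Non-Flink" iteration: a plain client-side loop. Each pass builds a
    // new DataSet, runs a separate job, and collects the result back.
    for (step <- 1 to 5) {
      val ds = env.fromCollection(current)
      val updated = ds.map(_ * 2) // placeholder for the real update step
      current = updated.collect() // triggers one job per pass
    }

    println(s"Result after the driver-side loop: $current")
  }
}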
results" will work with (non-nested) "non-Flink" iterations? And with iterations I mean something like this:I might got something wrong, because in our group we mentioned caching a lá Spark for Flink and someone came up that "pinning" will do that. Is that somewhat right? On Mon, Jul 20, 2015 at 1:06 PM, Maximilian Michels <[hidden email]> wrote:
Now that makes more sense :) I thought by "nested iterations" you meant iterations in Flink that can be nested, i.e. starting an iteration inside an iteration. The caching/pinning of intermediate results is still a work in progress in Flink. It is actually in a state where it could be merged, but some pending pull requests got delayed because priorities changed a bit. https://github.com/apache/flink/pull/858 implements the actual backtracking and caching of the results. Once these are in, we can change the Java/Scala API to support backtracking. I don't know exactly how Spark's API does it, but essentially it should then work by just creating new operations on an existing DataSet and submitting to the cluster again (roughly like the sketch after this message). Cheers, Max On Mon, Jul 20, 2015 at 3:31 PM, Maximilian Alber <[hidden email]> wrote:
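(A hypothetical sketch of the "new operations on an existing DataSet" idea above, assuming the backtracking/caching work lands; there is no pinning call in the current API, so this only shows two submissions over the same DataSet that backtracking could then serve from a cached intermediate result.)

import org.apache.flink.api.scala._

object ResubmitSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // A base DataSet that would be worth caching/pinning.
    val base = env.fromCollection(1 to 1000000).map(x => x.toLong * x)

    // First submission: one set of operations on the base result.
    val total = base.reduce(_ + _).collect().head

    // Second submission: new operations on the same DataSet. Today this
    // recomputes `base`; with backtracking/caching the engine could resume
    // from the already-produced intermediate result instead.
    val evens = base.filter(_ % 2 == 0).count()

    println(s"total = $total, evens = $evens")
  }
}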
Thanks! Ok, cool. If I'd like to test it, do I just need to merge those two pull requests into my current branch? On Mon, Jul 20, 2015 at 4:02 PM, Maximilian Michels <[hidden email]> wrote:
You could do that, but you might run into merge conflicts. Also keep in mind that it is work in progress :) On Mon, Jul 20, 2015 at 4:15 PM, Maximilian Alber <[hidden email]> wrote:
The two pull requests do not go all the way, unfortunately. They only cover the runtime; the API integration part is still missing. On Mon, Jul 20, 2015 at 5:53 PM, Maximilian Michels <[hidden email]> wrote:
I mentioned that. @Max: you should only try it out if you want to experiment/work with the changes. On Wed, Jul 22, 2015 at 2:20 PM, Stephan Ewen <[hidden email]> wrote:
Thanks. Yes, I got that. On Wed, Jul 22, 2015 at 2:46 PM, Maximilian Michels <[hidden email]> wrote: