Hi,
We are getting a ConcurrentModificationException, the complete stack trace is as follows: org.apache.flink.optimizer.CompilerException: Error translating node 'Data Source "at compute(ArpackSVD.java:367) (org.apache.flink.api.java.io.CollectionInputFormat)" : NONE [[ GlobalProperties [partitioning=RANDOM_PARTITIONED] ]] [[ LocalProperties [ordering=null, grouped=null, unique=null] ]]': java.util.ConcurrentModificationException ... [show rest of quote] Can anyone enlighten us as why is it like this or how to fix this issue? We did a bit of google search, but all we get is some problem with serializing broadcast variable. We use flink bulk iterations and this variable is broadcasted to both map and reduce in one dataflow! Thanks & Regards Biplob Biswas |
Hi,

This stacktrace looks really suspicious. It includes classes from the submission client (CLIClient), optimizer (JobGraphGenerator), and runtime (KryoSerializer).

Is it possible that you try to start a new Flink job inside another job? This would not work.

Best, Fabian
But isn't that the normal stack trace you see when you submit a job to the cluster via the CLI and something fails somewhere in the compilation process?

Anyway, it would be helpful to see the program which causes this problem.

Cheers,
Till

On Mon, Feb 15, 2016 at 12:25 PM, Fabian Hueske <[hidden email]> wrote:
Hi,

No, we don't start a Flink job inside another job. The jobs are created in a loop, but each new job starts only after the previous one has finished and been cleaned up. We also didn't get this exception on a local Flink installation; it only appears when I run on the cluster.

Thanks & Regards
Biplob Biswas

On Mon, Feb 15, 2016 at 12:25 PM, Fabian Hueske <[hidden email]> wrote:
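If the collection used as the broadcast variable is still reachable and mutated between the loop's job submissions, a later mutation can race the runtime's iteration over it. One common defensive pattern is to snapshot the collection before handing it over, so subsequent mutations cannot affect what the framework iterates. A hedged standalone sketch in plain Java (the class name SnapshotDemo is invented; this is not code from the thread):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SnapshotDemo {
    public static void main(String[] args) {
        List<Integer> workingSet = new ArrayList<>(List.of(1, 2, 3));

        // Snapshot: copy the list and wrap it read-only before handing it to
        // anything that will iterate or serialize it asynchronously.
        List<Integer> snapshot =
                Collections.unmodifiableList(new ArrayList<>(workingSet));

        // Later mutations of the working set no longer touch the snapshot,
        // so there is nothing to race against.
        workingSet.add(4);

        System.out.println(snapshot.size()); // prints 3
    }
}
```

The copy costs one traversal per job submission, which is usually negligible next to job startup time.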
Hi Biplob,

Could you please supply some sample code? Otherwise it is tough to debug this problem.

Cheers,
Max

On Tue, Feb 16, 2016 at 2:46 PM, Biplob Biswas <[hidden email]> wrote:
> Hi,
>
> No, we don't start a flink job inside another job, although the job creation
> was done in a loop, but only when one job is finished the next job started
> after cleanup. And we didn't get this exception on my local flink
> installation, it appears when i run on the cluster.
>
> Thanks & Regards
> Biplob Biswas
>
> On Mon, Feb 15, 2016 at 12:25 PM, Fabian Hueske <[hidden email]> wrote:
>>
>> Hi,
>>
>> This stacktrace looks really suspicious.
>> It includes classes from the submission client (CLIClient), optimizer
>> (JobGraphGenerator), and runtime (KryoSerializer).
>>
>> Is it possible that you try to start a new Flink job inside another job?
>> This would not work.
>>
>> Best, Fabian
Free forum by Nabble