Hello everyone,

I am getting this weird exception while running some simple counting jobs in Flink:

Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
    at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
    at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
    at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
    at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
    at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
    at trackers.preprocessing.ExtractInfoFromLogs.main(ExtractInfoFromLogs.java:133)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at scala.concurrent.Await.result(package.scala)
    at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
    ... 10 more

The exception only appears when dealing with largish files (>10GB); no exception is thrown when I work with a smaller subset of my data. Also, I would swear it was working fine until a few days ago, and the code has not been changed :S The only change was a re-import of the Maven dependencies. |
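As a possible stopgap while waiting for a proper fix, the timeout behind "Futures timed out after [100000 milliseconds]" can usually be raised in the Flink configuration. A sketch only: the key below is the actor ask timeout from the Flink configuration reference; whether it applies to this exact code path in 0.9-SNAPSHOT should be verified against the docs for your version.

```yaml
# flink-conf.yaml (sketch, not verified against 0.9-SNAPSHOT):
# raise the actor ask timeout the client uses while waiting on the
# JobManager; the default is on the order of 100 s.
akka.ask.timeout: 300 s
```

Note this only buys time for a slow job submission; it does not address the underlying inconsistency discussed below.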
You are on the latest snapshot version? I think there is an inconsistency in there. I will try to fix that tonight. Can you actually use the milestone1 version? That one should be good.

Greetings,

On 14 Apr 2015 at 20:31, "Fotis P" <[hidden email]> wrote:
|
Hello,
A few seconds after I got the error message, I received your email. Well, this is just to add to the need for a fix. Happy to feel the dynamism of the work here. Great work.

On 14.04.2015 21:50, Stephan Ewen
wrote:
|
I pushed a fix to the master. The problem should now be gone. Please let us know if you experience other issues! Greetings, Stephan On Tue, Apr 14, 2015 at 9:57 PM, Mohamed Nadjib MAMI <[hidden email]> wrote:
|
Hello,
I'm still facing the problem with the 0.9-SNAPSHOT version. I tried to remove the libraries and download them again, but it is the same issue.

Greetings,
Mohamed

Exception in thread "main" org.apache.flink.runtime.client.JobTimeoutException: Lost connection to JobManager
    at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:164)
    at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:198)
    at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:188)
    at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:179)
    at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:54)
    at Main.main(Main.java:142)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at scala.concurrent.Await.result(package.scala)
    at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:143)
    ... 5 more

On 15.04.2015 01:02, Stephan Ewen
wrote:
|
The exception indicates that you're still using the old version. It takes some time for the new Maven artifact to get deployed to the snapshot repository. Apparently, an artifact has already been deployed this morning. Did you delete the jar files in your .m2 folder?

On Wed, Apr 15, 2015 at 1:38 PM, Mohamed Nadjib MAMI <[hidden email]> wrote:
|
On 15 Apr 2015, at 14:18, Maximilian Michels <[hidden email]> wrote:
> The exception indicates that you're still using the old version. It takes some time for the new Maven artifact to get deployed to the snapshot repository. Apparently, an artifact has already been deployed this morning. Did you delete the jar files in your .m2 folder?

I think that's what he meant. The problem is that the snapshot repositories take some time to synchronize. Please:

1. git clone https://github.com/apache/flink.git
2. cd flink
3. mvn clean install -DskipTests

This way you build Flink yourself and are guaranteed to work on a version with the fix. Sorry for the inconvenience. Does this solve it?

– Ufuk |
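Deleting the cached jars from the local Maven repository, as suggested above, can also be scripted so that the next build re-resolves the snapshots from scratch. A small sketch; the helper name purge_flink_snapshots is my own, not from the thread:

```shell
#!/bin/sh
# Hypothetical helper: delete cached org.apache.flink artifacts from a
# local Maven repository so the next build downloads them again.
purge_flink_snapshots() {
  repo="${1:-$HOME/.m2/repository}"
  rm -rf "$repo/org/apache/flink"
}

# Typical usage after the fix was pushed (shown as comments, not run here):
# purge_flink_snapshots
# mvn -U clean install -DskipTests   # -U forces an update of snapshots
```

Passing `-U` to Maven is the documented way to force snapshot re-resolution even before the normal update interval elapses.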
Hello all,

I am glad to report that the problem has been resolved.

2015-04-15 15:02 GMT+02:00 Ufuk Celebi <[hidden email]>: |