Memory exception


Memory exception

Michele Bertoni
Hi everybody,
I am running a Flink instance from my IDE.
Sometimes (seemingly at random) when I start it I get this error:

Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$receiveWithLogMessages$1.applyOrElse(JobManager.scala:306)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:37)
        at org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:30)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
        at org.apache.flink.runtime.ActorLogMessages$$anon$1.applyOrElse(ActorLogMessages.scala:30)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:91)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
        at akka.dispatch.Mailbox.run(Mailbox.scala:221)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.Exception: The data preparation for task 'CHAIN Join(Join at org.apache.flink.api.scala.UnfinishedJoinOperation.finish(joinDataSet.scala:224)) -> Map (Map at LowLevel.FlinkImplementation.metaOperator.SemiJoinMD$.apply(SemiJoinMD.scala:33)) -> Combine(Distinct at LowLevel.FlinkImplementation.metaOperator.SemiJoinMD$.apply(SemiJoinMD.scala:34))' , caused an error: Too few memory segments provided. Hash Join needs at least 33 memory segments.
        at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:472)
        at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
        at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:217)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: Too few memory segments provided. Hash Join needs at least 33 memory segments.
        at org.apache.flink.runtime.operators.hash.MutableHashTable.<init>(MutableHashTable.java:373)
        at org.apache.flink.runtime.operators.hash.MutableHashTable.<init>(MutableHashTable.java:359)
        at org.apache.flink.runtime.operators.hash.HashMatchIteratorBase.getHashJoin(HashMatchIteratorBase.java:48)
        at org.apache.flink.runtime.operators.hash.NonReusingBuildFirstHashMatchIterator.<init>(NonReusingBuildFirstHashMatchIterator.java:78)
        at org.apache.flink.runtime.operators.MatchDriver.prepare(MatchDriver.java:148)
        at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:466)
        ... 3 more




When this happens, to solve it I need to either restart the computer or run the job with the downloaded Flink distribution instead of from the IDE.

Sometimes the error is different: it says something like “you must start the memory manager with a positive value now is 0MB”, but this one about the 33 memory segments is also quite common.

It is quite annoying to restart everything, and it happens quite often.

Do you have any idea why it happens?



thank you

Re: Memory exception

Fabian Hueske-2
Hi Michele,

Flink manages its own memory using the MemoryManager. When a Flink worker (TaskManager) starts, the MemoryManager allocates a certain share of the JVM's free memory as MemorySegments; the default size of a MemorySegment is 32 KB. When a Flink program is executed, the MemoryManager hands a certain number of MemorySegments to each processing operator, such as HashJoin or Sort. These operators have a lower bound on the number of MemorySegments they need to operate, e.g., 33 MemorySegments for a HashJoin.

To solve your problem, you should try to give more memory to the Flink JVM. You can also configure a fixed amount of memory for the MemoryManager via the taskmanager.memory.size configuration parameter [1].
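For a standalone setup that could look roughly like the fragment of flink-conf.yaml below. The sizes are just placeholder values I picked for illustration, and taskmanager.heap.mb is the related overall heap setting as far as I recall, so please double-check both keys against the configuration page.

        # flink-conf.yaml -- example values only, adjust to your machine
        # Total heap size of each TaskManager JVM (related setting; verify the exact key in the docs)
        taskmanager.heap.mb: 1024
        # Fixed amount of managed memory (in MB) handed to the MemoryManager,
        # instead of a relative share of the free heap
        taskmanager.memory.size: 512

Keep in mind that this file is read by TaskManagers started through the distribution's scripts; a job launched from the IDE runs inside the IDE's own JVM, so there the heap is typically determined by the run configuration's VM options.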

Cheers, Fabian



Re: Memory exception

Michele Bertoni
Hi,
thank you for your fast answer.

Next time it happens I will try that.

Michele



Re: Memory exception

Stephan Ewen
What you see is the effect of extremely little memory arriving at the MemoryManager.

Given that you say restarting the machine helps (or using a downloaded version of Flink), the reason is probably that you do not configure the JVM memory, in which case Java grabs a bit of the system's free memory. That is also why the downloaded installation works: it always starts Java with a fixed memory size.

Try adding "-Xmx500m" or so to your launch options in the IDE.
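In IntelliJ or Eclipse that flag goes into the VM options / VM arguments field of the run configuration, for example -Xmx2048m if you have the RAM to spare. If you want to verify how much heap the IDE actually grants before suspecting Flink, a throwaway check like the following (just an illustrative sketch, not part of any Flink API) prints it:

        // Throwaway sketch: print the maximum heap the current JVM was started with.
        // If this number is tiny, the MemoryManager has almost nothing to hand out.
        object HeapCheck {
          def main(args: Array[String]): Unit = {
            val maxHeapMb = Runtime.getRuntime.maxMemory() / (1024 * 1024)
            println(s"JVM max heap: $maxHeapMb MB")
          }
        }

Run it with the same run configuration (or put the println at the top of your job's main method) and compare the value with and without the -Xmx flag.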




Re: Memory exception

Michele Bertoni
It just happened again and I solved it with the -X parameter for the JVM.

Thank you!

