Hi,
Flink has memory problems when I run an algorithm from my local IDE on a 2GB graph. Is there any way that I can increase the memory given to Flink?

Best,
Sebastian

Caused by: java.lang.RuntimeException: Memory ran out. numPartitions: 32 minPartition: 4 maxPartition: 4 number of overflow segments: 151 bucketSize: 146 Overall memory: 14024704 Partition memory: 4194304
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.getNextBuffer(CompactingHashTable.java:784)
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.insertBucketEntryFromSearch(CompactingHashTable.java:668)
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.insertOrReplaceRecord(CompactingHashTable.java:538)
    at org.apache.flink.runtime.operators.hash.CompactingHashTable.buildTableWithUniqueKey(CompactingHashTable.java:347)
    at org.apache.flink.runtime.iterative.task.IterationHeadPactTask.readInitialSolutionSet(IterationHeadPactTask.java:209)
    at org.apache.flink.runtime.iterative.task.IterationHeadPactTask.run(IterationHeadPactTask.java:270)
    at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Thread.java:745)
Hi.
You can increase the memory given to Flink by increasing the JVM heap size of your local run. If you are using Eclipse as your IDE, add an "-Xmx<HEAPSIZE>" option to the run configuration [1]. If you are using IntelliJ IDEA, you can increase the JVM heap in the same way [2].

[1] http://help.eclipse.org/luna/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Ftasks%2Ftasks-java-local-configuration.htm
[2] https://www.jetbrains.com/idea/help/creating-and-editing-run-debug-configurations.html

Regards,
Chiwan Park

> On Jun 17, 2015, at 2:01 PM, Sebastian <[hidden email]> wrote:
> [...]
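A quick way to verify that the -Xmx setting actually took effect is to print the JVM's maximum heap from the program being launched (plain Java, independent of Flink):

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the upper bound the JVM will grow the heap to,
        // which reflects -Xmx (or the platform default if none was given).
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```

If the printed value does not match the -Xmx you set, the run configuration being launched is not the one you edited.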
Hey Sebastian,
with "taskmanager.memory.fraction" you can give more memory to the Flink runtime. The current default is to give 70% to Flink and leave 30% for the user code.

taskmanager.memory.fraction: 0.9

will increase this to 90%.

Does this help?

[1] http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html

On 17 Jun 2015, at 09:29, Chiwan Park <[hidden email]> wrote:
> [...]
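As a rough back-of-the-envelope sketch of what the fraction means (illustrative only; Flink's actual accounting also subtracts other runtime buffers from the heap before applying the fraction):

```java
public class FractionSketch {
    public static void main(String[] args) {
        long heapBytes = 2048L * 1024 * 1024;  // assume a 2 GB heap (-Xmx2g)
        double fraction = 0.9;                 // taskmanager.memory.fraction
        long managedBytes = (long) (heapBytes * fraction);
        // With these assumed numbers, roughly 1843 MB would go to Flink's
        // managed memory, leaving ~205 MB for user code.
        System.out.println("Managed: " + managedBytes / (1024 * 1024) + " MB");
    }
}
```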
In reply to this post by Sebastian Schelter-2
Hi,
I had the same problem and setting the solution set to unmanaged helped:

VertexCentricConfiguration parameters = new VertexCentricConfiguration();
parameters.setSolutionSetUnmanagedMemory(true);

runVertexCentricIteration(..., parameters);

Best,
Mihail

On 17.06.2015 07:01, Sebastian wrote:
> [...]
In reply to this post by Chiwan Park
Hi,
look at slide 35 for more details about memory configuration: http://www.slideshare.net/robertmetzger1/apache-flink-hands-on

-Matthias

On 06/17/2015 09:29 AM, Chiwan Park wrote:
> [...]
In reply to this post by Mihail Vieru
On 17 Jun 2015, at 09:35, Mihail Vieru <[hidden email]> wrote:
> Hi,
>
> I had the same problem and setting the solution set to unmanaged helped:
> [...]

That's indeed a very good point, Mihail. Thanks for the pointer. The compacting hash table used in iterations cannot spill at the moment.
In reply to this post by Ufuk Celebi
Hi Ufuk,
Can I configure this when running locally in the IDE or do I have to install Flink for that?

Best,
Sebastian

On 17.06.2015 09:34, Ufuk Celebi wrote:
> [...]
On 17 Jun 2015, at 10:10, Sebastian <[hidden email]> wrote:
> Hi Ufuk,
>
> Can I configure this when running locally in the IDE or do I have to install Flink for that?

Yes.

org.apache.flink.configuration.Configuration conf = new Configuration();
conf.setDouble(ConfigConstants.TASK_MANAGER_MEMORY_FRACTION_KEY, 0.7);

LocalEnvironment env = LocalEnvironment.createLocalEnvironment(conf);

You can check the size of Flink's managed memory in the logs of the task manager:

11:56:28,061 INFO org.apache.flink.runtime.taskmanager.TaskManager - Using 1227 MB for Flink managed memory.
In reply to this post by Ufuk Celebi
Is there a way to configure this setting for a delta iteration in the Scala API?

Best,
Sebastian

On 17.06.2015 10:04, Ufuk Celebi wrote:
> [...]
Not yet, no. I created a Jira issue: https://issues.apache.org/jira/browse/FLINK-2277

On Thu, 25 Jun 2015 at 14:48 Sebastian <[hidden email]> wrote:
> Is there a way to configure this setting for a delta iteration in the Scala API?