Hi to all,
running a job that writes parquet-thrift files I had this exception (in a Task Manager):

io.netty.channel.nio.NioEventLoop - Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
2016-05-12 18:49:11,302 WARN  org.jboss.netty.channel.socket.nio.AbstractNioSelector - Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
2016-05-12 18:49:11,302 ERROR org.apache.flink.runtime.io.disk.iomanager.IOManager - The handler of the request-complete-callback threw an exception: Java heap space
java.lang.OutOfMemoryError: Java heap space
2016-05-12 18:49:11,303 ERROR org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O reading thread encountered an error: segment has been freed
java.lang.IllegalStateException: segment has been freed
    at org.apache.flink.core.memory.HeapMemorySegment.wrap(HeapMemorySegment.java:85)
    at org.apache.flink.runtime.io.disk.iomanager.SegmentReadRequest.read(AsynchronousFileIOChannel.java:310)
    at org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$ReaderThread.run(IOManagerAsync.java:396)
2016-05-12 18:49:11,303 ERROR org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O reading thread encountered an error: segment has been freed

Any idea of what could be the cause?

Best,
Flavio
The job is running out of heap memory, probably because a user function needs a lot of it (the parquet-thrift sink?). You can try to work around it by reducing the amount of managed memory in order to leave more heap space available.

On Thu, May 12, 2016 at 6:55 PM, Flavio Pompermaier <[hidden email]> wrote:
[quoted message trimmed]
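For Flink 1.x versions of that era (before the 1.10 memory-model rework), the managed-memory share can be lowered in flink-conf.yaml as Ufuk suggests. A minimal sketch; the 0.5 value is an illustrative assumption, not a recommendation:

```yaml
# flink-conf.yaml (Flink 1.x, pre-1.10 memory model)
# Fraction of the TaskManager's free heap that Flink reserves as
# managed memory (default 0.7). Lowering it leaves more heap for
# user functions such as a sink's output format.
taskmanager.memory.fraction: 0.5
```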
Indeed I can confirm that I resolved this problem by reducing the number of slots per Task Manager (and thus increasing the memory available to each task)!
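That workaround maps to two flink-conf.yaml settings; a sketch with illustrative numbers (tune them to your hardware):

```yaml
# flink-conf.yaml (Flink 1.x)
# Fewer slots per TaskManager means each parallel task gets a larger
# share of the TaskManager's heap. The concrete values are examples.
taskmanager.numberOfTaskSlots: 2
taskmanager.heap.mb: 4096
```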
However, from time to time I have serialization issues that I can't trace back to their origin.. it looks like the PojoSerializer has some issue somewhere.

Thanks anyway Ufuk!

On Fri, May 20, 2016 at 12:04 PM, Ufuk Celebi <[hidden email]> wrote:
[quoted message trimmed]
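One common source of such issues is that Flink only uses its PojoSerializer for types that satisfy its POJO rules; anything else silently falls back to Kryo. A minimal sketch of a type that meets those rules (the class and field names are hypothetical examples, not from the original job):

```java
// A type Flink's PojoSerializer can handle (Flink 1.x rules):
// public class, public no-arg constructor, and every field either
// public or accessible via a conventional getter/setter pair.
public class SensorReading {
    public String sensorId;   // public field: serialized directly
    private double value;     // private field: needs getter/setter

    public SensorReading() {} // no-arg constructor is required

    public SensorReading(String sensorId, double value) {
        this.sensorId = sensorId;
        this.value = value;
    }

    public double getValue() { return value; }
    public void setValue(double value) { this.value = value; }

    public static void main(String[] args) {
        SensorReading r = new SensorReading("s1", 42.0);
        System.out.println(r.sensorId + "=" + r.getValue());
    }
}
```

If a field type does not fit these rules, checking the TaskManager logs for "falls back to Kryo" messages (or registering a custom serializer) is a reasonable next step.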