Hi,
we are using a RabbitMQ queue as a streaming source. Sometimes (when the queue contains a lot of messages) we get the following error: ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl - Failed to stop Container container_1541828054499_0284_01_000004 when stopping NMClientImpl
and sometimes: Uncaught error from thread [flink-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[flink] java.lang.OutOfMemoryError: GC overhead limit exceeded
We think the problem is that Flink consumes too many messages at once. Hence our question: is there a way to limit how many messages are consumed?
Thanks!
Marke