Flink stream with RabbitMQ source: Set max "request" message amount

Flink stream with RabbitMQ source: Set max "request" message amount

Marke Builder
Hi,

we are using a RabbitMQ queue as a streaming source.
Sometimes (if the queue contains a lot of messages) we get the following ERROR:
ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl           - Failed to stop Container container_1541828054499_0284_01_000004 when stopping NMClientImpl

and sometimes:
Uncaught error from thread [flink-scheduler-1]: GC overhead limit exceeded, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[flink]
java.lang.OutOfMemoryError: GC overhead limit exceeded

We think the problem is that too many messages are consumed by Flink at once. Hence the question: is there a way to limit this?

Thanks!
Marke

Re: Flink stream with RabbitMQ source: Set max "request" message amount

vino yang
Hi Marke,

AFAIK, you can set basic.qos to limit the consumption rate; please read this answer.[1]
I am not sure whether the Flink RabbitMQ connector lets you set this property. You can check it.
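If the connector does not expose basic.qos directly, one possible workaround is to subclass RMQSource and set the prefetch count on the channel yourself. The sketch below assumes that RMQSource exposes a protected setupQueue() hook and a protected channel field (true in some Flink versions, but verify against the connector version you use); BoundedPrefetchRMQSource is a hypothetical name. It needs a running broker and the flink-connector-rabbitmq dependency, so treat it as a sketch rather than tested code.

```java
import java.io.IOException;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.connectors.rabbitmq.common.RMQConnectionConfig;

// Hypothetical subclass: applies basic.qos after the queue is set up,
// assuming RMQSource exposes setupQueue() and the channel field.
public class BoundedPrefetchRMQSource<T> extends RMQSource<T> {

    private final int prefetchCount;

    public BoundedPrefetchRMQSource(RMQConnectionConfig config,
                                    String queueName,
                                    DeserializationSchema<T> schema,
                                    int prefetchCount) {
        super(config, queueName, schema);
        this.prefetchCount = prefetchCount;
    }

    @Override
    protected void setupQueue() throws IOException {
        super.setupQueue();
        // basic.qos: the broker delivers at most `prefetchCount`
        // unacknowledged messages to this consumer at a time, which
        // bounds how many in-flight messages Flink buffers in memory.
        channel.basicQos(prefetchCount);
    }
}
```

With a bounded prefetch, the source cannot pull the whole backlog into the JVM at once, which should relieve the GC-overhead errors described above.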

Thanks, vino.


Marke Builder <[hidden email]> wrote on Wed, Nov 21, 2018 at 5:10 PM:
[quoted text hidden]