Re: Out off memory when catching up

Posted by Timo Walther on
URL: http://deprecated-apache-flink-user-mailing-list-archive.369.s1.nabble.com/Out-off-memory-when-catching-up-tp19108p19162.html

Hi Lasse,

in order to avoid OOM exceptions, you should analyze your Flink job
implementation. Are you creating a lot of objects within your Flink
functions? Which state backend are you using? Maybe you can tell us a
little bit more about your pipeline?
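(One common source of such allocations, sketched here as a minimal, Flink-free example: creating a fresh output object for every record inside a map function. The `Enriched` type and method names below are hypothetical stand-ins; in an actual Flink `MapFunction` the same idea means keeping the output object as a field and mutating it per element, which is only safe when downstream operators do not retain the reference, e.g. with Flink's object-reuse mode enabled.)

```java
public class ReuseDemo {
    // Hypothetical record type standing in for a real event class.
    static final class Enriched {
        long id;
        String tag;
    }

    // Allocates a new object per record: harmless at normal rates, but
    // at 10-20x catch-up rates this creates significant GC pressure.
    static Enriched mapAllocating(long id) {
        Enriched e = new Enriched();
        e.id = id;
        e.tag = "seen";
        return e;
    }

    // Reuses one mutable instance across calls instead of allocating.
    static final Enriched reused = new Enriched();

    static Enriched mapReusing(long id) {
        reused.id = id;
        reused.tag = "seen";
        return reused;
    }

    public static void main(String[] args) {
        System.out.println(mapAllocating(1).id); // new object each call
        System.out.println(mapReusing(2).id);    // same object, mutated
    }
}
```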

Usually, there should be enough memory for the network buffers and
state. Once the processing is not fast enough and the network buffers
fill up, the input is throttled anyway, which results in back-pressure.
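(For reference, the network buffer pool and state backend mentioned above are controlled in flink-conf.yaml; the values below are purely illustrative, not a recommendation for this job:)

```yaml
# Fraction of TaskManager memory reserved for network buffers,
# with absolute lower/upper bounds in bytes.
taskmanager.network.memory.fraction: 0.1
taskmanager.network.memory.min: 67108864
taskmanager.network.memory.max: 1073741824

# RocksDB keeps most state off-heap, reducing heap pressure
# compared to the heap-based backends.
state.backend: rocksdb
```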

Regards,
Timo


On 21.03.18 at 21:21, Lasse Nedergaard wrote:
> Hi.
>
> When our jobs are catching up, they read at 10-20 times the normal rate, but then we lose our task managers with OOM. We could increase the memory allocation, but is there a way to figure out how high a rate we can consume with the current memory and slot allocation, and a way to limit the input to avoid OOM?
>
> Med venlig hilsen / Best regards
> Lasse Nedergaard