Hi, this should be posted on the user mailing list, not the dev list. Apart from that, this looks like normal/standard JVM behaviour and has very little to do with Flink: the Garbage Collector (GC) kicks in when memory usage approaches some threshold.

Piotrek
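One way to confirm that such a sawtooth is just normal GC activity is to enable GC logging on the TaskManagers and check that the drops in heap usage line up with collections. A minimal sketch, assuming a recent Flink version where extra JVM options can be passed through flink-conf.yaml (the log path is only an example):

    # flink-conf.yaml
    # JDK 8: write detailed GC logs for the TaskManager JVMs
    env.java.opts.taskmanager: "-XX:+PrintGCDetails -Xloggc:/tmp/taskmanager-gc.log"
    # JDK 9+ alternative (unified logging):
    # env.java.opts.taskmanager: "-Xlog:gc*:file=/tmp/taskmanager-gc.log"

If each dip in the heap graph matches a collection in the log, the pattern is expected and needs no tuning.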
Hi Piotr,

Thanks for replying. I asked this because such a pattern might imply memory oversubscription. For example, I tuned down the memory of one app from a 2.63 GB heap to 367 MB and the job still runs fine:

before: https://drive.google.com/file/d/1o8k9Vv3yb5gXITi4GvmlXMteQcRfmOhr/view?usp=sharing
after: https://drive.google.com/file/d/1wNTHBT8aSJaAmL1rVY8jUkdp-G5znnMN/view?usp=sharing

What's the best practice for tuning Flink job memory?
1. What's a good starting point that users should try first?
2. How to make progress? E.g. Flink application Foo currently hits the error "OOM: Java heap space". Where to go next? Simply bump up taskmanager.memory, or just increase the heap?
3. What's the final state? The job running fine while ensuring some headroom in each memory component?

Best,
Lu
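As background for question 2 above: in recent Flink versions (1.10+) there is no single taskmanager.memory knob; the TaskManager's total memory is split into several pools, and only one of them is the task heap that a "Java heap space" OOM points at. A rough sketch of the options usually touched first, with purely illustrative values:

    # flink-conf.yaml
    # total memory of the TaskManager process; all pools below are carved out of this
    taskmanager.memory.process.size: 4096m
    # heap available to user code and operators; bump this for "OOM: Java heap space"
    taskmanager.memory.task.heap.size: 2048m
    # fraction of Flink memory reserved as managed (off-heap) memory, e.g. for RocksDB
    taskmanager.memory.managed.fraction: 0.4

A common starting point is to set only taskmanager.memory.process.size (or taskmanager.memory.flink.size) and let Flink derive the individual pools, pinning a specific pool only once it turns out to be the bottleneck.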
Hi,

I don't think there is a Flink-specific answer to this question. Just do what you would normally do with a regular Java application running inside a JVM. If there is an OOM on heap space, you can either try to bump the heap space or reduce its usage. The only Flink-specific part is probably that you need to leave enough memory for the framework itself, and that there are a couple of different memory pools. You can read about those things in the docs.

Piotrek
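To make the "different memory pools" remark concrete, this is roughly how the total process memory is split up in the Flink 1.10+ memory model; the values below are the typical defaults and may differ between versions, so treat them as a sketch rather than a recommendation:

    # flink-conf.yaml -- pools inside taskmanager.memory.process.size
    taskmanager.memory.framework.heap.size: 128m     # heap reserved for the Flink framework itself
    # taskmanager.memory.task.heap.size is derived from what is left if not set explicitly
    taskmanager.memory.managed.fraction: 0.4         # managed memory (RocksDB state backend, batch algorithms)
    taskmanager.memory.network.fraction: 0.1         # network buffers
    taskmanager.memory.jvm-metaspace.size: 256m      # JVM metaspace
    taskmanager.memory.jvm-overhead.fraction: 0.1    # headroom for thread stacks, code cache, GC structures

The page to read for the exact breakdown is the TaskManager memory setup guide in the Flink documentation.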