Hi,
I am trying to generate a heap dump to debug a GC-overhead OutOfMemoryError. For that I added the Java options below to flink-conf.yaml, but after adding them YARN is no longer able to launch the containers. The job logs show that it requests containers from YARN, gets them, releases them again, and the same cycle continues. If I remove the option from flink-conf.yaml, the containers are launched and the job starts processing.

env.java.opts.taskmanager: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof"

If I try the following instead, the YARN client does not come up:

env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof"

Am I doing anything wrong here?

PS: I am using EMR.

Thanks,
Hemant
Hi,

I tried with a standalone session (sorry, I do not have a YARN cluster at hand) and the Flink cluster seems to start up normally. Could you check the NodeManager log to see the detailed reason the container does not get launched? Also, have you checked whether there is a spelling error or an unexpected special whitespace character in the configuration? For the case of configuring `env.java.opts`, it seems the JobManager also could not be launched with this configuration.

Best,
Yun
The issue was the double quotes around the Java options. This worked:

env.java.opts: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof

On Mon, 8 Mar 2021 at 12:02 PM, Yun Gao <[hidden email]> wrote:
|
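The quoting pitfall described above can be illustrated with a short sketch. This is an assumption about the behavior, not Flink's actual code: a simple line-based loader that splits each flink-conf.yaml line on the first colon would keep surrounding quote characters as part of the value, so the JVM would be handed a literal `"` in its option string and reject it.

```python
# Illustrative sketch only: a naive "key: value" parser, loosely modeled
# on a simple flink-conf.yaml-style loader. It is NOT Flink's actual
# implementation.
def parse_line(line: str):
    # Split on the first colon only; later colons (as in -XX:+...) stay
    # in the value.
    key, _, value = line.partition(":")
    return key.strip(), value.strip()

# With quotes, the quote characters remain part of the value, so the
# JVM would receive a literal '"' in its option string.
_, quoted = parse_line('env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError"')
print(quoted)

# Without quotes, the JVM receives a clean option string.
_, plain = parse_line('env.java.opts: -XX:+HeapDumpOnOutOfMemoryError')
print(plain)
```

Under this (assumed) parsing model, dropping the quotes is exactly what fixed the container launch above.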