I've set up a cluster (standalone).
The TaskManager consumes more memory than the Xmx property allows, and it keeps growing continuously. I saw this link (http://mail-archives.apache.org/mod_mbox/flink-dev/201606.mbox/%3CCAK2vtervsw4muBOc4SWix0mR6Y9biJznjuYpF6_f9f0g9-_6LA@...%3E), so I set the taskmanager.memory.preallocation value to true, but that is not the solution. My Java version is … and my flink-conf.yaml is …
I need help. What shall I do? Thanks in advance. |
Oh, my Flink version is 1.0.3.
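For context, the option being discussed lives in flink-conf.yaml. A minimal sketch follows, assuming the key names from the Flink configuration reference (there the key is spelled taskmanager.memory.preallocate, and the heap size below is only an illustrative value, not the poster's actual setting):

    # flink-conf.yaml (sketch; values are illustrative, not the poster's)
    # Heap size for the TaskManager JVM; Flink derives -Xmx from this value.
    taskmanager.heap.mb: 2048
    # Pre-allocate Flink's managed memory at startup instead of lazily.
    taskmanager.memory.preallocate: true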
|
Hi!

In order to answer this, we need a bit more information. Here are some follow-up questions:

- Did you submit any job to the cluster, or is the memory just growing even on an idle TaskManager?
- If you are running a job, do you use the RocksDB state backend or the FileSystem state backend?
- Does it grow infinitely, or simply up to a certain point and then goes down again?

Greetings,
Stephan

On Wed, Jul 20, 2016 at 5:58 PM, 김동일 <[hidden email]> wrote:
|
Hi Stephan,

- Did you submit any job to the cluster, or is the memory just growing even on an idle TaskManager?
  I have some streaming jobs.
- If you are running a job, do you use the RocksDB state backend or the FileSystem state backend?
  The FileSystem state backend; I use S3.
- Does it grow infinitely, or simply up to a certain point and then goes down again?
  I think it grows infinitely. The kernel kills the process (OOM).

On Thu, Jul 21, 2016 at 3:52 AM Stephan Ewen <[hidden email]> wrote:
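For context, a FileSystem state backend that checkpoints to S3 is typically configured in flink-conf.yaml along these lines (a sketch only; the bucket and path are made-up placeholders, not the poster's settings):

    # flink-conf.yaml (sketch; bucket/path are hypothetical placeholders)
    state.backend: filesystem
    # Directory to which checkpoint state is written.
    state.backend.fs.checkpointdir: s3://my-bucket/flink/checkpoints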
|
Hi!

There is a memory debugging logger, you can activate it like that:

It will print which parts of the memory are growing.

What you can also try is to deactivate checkpointing for one run and see if that solves it. If yes, then I suspect there is a memory leak in the S3 library (are you using s3, s3a, or s3n?).

Can you also check what libraries you are using? We have seen cases of memory leaks in the libraries people used.

Greetings,
Stephan

On Thu, Jul 21, 2016 at 5:13 AM, 김동일 <[hidden email]> wrote:
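The exact snippet isn't preserved above. As a sketch, the memory debug logger is switched on through flink-conf.yaml options like the following (key names as in the Flink configuration reference; the interval is just an example value):

    # flink-conf.yaml (sketch; the interval is an example value)
    # Periodically log the TaskManager's heap, direct, and off-heap memory usage.
    taskmanager.debug.memory.startLogThread: true
    taskmanager.debug.memory.logIntervalMs: 5000

With this enabled, the TaskManager log should show which memory pool keeps growing, which is what Stephan suggests looking at.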
|
Dear Stephan,

I also suspect S3. I've tried s3n and s3a. Which library is suitable? I'm using aws-java-sdk-1.7.4 and hadoop-aws-2.7.2.

Best regards.
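For reference, in this Flink generation the S3 filesystems (s3n/s3a) are provided by the Hadoop libraries on the classpath (hadoop-aws plus the matching AWS SDK), and flink-conf.yaml only needs to point at the Hadoop configuration directory that defines the fs.s3a.* / fs.s3n.* settings. A minimal sketch, with a made-up path:

    # flink-conf.yaml (sketch; the path is a hypothetical placeholder)
    # Directory containing core-site.xml with the fs.s3a.* / fs.s3n.* settings.
    fs.hdfs.hadoopconf: /etc/hadoop/conf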
|
I don't know that answer, sorry. Maybe one of the others can chime in here.

Did you deactivate checkpointing (then it should not write to S3), and did that resolve the leak?

Best,
Stephan

On Thu, Jul 21, 2016 at 12:52 PM, 김동일 <[hidden email]> wrote:
|
I think so. I’ll test it on EMR and then reply. I am truly grateful for your support.
|