Memory is not released after job cancellation


Memory is not released after job cancellation

Nastaran

Hi,
I have a simple Java application that uses Flink 1.6.2.
When I run the jar file, I can see that the job consumes part of the host's main memory. If I cancel the job, the consumed memory is not released until I stop the whole cluster. How can I release the memory after cancellation?
I have followed the conversation around this issue in the mailing list archive [1], but I still need more explanation.



Kind regards,

Nastaran Motavalli




Re: Memory is not released after job cancellation

Kostas Kloudas
Hi Nastaran,

Can you specify what more information you need?

From the discussion that you posted:
1) If you have batch jobs, then Flink does its own memory management (outside the heap, so it is not subject to the JVM's GC),
    and although you do not see the memory being de-allocated when you cancel the job,
    this memory is available to other jobs and you do not have to worry about de-allocating it manually.
2) If you use streaming, then you should use one of the provided state backends and they will do the memory management
    for you (see [1] and [2]); a minimal sketch follows below.
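
For the streaming case, here is a minimal sketch of wiring up the RocksDB state backend in a Java job. This assumes the flink-statebackend-rocksdb dependency is on the classpath; the checkpoint URI, class name, and the tiny pipeline are placeholders for illustration only.

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDBBackendExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Keyed state is kept in embedded RocksDB instances (native memory + local disk)
        // rather than on the JVM heap; checkpoints are written to the given URI.
        // The path below is just an example location.
        env.setStateBackend(new RocksDBStateBackend("file:///tmp/flink-checkpoints"));
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds

        // Placeholder pipeline so the example runs end to end.
        env.fromElements("flink", "memory", "state")
           .keyBy(word -> word)
           .map(word -> word.toUpperCase())
           .print();

        env.execute("rocksdb-state-backend-example");
    }
}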

Cheers,
Kostas






Re: Memory is not released after job cancellation

Nastaran

Thanks for your attention,

I have streaming jobs and use the RocksDB state backend. Do you mean that I don't need to worry about memory management even if the allocated memory is not released after cancellation?



Kind regards,

Nastaran Motavalli



