Job Manager minimum memory hard coded to 768

Job Manager minimum memory hard coded to 768

Dan Circelli

In our usage of Flink, our YARN Job Manager never goes above ~48 MB of heap utilization. In order to maximize the heap available to the Task Managers, I thought we could shrink our Job Manager heap setting from the 1024 MB we were using down to something tiny like 128 MB. However, doing so results in the runtime error:

 

java.lang.IllegalArgumentException: The JobManager memory (64) is below the minimum required memory amount of 768 MB

at org.apache.flink.yarn.AbstractYarnClusterDescriptor.setJobManagerMemory(AbstractYarnClusterDescriptor.java:187)

 

Looking into it: this value isn’t controlled by the settings in yarn-site.xml but is actually hard-coded in the Flink code base to 768 MB (see AbstractYarnClusterDescriptor.java, where MIN_JM_MEMORY = 768).
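
For reference, the check that throws the exception above looks roughly like this (paraphrased from the 1.3.x sources, so names and exact wording may differ):

    // Approximate shape of the check in AbstractYarnClusterDescriptor (Flink 1.3.x);
    // paraphrased from memory, not copied verbatim from the sources.
    private static final int MIN_JM_MEMORY = 768;

    public void setJobManagerMemory(int jobManagerMemoryMb) {
        if (jobManagerMemoryMb < MIN_JM_MEMORY) {
            throw new IllegalArgumentException("The JobManager memory (" + jobManagerMemoryMb
                    + ") is below the minimum required memory amount of " + MIN_JM_MEMORY + " MB");
        }
        this.jobManagerMemoryMb = jobManagerMemoryMb;
    }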

 

 

Why is this hardcoded?

Why not let the value be set via the YARN site configuration XML? (See the sketch below.)

Why such a high minimum?
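
Just to make the configurability question concrete, something along these lines (reading the floor from the Flink configuration instead of a constant) would already be enough for us. The key name "yarn.jobmanager.minimum-memory" is made up here purely for illustration and is not an existing option:

    // Hypothetical sketch only: "yarn.jobmanager.minimum-memory" is an invented key,
    // not an existing Flink option. The idea is to read the floor from the Flink
    // Configuration (keeping 768 MB as the default) rather than hard coding it.
    static void checkJobManagerMemory(org.apache.flink.configuration.Configuration flinkConfig,
                                      int jobManagerMemoryMb) {
        int minJmMemoryMb = flinkConfig.getInteger("yarn.jobmanager.minimum-memory", 768);
        if (jobManagerMemoryMb < minJmMemoryMb) {
            throw new IllegalArgumentException("The JobManager memory (" + jobManagerMemoryMb
                    + ") is below the minimum required memory amount of " + minJmMemoryMb + " MB");
        }
    }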

 

 

Thanks,

Dan

Re: Job Manager minimum memory hard coded to 768

Aljoscha Krettek
I believe this could be from a time when the setting "containerized.heap-cutoff-min" did not yet exist, since this part of the code is quite old.
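
For context, my rough understanding of how that cutoff is applied (the defaults I quote here, a 0.25 ratio and a 600 MB minimum cutoff, are from memory and may differ between versions):

    // Rough sketch of the containerized heap cutoff computation as I understand it:
    // the cutoff is the larger of (container size * cutoff ratio) and the configured
    // minimum cutoff, and the heap is whatever remains of the container after that.
    static int heapSizeAfterCutoff(int containerMemoryMb, double cutoffRatio, int cutoffMinMb) {
        int cutoff = Math.max((int) (containerMemoryMb * cutoffRatio), cutoffMinMb);
        return containerMemoryMb - cutoff;
    }

    // e.g. a 1024 MB container with those defaults: cutoff = max(256, 600) = 600, heap = 424 MB.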

I think we should be able to remove that restriction, but I'm not sure, so I'm cc'ing Till, who knows those parts best.

@Till, what do you think?

Re: Job Manager minimum memory hard coded to 768

Till Rohrmann
Hi Dan,

I think Aljoscha is right: the 768 MB minimum JM memory is more of a legacy artifact that was never properly refactored. If I remember correctly, we had problems when starting Flink in a container with a lower memory limit, which is why this limit was introduced. But I'm actually not sure whether that is still the case; it should definitely be verified again.

Cheers,
Till

Re: Job Manager minimum memory hard coded to 768

Haohui Mai
We have observed the same issue in our production cluster. Filed FLINK-7743 for the fix.

~Haohui
