Flink Metrics Naming

Flink Metrics Naming

Mason Chen
Can anyone give insight as to why Flink allows 2 metrics with the same “name”?

For example,

getRuntimeContext().getMetricGroup().addGroup("group", "group1").counter("myMetricName");

and

getRuntimeContext().getMetricGroup().addGroup("other_group", "other_group1").counter("myMetricName");

are both valid.


It seems that this has led to some not-so-great implementations: the Prometheus reporter, for example, attaches the group labels to the metric name, making the name quite verbose.



Re: Flink Metrics Naming

Till Rohrmann
Hi Mason,

The idea is that a metric is not uniquely identified by its name alone but instead by its path. The groups in which it is defined specify this path (similar to directories). That's why it is valid to specify two metrics with the same name if they reside in different groups.
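For illustration, a minimal sketch (the class, group, and metric names are just placeholders) of two counters with the same name but different groups ending up with distinct identifiers:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

public class MyMapper extends RichMapFunction<String, String> {
    private transient Counter c1;
    private transient Counter c2;

    @Override
    public void open(Configuration parameters) {
        MetricGroup metrics = getRuntimeContext().getMetricGroup();
        // Same name, different "directories": this identifier ends in ...group.group1.myMetricName
        c1 = metrics.addGroup("group", "group1").counter("myMetricName");
        // ...while this one ends in ...other_group.other_group1.myMetricName
        c2 = metrics.addGroup("other_group", "other_group1").counter("myMetricName");
    }

    @Override
    public String map(String value) {
        c1.inc();
        c2.inc();
        return value;
    }
}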

I think Prometheus does not support such a tree structure and that's why the path is exposed via labels if I am not mistaken. So long story short, what you are seeing is a combination of how Flink organizes metrics and what can be reported to Prometheus. 

I am also pulling in Chesnay who is more familiar with this part of the code.

Cheers,
Till



Re: Flink Metrics Naming

Chesnay Schepler
The uniqueness of metrics and the naming of the Prometheus reporter are somewhat related but also somewhat orthogonal.

Prometheus works similarly to JMX in that the metric name (e.g., taskmanager.job.task.operator.numRecordsIn) is more or less a _class_ of metrics, with tags/labels allowing you to select a specific instance of that metric.
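For illustration, scraped output might look roughly like this (label names abbreviated and illustrative; the exact set depends on the reporter configuration), with one metric name covering many instances:

flink_taskmanager_job_task_operator_numRecordsIn{job_name="myJob",operator_name="MyMap",subtask_index="0"} 42
flink_taskmanager_job_task_operator_numRecordsIn{job_name="myJob",operator_name="MyMap",subtask_index="1"} 17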

Restricting metric names to 1 level of the hierarchy would present a few issues:
a) All metric names that Flink uses effectively become reserved keywords that users must not use, which will lead to headaches when adding more metrics or forwarding metrics from libraries (e.g., Kafka), because we could always break existing user-defined metrics.
b) You'd need a cluster-wide lookup that is aware of all hierarchies to ensure consistency across all processes.

In the end, there are significantly easier ways to solve the issue of the metric name being too long, i.e., giving the user more control over the logical scope (taskmanager.job.task.operator), be it by shortening the names (t.j.t.o), limiting the depth (e.g., operator.numRecordsIn), removing it outright (though I'd prefer some context to be present for clarity), or supporting something similar to scope formats.
I'm reasonably certain there are already some tickets in this direction; we just don't get around to doing them because, for the most part, the metric system works well enough and there are bigger fish to fry.
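For reference, the existing scope formats (which control the variable part of the metric identifier, not the logical scope discussed above) are configured in flink-conf.yaml; a sketch with deliberately shortened formats (the keys are real config options, the values here are only an example):

metrics.scope.tm: <host>.tm
metrics.scope.task: <host>.tm.<job_name>.<task_name>.<subtask_index>
metrics.scope.operator: <host>.tm.<job_name>.<operator_name>.<subtask_index>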



Re: Flink Metrics Naming

Mason Chen
Makes sense. We are primarily concerned with removing the metric labels from the names, as the user metrics get too long; i.e., the groups from `addGroup` are concatenated into the metric name.

Do you think there would be any issues with removing the group information from the metric name and putting it into a label instead? It seems like most metrics internally don't use `addGroup` to create group information, but rather create another subclass of metric group.

Perhaps I should ONLY apply this custom logic to metrics with the “user” scope? Other scoped metrics (e.g., operator, task, etc.) shouldn't have these group names in the metric names, in my experience...

An example just for clarity:

flink_<system_scope>_group1_group2_metricName{group1=…, group2=…, flink tags}

=>

flink_<system_scope>_metricName{group_info=group1_group2, group1=…, group2=…, flink tags}



Re: Flink Metrics Naming

Mason Chen
Upon further inspection, it seems like the user scope is not universal (i.e., it comes through the connectors and not through UDFs like a rich map function), but the question still stands as to whether the approach makes sense.



Re: Flink Metrics Naming

Chesnay Schepler
Some more background on MetricGroups:
Internally there are (mostly) 3 types of metric groups:
On the one hand, we have the ComponentMetricGroups (like TaskManagerMetricGroup) that describe a high-level Flink entity and just add a constant expression to the logical scope (like taskmanager, task, etc.). These exist to support scope formats (although this should've been implemented differently, but that's another story).

On the other hand, we have groups created via addGroup(String), which are added to the logical scope as-is; this is sometimes good (e.g., addGroup("KafkaConsumer")) and sometimes isn't (e.g., addGroup(<some-topic-id>)).
Finally, there is an addGroup(String, String) variant, which behaves like a key-value pair (and similarly to the ComponentMetricGroups). The key part is added to the logical scope, and a label is usually added as well.
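A quick sketch of the two variants (metric and group names are placeholders; the comments describe the intended behavior, the exact rendering depends on the reporter):

import org.apache.flink.metrics.MetricGroup;

public class GroupVariants {
    // 'root' would be getRuntimeContext().getMetricGroup() inside a rich function.
    static void register(MetricGroup root) {
        // addGroup(String): "KafkaConsumer" is added to the logical scope (and the identifier)
        // as-is; no variable/label is created.
        root.addGroup("KafkaConsumer").counter("commitsSucceeded");

        // addGroup(String, String): behaves like a key-value pair. The key "topic" goes into the
        // logical scope, the value typically becomes a variable (reported as a label), and the
        // identifier contains topic.my-topic.
        root.addGroup("topic", "my-topic").counter("recordsConsumed");
    }
}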

Due to historical reasons some parts in Flink use addGroup(String) despite the key-value pair variant being more appropriate; the latter was only added later, as was the logical scope as a whole for that matter.

With that said, the logical scope and labels suffer a bit from being retrofitted onto an existing design and from some early mistakes in the metric structuring.
Ideally (imo), things would work like this (some of it differing from the current behavior):
- addGroup(String) is sparsely used and only for high-level hierarchies (job, operator, source, kafka). It is added as-is to the logical scope, creates no label, and is excluded from the metric identifier.
- addGroup(String, String) has no effect on the logical scope, creates a label, and is added as <key>.<value> to the metric identifier.

The core issue with these kinds of changes, however, is backwards compatibility. We would have to do a sweep over the code-base to migrate inappropriate usages of addGroup(String) to the key-value variant, probably remove some unnecessary groups (e.g., "Status", which is used for CPU metrics and whatnot), and finally make changes to the metric system internals, all of which need a codepath that retains the current behavior.

Simply put, for immediate needs I would probably encourage you to create a modified PrometheusReporter which determines the logical scope as you see fit; it could just ignore the logical scope entirely (although I'm not sure how well Prometheus handles 1 metric having multiple instances with different label sets, e.g., numRecordsIn for operators/tasks), or exclude user-defined groups with something hacky like only using the first 4 parts of the logical scope.
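If it helps, a very rough sketch of the name/label derivation such a reporter could use (class and method names are hypothetical, and it only uses the public MetricGroup API via getScopeComponents()/getAllVariables(), whereas the actual PrometheusReporter works off the internal logical scope, so treat it as an approximation):

import org.apache.flink.metrics.MetricGroup;

import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class ScopeTrimmer {
    /** Keep only the first 'keepComponents' scope components in the name; drop the rest. */
    public static String shortenedName(MetricGroup group, String metricName, int keepComponents) {
        String prefix = Arrays.stream(group.getScopeComponents())
                .limit(keepComponents)
                .collect(Collectors.joining("_"));
        return "flink_" + prefix + "_" + metricName;
    }

    /** Expose all scope variables (e.g. <host>, <job_name>, user group values) as labels. */
    public static Map<String, String> labels(MetricGroup group) {
        return group.getAllVariables().entrySet().stream()
                .collect(Collectors.toMap(
                        e -> e.getKey().replaceAll("[<>]", ""), // strip the <...> wrapping
                        Map.Entry::getValue));
    }
}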
