Hey Morgan,
The job's vertices, with their ids and names, are available from the JobManager's REST API. For example:

GET http://localhost:8082/jobs/9a6748889bf24987495eead247aeb1ff

returns:

{
  jid: "9a6748889bf24987495eead247aeb1ff",
  name: "CarTopSpeedWindowingExample",
  isStoppable: false,
  state: "RUNNING",
  start-time: 1582192403413,
  end-time: -1,
  duration: 18533,
  now: 1582192421946,
  timestamps: {FINISHED: 0, FAILING: 0, CANCELED: 0, SUSPENDED: 0, RUNNING: 1582192403550, RECONCILING: 0, FAILED: 0, …},
  vertices: [
    {
      id: "cbc357ccb763df2852fee8c4fc7d55f2",
      name: "Source: Custom Source -> Timestamps/Watermarks",
      parallelism: 1,
      status: "RUNNING",
      start-time: 1582192403754,
      end-time: -1,
      duration: 18192,
      tasks: {CREATED: 0, CANCELED: 0, RECONCILING: 0, FAILED: 0, CANCELING: 0, DEPLOYING: 0, RUNNING: 1, …},
      metrics: {read-bytes: 0, read-bytes-complete: true, write-bytes: 0, write-bytes-complete: true, read-records: 0, …}
    },
    {
      id: "90bea66de1c231edf33913ecd54406c1",
      name: "Window(GlobalWindows(), DeltaTrigger, TimeEvictor, ComparableAggregator, PassThroughWindowFunction) -> Sink: Print to Std. Out",
      parallelism: 1,
      status: "RUNNING",
      start-time: 1582192403759,
      end-time: -1,
      duration: 18187,
      tasks: {CREATED: 0, CANCELED: 0, RECONCILING: 0, FAILED: 0, CANCELING: 0, DEPLOYING: 0, RUNNING: 1, …},
      metrics: {read-bytes: 4669, read-bytes-complete: true, write-bytes: 0, write-bytes-complete: true, …}
    }
  ],
  status-counts: {CREATED: 0, CANCELED: 0, RECONCILING: 0, FAILED: 0, CANCELING: 0, DEPLOYING: 0, RUNNING: 2, …},
  plan: {jid: "9a6748889bf24987495eead247aeb1ff", name: "CarTopSpeedWindowingExample", …}
}
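If you want to do the lookup programmatically instead of by eye, here is a minimal sketch in Python (assuming the requests package, that the JobManager REST endpoint is the localhost:8082 address used above, and using the job id and vertex id from the example; adjust these to your own values):

import requests

# Assumption: the JobManager REST API is reachable at this address,
# and the job id is the one from the example above.
FLINK_REST = "http://localhost:8082"
JOB_ID = "9a6748889bf24987495eead247aeb1ff"

# Fetch the job details and build a map from vertex id to vertex name.
job = requests.get(f"{FLINK_REST}/jobs/{JOB_ID}").json()
id_to_name = {vertex["id"]: vertex["name"] for vertex in job["vertices"]}

# Look up an id taken from the Prometheus metric labels.
metric_operator_id = "cbc357ccb763df2852fee8c4fc7d55f2"
print(id_to_name.get(metric_operator_id, "id not found among the job's vertices"))

In the example response above, the vertex whose name ends in "Sink: Print to Std. Out" is the chain that contains the sink, so its name tells you which vertex you are looking at.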
On Tue, Feb 18, 2020 at 5:01 PM Morgan Geldenhuys <[hidden email]> wrote:
Hi All,
I have set up monitoring for Flink (1.9.2) via Prometheus and am
interested in viewing the end-to-end latency at the sink operators
for the 95th percentile. I have enabled latency markers at the
operator level and can see the results; one of the entries looks as
follows:
flink_taskmanager_job_latency_source_id_operator_id_operator_subtask_index_latency{app="flink",component="taskmanager",host="flink_taskmanager_6bdc8fc49_kr4bs",instance="10.244.18.2:9999",job="kubernetes-pods",job_id="96d32d8e380dc267bd69403fd7e20adf",job_name="Traffic",kubernetes_namespace="default",kubernetes_pod_name="flink-taskmanager-6bdc8fc49-kr4bs",operator_id="2e32dc82f03b1df764824a4773219c97",operator_subtask_index="7",pod_template_hash="6bdc8fc49",quantile="0.95",source_id="cbc357ccb763df2852fee8c4fc7d55f2",tm_id="7fb02c0ed734ed1815fa51373457434f"}
That is great; however, I am unable to determine which of the
operators is the sink operator I'm looking for based solely on the
operator_id. Is there a way of determining this?
Regards,
M.