Hi,
I am using a sliding window to monitor server performance. I need to keep track of the number of HTTP requests and alert the user when the request rate gets too high (a sliding window of 6 hours which slides every 15 minutes). The aggregate count of HTTP requests is evaluated per 15-minute window. I need to keep a running average of these aggregate counts across the sliding windows and raise an alert when the load exceeds the average + 1 standard deviation. How can we achieve this? How can we keep track of the running average across all the sliding windows?
Hi, I would first compute the 15-minute counts. Based on these counts, you compute the threshold (average + std-dev) and then compare each count with the threshold.
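For illustration, here is a rough sketch of how those three steps could be wired together with the DataStream API (a sketch only: LogEvent, CountPerWindow, AvgStdDevWindow and ThresholdJoin are made-up names, and the stream is treated as non-keyed, so windowAll is used):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    DataStream<LogEvent> events = ...; // HTTP request events with event-time timestamps

    // Step 1: count requests per 15-minute tumbling window, emitting (windowEnd, count)
    DataStream<Tuple2<Long, Long>> counts = events
        .windowAll(TumblingEventTimeWindows.of(Time.minutes(15)))
        .apply(new CountPerWindow());

    // Step 2: avg + std-dev over the counts of the last 6 hours, emitted every 15 minutes
    DataStream<Tuple3<Long, Double, Double>> thresholds = counts
        .windowAll(SlidingEventTimeWindows.of(Time.hours(6), Time.minutes(15)))
        .apply(new AvgStdDevWindow());

    // Step 3: compare each count with the threshold for the same timestamp and raise alerts
    // (the join itself is discussed further down in the thread)
    DataStream<String> alerts = counts
        .connect(thresholds)
        .process(new ThresholdJoin());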
Thanks Fabian.
Can you provide more details about the implementation of step 2 and step 3? How do I calculate the average and standard deviation in the sliding window? How does the CoProcessFunction work? Details on these two would be very helpful.
You can compute the average and std-dev in a WindowFunction that iterates over all records in the window (6h / 15min = 24). WindowFunction [1] and CoProcessFunction [2] are described in the docs.
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/windows.html#windowfunction---the-generic-case
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/process_function.html#low-level-joins
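As an illustration of that window function, a minimal sketch, assuming the 15-minute counts arrive as (windowEnd, count) tuples as in the earlier sketch; since the stream is not keyed, this is an AllWindowFunction rather than a keyed WindowFunction:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.streaming.api.functions.windowing.AllWindowFunction;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
    import org.apache.flink.util.Collector;

    public class AvgStdDevWindow
            implements AllWindowFunction<Tuple2<Long, Long>, Tuple3<Long, Double, Double>, TimeWindow> {

        @Override
        public void apply(TimeWindow window,
                          Iterable<Tuple2<Long, Long>> counts,
                          Collector<Tuple3<Long, Double, Double>> out) {

            // first pass: average over the (up to 24) 15-minute counts
            long n = 0;
            double sum = 0.0;
            for (Tuple2<Long, Long> c : counts) {
                n++;
                sum += c.f1;
            }
            if (n == 0) {
                return;
            }
            double avg = sum / n;

            // second pass: standard deviation
            double sqDiffSum = 0.0;
            for (Tuple2<Long, Long> c : counts) {
                double diff = c.f1 - avg;
                sqDiffSum += diff * diff;
            }
            double stdDev = Math.sqrt(sqDiffSum / n);

            // stamp the result with the window end time so it can later be joined with the counts
            out.collect(new Tuple3<>(window.getEnd(), avg, stdDev));
        }
    }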
Thanks Fabian. That helps.
I have one more question. In the second step, since I am using the window function's apply, will the calculated average be a running average, or will it be computed only at the end of the 6-hour window?
The average would be computed over the aggregated 15-minute count values. Every 15 minutes, the sliding window would emit the average of all records that arrived within the last 6 hours. Since the preceding 15-minute tumbling window emits one record every 15 minutes, this would be the average over 24 records.
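To illustrate the emission pattern (example times only, based on the sketch above):

    // A 6h window sliding by 15 minutes fires once per slide, each firing covering
    // the 24 most recent 15-minute counts, e.g. in event time:
    //   window [08:00, 14:00) -> one (avg, stdDev) record at 14:00
    //   window [08:15, 14:15) -> one (avg, stdDev) record at 14:15
    //   window [08:30, 14:30) -> one (avg, stdDev) record at 14:30
    // so the average advances once per 15-minute slide, not continuously per event.
    counts.windowAll(SlidingEventTimeWindows.of(Time.hours(6), Time.minutes(15)))
          .apply(new AvgStdDevWindow());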
Thanks Fabian. Your suggestion helped, but I am stuck at the 3rd step.
1. I didn't completely understand step 3. What should the process function look like, and why does it need to be stateful? Can you please provide more details on this?
2. In the stateful function, do we need a ValueState? Knowing which details we need to store would help me implement the use case.
3. Moreover, I see that RichProcessFunction is deprecated. What can we use in place of RichProcessFunction?
Hi Raj, you have to combine two streams. The first stream has the running avg + std-dev over the last 6 hours, the second stream has the 15-minute counts. Both streams emit one record every 15 minutes. What you want to do is to join the two records of both streams with the same timestamp. You do that by connecting the streams and implementing a CoProcessFunction. Btw, CoProcessFunction implements RichFunction.

The function must be stateful because you need to collect the first record that is received from either input and wait for the record from the other input in order to be able to join them, i.e. compare the count against the avg + std-dev. So whichever record you receive first, you put into state and wait for the other record with the same timestamp to arrive. After the join, you clear the state.

Hope that helps, Fabian
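A minimal sketch of such a CoProcessFunction, under one extra assumption not stated in the thread: both streams are keyed by the window-end timestamp before connecting, so keyed ValueState can be used and the two records that belong together meet on the same key. ThresholdJoin and the tuple layouts are the placeholder names from the earlier sketch:

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.TypeHint;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
    import org.apache.flink.util.Collector;

    // counts:     (windowEnd, count)        keyed by field 0 (windowEnd)
    // thresholds: (windowEnd, avg, stdDev)  keyed by field 0 (windowEnd)
    DataStream<String> alerts = counts.keyBy(0)
        .connect(thresholds.keyBy(0))
        .process(new ThresholdJoin());

    public class ThresholdJoin
            extends CoProcessFunction<Tuple2<Long, Long>, Tuple3<Long, Double, Double>, String> {

        // whichever side arrives first for a given timestamp is parked here
        private transient ValueState<Tuple2<Long, Long>> pendingCount;
        private transient ValueState<Tuple3<Long, Double, Double>> pendingThreshold;

        @Override
        public void open(Configuration parameters) {
            pendingCount = getRuntimeContext().getState(new ValueStateDescriptor<>(
                "pendingCount", TypeInformation.of(new TypeHint<Tuple2<Long, Long>>() {})));
            pendingThreshold = getRuntimeContext().getState(new ValueStateDescriptor<>(
                "pendingThreshold", TypeInformation.of(new TypeHint<Tuple3<Long, Double, Double>>() {})));
        }

        @Override
        public void processElement1(Tuple2<Long, Long> count, Context ctx,
                                    Collector<String> out) throws Exception {
            Tuple3<Long, Double, Double> threshold = pendingThreshold.value();
            if (threshold == null) {
                pendingCount.update(count);       // other side not here yet: park and wait
            } else {
                compare(count, threshold, out);   // both sides present: join and clean up
                pendingThreshold.clear();
            }
        }

        @Override
        public void processElement2(Tuple3<Long, Double, Double> threshold, Context ctx,
                                    Collector<String> out) throws Exception {
            Tuple2<Long, Long> count = pendingCount.value();
            if (count == null) {
                pendingThreshold.update(threshold);
            } else {
                compare(count, threshold, out);
                pendingCount.clear();
            }
        }

        private void compare(Tuple2<Long, Long> count, Tuple3<Long, Double, Double> threshold,
                             Collector<String> out) {
            if (count.f1 > threshold.f1 + threshold.f2) {
                out.collect("ALERT at " + count.f0 + ": " + count.f1
                        + " requests exceeded avg+stddev of " + (threshold.f1 + threshold.f2));
            }
        }
    }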
Thanks Fabian.
The incoming events have timestamps. Once I aggregate the first stream to get counts and calculate the mean/standard deviation in the second, should the new records' timestamps be the window start time? How do I handle this?
Use TimeWindow.getStart() or TimeWindow.getEnd() -> https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/windows.html#incremental-window-aggregation-with-reducefunction
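For example, the counting window function from the sketch above could stamp its output with the window end. The end times of the 15-minute windows and the 6-hour sliding windows line up on 15-minute boundaries while the start times do not, which makes getEnd() the more convenient join timestamp (LogEvent and CountPerWindow are the made-up names from the earlier sketch):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.functions.windowing.AllWindowFunction;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
    import org.apache.flink.util.Collector;

    public class CountPerWindow
            implements AllWindowFunction<LogEvent, Tuple2<Long, Long>, TimeWindow> {

        @Override
        public void apply(TimeWindow window, Iterable<LogEvent> events,
                          Collector<Tuple2<Long, Long>> out) {
            long count = 0;
            for (LogEvent e : events) {
                count++;
            }
            // window.getEnd() gives the timestamp that both streams can be joined on
            out.collect(new Tuple2<>(window.getEnd(), count));
        }
    }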
Thanks Fabian. Here is the final set of questions I have.
1. I see that we cannot use ValueState<T> since this is not a keyed stream. So do we need to use operator state?
2. I see that the window end time of both streams is the same, but not the start time (is this the expected behavior?). So the streams need to be joined on this timestamp.
3. When we connect the two streams and apply the CoProcessFunction, how can we perform the join? There are two separate methods, one per stream. Which stream's record do we need to store in state, and will the CoProcessFunction trigger automatically once the other stream's data arrives, or do we need to set a timer?

I am really confused about how to implement this. Any example code would be of great help. The documentation does not give details about how to perform the join. Need help.

    new CoProcessFunction<ProcessedLogData, AvgStdDev, FinalResult>() {

        @Override
        public void processElement1(ProcessedLogData logData, Context context,
                                    Collector<FinalResult> collector) throws Exception {
        }

        @Override
        public void processElement2(AvgStdDev avgStdDev, Context context,
                                    Collector<FinalResult> collector) throws Exception {
        }
    })
Hi Fabian,
Can you please answer the last set of questions I posted on the forum? Thanks.