Hi All,
Has anyone done stream processing driven by a user request? What's the recommended way of doing this? Or is this completely the wrong direction for applications running on top of Flink? Basically we need to tweak the stream processing based on parameters provided by a user, e.g. "show me the total # of application failures due to 'ABC'", where "ABC" is supplied by the user. We are thinking of starting a Flink job with "ABC" as a parameter, but this would result in a huge number of Flink jobs. Is there a better way to do this? Can we trigger the calculation on a running job? Thanks in advance. KZ
Hi KZ, This article seems to be a good example of triggering a new calculation on a running job. Maybe you can get some help from it. Best Regards, Tony Wei
Another example is King's RBEA platform [1], which was built on Flink. In a nutshell, RBEA runs a single large Flink job to which users can add queries that should be computed. Of course, the query language is restricted, because the queries must match the structure of the running job. Hope this helps, Fabian
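To make the "restricted query" idea concrete: a minimal sketch, not taken from RBEA itself. The type FailureCountQuery, its fields, and matches() are invented for illustration. The point is that users submit small query descriptions as data to the one standing job, and the query can only express shapes the job's topology already supports (here, counting failures by reason):

    import java.io.Serializable;

    // Hypothetical query type users could submit to the standing job.
    // Deliberately restricted: it can only describe computations the
    // running topology already implements.
    public class FailureCountQuery implements Serializable {
        public String queryId;  // so results can be routed back to the user
        public String reason;   // e.g. "ABC"

        public FailureCountQuery() {}

        public FailureCountQuery(String queryId, String reason) {
            this.queryId = queryId;
            this.reason = reason;
        }

        // The standing job evaluates every incoming event against each
        // active query.
        public boolean matches(String failureReason) {
            return reason.equals(failureReason);
        }
    }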
Thanks Fabian and Tony for the info. It's very helpful. Looks like the general approach is to implement a job topology containing parameterized (CoXXXMapFunction) operators. The user-defined parameters are ingested through the extra input that the CoXXXMapFunction takes (a rough sketch of this wiring follows below). Ken
Ken Zhang
----------------------------------------------------------- Smart && Creative && Open == Innovative
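To make that summary concrete, here is a minimal sketch of the pattern, assuming the DataStream API of Flink circa this thread (connect + broadcast + CoFlatMapFunction). AppEvent, its failureReason field, and the inline example sources are invented for illustration; a production job would read both streams from durable sources such as Kafka and keep the counts in Flink state so they survive failures.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
    import org.apache.flink.util.Collector;

    public class ParameterizedCountJob {

        // Hypothetical event type, invented for this sketch.
        public static class AppEvent {
            public String failureReason;
            public AppEvent() {}
            public AppEvent(String failureReason) { this.failureReason = failureReason; }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Toy sources for illustration only; note the parameter may
            // arrive after some events here. Real jobs would read both
            // streams from durable sources.
            DataStream<AppEvent> events = env.fromElements(
                    new AppEvent("ABC"), new AppEvent("XYZ"), new AppEvent("ABC"));
            DataStream<String> userParams = env.fromElements("ABC");

            events
                // broadcast() so every parallel subtask sees every user parameter
                .connect(userParams.broadcast())
                .flatMap(new CoFlatMapFunction<AppEvent, String, Tuple2<String, Long>>() {
                    // Reasons users asked about and per-subtask running counts.
                    // NOTE: plain fields are not checkpointed; a production job
                    // would keep these in Flink state for fault tolerance.
                    private final Set<String> activeReasons = new HashSet<>();
                    private final Map<String, Long> counts = new HashMap<>();

                    @Override
                    public void flatMap1(AppEvent e, Collector<Tuple2<String, Long>> out) {
                        // Data path: count only failures users have asked about.
                        if (activeReasons.contains(e.failureReason)) {
                            long c = counts.merge(e.failureReason, 1L, Long::sum);
                            out.collect(Tuple2.of(e.failureReason, c));
                        }
                    }

                    @Override
                    public void flatMap2(String reason, Collector<Tuple2<String, Long>> out) {
                        // Control path: a new user query starts counting
                        // failures for this reason on the running job.
                        activeReasons.add(reason);
                    }
                })
                .print();

            env.execute("parameterized failure counts");
        }
    }

Because the parameter stream is broadcast, every parallel subtask learns about each new query; flatMap1 handles data events and flatMap2 handles user parameters, so a new "ABC" request takes effect on the already-running job without submitting a new Flink job.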