Custom service configs in flink

Custom service configs in flink

jaswin.shah@outlook.com
I have multiple Flink jobs and custom business configs which are shared between the jobs. Is it possible for one Flink job to load the configs in memory and have all the Flink jobs share the same configs? Basically, I am thinking of fetching the configs in one Flink job via a one-time REST call and sharing them with all the jobs, if possible. With this, I also want the ability to update the configs dynamically via Kafka.
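For illustration, a minimal sketch of the in-memory config holder the question describes (all names here are hypothetical, not from any Flink API). Note that a static holder like this is only shared within a single JVM; separate Flink jobs running in separate TaskManager JVMs would each hold their own copy, which is the memory concern discussed below.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory holder for shared business configs.
// In a real setup the initial snapshot would come from a one-time
// REST call, and updates would be applied by a Kafka consumer thread.
public class ConfigHolder {
    private static final Map<String, String> CONFIGS = new ConcurrentHashMap<>();

    // Load the initial snapshot (e.g. the result of the REST call).
    public static void loadInitial(Map<String, String> snapshot) {
        CONFIGS.putAll(snapshot);
    }

    // Apply a single update, e.g. one record consumed from Kafka.
    public static void applyUpdate(String key, String value) {
        CONFIGS.put(key, value);
    }

    public static String get(String key) {
        return CONFIGS.get(key);
    }
}
```

Within a single job, Flink's documented broadcast state pattern is the usual way to push such dynamic config updates from a Kafka stream to all parallel operator instances.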

Re: Custom service configs in flink

rmetzger0
(oops, I accidentally responded to you personally only. The emails are supposed to go onto the list. I added the thread back to the list)

But is the config so big that memory usage is a concern here?

Also note that the code that runs in main() just generates a streaming execution plan, which is then sent to the server. If the config is really large, you might face timeouts during job submission.


On Fri, Jul 3, 2020 at 5:17 PM Jaswin Shah <[hidden email]> wrote:
It just results in more memory usage, since the configs are fetched by each Flink job and each job stores its own copy in memory.

From: Robert Metzger <[hidden email]>
Sent: 03 July 2020 20:31
To: Jaswin Shah <[hidden email]>
Subject: Re: Custom service configs in flink
 
Hi Jaswin,

Usually, you have one Flink job per main() method (at least in our examples). However, you can use one (Stream)ExecutionEnvironment to submit multiple streaming jobs.

Basically, the structure of your multi-job class would be

public static void main(String[] args) {
    // 1. do REST call(s) to get business configs
    // 2. create execution environment
    // 3. assemble job topology
    // 4. submit topology with env.execute(). This special call clears the
    //    existing transformations, so that you can create another job in
    //    the same environment.
    env.execute(env.getStreamGraph("my job", true));
    // 5. assemble next job topology
    // ... repeat ...
}

However, I'm not sure this approach isn't making things more complicated than necessary. Maybe you should just extract the REST-call logic into a separate class that you call in each of your jobs?
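As a sketch of that extraction (class and method names are hypothetical), the REST call can live behind a small loader that each job's main() instantiates; the supplier stands in for the actual HTTP request, and the result is fetched once and cached for the lifetime of the loader:

```java
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical helper that encapsulates the config-fetching logic so
// each job's main() can reuse it instead of duplicating the REST code.
public class BusinessConfigLoader {
    private final Supplier<Map<String, String>> fetcher;
    private Map<String, String> cached;

    public BusinessConfigLoader(Supplier<Map<String, String>> fetcher) {
        this.fetcher = fetcher;
    }

    // Fetch on first access, then return the cached snapshot.
    public synchronized Map<String, String> getConfigs() {
        if (cached == null) {
            cached = fetcher.get(); // e.g. an HTTP GET against the config service
        }
        return cached;
    }
}
```

Because the fetch happens in main() before the topology is submitted, the resulting map can be passed into operators as ordinary (serializable) constructor arguments.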

Best,
Robert
