changing flink/kafka configs for stateful flink streaming applications

Abrar Sheikh
Hey all,

One of the known limitations of Spark stateful streaming applications is that Spark configurations and Kafka configurations cannot be altered after the first run of the application. This is explained well in https://www.linkedin.com/pulse/upgrading-running-spark-streaming-application-code-changes-prakash/

Do stateful Flink applications share this limitation with Spark?

Thanks,

--
Abrar Sheikh

Re: changing flink/kafka configs for stateful flink streaming applications

Fabian Hueske
Hi,

It depends.

There are many things that can be changed. A savepoint in Flink contains only the state of the application, not the configuration of the system, so an application can be migrated to another cluster that runs with a different configuration.
There are some exceptions, such as the configuration of the default state backend (in case it is not configured in the application itself) and the checkpointing settings.
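
To make the migration point concrete: connector settings such as the Kafka bootstrap servers can be read from job parameters at submission time instead of being hard-coded, so a restart from a savepoint can point at a different Kafka cluster. A minimal sketch, assuming the universal Kafka connector that ships with Flink 1.9; the class name, parameter names, and topic are made up for illustration:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class ConfigurableKafkaJob {
    public static void main(String[] args) throws Exception {
        // Connection settings come from the command line, not from state,
        // so they can differ between runs restored from the same savepoint.
        ParameterTool params = ParameterTool.fromArgs(args);

        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", params.get("bootstrap-servers"));
        kafkaProps.setProperty("group.id", params.get("group-id", "example-group"));

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.addSource(new FlinkKafkaConsumer<>(
                params.get("topic"), new SimpleStringSchema(), kafkaProps))
           // Stable uid so the source's state maps correctly after a restore.
           .uid("kafka-source")
           .print();

        env.execute("configurable-kafka-job");
    }
}

One caveat: the consumer's partition offsets are part of the savepoint, so on restore Flink continues from the saved offsets rather than from the offsets committed for the consumer group.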

If it is about the configuration of the application itself (and not the system), you can do a lot of things in Flink.
You can even implement the application in a way that it reconfigures itself while it is running.
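
A common way to do this is to broadcast a control stream of configuration updates to all parallel tasks and keep the current values in broadcast state. A minimal sketch, with hypothetical socket sources and a made-up "key=value" update format:

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class SelfReconfiguringJob {

    // Describes the broadcast state that holds the current configuration.
    static final MapStateDescriptor<String, String> CONFIG =
        new MapStateDescriptor<>("config", Types.STRING, Types.STRING);

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.socketTextStream("localhost", 9000);   // data
        DataStream<String> updates = env.socketTextStream("localhost", 9001);  // control

        BroadcastStream<String> configBroadcast = updates.broadcast(CONFIG);

        events.connect(configBroadcast)
              .process(new BroadcastProcessFunction<String, String, String>() {

                  @Override
                  public void processElement(String value, ReadOnlyContext ctx,
                                             Collector<String> out) throws Exception {
                      // Read the most recently broadcast configuration value.
                      String prefix = ctx.getBroadcastState(CONFIG).get("prefix");
                      out.collect((prefix == null ? "" : prefix) + value);
                  }

                  @Override
                  public void processBroadcastElement(String update, Context ctx,
                                                      Collector<String> out) throws Exception {
                      // Configuration updates arrive as "key=value" strings
                      // and take effect for all subsequent elements.
                      String[] kv = update.split("=", 2);
                      if (kv.length == 2) {
                          ctx.getBroadcastState(CONFIG).put(kv[0], kv[1]);
                      }
                  }
              })
              .print();

        env.execute("self-reconfiguring-job");
    }
}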

Since the last release (Flink 1.9), Flink features the State Processor API, which allows you to create or modify savepoints with a batch program.
This can be used to adjust or bootstrap savepoints.
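
For example, a batch program can bootstrap keyed state into a brand-new savepoint that a streaming job then starts from. A rough sketch against the Flink 1.9 state-processor-api module; the operator uid, paths, and the count state are made up for illustration:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

public class BootstrapSavepoint {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Seed data for the state: (key, count) pairs.
        DataSet<Tuple2<String, Long>> counts =
            env.fromElements(Tuple2.of("a", 1L), Tuple2.of("b", 2L));

        BootstrapTransformation<Tuple2<String, Long>> transformation =
            OperatorTransformation
                .bootstrapWith(counts)
                .keyBy(new KeySelector<Tuple2<String, Long>, String>() {
                    @Override
                    public String getKey(Tuple2<String, Long> t) { return t.f0; }
                })
                .transform(new CountBootstrapper());

        Savepoint
            .create(new MemoryStateBackend(), 128)        // max parallelism of the target job
            .withOperator("counter-uid", transformation)  // must match uid() in the streaming job
            .write("file:///tmp/bootstrapped-savepoint");

        env.execute("bootstrap-savepoint");
    }

    static class CountBootstrapper
            extends KeyedStateBootstrapFunction<String, Tuple2<String, Long>> {

        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
        }

        @Override
        public void processElement(Tuple2<String, Long> value, Context ctx) throws Exception {
            count.update(value.f1);
        }
    }
}

The uid passed to withOperator() has to match the uid() set on the corresponding operator in the streaming job; otherwise Flink cannot map the bootstrapped state to that operator on restore.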

Best, Fabian



Re: changing flink/kafka configs for stateful flink streaming applications

Abrar Sheikh
Thank you for the clarification. 



--
Abrar Sheikh