Upgrading Flink

6 messages
Upgrading Flink

Stephen Connolly
Quick questions on upgrading Flink.

All our jobs are compiled against Flink 1.8.x.

We are planning to upgrade to 1.10.x.

1. Is the recommended path to upgrade one minor version at a time, i.e. 1.8.x -> 1.9.x and then 1.9.x -> 1.10.x as a second step, or is the big jump supported, i.e. 1.8.x -> 1.10.x in one change?

2. Do we need to recompile the jobs against the newer Flink version before upgrading? Coordinating multiple teams can be tricky, so, short of spinning up a second Flink cluster, our continuous deployment infrastructure will keep deploying topologies compiled against 1.8.x for an hour or two after we have upgraded the cluster.
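For reference, the usual upgrade flow with the standard `flink` CLI is to take a savepoint on the old cluster, upgrade, and then resubmit from that savepoint. A sketch (the job ID, savepoint directory, and jar name are placeholders):

```shell
# 1. Trigger a savepoint and cancel the job on the 1.8.x cluster
#    (in 1.8 this is done via `cancel -s`; newer versions also offer `flink stop`).
flink cancel -s hdfs:///savepoints <jobId>

# 2. Upgrade the cluster to 1.10.x, then resubmit the job from the savepoint.
flink run -s hdfs:///savepoints/savepoint-<id> my-job.jar
```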
Re: Upgrading Flink

rmetzger0
Hey Stephen,


2. Yes, you need to recompile (but ideally you don't need to change anything).



On Mon, Apr 6, 2020 at 10:19 AM Stephen Connolly <[hidden email]> wrote:
Re: Upgrading Flink

Chesnay Schepler
@Robert Why would he have to recompile the jobs? Shouldn't he be fine so long as he isn't using any API for which we broke binary compatibility?

On 09/04/2020 09:55, Robert Metzger wrote:
Re: Upgrading Flink

Sivaprasanna
Ideally, if the underlying cluster where the job is deployed changes (1.8.x to 1.10.x), it is better to update your project dependencies to the new version (1.10.x) as well, and hence to recompile the jobs.
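For a Maven build, the dependency bump is usually a one-line change, assuming the Flink version is managed through a `flink.version` property as in the Flink quickstart archetypes (artifact names and Scala suffix below are illustrative):

```xml
<properties>
  <!-- Bump from 1.8.x to the target cluster version and rebuild. -->
  <flink.version>1.10.0</flink.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>${flink.version}</version>
    <!-- provided: the cluster supplies the runtime, so it is not bundled. -->
    <scope>provided</scope>
  </dependency>
</dependencies>
```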

On Tue, Apr 14, 2020 at 3:29 PM Chesnay Schepler <[hidden email]> wrote:
Re: Upgrading Flink

David Anderson-2
@Chesnay Flink doesn't seem to guarantee client-jobmanager compatibility, even for bug-fix releases. For example, some jobs compiled with 1.9.0 don't work with a cluster running 1.9.2. See https://github.com/ververica/sql-training/issues/8#issuecomment-590966210 for an example of a case where recompiling was necessary.

Does the Flink project have an explicit policy on when recompiling can be required?


On Tue, Apr 14, 2020 at 2:38 PM Sivaprasanna <[hidden email]> wrote:
Re: Upgrading Flink

Chesnay Schepler
The only guarantee that Flink provides is that any jar working against the Public APIs will continue to work without recompilation.

There are no compatibility guarantees between clients and servers of different versions.
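One way to check up front whether a job jar touches anything outside the compatible surface is to diff the old and new Flink API jars with a bytecode comparison tool such as japicmp (this is just one option, not something the thread prescribes; jar names, the japicmp version, and exact flags are illustrative and may differ by release):

```shell
# Report binary-incompatible changes between the two API jars; if none of
# the reported classes/methods appear in your job, the jar should keep working.
java -jar japicmp-0.14.3-jar-with-dependencies.jar \
  --old flink-streaming-java_2.11-1.8.3.jar \
  --new flink-streaming-java_2.11-1.10.0.jar \
  --only-incompatible
```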

On 14/04/2020 20:02, David Anderson wrote: