PartitionNotFoundException when restarting from checkpoint

7 messages

PartitionNotFoundException when restarting from checkpoint

swiesman

Hi,

 

We are running Flink 1.4.0 deployed via YARN on EC2 instances, with RocksDB and incremental checkpointing. Last night a job failed and became stuck in a restart cycle with a PartitionNotFoundException. We tried restoring the checkpoint on a fresh Flink session with no luck. Looking through the logs, we can see that the specified partition is never registered with the ResultPartitionManager.
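
For reference, our setup corresponds roughly to the following sketch (not our actual code; the checkpoint path is illustrative):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointSetup {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // the second constructor argument enables incremental checkpoints
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
            env.enableCheckpointing(60_000); // checkpoint every minute
            // ... the rest of the job graph ...
            env.execute("example-job");
        }
    }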

 

My questions are:

1) Are partitions part of state, or are they ephemeral to the job?

2) If they are not part of state, where would the task managers be getting that partition id to begin with?

3) Right now we are logging everything under org.apache.flink.runtime.io.network (see the snippet below); is there anywhere else to look?
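
For reference, we enable that logging with a line like the following in our log4j.properties (standard log4j 1.x logger syntax; TRACE is very verbose):

    # verbose logging for Flink's network stack
    log4j.logger.org.apache.flink.runtime.io.network=TRACE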

 

Thank you,   

 

Seth Wiesman | Software Engineer | 4 World Trade Center, 46th Floor, New York, NY 10007 | swiesman [hidden email]

 


Re: PartitionNotFoundException when restarting from checkpoint

swiesman

It turns out the issue was due to our ZooKeeper installation being in a bad state. I am not clear enough on Flink's networking internals to explain how this manifested as a PartitionNotFoundException, but hopefully this can serve as a starting point for others who run into the same issue.

 

Seth Wiesman | Software Engineer | 4 World Trade Center, 46th Floor, New York, NY 10007 | swiesman [hidden email]

 

 


Re: PartitionNotFoundException when restarting from checkpoint

Fabian Hueske-2
Hi Seth,

Thanks for sharing how you resolved the problem!

The problem might have been related to Flink's key groups, which are used to assign key ranges to tasks.
I'm not sure why this would be related to ZooKeeper being in a bad state. Maybe Stefan (in CC) has an idea about the cause.
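
Roughly, that assignment works as follows (a simplified sketch of what org.apache.flink.runtime.state.KeyGroupRangeAssignment does, not the exact source):

    import org.apache.flink.util.MathUtils;

    public class KeyGroupSketch {
        // a key is hashed into one of maxParallelism key groups
        static int assignToKeyGroup(Object key, int maxParallelism) {
            return MathUtils.murmurHash(key.hashCode()) % maxParallelism;
        }

        // each parallel subtask owns a contiguous range of key groups
        static int operatorIndexForKeyGroup(int maxParallelism, int parallelism, int keyGroup) {
            return keyGroup * parallelism / maxParallelism;
        }
    }

If the maximum parallelism differs between the savepoint and the restored job, these ranges shift, which is why key groups are a usual suspect for restore problems.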

Also, it would be helpful if you could share the stack trace of the exception (in case you still have it).

Best, Fabian


Re: PartitionNotFoundException when restarting from checkpoint

swiesman

Unfortunately the stack trace was swallowed by the Java timer in the LocalInputChannel [1]; the real error is forwarded out to the main thread, but I couldn't figure out how to see that in my logs.

 

However, I believe I am close to having a reproducible example: run a Flink 1.4 DataStream job sinking to Kafka 0.11 and cancel it with a savepoint. If you then shut down the Kafka daemon on a single broker but keep the REST proxy up, you should see this error when you resume. A sketch of such a job follows the link below.

 

[1] https://github.com/apache/flink/blob/fa024726bb801fc71cec5cc303cac1d4a03f555e/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/LocalInputChannel.java#L151
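
A minimal job along those lines might look like this (a sketch only; the broker list and topic name are illustrative, and it assumes the flink-connector-kafka-0.11 dependency):

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;

    public class Kafka011Repro {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(10_000); // checkpoint every 10 seconds

            Properties props = new Properties();
            // illustrative broker list; point at the cluster whose broker you will stop
            props.setProperty("bootstrap.servers", "broker-1:9092,broker-2:9092");

            // any long-running source will do; a socket source keeps the job alive
            DataStream<String> stream = env.socketTextStream("localhost", 9999);

            stream.addSink(new FlinkKafkaProducer011<>("test-topic", new SimpleStringSchema(), props));

            env.execute("kafka-0.11-partition-repro");
        }
    }

Cancel with a savepoint (flink cancel -s <targetDirectory> <jobId>), stop the Kafka daemon on one broker, then resume with flink run -s <savepointPath>; the exception should appear during restore.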

 

Seth Wiesman | Software Engineer | 4 World Trade Center, 46th Floor, New York, NY 10007 | swiesman [hidden email]

 

 


Re: PartitionNotFoundException when restarting from checkpoint

swiesman

Hit send too soon.

 

Having spent some more time with this, it appears that ZooKeeper, being in a bad state, was unable to track a downed Kafka broker. This investigation has been very much trial and error up to this point, so please let me know if I seem way off base.

 

Seth Wiesman | Software Engineer | 4 World Trade Center, 46th Floor, New York, NY 10007 | swiesman [hidden email]

 

 


Re: PartitionNotFoundException when restarting from checkpoint

Stephan Ewen
Just to double check: We are talking about a Flink PartitionNotFoundException, I assume?

The split brain situation is a good hint - the minority partition should stop its work, though, and the TaskManager should cleanly re-join the majority once the split brain is resolved.

This sounds almost like the TaskManagers that rejoined were still assuming a stale version of the ExecutionGraph (the ExecutionGraph should get new IDs when the lost nodes recover into the majority partition).

Logs would definitely be helpful to debug this further...



Re: PartitionNotFoundException when restarting from checkpoint

swiesman

Yes, that exception. Logs are attached.

 

Seth Wiesman | Software Engineer | 4 World Trade Center, 46th Floor, New York, NY 10007 | swiesman [hidden email]

 

 


Attachment: partition.log.gz (3M)