Re: Checkpointing with RocksDB as statebackend


Re: Checkpointing with RocksDB as statebackend

swiesman

Just wanted to throw in my two cents.

 

I’ve been running pipelines with a similar state size using RocksDB, externalizing checkpoints to S3 and bucketing output to S3. I was getting stalls like this and ended up tracing the problem to S3 and the bucketing sink. The solution was twofold:

 

1)       I forked hadoop-aws and have it treat Flink as the source of truth. EMR uses a DynamoDB table to determine whether S3 is inconsistent; instead, if Flink believes a file exists on S3 and we don’t see it, I trust that Flink is in a consistent state and S3 is not. In that case, the affected operations perform a back-off and retry up to a certain number of times.

 

2)       The bucketing sink performs multiple renames over the lifetime of a file: once when a checkpoint starts and again on notification after the checkpoint completes. Because of S3’s consistency guarantees, the second rename can never be assured to work and will eventually fail, either during or after a checkpoint. Since there is no upper bound on how long it takes a file on S3 to become consistent, retries cannot solve this particular problem; a single rename could take many minutes and stall the entire pipeline. The only viable solution I could find was to write a custom sink that understands S3: each writer writes its file locally and then copies it to S3 on checkpoint. By interacting with S3 only once per file, it sidesteps the consistency issues altogether (see the sketch below).
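Roughly, the idea looks like the following simplified sketch. (Illustrative only, not my actual implementation: the class name, bucket/key layout, and use of the AWS SDK's AmazonS3Client are placeholders, and it omits the pending-file bookkeeping and failure recovery a production sink needs.)

import java.io.File;
import java.io.FileWriter;

import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

/** Buffers records in a local file and uploads that file to S3 once per checkpoint. */
public class LocalBufferingS3Sink extends RichSinkFunction<String>
        implements CheckpointedFunction {

    private final String bucket;               // target bucket (placeholder)
    private transient AmazonS3 s3;
    private transient File localFile;
    private transient FileWriter writer;

    public LocalBufferingS3Sink(String bucket) {
        this.bucket = bucket;
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        s3 = new AmazonS3Client();              // credentials come from the default provider chain
        startNewLocalFile();
    }

    @Override
    public void invoke(String value) throws Exception {
        writer.write(value);                    // local write only; no S3 call per record
        writer.write('\n');
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        writer.flush();
        writer.close();
        // The single S3 interaction for this file: upload the finished part, then start a new one.
        String key = "output/part-" + getRuntimeContext().getIndexOfThisSubtask()
                + "-" + context.getCheckpointId();
        s3.putObject(bucket, key, localFile);
        localFile.delete();
        startNewLocalFile();
    }

    private void startNewLocalFile() throws Exception {
        localFile = File.createTempFile("bucket-part", ".tmp");
        writer = new FileWriter(localFile);
    }
}

The point is simply that each file touches S3 exactly once, so the eventual consistency of renames never comes into play.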

 

Hope this helps,

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Saturday, February 25, 2017 at 10:50 AM
To: "[hidden email]" <[hidden email]>
Subject: Re: Checkpointing with RocksDB as statebackend

 

Hi Stephan,

Just to avoid confusion here: I am using the S3 sink for writing the data and HDFS for storing checkpoints.

There are two core nodes (HDFS) and two task nodes on EMR.


I replaced the S3 sink with HDFS for writing data in my last test.

Let's say the checkpoint interval is 5 minutes, and within 5 minutes of the run the state size grows to 30GB. At checkpoint time, the 30GB of state maintained in RocksDB has to be copied to HDFS, right? Is this what is causing the pipeline to stall?


Regards,

Vinay Patil

 

On Sat, Feb 25, 2017 at 12:22 AM, Vinay Patil <[hidden email]> wrote:

Hi Stephan,

To verify whether S3 is making the pipeline stall, I replaced the S3 sink with HDFS and kept the minimum pause between checkpoints at 5 minutes; I still see the same issue of checkpoints failing.

If I keep the pause time at 20 seconds, all checkpoints complete; however, there is a hit to overall throughput.
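For reference, the configuration being discussed looks roughly like this (a simplified sketch; the HDFS path, intervals, and job name are placeholders):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Working state lives in RocksDB; checkpoint snapshots go to HDFS, not S3.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));

        // Checkpoint every 5 minutes ...
        env.enableCheckpointing(5 * 60 * 1000L);
        // ... and enforce a minimum pause so the job can make progress between checkpoints.
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5 * 60 * 1000L);

        // Placeholder pipeline; the real job definition goes here.
        env.fromElements("a", "b", "c").print();
        env.execute("checkpointing-sketch");
    }
}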




Regards,

Vinay Patil

 

On Fri, Feb 24, 2017 at 10:09 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Flink's state backends currently do a good number of "make sure this exists" operations on the file system. Through Hadoop's S3 filesystem, that translates to S3 bucket list operations, for which there is a limit on how many operations may happen per time interval. After that, S3 blocks.

 

It seems that operations that are totally cheap on HDFS are hellishly expensive (and limited) on S3. It may be that you are affected by that.

 

We are gradually trying to improve the behavior there and be more S3 aware.

 

Both 1.3-SNAPSHOT and 1.2-SNAPSHOT already contain improvements there.

 

Best,

Stephan

 

 

On Fri, Feb 24, 2017 at 4:42 PM, vinay patil <[hidden email]> wrote:

Hi Stephan,

So do you mean that S3 is causing the stall? As I mentioned in my previous mail, I could not see any progress for 16 minutes because checkpoints kept failing continuously.

 

On Feb 24, 2017 8:30 PM, "Stephan Ewen [via Apache Flink User Mailing List archive.]" <[hidden email]> wrote:

Hi Vinay!

 

True, the operator state (like Kafka) is currently not asynchronously checkpointed.

 

While it is rather small state, we have seen before that it can cause trouble on S3, because S3 frequently stalls uploads of even a few kilobytes of data due to its throttling policies.

 

That would be a super important fix to add!

 

Best,

Stephan

 

 

On Fri, Feb 24, 2017 at 2:58 PM, vinay patil <[hidden email]> wrote:

Hi,

I have attached a snapshot for reference:
As you can see, all 3 checkpoints failed; for checkpoint IDs 2 and 3 it is stuck at the Kafka source after 50%.
(The data sent so far by Kafka source 1 is 65GB, and by source 2 15GB.)

Within 10 minutes, 15M records were processed; for the next 16 minutes the pipeline was stuck, and I don't see any progress beyond 15M because checkpoints keep failing.

<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n11882/Checkpointing_Failed.png>





Re: Checkpointing with RocksDB as statebackend

Stephan Ewen
Hi Seth!

Wow, that is an awesome approach.

We have actually seen these issues as well and we are looking to eventually implement our own S3 file system (and circumvent Hadoop's S3 connector that Flink currently relies on): https://issues.apache.org/jira/browse/FLINK-5706

Do you think your patch would be a good starting point for that and would you be willing to share it?

The Amazon AWS SDK for Java is Apache 2 licensed, so that is possible to fork officially, if necessary...

Greetings,
Stephan





Re: Checkpointing with RocksDB as statebackend

Vinay Patil
In reply to this post by swiesman
Hi Seth,

Thank you for your suggestion.

But if the issue is only related to S3, then why does this happen when I replace the S3 sink with HDFS as well? (For checkpointing I am using HDFS only.)

Stephan,
Another issue I see: when I set env.setBufferTimeout(-1) and keep the checkpoint interval at 10 minutes, nothing gets written to the sink (tried with S3 as well as HDFS); at the very least I was expecting pending files here.
This issue gets worse when checkpointing is disabled, as nothing is written at all.



Regards,
Vinay Patil




Re: Checkpointing with RocksDB as statebackend

swiesman
In reply to this post by Vinay Patil

Vinay,

 

The bucketing sink performs rename operations during the checkpoint, and if it tries to rename a file that is not yet consistent, that causes a FileNotFound exception, which fails the checkpoint.

 

Stephan,

 

Currently my aws fork contains some very specific assumptions that will in general only hold for my pipeline. This is because there were still some open questions I had about how to solve the consistency issues in the general case. I will comment on the Jira issue with more specifics.

 

Seth Wiesman

 



Re: Checkpointing with RocksDB as statebackend

Stephan Ewen
@vinay  Can you try not setting the buffer timeout at all? I am actually not sure what the effect of setting it to a negative value would be; that could be a cause of problems...
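For illustration, the two alternatives look like this (a simplified sketch; the 100 ms value is simply Flink's documented default, shown explicitly as an example):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BufferTimeoutSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Option 1: do not call setBufferTimeout(...) at all and keep the default (100 ms),
        // so partially filled network buffers are still flushed periodically.

        // Option 2: set an explicit, non-negative timeout instead of -1.
        // With -1, buffers are flushed only when full, which can hold records back
        // indefinitely on low-volume streams.
        env.setBufferTimeout(100);

        env.fromElements("a", "b", "c").print();
        env.execute("buffer-timeout-sketch");
    }
}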



Re: Checkpointing with RocksDB as statebackend

Stephan Ewen
Hi Vinay!

We just discovered a bug in RocksDB. The bug affects windows without reduce() or fold(), windows with evictors, and ListState.

A certain access pattern in RocksDB becomes so slow beyond a certain size per key that it basically brings down the streaming program and the snapshots.
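For example, a window that pre-aggregates with reduce() keeps a single running value per key and window, whereas a plain window function (or a window with an evictor) buffers every element in ListState and is exposed to this size-per-key problem. A made-up illustration:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class PreAggregatedWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Long>> events =
                env.fromElements(Tuple2.of("key", 1L), Tuple2.of("key", 2L));

        // reduce() folds each incoming element into one running value per key and window,
        // so the state per key stays small no matter how many elements arrive.
        events.keyBy(0)
              .timeWindow(Time.minutes(10))
              .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1))
              .print();

        env.execute("pre-aggregated-window-sketch");
    }
}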

We are reaching out to the RocksDB folks and looking for workarounds in Flink.

Greetings,
Stephan



Re: Checkpointing with RocksDB as statebackend

vishnuviswanath
Hi Stephan,

Is there a ticket number/link to track this? My job has all the conditions you mentioned.

Thanks,
Vishnu

On Tue, Mar 14, 2017 at 7:13 AM, Stephan Ewen <[hidden email]> wrote:
Hi Vinay!

We just discovered a bug in RocksDB. The bug affects windows without reduce() or fold(), windows with evictors, and ListState.

A certain access pattern in RocksDB starts being so slow after a certain size-per-key that it basically brings down the streaming program and the snapshots.

We are reaching out to the RocksDB folks and looking for workarounds in Flink.

Greetings,
Stephan


On Wed, Mar 1, 2017 at 12:10 PM, Stephan Ewen <[hidden email]> wrote:
@vinay  Can you try to not set the buffer timeout at all? I am actually not sure what would be the effect of setting it to a negative value, that can be a cause of problems...


On Mon, Feb 27, 2017 at 7:44 PM, Seth Wiesman <[hidden email]> wrote:

Vinay,

 

The bucketing sink performs rename operations during the checkpoint and if it tries to rename a file that is not yet consistent that would cause a FileNotFound exception which would fail the checkpoint.

 

Stephan,

 

Currently my aws fork contains some very specific assumptions about the pipeline that will in general only hold for my pipeline. This is because there were still some open questions that  I had about how to solve consistency issues in the general case. I will comment on the Jira issue with more specific.

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Monday, February 27, 2017 at 1:05 PM
To: "[hidden email]" <[hidden email]>


Subject: Re: Checkpointing with RocksDB as statebackend

 

Hi Seth,

Thank you for your suggestion.

But if the issue is only related to S3, then why does this happen when I replace the S3 sink  to HDFS as well (for checkpointing I am using HDFS only )

Stephan,

Another issue I see is when I set env.setBufferTimeout(-1) , and keep the checkpoint interval to 10minutes, I have observed that nothing gets written to sink (tried with S3 as well as HDFS), atleast I was expecting pending files here.

This issue gets worst when checkpointing is disabled  as nothing is written.

 


Regards,

Vinay Patil

 

On Mon, Feb 27, 2017 at 10:55 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Hi Seth!

 

Wow, that is an awesome approach.

 

We have actually seen these issues as well and we are looking to eventually implement our own S3 file system (and circumvent Hadoop's S3 connector that Flink currently relies on): https://issues.apache.org/jira/browse/FLINK-5706

 

Do you think your patch would be a good starting point for that and would you be willing to share it?

 

The Amazon AWS SDK for Java is Apache 2 licensed, so that is possible to fork officially, if necessary...

 

Greetings,

Stephan

 

 

 

On Mon, Feb 27, 2017 at 5:15 PM, Seth Wiesman <[hidden email]> wrote:

Just wanted to throw in my 2cts.  

 

I’ve been running pipelines with similar state size using rocksdb which externalize to S3 and bucket to S3. I was getting stalls like this and ended up tracing the problem to S3 and the bucketing sink. The solution was two fold:

 

1)       I forked hadoop-aws and have it treat flink as a source of truth. Emr uses a dynamodb table to determine if S3 is inconsistent. Instead I say that if flink believes that a file exists on S3 and we don’t see it then I am going to trust that flink is in a consistent state and S3 is not. In this case, various operations will perform a back off and retry up to a certain number of times.

 

2)       The bucketing sink performs multiple renames over the lifetime of a file, occurring when a checkpoint starts and then again on notification after it completes. Due to S3’s consistency guarantees the second rename of file can never be assured to work and will eventually fail either during or after a checkpoint. Because there is no upper bound on the time it will take for a file on S3 to become consistent, retries cannot solve this specific problem as it could take upwards of many minutes to rename which would stall the entire pipeline. The only viable solution I could find was to write a custom sink which understands S3. Each writer will write file locally and then copy it to S3 on checkpoint. By only interacting with S3 once per file it can circumvent consistency issues all together.

 

Hope this helps,

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Saturday, February 25, 2017 at 10:50 AM
To: "[hidden email]" <[hidden email]>
Subject: Re: Checkpointing with RocksDB as statebackend

 

HI Stephan,

Just to avoid the confusion here, I am using S3 sink for writing the data, and using HDFS for storing checkpoints.

There are 2 core nodes (HDFS) and two task nodes on EMR


I replaced s3 sink with HDFS for writing data in my last test.

Let's say the checkpoint interval is 5 minutes, and within 5 minutes of the run the state size grows to 30GB; after checkpointing, the 30GB state that is maintained in RocksDB has to be copied to HDFS, right? Is this causing the pipeline to stall?


Regards,

Vinay Patil

 

On Sat, Feb 25, 2017 at 12:22 AM, Vinay Patil <[hidden email]> wrote:

Hi Stephan,

To verify whether S3 is making the pipeline stall, I have replaced the S3 sink with HDFS and kept the minimum pause between checkpoints at 5 minutes; still I see the same issue with checkpoints getting failed.

If I keep the pause time at 20 seconds, all checkpoints are completed; however, there is a hit in overall throughput.

 


Regards,

Vinay Patil

 

On Fri, Feb 24, 2017 at 10:09 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Flink's state backends currently do a good number of "make sure this exists" operations on the file systems. Through Hadoop's S3 filesystem, that translates to S3 bucket list operations, where there is a limit on how many operations may happen per time interval. After that, S3 blocks.

 

It seems that operations that are totally cheap on HDFS are hellishly expensive (and limited) on S3. It may be that you are affected by that.

 

We are gradually trying to improve the behavior there and be more S3 aware.

 

Both 1.3-SNAPSHOT and 1.2-SNAPSHOT already contain improvements there.

 

Best,

Stephan

 

 

On Fri, Feb 24, 2017 at 4:42 PM, vinay patil <[hidden email]> wrote:

Hi Stephan,

So do you mean that S3 is causing the stall? As I mentioned in my previous mail, I could not see any progress for 16 minutes as checkpoints were failing continuously.

 

On Feb 24, 2017 8:30 PM, "Stephan Ewen [via Apache Flink User Mailing List archive.]" <[hidden email]> wrote:

Hi Vinay!

 

True, the operator state (like Kafka) is currently not asynchronously checkpointed.

 

While it is rather small state, we have seen before that on S3 it can cause trouble, because S3 frequently stalls uploads of even data amounts as low as kilobytes due to its throttling policies.

 

That would be a super important fix to add!

 

Best,

Stephan

 

 

On Fri, Feb 24, 2017 at 2:58 PM, vinay patil <[hidden email]> wrote:

Hi,

I have attached a snapshot for reference:
As you can see, all 3 checkpoints failed; for checkpoint IDs 2 and 3 it is stuck at the Kafka source after 50% (the data sent so far by Kafka source 1 is 65GB and by source 2 is 15GB).

Within 10 minutes 15M records were processed, and for the next 16 minutes the pipeline is stuck; I don't see any progress beyond 15M because of checkpoints failing consistently.

<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n11882/Checkpointing_Failed.png>







Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Stefan Richter
Hi Vinay,

I think the issue is tracked here: https://github.com/facebook/rocksdb/issues/1988.

Best,
Stefan

On 14.03.2017 at 15:31, Vishnu Viswanath <[hidden email]> wrote:

Hi Stephan,

Is there a ticket number/link to track this? My job has all the conditions you mentioned.

Thanks,
Vishnu

On Tue, Mar 14, 2017 at 7:13 AM, Stephan Ewen <[hidden email]> wrote:
Hi Vinay!

We just discovered a bug in RocksDB. The bug affects windows without reduce() or fold(), windows with evictors, and ListState.

A certain access pattern in RocksDB starts being so slow after a certain size-per-key that it basically brings down the streaming program and the snapshots.

We are reaching out to the RocksDB folks and looking for workarounds in Flink.

Greetings,
Stephan


On Wed, Mar 1, 2017 at 12:10 PM, Stephan Ewen <[hidden email]> wrote:
@vinay Can you try not setting the buffer timeout at all? I am actually not sure what the effect of setting it to a negative value would be; that could be a cause of problems...
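
A minimal sketch of such a setup, assuming a 10-minute checkpoint interval, a RocksDB backend checkpointing to an HDFS path (placeholder), and the buffer timeout left at its default rather than -1:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 10 minutes and force a pause between checkpoints so
        // back-to-back checkpoints cannot stall the pipeline.
        env.enableCheckpointing(10 * 60 * 1000L);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(60 * 1000L);

        // RocksDB keyed state, checkpoints written to HDFS (placeholder path).
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));

        // Do not set the buffer timeout to -1; leave it at the default (100 ms)
        // or set a small positive value so partially filled buffers still flush.
        // env.setBufferTimeout(100);

        // ... add sources, transformations, and sinks here, then call env.execute() ...
    }
}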







Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Stephan Ewen
The issue in Flink is https://issues.apache.org/jira/browse/FLINK-5756







Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Vinay Patil
Hi Stephan,

Thank you for making me aware of this.

Yes, I am using a window without a reduce function (an apply function). The discussion happening on JIRA is exactly what I am observing: consistent failure of checkpoints after some time, and the stream halts.

We want to go live next month; I am not sure how this will affect us in production, as we are going to get above 200 million records.

As a workaround, can I take a savepoint while the pipeline is running? Let's say I take a savepoint every 30 minutes, will it work?



Regards,
Vinay Patil
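
For reference, a savepoint can be triggered from the CLI while the job keeps running; the job id and target directory below are placeholders, and (as the next message explains) savepoints go through the same problematic RocksDB code path, so this does not avoid the bug:

bin/flink savepoint <jobId> hdfs:///flink/savepoints
# add -yid <yarnApplicationId> when the job runs in a YARN session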


Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Stephan Ewen
Hi Vinay!

Savepoints also call the same problematic RocksDB function, unfortunately.

We will have a fix next month. We either (1) get a patched RocksDB version or we (2) implement a different pattern for ListState in Flink.

(1) would be the better solution, so we are waiting for a response from the RocksDB folks. (2) is always possible if we cannot get a fix from RocksDB.

Stephan


On Wed, Mar 15, 2017 at 5:53 PM, vinay patil <[hidden email]> wrote:
Hi Stephan,

Thank you for making me aware of this.

Yes I am using a window without reduce function (Apply function). The discussion happening on JIRA is exactly what I am observing, consistent failure of checkpoints after some time and the stream halts.

We want to go live in next month, not sure how this will affect in production as we are going to get above 200 million data.

As a workaround can I take the savepoint while the pipeline is running ? Let's say if I take savepoint after every 30minutes, will it work ?



Regards,
Vinay Patil

On Tue, Mar 14, 2017 at 10:02 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:
The issue in Flink is https://issues.apache.org/jira/browse/FLINK-5756

On Tue, Mar 14, 2017 at 3:40 PM, Stefan Richter <[hidden email]> wrote:
Hi Vinay,

I think the issue is tracked here: https://github.com/facebook/rocksdb/issues/1988.

Best,
Stefan

Am 14.03.2017 um 15:31 schrieb Vishnu Viswanath <[hidden email]>:

Hi Stephan,

Is there a ticket number/link to track this, My job has all the conditions you mentioned.

Thanks,
Vishnu

On Tue, Mar 14, 2017 at 7:13 AM, Stephan Ewen <[hidden email]> wrote:
Hi Vinay!

We just discovered a bug in RocksDB. The bug affects windows without reduce() or fold(), windows with evictors, and ListState.

A certain access pattern in RocksDB starts being so slow after a certain size-per-key that it basically brings down the streaming program and the snapshots.

We are reaching out to the RocksDB folks and looking for workarounds in Flink.

Greetings,
Stephan


On Wed, Mar 1, 2017 at 12:10 PM, Stephan Ewen <[hidden email]> wrote:
@vinay  Can you try to not set the buffer timeout at all? I am actually not sure what would be the effect of setting it to a negative value, that can be a cause of problems...


On Mon, Feb 27, 2017 at 7:44 PM, Seth Wiesman <[hidden email]> wrote:

Vinay,

 

The bucketing sink performs rename operations during the checkpoint and if it tries to rename a file that is not yet consistent that would cause a FileNotFound exception which would fail the checkpoint.

 

Stephan,

 

Currently my aws fork contains some very specific assumptions about the pipeline that will in general only hold for my pipeline. This is because there were still some open questions that  I had about how to solve consistency issues in the general case. I will comment on the Jira issue with more specific.

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Monday, February 27, 2017 at 1:05 PM


Subject: Re: Checkpointing with RocksDB as statebackend

 

Hi Seth,

Thank you for your suggestion.

But if the issue is only related to S3, then why does this happen when I replace the S3 sink  to HDFS as well (for checkpointing I am using HDFS only )

Stephan,

Another issue I see is when I set env.setBufferTimeout(-1) , and keep the checkpoint interval to 10minutes, I have observed that nothing gets written to sink (tried with S3 as well as HDFS), atleast I was expecting pending files here.

This issue gets worst when checkpointing is disabled  as nothing is written.

 


Regards,

Vinay Patil

 

On Mon, Feb 27, 2017 at 10:55 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Hi Seth!

 

Wow, that is an awesome approach.

 

We have actually seen these issues as well and we are looking to eventually implement our own S3 file system (and circumvent Hadoop's S3 connector that Flink currently relies on): https://issues.apache.org/jira/browse/FLINK-5706

 

Do you think your patch would be a good starting point for that and would you be willing to share it?

 

The Amazon AWS SDK for Java is Apache 2 licensed, so that is possible to fork officially, if necessary...

 

Greetings,

Stephan

 

 

 

On Mon, Feb 27, 2017 at 5:15 PM, Seth Wiesman <[hidden email]> wrote:

Just wanted to throw in my 2cts.  

 

I’ve been running pipelines with similar state size using rocksdb which externalize to S3 and bucket to S3. I was getting stalls like this and ended up tracing the problem to S3 and the bucketing sink. The solution was two fold:

 

1)       I forked hadoop-aws and have it treat flink as a source of truth. Emr uses a dynamodb table to determine if S3 is inconsistent. Instead I say that if flink believes that a file exists on S3 and we don’t see it then I am going to trust that flink is in a consistent state and S3 is not. In this case, various operations will perform a back off and retry up to a certain number of times.

 

2)       The bucketing sink performs multiple renames over the lifetime of a file, occurring when a checkpoint starts and then again on notification after it completes. Due to S3’s consistency guarantees the second rename of file can never be assured to work and will eventually fail either during or after a checkpoint. Because there is no upper bound on the time it will take for a file on S3 to become consistent, retries cannot solve this specific problem as it could take upwards of many minutes to rename which would stall the entire pipeline. The only viable solution I could find was to write a custom sink which understands S3. Each writer will write file locally and then copy it to S3 on checkpoint. By only interacting with S3 once per file it can circumvent consistency issues all together.

 

Hope this helps,

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Saturday, February 25, 2017 at 10:50 AM
To: "[hidden email]" <[hidden email]>
Subject: Re: Checkpointing with RocksDB as statebackend

 

HI Stephan,

Just to avoid the confusion here, I am using S3 sink for writing the data, and using HDFS for storing checkpoints.

There are 2 core nodes (HDFS) and two task nodes on EMR


I replaced s3 sink with HDFS for writing data in my last test.

Let's say the checkpoint interval is 5 minutes, now within 5minutes of run the state size grows to 30GB ,  after checkpointing the 30GB state that is maintained in rocksDB has to be copied to HDFS, right ?  is this causing the pipeline to stall ?


Regards,

Vinay Patil

 

On Sat, Feb 25, 2017 at 12:22 AM, Vinay Patil <[hidden email]> wrote:

Hi Stephan,

To verify if S3 is making teh pipeline stall, I have replaced the S3 sink with HDFS and kept minimum pause between checkpoints to 5minutes, still I see the same issue with checkpoints getting failed.

If I keep the  pause time to 20 seconds, all checkpoints are completed , however there is a hit in overall throughput.

 


Regards,

Vinay Patil

 

On Fri, Feb 24, 2017 at 10:09 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Flink's state backends currently do a good number of "make sure this exists" operations on the file systems. Through Hadoop's S3 filesystem, that translates to S3 bucket list operations, where there is a limit in how many operation may happen per time interval. After that, S3 blocks.

 

It seems that operations that are totally cheap on HDFS are hellishly expensive (and limited) on S3. It may be that you are affected by that.

 

We are gradually trying to improve the behavior there and be more S3 aware.

 

Both 1.3-SNAPSHOT and 1.2-SNAPSHOT already contain improvements there.

 

Best,

Stephan

 

 

On Fri, Feb 24, 2017 at 4:42 PM, vinay patil <[hidden email]> wrote:

Hi Stephan,

So do you mean that S3 is causing the stall , as I have mentioned in my previous mail, I could not see any progress for 16minutes as checkpoints were getting failed continuously.

 

On Feb 24, 2017 8:30 PM, "Stephan Ewen [via Apache Flink User Mailing List archive.]" <[hidden email]> wrote:

Hi Vinay!

 

True, the operator state (like Kafka) is currently not asynchronously checkpointed.

 

While it is rather small state, we have seen before that on S3 it can cause trouble, because S3 frequently stalls uploads of even data amounts as low as kilobytes due to its throttling policies.

 

That would be a super important fix to add!

 

Best,

Stephan

 

 

On Fri, Feb 24, 2017 at 2:58 PM, vinay patil <[hidden email]> wrote:

Hi,

I have attached a snapshot for reference:
As you can see all the 3 checkpointins failed , for checkpoint ID 2 and 3 it
is stuck at the Kafka source after 50%
(The data sent till now by Kafka source 1 is 65GB and sent by source 2 is
15GB )

Within 10minutes 15M records were processed, and for the next 16minutes the
pipeline is stuck , I don't see any progress beyond 15M because of
checkpoints getting failed consistently.

<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n11882/Checkpointing_Failed.png>



--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-tp11752p11882.html

Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.

 

 


To start a new topic under Apache Flink User Mailing List archive., email [hidden email]
To unsubscribe from Apache Flink User Mailing List archive., click here.
NAML

 


View this message in context: Re: Checkpointing with RocksDB as statebackend

 

 


To start a new topic under Apache Flink User Mailing List archive., email [hidden email]
To unsubscribe from Apache Flink User Mailing List archive., click here.
NAML

 

 

 


View this message in context: Re: Checkpointing with RocksDB as statebackend
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.

 

 


To start a new topic under Apache Flink User Mailing List archive., email [hidden email]
To unsubscribe from Apache Flink User Mailing List archive., click here.
NAML

 

 


View this message in context: Re: Checkpointing with RocksDB as statebackend
Sent from the Apache Flink User Mailing List archive. mailing list archive at Nabble.com.









To start a new topic under Apache Flink User Mailing List archive., email [hidden email]
To unsubscribe from Apache Flink User Mailing List archive., click here.
NAML


Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Vinay Patil
Hi Stephan,

What can be the workaround for this?

Also, I need one confirmation: is G1 GC used by default when running the pipeline on YARN? (I see a thread from 2015 where G1 is used by default for Java 8.)



Regards,
Vinay Patil


Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Stephan Ewen
The only immediate workaround is to use windows with "reduce", "fold", or "aggregate" rather than "apply", and to not use an evictor.

The good news is that I think we have a good way of fixing this soon, making an adjustment in RocksDB.

For the YARN / G1GC question: not 100% sure about that; you can check whether it actually uses G1GC. If not, you may be able to pass this through the "env.java.opts" parameter. (cc Robert for confirmation)

Stephan
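
A minimal sketch of that workaround, with a placeholder source of (key, count) pairs; reduce() aggregates incrementally, so RocksDB keeps a single accumulated value per key and window instead of the full element list a plain apply() window needs:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ReduceWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source of (key, count) pairs; a real job would read from Kafka.
        DataStream<Tuple2<String, Long>> events =
                env.fromElements(Tuple2.of("a", 1L), Tuple2.of("b", 2L), Tuple2.of("a", 3L));

        // reduce() is applied as elements arrive, so only one accumulated value
        // per key and window is held in state.
        events.keyBy(0)
              .timeWindow(Time.minutes(5))
              .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1))
              .print();

        env.execute("reduce-window-sketch");
    }
}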




Re: Checkpointing with RocksDB as statebackend

rmetzger0
Yes, you can change the GC using the env.java.opts parameter.
We are not setting any GC on YARN.
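
For reference, a minimal sketch of how that looks in flink-conf.yaml (the G1 flag below is only an example of how the option is wired through, not a GC recommendation from this thread):

# flink-conf.yaml
env.java.opts: -XX:+UseG1GC

The value is passed verbatim to the JVMs that Flink starts, so additional JVM flags can be appended to the same line.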


Re: Checkpointing with RocksDB as statebackend

Vinay Patil
In reply to this post by Stephan Ewen
@Stephan,

I am not using an explicit Evictor in my code. I will try using the Fold function if it does not break my existing functionality :)

@Robert: Thank you for your answer. Yes, I have already tried setting G1GC this morning using env.java.opts, and it works.
Which GC is recommended for a streaming application (running on YARN - EMR)?

Regards,
Vinay Patil
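
For illustration, a minimal sketch of what the switch to fold() can look like (the record type, key and count window below are placeholders, not details of this job). The FoldFunction is applied element by element, so only the accumulator lives in RocksDB rather than the whole window contents:

import org.apache.flink.api.common.functions.FoldFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FoldWindowSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(
                Tuple2.of("user-1", 1L), Tuple2.of("user-1", 2L), Tuple2.of("user-2", 3L))
            .keyBy(0)
            .countWindow(2) // placeholder window; a time window assigner works the same way
            // the accumulator is updated per element, so RocksDB never has to materialize
            // the full window contents as it would for apply()
            .fold(Tuple2.of("", 0L), new FoldFunction<Tuple2<String, Long>, Tuple2<String, Long>>() {
                @Override
                public Tuple2<String, Long> fold(Tuple2<String, Long> acc, Tuple2<String, Long> value) {
                    return Tuple2.of(value.f0, acc.f1 + value.f1);
                }
            })
            .print();

        env.execute("incremental fold sketch");
    }
}

One difference from reduce(): the fold() accumulator type does not have to match the input type, which can make it easier to keep emitting the same result shape that the current apply() produces.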


Re: Checkpointing with RocksDB as statebackend

Vinay Patil
In reply to this post by Stephan Ewen
Hi Stephan,

Is the performance-related RocksDB change going to be part of Flink 1.2.1?

Regards,
Vinay Patil

On Thu, Mar 16, 2017 at 6:13 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:
The only immediate workaround is to use windows with "reduce" or "fold" or "aggregate" and not "apply". And to not use an evictor.

The good news is that I think we have a good way of fixing this soon, making an adjustment in RocksDB.

For the Yarn / g1gc question: Not 100% sure about that - you can check if it used g1gc. If not, you may be able to pass this through the "env.java.opts" parameter. (cc robert for confirmation)

Stephan



On Thu, Mar 16, 2017 at 8:31 AM, vinay patil <[hidden email]> wrote:
Hi Stephan,

What can be the workaround for this ?

Also need one confirmation : Is G1 GC used by default when running the pipeline on YARN. (I see a thread of 2015 where G1 is used by default for JAVA8)



Regards,
Vinay Patil

On Wed, Mar 15, 2017 at 10:32 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:
Hi Vinay!

Savepoints also call the same problematic RocksDB function, unfortunately.

We will have a fix next month. We either (1) get a patched RocksDB version or we (2) implement a different pattern for ListState in Flink.

(1) would be the better solution, so we are waiting for a response from the RocksDB folks. (2) is always possible if we cannot get a fix from RocksDB.

Stephan


On Wed, Mar 15, 2017 at 5:53 PM, vinay patil <[hidden email]> wrote:
Hi Stephan,

Thank you for making me aware of this.

Yes I am using a window without reduce function (Apply function). The discussion happening on JIRA is exactly what I am observing, consistent failure of checkpoints after some time and the stream halts.

We want to go live in next month, not sure how this will affect in production as we are going to get above 200 million data.

As a workaround can I take the savepoint while the pipeline is running ? Let's say if I take savepoint after every 30minutes, will it work ?



Regards,
Vinay Patil

On Tue, Mar 14, 2017 at 10:02 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:
The issue in Flink is https://issues.apache.org/jira/browse/FLINK-5756

On Tue, Mar 14, 2017 at 3:40 PM, Stefan Richter <[hidden email]> wrote:
Hi Vinay,

I think the issue is tracked here: https://github.com/facebook/rocksdb/issues/1988.

Best,
Stefan

Am 14.03.2017 um 15:31 schrieb Vishnu Viswanath <[hidden email]>:

Hi Stephan,

Is there a ticket number/link to track this, My job has all the conditions you mentioned.

Thanks,
Vishnu

On Tue, Mar 14, 2017 at 7:13 AM, Stephan Ewen <[hidden email]> wrote:
Hi Vinay!

We just discovered a bug in RocksDB. The bug affects windows without reduce() or fold(), windows with evictors, and ListState.

A certain access pattern in RocksDB starts being so slow after a certain size-per-key that it basically brings down the streaming program and the snapshots.

We are reaching out to the RocksDB folks and looking for workarounds in Flink.

Greetings,
Stephan


On Wed, Mar 1, 2017 at 12:10 PM, Stephan Ewen <[hidden email]> wrote:
@vinay  Can you try to not set the buffer timeout at all? I am actually not sure what would be the effect of setting it to a negative value, that can be a cause of problems...


On Mon, Feb 27, 2017 at 7:44 PM, Seth Wiesman <[hidden email]> wrote:

Vinay,

 

The bucketing sink performs rename operations during the checkpoint and if it tries to rename a file that is not yet consistent that would cause a FileNotFound exception which would fail the checkpoint.

 

Stephan,

 

Currently my aws fork contains some very specific assumptions about the pipeline that will in general only hold for my pipeline. This is because there were still some open questions that  I had about how to solve consistency issues in the general case. I will comment on the Jira issue with more specific.

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Monday, February 27, 2017 at 1:05 PM


Subject: Re: Checkpointing with RocksDB as statebackend

 

Hi Seth,

Thank you for your suggestion.

But if the issue is only related to S3, then why does this happen when I replace the S3 sink  to HDFS as well (for checkpointing I am using HDFS only )

Stephan,

Another issue I see is when I set env.setBufferTimeout(-1) , and keep the checkpoint interval to 10minutes, I have observed that nothing gets written to sink (tried with S3 as well as HDFS), atleast I was expecting pending files here.

This issue gets worst when checkpointing is disabled  as nothing is written.

 


Regards,

Vinay Patil

 

On Mon, Feb 27, 2017 at 10:55 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Hi Seth!

 

Wow, that is an awesome approach.

 

We have actually seen these issues as well and we are looking to eventually implement our own S3 file system (and circumvent Hadoop's S3 connector that Flink currently relies on): https://issues.apache.org/jira/browse/FLINK-5706

 

Do you think your patch would be a good starting point for that and would you be willing to share it?

 

The Amazon AWS SDK for Java is Apache 2 licensed, so that is possible to fork officially, if necessary...

 

Greetings,

Stephan

 

 

 

On Mon, Feb 27, 2017 at 5:15 PM, Seth Wiesman <[hidden email]> wrote:

Just wanted to throw in my 2cts.  

 

I’ve been running pipelines with similar state size using rocksdb which externalize to S3 and bucket to S3. I was getting stalls like this and ended up tracing the problem to S3 and the bucketing sink. The solution was two fold:

 

1)       I forked hadoop-aws and have it treat flink as a source of truth. Emr uses a dynamodb table to determine if S3 is inconsistent. Instead I say that if flink believes that a file exists on S3 and we don’t see it then I am going to trust that flink is in a consistent state and S3 is not. In this case, various operations will perform a back off and retry up to a certain number of times.

 

2)       The bucketing sink performs multiple renames over the lifetime of a file, occurring when a checkpoint starts and then again on notification after it completes. Due to S3’s consistency guarantees the second rename of file can never be assured to work and will eventually fail either during or after a checkpoint. Because there is no upper bound on the time it will take for a file on S3 to become consistent, retries cannot solve this specific problem as it could take upwards of many minutes to rename which would stall the entire pipeline. The only viable solution I could find was to write a custom sink which understands S3. Each writer will write file locally and then copy it to S3 on checkpoint. By only interacting with S3 once per file it can circumvent consistency issues all together.

 

Hope this helps,

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Saturday, February 25, 2017 at 10:50 AM
To: "[hidden email]" <[hidden email]>
Subject: Re: Checkpointing with RocksDB as statebackend

 

HI Stephan,

Just to avoid the confusion here, I am using S3 sink for writing the data, and using HDFS for storing checkpoints.

There are 2 core nodes (HDFS) and two task nodes on EMR


I replaced s3 sink with HDFS for writing data in my last test.

Let's say the checkpoint interval is 5 minutes, now within 5minutes of run the state size grows to 30GB ,  after checkpointing the 30GB state that is maintained in rocksDB has to be copied to HDFS, right ?  is this causing the pipeline to stall ?


Regards,

Vinay Patil

 

On Sat, Feb 25, 2017 at 12:22 AM, Vinay Patil <[hidden email]> wrote:

Hi Stephan,

To verify if S3 is making teh pipeline stall, I have replaced the S3 sink with HDFS and kept minimum pause between checkpoints to 5minutes, still I see the same issue with checkpoints getting failed.

If I keep the  pause time to 20 seconds, all checkpoints are completed , however there is a hit in overall throughput.

 


Regards,

Vinay Patil

 

On Fri, Feb 24, 2017 at 10:09 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Flink's state backends currently do a good number of "make sure this exists" operations on the file systems. Through Hadoop's S3 filesystem, that translates to S3 bucket list operations, where there is a limit in how many operation may happen per time interval. After that, S3 blocks.

 

It seems that operations that are totally cheap on HDFS are hellishly expensive (and limited) on S3. It may be that you are affected by that.

 

We are gradually trying to improve the behavior there and be more S3 aware.

 


Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Stephan Ewen
@vinay Let's see how fast we can get this fix in - I hope so. It may also depend a bit on the RocksDB community.

In any case, if it does not make it in, we can do a 1.2.2 release immediately after (I think the problem is big enough to warrant that), or at least release a custom version of the RocksDB state backend that includes the fix.

Stephan


On Fri, Mar 17, 2017 at 5:51 PM, vinay patil <[hidden email]> wrote:
Hi Stephan,

Is the performance-related RocksDB change going to be part of Flink 1.2.1?

Regards,
Vinay Patil

On Thu, Mar 16, 2017 at 6:13 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:
The only immediate workaround is to use windows with "reduce", "fold", or "aggregate" rather than "apply", and to not use an evictor.

The good news is that I think we have a good way of fixing this soon, by making an adjustment in RocksDB.

For the YARN / G1GC question: not 100% sure about that - you can check whether it is actually using G1GC. If not, you may be able to pass this through the "env.java.opts" parameter. (cc Robert for confirmation)

Stephan
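
For illustration, a rough sketch of that workaround - an incremental reduce on a keyed window instead of a full apply(). The Tuple2 element type, the key position, and the 5-minute window are placeholders for this example, not anything from the actual job:

    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class ReducingWindowSketch {
        public static DataStream<Tuple2<String, Long>> maxPerKey(DataStream<Tuple2<String, Long>> events) {
            return events
                .keyBy(0)                          // key on the first tuple field
                .timeWindow(Time.minutes(5))
                .reduce(new ReduceFunction<Tuple2<String, Long>>() {
                    @Override
                    public Tuple2<String, Long> reduce(Tuple2<String, Long> a, Tuple2<String, Long> b) {
                        // Only one aggregate per key and window is kept in state, instead of
                        // the full element list that an apply()/evictor window has to buffer.
                        return a.f1 >= b.f1 ? a : b;
                    }
                });
        }
    }

The same idea applies to fold(): the point is that the state backend never has to materialize the complete list of window elements for a key.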



On Thu, Mar 16, 2017 at 8:31 AM, vinay patil <[hidden email]> wrote:
Hi Stephan,

What can be the workaround for this?

Also, I need one confirmation: is G1GC used by default when running the pipeline on YARN? (I see a thread from 2015 where G1 is used by default for Java 8.)



Regards,
Vinay Patil
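
If it turns out G1 is not the default, one way to request it - assuming env.java.opts is passed through to the YARN containers - is a single line in flink-conf.yaml:

    env.java.opts: "-XX:+UseG1GC"

Whether the TaskManager JVMs actually picked it up can be checked with jcmd <pid> VM.flags on a worker node (the pid being that of a TaskManager process).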

On Wed, Mar 15, 2017 at 10:32 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:
Hi Vinay!

Savepoints also call the same problematic RocksDB function, unfortunately.

We will have a fix next month. We either (1) get a patched RocksDB version or we (2) implement a different pattern for ListState in Flink.

(1) would be the better solution, so we are waiting for a response from the RocksDB folks. (2) is always possible if we cannot get a fix from RocksDB.

Stephan


On Wed, Mar 15, 2017 at 5:53 PM, vinay patil <[hidden email]> wrote:
Hi Stephan,

Thank you for making me aware of this.

Yes, I am using a window without a reduce function (an apply function). The discussion happening on JIRA is exactly what I am observing: consistent failure of checkpoints after some time, and the stream halts.

We want to go live next month; I am not sure how this will affect us in production, as we are going to get more than 200 million records.

As a workaround, can I take a savepoint while the pipeline is running? Let's say I take a savepoint every 30 minutes - will that work?



Regards,
Vinay Patil
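
For reference, a savepoint can be triggered against a running job from the CLI, e.g. from cron every 30 minutes (the job id and target directory below are placeholders):

    bin/flink savepoint <jobId> hdfs:///flink/savepoints

Note, though, that per the reply above, savepoints go through the same problematic RocksDB code path, so this would not avoid the bug.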

On Tue, Mar 14, 2017 at 10:02 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:
The issue in Flink is https://issues.apache.org/jira/browse/FLINK-5756

On Tue, Mar 14, 2017 at 3:40 PM, Stefan Richter <[hidden email]> wrote:
Hi Vinay,

I think the issue is tracked here: https://github.com/facebook/rocksdb/issues/1988.

Best,
Stefan

On 14.03.2017 at 15:31, Vishnu Viswanath <[hidden email]> wrote:

Hi Stephan,

Is there a ticket number/link to track this? My job has all the conditions you mentioned.

Thanks,
Vishnu

On Tue, Mar 14, 2017 at 7:13 AM, Stephan Ewen <[hidden email]> wrote:
Hi Vinay!

We just discovered a bug in RocksDB. The bug affects windows without reduce() or fold(), windows with evictors, and ListState.

A certain access pattern in RocksDB starts being so slow after a certain size-per-key that it basically brings down the streaming program and the snapshots.

We are reaching out to the RocksDB folks and looking for workarounds in Flink.

Greetings,
Stephan
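
For context, the affected shape is roughly any window that hands the complete per-key element list to the user function, e.g. (a sketch only; the Tuple2 element type and window size are illustrative):

    import org.apache.flink.api.java.tuple.Tuple;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
    import org.apache.flink.util.Collector;

    public class BufferingWindowSketch {
        public static DataStream<Long> countPerKey(DataStream<Tuple2<String, Long>> events) {
            return events
                .keyBy(0)
                .timeWindow(Time.minutes(5))
                .apply(new WindowFunction<Tuple2<String, Long>, Long, Tuple, TimeWindow>() {
                    @Override
                    public void apply(Tuple key, TimeWindow window,
                                      Iterable<Tuple2<String, Long>> elements, Collector<Long> out) {
                        long count = 0;
                        // Every window element is buffered in per-key (RocksDB) state and read
                        // back here; as described above, that read becomes very slow once the
                        // size per key grows large.
                        for (Tuple2<String, Long> ignored : elements) {
                            count++;
                        }
                        out.collect(count);
                    }
                });
        }
    }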


On Wed, Mar 1, 2017 at 12:10 PM, Stephan Ewen <[hidden email]> wrote:
@vinay Can you try to not set the buffer timeout at all? I am actually not sure what the effect of setting it to a negative value would be; that can be a cause of problems...
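
In code, that suggestion amounts to dropping the explicit call and keeping the default (100 ms), or setting a small positive value - a minimal sketch:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class BufferTimeoutSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Previously discussed: env.setBufferTimeout(-1);  // effect unclear, see above
            // Suggestion: leave the timeout at its default, or set an explicit positive value:
            env.setBufferTimeout(100);
            // ... job definition and env.execute(...) would follow here
        }
    }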


On Mon, Feb 27, 2017 at 7:44 PM, Seth Wiesman <[hidden email]> wrote:

Vinay,

 

The bucketing sink performs rename operations during the checkpoint, and if it tries to rename a file that is not yet consistent, that causes a FileNotFound exception, which fails the checkpoint.

 

Stephan,

 

Currently my aws fork contains some very specific assumptions that in general only hold for my pipeline. This is because there were still some open questions I had about how to solve consistency issues in the general case. I will comment on the Jira issue with more specifics.

 

Seth Wiesman

 

From: vinay patil <[hidden email]>
Reply-To: "[hidden email]" <[hidden email]>
Date: Monday, February 27, 2017 at 1:05 PM


Subject: Re: Checkpointing with RocksDB as statebackend

 

Hi Seth,

Thank you for your suggestion.

But if the issue is only related to S3, then why does this also happen when I replace the S3 sink with HDFS (for checkpointing I am using HDFS only)?

Stephan,

Another issue I see: when I set env.setBufferTimeout(-1) and keep the checkpoint interval at 10 minutes, nothing gets written to the sink (tried with S3 as well as HDFS); at the least I was expecting pending files here.

This issue gets worse when checkpointing is disabled, as nothing is written at all.

 


Regards,

Vinay Patil
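
For completeness, the checkpoint settings being discussed look roughly like this in code (the 10-minute interval is the one mentioned above; the minimum pause is illustrative):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointConfigSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(10 * 60 * 1000L);  // checkpoint every 10 minutes
            env.getCheckpointConfig().setMinPauseBetweenCheckpoints(60 * 1000L);  // illustrative minimum pause
            // ... state backend, sources, and sinks would be configured here
        }
    }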

 

On Mon, Feb 27, 2017 at 10:55 PM, Stephan Ewen [via Apache Flink User Mailing List archive.] <[hidden email]> wrote:

Hi Seth!

 

Wow, that is an awesome approach.

 

We have actually seen these issues as well and we are looking to eventually implement our own S3 file system (and circumvent Hadoop's S3 connector that Flink currently relies on): https://issues.apache.org/jira/browse/FLINK-5706

 

Do you think your patch would be a good starting point for that and would you be willing to share it?

 

The Amazon AWS SDK for Java is Apache 2 licensed, so forking it officially is possible, if necessary...

 

Greetings,

Stephan

 

 

 


Reply | Threaded
Open this post in threaded view
|

Re: Checkpointing with RocksDB as statebackend

Vinay Patil
Hi Stephan,

Just an update: last week I did a run with state size close to 18GB, and with G1GC enabled I did not observe the pipeline stalling in between.

I had observed checkpoint failures when the state size was close to 38GB (but in that case G1GC was not enabled).

Is it possible to get the RocksDB fix into 1.2.1 so that I can test it out?


Regards,
Vinay Patil

