Hi,

During high volumes, the Cassandra sink fails with the following error:

com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency SERIAL (2 replica were required but only 1 acknowledged the write)

Is there a way to configure the sink to ignore/handle this error?

Jayant
Hi Jayant,

AFAIK it is currently not possible to control how failures are handled in the Cassandra sink. What would be the desired behaviour? The best thing is to open a JIRA issue to discuss potential improvements.

Cheers,
Till

On Thu, Aug 30, 2018 at 12:15 PM Jayant Ameta <[hidden email]> wrote:
I have never used the Flink Cassandra sink, so this may or may not work, but have you tried creating your own custom retry policy?

https://docs.datastax.com/en/developer/java-driver/3.4/manual/retries/

Returning RetryDecision.ignore() from onWriteTimeout should make the driver drop the timeout instead of surfacing it to the sink.
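For illustration, a rough sketch of such a policy against the DataStax Java driver 3.x API described in the docs above. The class name IgnoreWriteTimeoutPolicy is made up; it ignores write timeouts and delegates every other decision to the driver's DefaultRetryPolicy. Keep in mind that ignoring a write timeout means you cannot know whether the write was actually applied.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.WriteType;
import com.datastax.driver.core.exceptions.DriverException;
import com.datastax.driver.core.policies.DefaultRetryPolicy;
import com.datastax.driver.core.policies.RetryPolicy;

// Hypothetical policy: swallow write timeouts, delegate everything else to the default policy.
public class IgnoreWriteTimeoutPolicy implements RetryPolicy {

    @Override
    public RetryDecision onWriteTimeout(Statement statement, ConsistencyLevel cl, WriteType writeType,
                                        int requiredAcks, int receivedAcks, int nbRetry) {
        // Drop the timeout instead of failing the request.
        // Caveat: the write may or may not have been applied on the server side.
        return RetryDecision.ignore();
    }

    @Override
    public RetryDecision onReadTimeout(Statement statement, ConsistencyLevel cl, int requiredResponses,
                                       int receivedResponses, boolean dataRetrieved, int nbRetry) {
        return DefaultRetryPolicy.INSTANCE.onReadTimeout(statement, cl, requiredResponses,
                receivedResponses, dataRetrieved, nbRetry);
    }

    @Override
    public RetryDecision onUnavailable(Statement statement, ConsistencyLevel cl, int requiredReplica,
                                       int aliveReplica, int nbRetry) {
        return DefaultRetryPolicy.INSTANCE.onUnavailable(statement, cl, requiredReplica, aliveReplica, nbRetry);
    }

    @Override
    public RetryDecision onRequestError(Statement statement, ConsistencyLevel cl, DriverException e, int nbRetry) {
        return DefaultRetryPolicy.INSTANCE.onRequestError(statement, cl, e, nbRetry);
    }

    @Override
    public void init(Cluster cluster) {}

    @Override
    public void close() {}
}

It could then be plugged into the sink through the ClusterBuilder, roughly like this (the stream, query and contact point are placeholders):

// Uses org.apache.flink.streaming.connectors.cassandra.CassandraSink and ClusterBuilder.
CassandraSink.addSink(resultStream)
    .setQuery("INSERT INTO my_ks.my_table (id, value) VALUES (?, ?);")
    .setClusterBuilder(new ClusterBuilder() {
        @Override
        protected Cluster buildCluster(Cluster.Builder builder) {
            return builder
                .addContactPoint("cassandra-host")
                .withRetryPolicy(new IgnoreWriteTimeoutPolicy())
                .build();
        }
    })
    .build();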
In reply to this post by Till Rohrmann

Hi Till,

I've opened a JIRA issue: https://issues.apache.org/jira/browse/FLINK-10310. Can we discuss it?

Jayant Ameta

On Thu, Aug 30, 2018 at 4:35 PM Till Rohrmann <[hidden email]> wrote:
Have you configured checkpointing in your job? If enabled, the job should revert to the last stored checkpoint in case of a failure and process the failed record again.
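Not from the thread, but as a minimal sketch of enabling checkpointing (class name, interval, mode and the placeholder pipeline are illustrative):

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedCassandraJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 60 seconds; on a sink failure the job restarts from the
        // latest completed checkpoint and reprocesses the records written since then.
        env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);

        // ... sources, transformations and the Cassandra sink would go here ...
        env.fromElements("a", "b", "c").print();  // placeholder pipeline

        env.execute("checkpointed-cassandra-job");
    }
}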
In reply to this post by Jayant Ameta

On Tue, Sep 11, 2018 at 10:15 AM Jayant Ameta <[hidden email]> wrote:
