Re: Kafka batches

Posted by vprabhu@gmail.com on
URL: http://deprecated-apache-flink-user-mailing-list-archive.369.s1.nabble.com/Kakfa-batches-tp8296p8304.html

Thanks for the reply.

Yes, I mean a manual commit in #3, because then the committed offsets would accurately reflect the number of messages processed.

My understanding is that the current checkpointing process checkpoints the state of all the operators separately: the Kafka connector will commit offset X while the window operator checkpoints the messages in the offset range (X - N up to X). If the job fails now (the YARN application fails due to some network issue, someone accidentally kills my YARN job, etc.), restarting the job will start processing from offset X, and the messages that were checkpointed by the window operator are lost.
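For illustration, a manual commit along those lines with the plain Kafka consumer API (not the Flink connector) might look roughly like the sketch below; the broker address, group id, topic name, and the process() helper are just placeholders for the sketch, not what we actually run. The point is that offsets are committed only after the records have been processed, so the committed position never runs ahead of the processed data.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");   // placeholder broker
        props.put("group.id", "bucket-processor");        // placeholder group id
        props.put("enable.auto.commit", "false");         // we commit manually below

        KafkaConsumer<String, String> consumer =
                new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer());
        consumer.subscribe(Collections.singletonList("events")); // placeholder topic

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                process(record);             // placeholder per-message work
            }
            consumer.commitSync();           // committed offsets now reflect only processed messages
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder for the real processing
    }
}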

The reason we need this accuracy is that the downstream processes are batch oriented (they typically process a 15-minute bucket of data) and are triggered when message timestamps exceed a certain watermark. We have a separate store that maintains the relation between partition offset and message timestamp (both are ever-increasing values). A check happens against this store to see whether the offsets processed by the Flink job have crossed a certain timestamp.
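As a rough sketch of that check (OffsetTimestampStore and bucketReady are made-up names for the sketch, not an existing API we use): the 15-minute bucket is considered ready once the offset recorded for every partition maps to a timestamp at or beyond the end of the bucket.

import java.util.Map;

public class BucketTriggerSketch {

    // Hypothetical store: maps (partition, offset) to the message timestamp.
    interface OffsetTimestampStore {
        long timestampFor(int partition, long offset);
    }

    static boolean bucketReady(OffsetTimestampStore store,
                               Map<Integer, Long> committedOffsets,
                               long bucketEndTimestamp) {
        for (Map.Entry<Integer, Long> entry : committedOffsets.entrySet()) {
            long ts = store.timestampFor(entry.getKey(), entry.getValue());
            if (ts < bucketEndTimestamp) {
                return false; // this partition has not crossed the watermark yet
            }
        }
        return true; // every partition has passed the end of the bucket
    }
}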



On Wed, Aug 3, 2016 at 6:19 AM, Ufuk Celebi <[hidden email]> wrote:
On Wed, Aug 3, 2016 at 2:07 PM, Prabhu V <[hidden email]> wrote:
> Observations with Streaming.
>
> 1) A long-running job with Kerberos fails after 7 days (the data that is held
> in the window buffer is lost, and a restart results in event loss)

This is a known issue, I think. Looping in Max, who I believe knows the details.

> 2) I hold on to the resources/containers in the cluster for all time,
> irrespective of the volume of events

Correct. There are plans for Flink 1.2 to make this dynamic.

> Is there a way the Kafka connector can take start and stop values for
> offsets? That would be ideal for my scenario. The design in this scenario
> would be to...

This is not possible at the moment. What do you mean by "3) commit
the offsets after job is successful"? Do you want to do this manually?
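(For reference, what I had in mind with start and stop offsets in the quoted question above looks roughly like the sketch below, using the plain Kafka consumer's assign/seek API rather than anything in Flink; readRange and handle are made-up names for the sketch.)

import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class BoundedRangeSketch {

    // Read exactly the offsets [startOffset, stopOffset) from one partition, then stop.
    static void readRange(KafkaConsumer<String, String> consumer,
                          TopicPartition tp, long startOffset, long stopOffset) {
        consumer.assign(Collections.singletonList(tp));
        consumer.seek(tp, startOffset);
        long next = startOffset;
        while (next < stopOffset) {
            for (ConsumerRecord<String, String> record : consumer.poll(1000)) {
                if (record.offset() >= stopOffset) {
                    next = stopOffset;        // reached the end of the requested range
                    break;
                }
                handle(record);               // placeholder per-message work
                next = record.offset() + 1;
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        // placeholder for the real processing
    }
}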