Re: Kafka batches
Posted by Ufuk Celebi
URL: http://deprecated-apache-flink-user-mailing-list-archive.369.s1.nabble.com/Kakfa-batches-tp8296p8300.html
On Wed, Aug 3, 2016 at 2:07 PM, Prabhu V <[hidden email]> wrote:
> Observations with Streaming:
>
> 1) Long-running Kerberos authentication fails after 7 days (the data held in the
> window buffer is lost, and a restart results in event loss)
This is a known issue, I think. Looping in Max, who knows the details.
> 2) The job holds on to the resources/containers in the cluster at all times,
> irrespective of the volume of events
Correct. There are plans for Flink 1.2 to make this dynamic.
> Is there a way the Kafka connector can take start and stop values for
> offsets? That would be ideal for my scenario. The design in this scenario
> would be to...
This is not possible at the moment. What do you mean by "3) commit
the offsets after the job is successful"? Do you want to do this manually?
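
Not from the thread, but to make the idea concrete: below is a minimal sketch
of what "start/stop offsets plus a commit after success" could look like using
the plain Kafka consumer API directly (i.e. outside the Flink connector, which
does not support this here). The broker address, group id, topic name,
partition and offset values are placeholders, not anything from the original
discussion.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class BoundedOffsetRead {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");  // placeholder broker
        props.put("group.id", "batch-reader");          // placeholder group id
        props.put("enable.auto.commit", "false");       // commit manually at the end
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition partition = new TopicPartition("events", 0); // placeholder topic/partition
        long startOffset = 1000L; // placeholder "start" offset
        long stopOffset = 2000L;  // placeholder "stop" offset (exclusive)

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(partition));
            consumer.seek(partition, startOffset);

            long nextOffset = startOffset;
            boolean done = false;
            while (!done && nextOffset < stopOffset) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    if (record.offset() >= stopOffset) {
                        done = true; // reached the end of the requested range
                        break;
                    }
                    process(record);
                    nextOffset = record.offset() + 1;
                }
            }

            // "Commit the offsets after the job is successful": commit only
            // once every record in the requested range has been processed.
            consumer.commitSync(Collections.singletonMap(
                partition, new OffsetAndMetadata(nextOffset)));
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // Stand-in for the actual processing of the batch.
        System.out.println(record.offset() + ": " + record.value());
    }
}

With auto-commit disabled, a failure before commitSync() leaves the committed
offsets untouched, so a rerun starts from the same range.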