Hi,

Does the Flink-Kafka connector allow a job to start consuming topics/partitions from a specific timestamp?

https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java#L469 seems to suggest that a job can only start from the earliest offset, the latest offset, or a set of specific offsets.

The KafkaConsumer API, https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L1598, already gives us a way to look up partition offsets by timestamp.

Thanks
Connie
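In the meantime (see the reply below), one possible workaround is to resolve per-partition offsets for a timestamp with the plain KafkaConsumer's offsetsForTimes() and hand them to the Flink consumer via setStartFromSpecificOffsets(). The sketch below is untested; the topic name, properties, the 0.10 connector variant, and the helper class are illustrative assumptions.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

public class StartFromTimestampWorkaround {

    // Builds a Flink Kafka consumer that starts from the earliest offset whose
    // record timestamp is >= timestampMs, resolved per partition.
    public static FlinkKafkaConsumer010<String> consumerStartingAt(
            String topic, long timestampMs, Properties kafkaProps) {

        // The probe consumer needs explicit deserializers; copy the user props and add them.
        Properties probeProps = new Properties();
        probeProps.putAll(kafkaProps);
        probeProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        probeProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        Map<KafkaTopicPartition, Long> specificOffsets = new HashMap<>();

        try (KafkaConsumer<byte[], byte[]> probe = new KafkaConsumer<>(probeProps)) {
            // Ask for the offset at (or after) the timestamp for every partition of the topic.
            Map<TopicPartition, Long> request = new HashMap<>();
            for (PartitionInfo p : probe.partitionsFor(topic)) {
                request.put(new TopicPartition(topic, p.partition()), timestampMs);
            }
            Map<TopicPartition, OffsetAndTimestamp> found = probe.offsetsForTimes(request);
            for (Map.Entry<TopicPartition, OffsetAndTimestamp> e : found.entrySet()) {
                if (e.getValue() != null) { // null when no record exists at/after the timestamp
                    specificOffsets.put(
                            new KafkaTopicPartition(topic, e.getKey().partition()),
                            e.getValue().offset());
                }
            }
        }

        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>(topic, new SimpleStringSchema(), kafkaProps);
        consumer.setStartFromSpecificOffsets(specificOffsets);
        return consumer;
    }
}

Partitions not covered by the map fall back to the connector's default startup behavior, so this is only a best-effort approximation of a true "start from timestamp" mode.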
Hi Connie,

We do have a pull request for that feature, which should be almost ready after a rebase: https://github.com/

This means, of course, that the feature isn't part of any release yet. We can try to make sure it lands in Flink 1.5, whose proposed release date is around February 2018.

Cheers,
Gordon
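For reference, once such a feature is released, starting a consumer from a timestamp would presumably reduce to a single call on the consumer. The setStartFromTimestamp(...) method name below is an assumption based on the proposed feature, not an API in any Flink release at the time of this thread; topic, servers, and group id are placeholders.

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class StartFromTimestampExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "example-group");

        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);
        // Hypothetical: start from records whose timestamp is >= the given epoch-millis value.
        consumer.setStartFromTimestamp(1512345600000L);

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.addSource(consumer).print();
        env.execute("start-from-timestamp-example");
    }
}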
Thanks, Gordon! I'll keep an eye on that.

Connie