Hi Vaibhav,
I am not sure there is a dedicated section of the documentation about
state handling in the Table API. There are just a few important facts
you should be aware of:
* in a failover scenario, everything should work just fine:
internally the Table API uses Flink's state, and all intermediate
results should be restored successfully after a failure.
* in case of a query update or a Flink version upgrade, there is no
guarantee that the resulting execution plan will remain the same
(or keep the same operator uids), therefore there is no guarantee that a
previous savepoint can be mapped onto the new plan.
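On the checkpoint/savepoint side, a Table API job is configured the same way as any other Flink job; nothing Table-specific is required. A minimal sketch of the relevant `flink-conf.yaml` entries (the directory URIs below are placeholders, and exact key names can vary between Flink versions):

```yaml
# Use RocksDB for large keyed state, e.g. the state kept by streaming joins
# and aggregations generated from Table API / SQL queries.
state.backend: rocksdb

# Durable location for periodic checkpoints (placeholder URI).
state.checkpoints.dir: hdfs:///flink/checkpoints

# Default target directory for manually triggered savepoints (placeholder URI).
state.savepoints.dir: hdfs:///flink/savepoints
```

The checkpoint interval itself is typically enabled per job, e.g. with `env.enableCheckpointing(60000)` on the `StreamExecutionEnvironment` before creating the table environment.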
I hope this clarifies a bit how Table API interacts with Flink's
state.
Best,
Dawid
On 08/10/2019 12:09, Vaibhav Singh wrote:
Hi,
We are looking into a production use case of using Flink to process
multiple streams of data from Kafka topics.
We plan to perform joins on these streams and then output aggregations
on that data. We plan to use the Table API and SQL capabilities for this.
We need to prepare a plan to productionize this flow, and were looking
into how Flink features like checkpoints, savepoints, and state
management are used here (in the case of the Table API).
Can you point me towards any documentation/articles/tutorials regarding
how Flink handles these in the case of the Table API and SQL?
Thanks and regards!
Vaibhav