| Hi, it is highly recommended that we assign a uid to each operator for the sake of savepoints. How do we do this for Flink SQL? According to https://stackoverflow.com/questions/55464955/how-to-add-uid-to-operator-in-flink-table-api, it is not possible. Does that mean I can't use a savepoint to restart my program if I use Flink SQL? Thanks, Fanbin | 
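For reference, the uid assignment being asked about is available on the DataStream API; a minimal sketch of what it looks like there (the pipeline, operator names, and uid strings below are made up for illustration):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
            .map(new MapFunction<String, String>() {
                @Override
                public String map(String value) {
                    return value.toUpperCase();
                }
            })
            .uid("uppercase-mapper")   // stable operator id, used to map state when restoring from a savepoint
            .name("uppercase-mapper")  // display name only, not used for state mapping
            .print();

        env.execute("uid-example");
    }
}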
| Hi Fanbin, If you do not change the parallelism or add and remove operators, you can still use savepoints to resume your jobs with Flink SQL. However, as far as I know, Flink SQL does not configure the uid currently, and I'm pretty sure the blink branch contains this part of setting the uid on the stream node. [1] Already CC'ed Kurt, as he can provide more detailed information on this. Best, Yun Tang | 
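To illustrate Yun Tang's point: in a pure SQL / Table API job there is no hook to call uid(), so the operator ids are derived automatically from the compiled plan, and a savepoint only restores while the query, planner, and parallelism stay the same. A minimal sketch against a current Flink release (the table, connector, and query below are made up for illustration):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class SqlSavepointExample {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical source table; the datagen connector just produces random rows for illustration.
        tableEnv.executeSql(
            "CREATE TABLE events (user_id BIGINT, amount DOUBLE) "
                + "WITH ('connector' = 'datagen')");

        // No uid() can be set on the operators this query compiles to; their ids are
        // auto-generated from the plan, so changing the query, planner, or parallelism
        // changes those ids and the old savepoint state no longer maps.
        Table result = tableEnv.sqlQuery(
            "SELECT user_id, SUM(amount) AS total FROM events GROUP BY user_id");

        result.execute().print();
    }
}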
| Kurt, What do you recommend for using savepoints with Flink SQL? On Thu, Oct 31, 2019 at 12:03 AM Yun Tang <[hidden email]> wrote: 
 | 
| It's not possible for SQL and Table API jobs to work with savepoints yet, but I think this is a popular requirement and we should definitely discuss the solutions in the following versions. Best, Kurt On Sat, Nov 2, 2019 at 7:24 AM Fanbin Bu <[hidden email]> wrote: 
 | 
| Kurt, Is there any update on this, or a roadmap for supporting savepoints with Flink SQL? On Sun, Nov 3, 2019 at 11:25 PM Kurt Young <[hidden email]> wrote: 
 | 