Hi, everyone!

I am currently hitting a problem with the bucketing policy when sinking to HDFS using the BucketAssigner of StreamingFileSink after consuming Kafka data with Flink 1.11.2: the _SUCCESS marker file is not generated by default. I added the following to the configuration:

val hadoopConf = new Configuration()
hadoopConf.set(FileOutputCommitter.SUCCESSFUL_JOB_OUTPUT_DIR_MARKER, "true")

But there is still no _SUCCESS file in the output directory. Why is generating _SUCCESS files not supported?

Thank you.

Best,
Yang
Hi, Yang,

"SUCCESSFUL_JOB_OUTPUT_DIR_MARKER" does not work in StreamingFileSink. You can take a look at the partition commit feature [1].

Best,
Jingsong Lee

On Thu, Oct 15, 2020 at 3:11 PM highfei2011 <[hidden email]> wrote:
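For context, the partition commit feature referred to above is configured through options on Flink's filesystem SQL connector rather than through Hadoop's FileOutputCommitter. A minimal sketch of a sink table that writes a _SUCCESS file into each committed partition directory (the table name, columns, and HDFS path are illustrative, not from the thread):

```sql
-- Hypothetical sink table; schema and path are placeholders.
CREATE TABLE fs_sink (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///path/to/output',
  'format' = 'parquet',
  -- commit a partition once the watermark passes partition time plus the delay
  'sink.partition-commit.trigger' = 'partition-time',
  'sink.partition-commit.delay' = '1 h',
  -- on commit, write a _SUCCESS marker file into the partition directory
  'sink.partition-commit.policy.kind' = 'success-file'
);
```

With 'success-file' as the commit policy, the marker appears only after a partition is considered complete, which is why setting the Hadoop committer flag on the StreamingFileSink path has no effect.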
Hi, Jingsong Lee,

Thanks for taking the time to respond to the email; I will try following your suggestion.

Best,
Yang

On Oct 19, 2020 at 11:56, Jingsong Li <[hidden email]> wrote:

Hi, Yang,

"SUCCESSFUL_JOB_OUTPUT_DIR_MARKER" does not work in StreamingFileSink. You can take a look at the partition commit feature [1].

Best,
Jingsong Lee

On Thu, Oct 15, 2020 at 3:11 PM highfei2011 <[hidden email]> wrote: