Re: Streaming File Sink cannot generate _SUCCESS tag files

Posted by Jingsong Li
URL: http://deprecated-apache-flink-user-mailing-list-archive.369.s1.nabble.com/Streaming-File-Sink-cannot-generate-SUCCESS-tag-files-tp38741p38791.html

Hi, Yang,

"SUCCESSFUL_JOB_OUTPUT_DIR_MARKER" does not work in StreamingFileSink.

You can take a look at the partition commit feature of the filesystem connector [1].

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html#partition-commit
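
For illustration, a minimal sketch of enabling a success-file partition commit policy with the Table/SQL filesystem connector from [1] (Flink 1.11); the table schema, partition column, format, and HDFS path below are placeholders, not a definitive setup:

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
val tableEnv = StreamTableEnvironment.create(env)

// Filesystem sink whose partition commit policy writes a _SUCCESS file into
// each partition directory once the partition is committed.
tableEnv.executeSql(
  """
    |CREATE TABLE fs_sink (
    |  user_id STRING,
    |  order_amount DOUBLE,
    |  dt STRING
    |) PARTITIONED BY (dt) WITH (
    |  'connector' = 'filesystem',
    |  'path' = 'hdfs:///path/to/output',
    |  'format' = 'parquet',
    |  'sink.partition-commit.trigger' = 'process-time',
    |  'sink.partition-commit.delay' = '0 s',
    |  'sink.partition-commit.policy.kind' = 'success-file'
    |)
    |""".stripMargin)

With the 'success-file' policy, a _SUCCESS file is written per committed partition, which is the closest equivalent to the MapReduce marker.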

Best,
Jingsong Lee

On Thu, Oct 15, 2020 at 3:11 PM highfei2011 <[hidden email]> wrote:
Hi, everyone!
      I am currently running into a problem with Flink 1.11.2: after consuming Kafka data, I sink it to HDFS with a Streaming File Sink bucketing policy (using a BucketAssigner), but no _SUCCESS tag file is generated by default.
      I have added the following to the Hadoop configuration:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter

val hadoopConf = new Configuration()
hadoopConf.set(FileOutputCommitter.SUCCESSFUL_JOB_OUTPUT_DIR_MARKER, "true")

But there is still no _SUCCESS file in the output directory. Why is generating _SUCCESS files not supported?
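
For context, the job looks roughly like the sketch below; the topic name, brokers, output path, and bucket format are simplified placeholders rather than the real configuration:

import java.util.Properties

import org.apache.flink.api.common.serialization.{SimpleStringEncoder, SimpleStringSchema}
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Kafka source (placeholder topic and brokers).
val props = new Properties()
props.setProperty("bootstrap.servers", "kafka:9092")
props.setProperty("group.id", "example-group")
val source = env.addSource(
  new FlinkKafkaConsumer[String]("example-topic", new SimpleStringSchema(), props))

// Streaming File Sink that buckets records by time into HDFS.
val sink = StreamingFileSink
  .forRowFormat(new Path("hdfs:///path/to/output"), new SimpleStringEncoder[String]("UTF-8"))
  .withBucketAssigner(new DateTimeBucketAssigner[String]("yyyy-MM-dd--HH"))
  .build()

source.addSink(sink)
env.execute("kafka-to-hdfs")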

Thank you.


Best,
Yang

