Hi Till,

Can you please let us know the configuration that we need to set for the profile-based credential provider in flink-conf.yaml? Exporting the AWS_PROFILE property on EMR did not work.

Regards,
Vinay Patil

On Wed, Jan 16, 2019 at 3:05 PM Till Rohrmann <[hidden email]> wrote:

The old BucketingSink was using Hadoop's S3 filesystem directly, whereas the new StreamingFileSink uses Flink's own FileSystem, which needs to be configured via flink-conf.yaml.

Cheers,
Till

On Wed, Jan 16, 2019 at 10:31 AM Vinay Patil <[hidden email]> wrote:

Hi Till,

We are not providing `fs.s3a.access.key: access_key` and `fs.s3a.secret.key: secret_key` in flink-conf.yaml, as we are using the profile-based credentials provider. The older BucketingSink code is able to get the credentials and write to S3; we are facing this issue only with StreamingFileSink. We tried adding fs.s3a.impl to core-site.xml when the default configuration was not working.

Regards,
Vinay Patil

On Wed, Jan 16, 2019 at 2:55 PM Till Rohrmann <[hidden email]> wrote:

Hi Vinay,

Flink's file systems are self-contained and won't respect core-site.xml, if I'm not mistaken. Instead you have to set the credentials in the Flink configuration flink-conf.yaml via `fs.s3a.access.key: access_key`, `fs.s3a.secret.key: secret_key` and so on [1]. Have you tried this out?

This has been fixed with Flink 1.6.2 and 1.7.0 [2].

[1] https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems.html#built-in-file-systems

Cheers,
Till

On Wed, Jan 16, 2019 at 10:10 AM Kostas Kloudas <[hidden email]> wrote:

Hi Taher,

So you are using the same configuration files and everything, and the only thing you change is "s3://" to "s3a://", and then the sink cannot find the credentials? Could you please provide the logs of the Task Managers?

Cheers,
Kostas

On Wed, Jan 16, 2019 at 9:13 AM Dawid Wysakowicz <[hidden email]> wrote:

Forgot to cc ;)
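[A minimal sketch of the flink-conf.yaml entries discussed above, assuming the bundled flink-s3-fs-hadoop filesystem. The access/secret key lines are the ones Till quotes; the `fs.s3a.aws.credentials.provider` line is an assumption based on the standard Hadoop S3A property and the AWS SDK's ProfileCredentialsProvider class, not something confirmed in this thread:]

    # flink-conf.yaml -- static credentials, as Till suggests:
    fs.s3a.access.key: access_key
    fs.s3a.secret.key: secret_key

    # Assumed alternative for a profile-based setup (unverified here):
    # Flink forwards fs.s3a.* keys to the bundled Hadoop S3A filesystem,
    # which would delegate credential lookup to the AWS SDK's profile provider.
    fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider

[With the profile provider, the AWS_PROFILE environment variable would need to be visible to the JobManager and TaskManager JVM processes themselves, which may be why exporting it in a login shell on EMR had no effect.]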
On 16/01/2019 08:51, Vinay Patil wrote:
Hi,
Can someone please help with this issue? We have even tried to set fs.s3a.impl in core-site.xml (along the lines of the snippet below), but it is still not working.
Regards,
Vinay Patil
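[For reference, a minimal sketch of the core-site.xml override Vinay describes. The property name and the S3AFileSystem class are standard Hadoop S3A settings; which config file the cluster actually loaded is not shown in this thread:]

    <!-- core-site.xml: maps the s3a:// scheme to Hadoop's S3AFileSystem. -->
    <property>
      <name>fs.s3a.impl</name>
      <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    </property>

[As Till points out earlier in the thread, Flink's bundled S3 filesystems are self-contained and do not read core-site.xml, which would explain why this override had no effect on the StreamingFileSink.]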
On Fri, Jan 11, 2019 at 5:03 PM Taher Koitawala <[hidden email]> wrote:
Hi All,

We have implemented the S3 sink in the following way:
StreamingFileSink<GenericRecord> sink = StreamingFileSink
        .forBulkFormat(
                new Path("s3a://mybucket/myfolder/output/"),
                ParquetAvroWriters.forGenericRecord(schema))
        .withBucketCheckInterval(50L)
        .withBucketAssigner(new CustomBucketAssigner())
        .build();
The problem we are facing is that StreamingFileSink initializes the S3AFileSystem class to write to S3 and is not able to find the S3 credentials. However, other Flink applications on the same cluster that use "s3://" paths are able to write to the same S3 bucket and folders; we are facing this issue only with StreamingFileSink.
Regards,
Taher Koitawala
GS Lab Pune
+91 8407979163