adding core-site xml to flink1.11


adding core-site xml to flink1.11

scarmeli
Hi,
I'm trying to configure the filesystem for Flink 1.11 using core-site.xml.
I tried setting env.hadoop.conf.dir in flink-conf.yaml, and I can see the directory is added to the classpath, but it didn't help.
Setting the HADOOP_CONF_DIR environment variable didn't help either.

Flink 1.11.2 is running in Docker on Kubernetes.

I added the Hadoop S3 filesystem as a plugin, as described in https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/s3.html#hadooppresto-s3-file-systems-plugins

When I configure the parameters manually I can connect to the local s3a server, so it looks like Flink is not reading the core-site.xml file.

Please advise.
 
Thanks,
Shachar
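
For reference, a rough sketch of the setup described above; the paths and the jar version are illustrative, not copied from our actual deployment:

```shell
# Sketch of the setup described above; paths and version are illustrative.
# 1. Enable the S3 filesystem as a plugin inside the Flink image:
mkdir -p /opt/flink/plugins/s3-fs-hadoop
cp /opt/flink/opt/flink-s3-fs-hadoop-1.11.2.jar /opt/flink/plugins/s3-fs-hadoop/

# 2. Point Flink at the directory holding core-site.xml, either via the
#    environment ...
export HADOOP_CONF_DIR=/etc/hadoop/conf

# 3. ... or via flink-conf.yaml:
#    env.hadoop.conf.dir: /etc/hadoop/conf
```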

Re: adding core-site xml to flink1.11

rmetzger0
Hi Shachar,

Why do you want to use the core-site.xml to configure the file system?

Since the file systems are loaded as plugins, their initialization is customized; it might be the case that XML configuration on the classpath is intentionally ignored.
You can configure the filesystem in the flink-conf.yaml file instead.
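
For example, the fs.s3a.* settings that would otherwise live in core-site.xml can be written directly in flink-conf.yaml; Flink forwards s3.* keys to the S3 filesystem plugin. The endpoint and credentials below are placeholders:

```yaml
# Placeholder values; Flink translates s3.* options into the corresponding
# fs.s3a.* Hadoop settings for the s3-fs-hadoop plugin.
s3.endpoint: http://minio.local:9000
s3.path.style.access: true
s3.access-key: <your-access-key>
s3.secret-key: <your-secret-key>
```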


On Sun, Oct 25, 2020 at 7:56 AM Shachar Carmeli <[hidden email]> wrote:

Re: adding core-site xml to flink1.11

scarmeli
Hi,
Thank you for your reply,
We are deploying on Kubernetes, and the XML file is part of a ConfigMap common to all of our Flink jobs (or at least it was for previous versions).

This means we would need to duplicate the configuration in flink-conf.yaml for each job instead of having one common ConfigMap.
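
For illustration, the pattern we rely on looks roughly like this; the name and values are placeholders, not our real manifest:

```yaml
# Illustrative ConfigMap shared by all Flink jobs; it is mounted so that
# core-site.xml ends up in the directory HADOOP_CONF_DIR points at.
apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-common-hadoop-conf
data:
  core-site.xml: |
    <configuration>
      <property>
        <name>fs.s3a.endpoint</name>
        <value>http://minio.local:9000</value>
      </property>
    </configuration>
```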

Thanks,
Shachar

On 2020/10/27 08:48:17, Robert Metzger <[hidden email]> wrote:


Re: adding core-site xml to flink1.11

rmetzger0
Hi,

It seems that this is what you have to do for now. However, I agree it would be nice if Flink allowed reading from multiple configuration files, so that you could have a "common" configuration and a "per cluster" configuration.

I filed a JIRA ticket for a feature request: https://issues.apache.org/jira/browse/FLINK-19828
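
Until such a feature exists, one workaround is to assemble flink-conf.yaml at container start-up from a common fragment plus a per-job fragment. Since flink-conf.yaml in 1.11 is a flat "key: value" file, a simple last-wins merge is enough. A minimal sketch, with illustrative file contents and keys:

```python
# Minimal sketch: merge flat "key: value" Flink config fragments,
# with later fragments overriding earlier ones per key.
def merge_flink_conf(*fragments: str) -> str:
    merged = {}
    for fragment in fragments:
        for line in fragment.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            key, _, value = line.partition(":")
            merged[key.strip()] = value.strip()
    return "\n".join(f"{k}: {v}" for k, v in merged.items())

# Example: a shared fragment plus a per-job override of the endpoint.
common = "s3.endpoint: http://minio.local:9000\ns3.path.style.access: true"
per_job = "s3.endpoint: http://per-job-minio:9000"
print(merge_flink_conf(common, per_job))
```

In a Kubernetes setup the two fragments would come from two mounted ConfigMaps, and the merged result would be written to Flink's conf directory before the process starts.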


On Tue, Oct 27, 2020 at 10:54 AM Shachar Carmeli <[hidden email]> wrote:

Re: adding core-site xml to flink1.11

scarmeli
Thanks!

On 2020/10/27 10:42:40, Robert Metzger <[hidden email]> wrote:
