How to configure Flink to load libs from a custom path


How to configure Flink to load libs from a custom path

cxydevelop

Hi all, I deployed Flink on K8s as a session cluster [1].
The default plugin path is /opt/flink/plugins,
the default lib path is /opt/flink/lib,
and the default usrlib path is /opt/flink/usrlib.
I wonder if it is possible to change these default paths.
For example, I don't want Flink to load libs from /opt/flink/lib; I want it to load lib files from /data/flink/lib, and I can't move /data/flink/lib to /opt/flink/lib.
So how can I configure Flink to load libs from my own path?

[1]: https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/resource-providers/standalone/kubernetes.html#session-cluster-resource-definitions



 


Re: How to configure Flink to load libs from a custom path

Guowei Ma
Hi, chenxuying
There is currently no official support for this. 
What I am curious about is why you have this requirement. In theory, you can always build your own image.
Best,
Guowei
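
Guowei's build-your-own-image suggestion could look roughly like the sketch below. This is only an illustration, not something from this thread: the base image tag and the `my-connectors` directory name are assumptions you would replace with your own.

```shell
# Sketch of the custom-image route (assumed image tag and jar directory):
# bake the extra jars into /opt/flink/lib at build time instead of
# mounting them at runtime.
mkdir -p my-connectors            # put your connector jars here
cat > Dockerfile <<'EOF'
FROM flink:1.12.2-scala_2.11
COPY my-connectors/*.jar /opt/flink/lib/
EOF
# then build and push, e.g.:
# docker build -t <your-registry>/flink-session:custom .
```

Every node then pulls the same image, so there is no need to copy jars to each host.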


On Mon, Apr 19, 2021 at 9:58 PM chenxuying <[hidden email]> wrote:

Hi all, I deployed Flink on K8s as a session cluster [1].
The default plugin path is /opt/flink/plugins,
the default lib path is /opt/flink/lib,
and the default usrlib path is /opt/flink/usrlib.
I wonder if it is possible to change these default paths.
For example, I don't want Flink to load libs from /opt/flink/lib; I want it to load lib files from /data/flink/lib, and I can't move /data/flink/lib to /opt/flink/lib.
So how can I configure Flink to load libs from my own path?



 


Re: How to configure Flink to load libs from a custom path

cxydevelop
For example, I have a custom table source/sink built as an independent jar, and my main code depends on it. But I don't want to package the custom connector jar together with the main code into one jar file; in other words, I want a thin jar, not a fat jar. So I put the custom connector jar into flink/lib before I run my job, and in fact it works. My jobmanager YAML looks like this:

->>>>>>>>
containers:
  ...
  volumeMounts:
    - mountPath: /opt/flink/conf
      name: flink-config-volume
    - mountPath: /opt/flink/lib
      name: volume-1618910657181
    - mountPath: /opt/flink/flink-uploadjar
      name: volume-1618911748381
    - mountPath: /opt/flink/plugins/oss-fs-hadoop/flink-oss-fs-hadoop-1.12.2.jar
      name: volume-1618916463815
volumes:
  - configMap:
      defaultMode: 420
      items:
        - key: flink-conf.yaml
          path: flink-conf.yaml
        - key: log4j-console.properties
          path: log4j-console.properties
      name: flink-config
    name: flink-config-volume
  - hostPath:
      path: /data/volumes/flink/volume-for-session/cxylib-common-jar
      type: ''
    name: volume-1618910657181
  - hostPath:
      path: /home/uploadjar
      type: ''
    name: volume-1618911748381
  - hostPath:
      path: /data/volumes/flink/volume-for-session/plugins/oss-fs-hadoop/flink-oss-fs-hadoop-1.12.2.jar
      type: ''
    name: volume-1618916463815
->>>>>>>>

As the YAML shows, I have to mount host-machine paths into the container. I deploy Flink in a k8s cluster with three nodes, so I have to put all my jars on all three nodes, and whenever I change some code I have to repackage the jars and copy them to the three nodes again. If Flink supported loading libs from a path I configure, I could use an Aliyun OSS PV and PVC to mount the OSS path directly, like in this other YAML:

->>>>>>>>
containers:
  ...
  volumeMounts:
    - mountPath: /data
      name: volume-trino-volume
  ...
volumes:
  - name: volume-trino-volume
    persistentVolumeClaim:
      claimName: trino-volume
  ...
->>>>>>>>

So if Flink supported a config like "flink.lib.path: /data/myself/lib", it would be very convenient. I hope this makes sense.
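
One possible workaround worth sketching here (my addition, not confirmed anywhere in this thread): some Flink versions let the launch scripts pick the lib directory up from the FLINK_LIB_DIR environment variable. Whether your version honors it is an assumption you must verify in your distribution's bin/config.sh before relying on it.

```shell
# Hedged sketch: IF the launch scripts of your Flink version honor
# FLINK_LIB_DIR (check bin/config.sh first -- this is an assumption),
# you could point it at the mounted volume instead of remounting
# /opt/flink/lib. Set it in the container spec (env:) or before startup:
export FLINK_LIB_DIR=/data/flink/lib
echo "libs would be loaded from: ${FLINK_LIB_DIR}"
```

If the variable is honored, this avoids shadowing the stock jars in /opt/flink/lib entirely.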

Sent from the Apache Flink User Mailing List archive at Nabble.com.

Re: How to configure Flink to load libs from a custom path

Arvid Heise-4
Hi,

I can't offer you a solution for your problem, but I'd like to emphasize that connectors are most of the time bundled into the user jar. A connector should only be a couple of MB and shouldn't cause too many issues.
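
To make the fat-jar route above concrete: with a Maven build, the maven-shade-plugin can bundle the connector dependency into the job jar. This is a sketch under the assumption that the job is built with Maven; the plugin version shown is just an example.

```xml
<!-- Sketch: maven-shade-plugin in the job's pom.xml bundles the
     connector dependency into the submitted jar (assumes a Maven build;
     version number is an example only). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
    </execution>
  </executions>
</plugin>
```

`mvn package` then produces one self-contained job jar to submit, with no jars to distribute to the cluster nodes.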

On Tue, Apr 20, 2021 at 4:02 PM cxydevelop <[hidden email]> wrote:
For example, I have a custom table source/sink built as an independent jar, and my main code depends on it. [...] So if Flink supported a config like "flink.lib.path: /data/myself/lib", it would be very convenient.
