Hi everyone,
Flink 0.10.1
Hadoop 2.4.0

Fairly new to Flink here, so my question might be simple, but I couldn't find anything relevant in the docs. I am implementing a Flink client that submits jobs to a Flink YARN cluster. At the moment I am using the Client and PackagedProgram classes, which are working fine; however, the latter expects the job jar to be available locally so that the Client can submit it to the Flink YARN cluster.

Is it possible to use a job jar that is already stored in the HDFS of the cluster that Flink runs on, without first copying it to the machine where the Client runs? (A simplified sketch of my current submission code is at the end of this mail.)

Thank you,
Theofilos
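P.S. Here is the simplified sketch of what I currently do, against the 0.10.x client API as far as I can tell. Class name, host, port, and paths are placeholders:

    import java.io.File;
    import org.apache.flink.client.program.Client;
    import org.apache.flink.client.program.PackagedProgram;
    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;

    public class SubmitFromLocalJar {
        public static void main(String[] args) throws Exception {
            // The jar has to exist on the local file system of the client.
            File jarFile = new File("/local/path/job.jar");
            PackagedProgram program = new PackagedProgram(jarFile);

            // Point the client at the JobManager of the running YARN session.
            Configuration config = new Configuration();
            config.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, "jobmanager-host");
            config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, 6123);

            // Upload the jar to the cluster and run the job with parallelism 1.
            Client client = new Client(config);
            client.runBlocking(program, 1);
        }
    }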
Hi Theofilos,

I'm afraid that is currently not possible with Flink. Flink expects the user code jar to be uploaded to its BlobServer; that is what the Client does prior to submitting the job. If you wanted to circumvent the Client, you would have to upload the jar manually with the BlobClient, roughly as in the sketch at the end of this mail.

Cheers,
Till
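A rough, untested sketch of the manual upload, assuming the BlobClient API as of 0.10.x; class name, host, and port are placeholders:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import org.apache.flink.runtime.blob.BlobClient;
    import org.apache.flink.runtime.blob.BlobKey;

    public class BlobUploadSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the BlobServer that runs alongside the JobManager.
            InetSocketAddress serverAddress = new InetSocketAddress("jobmanager-host", 6124);
            BlobClient blobClient = new BlobClient(serverAddress);
            try (InputStream jar = new FileInputStream("/local/path/job.jar")) {
                // put() streams the jar to the BlobServer and returns the key
                // under which the blob is stored; that key then has to be
                // attached to the JobGraph before submission.
                BlobKey key = blobClient.put(jar);
                System.out.println("Uploaded jar under key " + key);
            } finally {
                blobClient.close();
            }
        }
    }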
Hi Till,
Thank you for the quick reply. Do you think it would be a useful feature for the Client to automatically download a job jar from HDFS in the future, or are there no plans to introduce it?

Cheers,
Theofilos
At the moment, there is no concrete plan to introduce such a feature, because it cannot be guaranteed that a distributed file system is always available. But we could maybe add it as a tool that we contribute to flink-contrib; it could stay quite small, along the lines of the sketch at the end of this mail. Do you want to take the lead?

Cheers,
Till
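The tool could fetch the jar from HDFS to a local temporary file with Hadoop's FileSystem API and then reuse the normal Client/PackagedProgram submission path. A sketch with hypothetical names and paths:

    import java.io.File;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsJarFetchSketch {
        public static void main(String[] args) throws Exception {
            // Resolve the default file system from the Hadoop configuration
            // on the classpath (core-site.xml / hdfs-site.xml).
            Configuration hadoopConf = new Configuration();
            FileSystem fs = FileSystem.get(hadoopConf);

            // Copy the job jar from HDFS to a local temp file ...
            File localJar = File.createTempFile("flink-job-", ".jar");
            fs.copyToLocalFile(new Path("hdfs:///user/flink/job.jar"),
                               new Path(localJar.getAbsolutePath()));

            // ... and hand the local copy to the usual submission path
            // (PackagedProgram + Client), as in the first mail of this thread.
            System.out.println("Jar available locally at " + localJar);
        }
    }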
Thanks for the update. Hopefully, when I get the time, I will try to contribute.

Cheers,
Theo