How to pass hdp.version to Flink on YARN



Jagat Singh
Hi,

I am running the example Flink program on Pivotal HDP:

./bin/flink run -m yarn-cluster -yn 2 ./examples/WordCount.jar

I am getting the error below.

How do I pass stack.name and stack.version to the Flink program? This is similar to what we pass to Spark as hdp.version.

Example:

spark.driver.extraJavaOptions            -Dhdp.version=2.3.0.0-2557
spark.yarn.am.extraJavaOptions           -Dhdp.version=2.3.0.0-2557
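
(For reference, these options can also be supplied per job on the spark-submit command line; the application class and jar below are placeholders, not part of my actual setup:)

spark-submit --master yarn \
  --conf spark.driver.extraJavaOptions=-Dhdp.version=2.3.0.0-2557 \
  --conf spark.yarn.am.extraJavaOptions=-Dhdp.version=2.3.0.0-2557 \
  --class com.example.WordCount ./wordcount-example.jar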
Thanks

Exception message: /grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_000001/launch_container.sh: line 26: $PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*: bad substitution

Stack trace: ExitCodeException exitCode=1: /grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_000001/launch_container.sh: line 26: $PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*: bad substitution



Re: How to pass hdp.version to Flink on YARN

rmetzger0
Hi,

In Flink, the configuration parameter for passing custom JVM options is "env.java.opts". I would recommend putting it into conf/flink-conf.yaml like this:

env.java.opts: "-Dhdp.version=2.3.0.0-2557"
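
If editing the config file is not convenient, it may also be possible to pass the same setting at submission time through the YARN client's dynamic properties. This is only a sketch, assuming the -yD option of your Flink version accepts arbitrary configuration keys such as env.java.opts:

./bin/flink run -m yarn-cluster -yn 2 \
  -yD env.java.opts="-Dhdp.version=2.3.0.0-2557" \
  ./examples/WordCount.jar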

Please let me know if this works.
Maybe you are the first user running Flink on Pivotal HDP and some things differ from other Hadoop distributions.

Regards,
Robert





Re: How to pass hdp.version to Flink on YARN

Jagat Singh
Hello Robert,

I added the following:

env.java.opts: "-Dstack.name=phd -Dstack.version=3.0.0.0-249"

I still get the same error.

Is there any config that allows passing special Java opts to the actual YARN containers?

Thanks,

Jagat Singh

 




Re: How to pass hdp.version to Flink on YARN

Maximilian Michels
Hi Jagat,

I think your issue here is not the JVM options. You are missing shell
environment variables during the container launch. Adding those to the
user's .bashrc or .profile should fix the problem.
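
A minimal sketch of what such exports might look like; the variable names and paths here are assumptions and depend on how Pivotal HD is laid out on your nodes:

# added to ~/.bashrc or ~/.profile on the nodes launching containers
# (paths are examples; adjust to your install)
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_HOME=/usr/phd/current/hadoop-client
export HADOOP_YARN_HOME=/usr/phd/current/hadoop-yarn-client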

Best regards,
Max
