Flink Yarn Deployment Issue - 1.7.0

Flink Yarn Deployment Issue - 1.7.0

sohimankotia
Hi,

I have installed Flink 1.7.0 (Hadoop 2.7, Scala 2.11). We are using the
Hortonworks Hadoop distribution (hdp/2.6.1.0-129/).

*The Flink lib folder looks like:*


-rw-r--r-- 1 hdfs hadoop 93184216 Nov 29 02:15 flink-dist_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop    79219 Nov 29 03:33
flink-hadoop-compatibility_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop   141881 Nov 29 02:13 flink-python_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop   489884 Nov 28 23:01 log4j-1.2.17.jar
-rw-r--r-- 1 hdfs hadoop     9931 Nov 28 23:01 slf4j-log4j12-1.7.15.jar

*My code:*

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.io.PrintingOutputFormat;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

String p = args[0];

// Wrap the mapreduce SequenceFileInputFormat so Flink can read the files.
Job job = Job.getInstance();
SequenceFileInputFormat<Text, BytesWritable> inputFormat = new SequenceFileInputFormat<>();
job.getConfiguration().setBoolean(FileInputFormat.INPUT_DIR_RECURSIVE, true);
final HadoopInputFormat<Text, BytesWritable> hInputEvents =
        HadoopInputs.readHadoopFile(inputFormat, Text.class, BytesWritable.class, p, job);

env.createInput(hInputEvents)
        .output(new PrintingOutputFormat<>());

// output() is lazy; the plan only runs once execute() is called.
env.execute("read-sequence-files");


*pom.xml*

flink.version = 1.7.0

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-java</artifactId>
      <version>${flink.version}</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-clients_2.11</artifactId>
      <version>${flink.version}</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-java_2.11</artifactId>
      <version>${flink.version}</version>
      <scope>provided</scope>
    </dependency>
   
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-hadoop-compatibility_2.11</artifactId>
      <version>${flink.version}</version>
      <scope>provided</scope>
    </dependency>
   
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-shaded-hadoop2</artifactId>
      <version>${flink.version}</version>
      <scope>provided</scope>
    </dependency>
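
(A note on the last dependency: flink-shaded-hadoop2 is the plain artifact, while the jar added to lib/ further down is the uber variant. Assuming the uber artifact is published for 1.7.0, as the jar name flink-shaded-hadoop2-uber-1.7.0.jar suggests, referencing it would look like the sketch below; keeping it provided avoids bundling a second copy of Hadoop into the job jar.)

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-shaded-hadoop2-uber</artifactId>
      <version>${flink.version}</version>
      <!-- provided: the cluster (or lib/) supplies Hadoop at runtime -->
      <scope>provided</scope>
    </dependency>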


*In the submit script:*



export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH="/usr/hdp/2.6.1.0-129/hadoop/hadoop-*":`hadoop
classpath`

echo ${HADOOP_CLASSPATH}

PARALLELISM=1
JAR_PATH="jar"
CLASS_NAME="CLASS_NAME"
NODES=1
SLOTS=1
MEMORY_PER_NODE=2048
QUEUE="default"
NAME="sample"

IN="input-file-path"


/home/hdfs/flink-1.7.0/bin/flink run -m yarn-cluster \
  -yn ${NODES} -yqu ${QUEUE} -ys ${SLOTS} -ytm ${MEMORY_PER_NODE} \
  --parallelism ${PARALLELISM} -ynm ${NAME} -c ${CLASS_NAME} ${JAR_PATH} ${IN}


*where the echoed classpath prints as:*

/usr/hdp/2.6.1.0-129/hadoop/hadoop-*:/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*:/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*::mysql-connector-java-5.1.17.jar:mysql-connector-java.jar:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf:mysql-connector-java-5.1.17.jar:mysql-connector-java.jar:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf
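
(A quick sanity check that this classpath really resolves the core Hadoop classes can be run on the submit host; org.apache.hadoop.util.VersionInfo is a standard Hadoop class whose main method prints the build version.)

# If this fails with ClassNotFoundException, HADOOP_CLASSPATH itself is broken.
java -cp "$(hadoop classpath)" org.apache.hadoop.util.VersionInfo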


But I am getting a class-not-found error for Hadoop-related classes. The error
is attached:
error.txt <http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/t894/error.txt>
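
(Two details worth noting here. The lib folder above contains no flink-shaded-hadoop2-uber jar, so this is effectively the Hadoop-free Flink binary and depends entirely on HADOOP_CLASSPATH for Hadoop classes. Also, the JVM only expands classpath wildcards of the exact form <dir>/*; a prefix glob such as /usr/hdp/2.6.1.0-129/hadoop/hadoop-* is passed through literally and silently ignored, so only the `hadoop classpath` part of the export contributes entries. A minimal sketch of an export that should be equivalent:)

export HADOOP_CONF_DIR=/etc/hadoop/conf
# `hadoop classpath` already expands to <dir>/* entries for every HDP jar
# directory, so a hand-built hadoop-* prefix entry adds nothing.
export HADOOP_CLASSPATH=$(hadoop classpath)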



*Another problem:*


If I add the shaded Hadoop uber jar to the lib folder:


-rw-r--r-- 1 hdfs hadoop 93184216 Nov 29 02:15 flink-dist_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop    79219 Nov 29 03:33
flink-hadoop-compatibility_2.11-1.7.0.jar
-rw-r--r-- 1 hdfs hadoop   141881 Nov 29 02:13 flink-python_2.11-1.7.0.jar
*-rw-r--r-- 1 hdfs hadoop 41130742 Dec  8 22:38
flink-shaded-hadoop2-uber-1.7.0.jar*
-rw-r--r-- 1 hdfs hadoop   489884 Nov 28 23:01 log4j-1.2.17.jar
-rw-r--r-- 1 hdfs hadoop     9931 Nov 28 23:01 slf4j-log4j12-1.7.15.jar

I am getting the following error, and this happens for every version greater
than 1.4.2.

java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
        at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
        at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
        at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
        at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli.getClusterDescriptor(FlinkYarnSessionCli.java:985)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli.createDescriptor(FlinkYarnSessionCli.java:273)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli.createClusterDescriptor(FlinkYarnSessionCli.java:451)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli.createClusterDescriptor(FlinkYarnSessionCli.java:96)
        at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:224)

Thanks in advance.





--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

Re: Flink Yarn Deployment Issue - 1.7.0

Jörn Franke
Can you check the Flink log files? You should find a better description of the error there.
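
For a job submitted with -m yarn-cluster, the submission log ends up in the log/ directory of the Flink distribution on the client, and the container logs can be pulled from YARN once log aggregation has run. A sketch, with <application-id> standing in for the real YARN application id:

# Client-side log of the failed submission (exact file name varies by host/user).
ls /home/hdfs/flink-1.7.0/log/
# Aggregated container logs for the YARN application.
yarn logs -applicationId <application-id>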


Re: Flink Yarn Deployment Issue - 1.7.0

sohimankotia
Hi Jörn,

There are no more logs. I am attaching the YARN aggregated logs for the first problem. For the second one, the job does not even get submitted.

- Sohi

Attachment: error.log (80K)