Hi,

When I launch a Flink Application Cluster I keep getting the message "Log file environment variable 'log.file' is not set." I use console logging via log4j, and I read logs via yarn logs -applicationId ...

What is the purpose of the log.file property? What will this file contain, and on which host should I search for the log? Does this property understand HDFS paths?

Regards,
Vitaliy
Hi Vitaliy
The 'log.file' property would be configured automatically if you have uploaded 'logback.xml' or 'log4j.properties' [1].
The file contains the logs of the job manager or the task manager, as decided by the component itself. And as you can see, this is only a local file path; I am afraid it cannot understand HDFS paths.
Best
Yun Tang
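To make the mechanism concrete: the log4j configuration shipped with Flink resolves the file appender's target from this property. A sketch along the lines of the Flink 1.10 default log4j.properties (the exact keys may differ in your distribution, so treat this as illustrative, not authoritative):

```
# Sketch of how a default-style log4j.properties consumes the property;
# ${log.file} is substituted from the JVM system property of the same name.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

If nothing sets the log.file system property, the appender has no target, which is why the warning appears.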
Hello Yun,

I see this error reported by:

org.apache.flink.runtime.webmonitor.WebMonitorUtils - JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path', default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.

I wonder where the JobManager log files are stored when running on a YARN cluster? Are these logs the same as those I get via yarn logs -applicationId?

Vitaliy

On Sun, Mar 29, 2020 at 8:24 PM Yun Tang <[hidden email]> wrote:
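For reference, on a standard YARN setup with log aggregation enabled, per-container logs can be pulled from the command line. A sketch (the application and container ids below are placeholders, and the -containerId option may additionally require -nodeAddress on older Hadoop versions):

```
# Fetch all aggregated logs for the application (placeholder id)
yarn logs -applicationId application_1585000000000_0001

# Restrict output to a single container, e.g. the one running the JobManager
yarn logs -applicationId application_1585000000000_0001 \
          -containerId container_1585000000000_0001_01_000001
```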
Hey,

Which Flink version are you using? Where exactly are you seeing the "Log file environment variable 'log.file' is not set." message? Can you post some context around it? (Is this shown on the command line? What are the arguments? Is it shown in a file?)

Usually, the "log.file" property is used to pass the name of the log file into the log4j configuration. If this property is not set, I have to assume that you are using modified or custom scripts, or you are executing Flink in an environment that fails to set the property.

When running Flink on YARN, the JobManager logs are stored on the machine running the JobManager. The logs accessible through "yarn logs" are the same as you would see in the JobManager interface.

Best,
Robert

On Sun, Mar 29, 2020 at 11:22 PM Vitaliy Semochkin <[hidden email]> wrote:
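To illustrate the point about how the property is normally set: Flink's startup scripts hand the log location to the JVM as a plain -D system property, which log4j then resolves as ${log.file}. A minimal sketch, not the actual script contents; the path and entrypoint name here are hypothetical:

```shell
# Hypothetical log path; on YARN this would sit inside the container's log dir.
log="/tmp/flink-jobmanager.log"

# The startup scripts prepend something like -Dlog.file=... to the java
# invocation; log4j resolves ${log.file} from this system property.
launch_args="-Dlog.file=${log} -Dlog4j.configuration=file:conf/log4j.properties"

# Print the command that would be run (placeholder entrypoint class).
echo "java ${launch_args} <entrypoint-class>"
```

A custom launcher that skips this -D flag reproduces exactly the "log.file is not set" warning from this thread.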
Hello Robert,

Thank you for the quick response! Indeed the logs say the hadoop version is 2.4.1; this is probably because of ... How can I make 1.10 work with my current hadoop version?

Regarding the version Flink reports in the logs, it is 1.7.0:

org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting YarnJobClusterEntrypoint (Version: 1.7.0

while I'm using 1.10, and this is an application cluster (everything is bundled and we don't have a session cluster running). Here is the whole dependency list:

mvn dependency:tree | grep flink | cut -d'-' -f2-
org.apache.flink:flink-yarn_2.11:jar:1.10.0:runtime
org.apache.flink:flink-clients_2.11:jar:1.10.0:compile
org.apache.flink:flink-optimizer_2.11:jar:1.10.0:compile
org.apache.flink:flink-shaded-hadoop-2:jar:2.4.1-9.0:runtime
org.apache.flink:force-shading:jar:1.10.0:compile
org.apache.flink:flink-runtime_2.11:jar:1.10.0:runtime
org.apache.flink:flink-core:jar:1.10.0:compile
org.apache.flink:flink-annotations:jar:1.10.0:compile
org.apache.flink:flink-metrics-core:jar:1.10.0:compile
org.apache.flink:flink-java:jar:1.10.0:compile
org.apache.flink:flink-queryable-state-client-java:jar:1.10.0:runtime
org.apache.flink:flink-hadoop-fs:jar:1.10.0:runtime
org.apache.flink:flink-shaded-netty:jar:4.1.39.Final-9.0:compile
org.apache.flink:flink-shaded-guava:jar:18.0-9.0:compile
org.apache.flink:flink-shaded-asm-7:jar:7.1-9.0:compile
org.apache.flink:flink-shaded-jackson:jar:2.10.1-9.0:compile
org.apache.flink:flink-jdbc_2.11:jar:1.10.0:compile
org.apache.flink:flink-hbase_2.11:jar:1.10.0:compile
org.apache.flink:flink-runtime-web_2.11:jar:1.10.0:compile

As you can see, all Flink-related libs are 1.10. Can you please tell me which class in Flink identifies the version? (I'll try to debug it locally.)

Regards,
Vitaliy

On Mon, Mar 30, 2020 at 5:10 PM Robert Metzger <[hidden email]> wrote:
Hey Vitaliy,

Check this documentation on how to use Flink with Hadoop: https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/hadoop.html

For your setup, I would recommend referencing the Hadoop jars from your Hadoop vendor by setting

export HADOOP_CLASSPATH=`hadoop classpath`

Is it possible that the files on your cluster are Flink 1.7.0 files, while your Flink job maven project has Flink 1.10 dependencies? On your server, what version do the flink jar files in lib/ have? If you are launching Flink like this ... it will use the files in lib/ for starting Flink.

Best,
Robert

On Mon, Mar 30, 2020 at 5:39 PM Vitaliy Semochkin <[hidden email]> wrote:
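In practice, the vendor-classpath approach mentioned above looks roughly like this (a sketch; the job jar path and run options are illustrative, not from this thread):

```
# Make the vendor's Hadoop jars visible to Flink instead of bundling a
# flink-shaded-hadoop jar; `hadoop classpath` prints the cluster's jar list.
export HADOOP_CLASSPATH=$(hadoop classpath)

# Then launch from the Flink distribution directory as usual, e.g.:
./bin/flink run -m yarn-cluster ./my-job.jar
```

This keeps the Hadoop version used at runtime in sync with the cluster, avoiding the version mismatch seen in the logs.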
Hello Robert,

Thank you for the help. The "tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider" error was the root of the issue.

I managed to fix the "hadoop version is 2.4.1" issue. It arose because the previous Flink version I was using referred to org.apache.flink:flink-shaded-hadoop2, which has since become org.apache.flink:flink-shaded-hadoop-2; hence I had an incorrect jar "org.apache.flink:flink-shaded-hadoop-2:jar:2.4.1-9.0" in my runtime. I excluded it from the classpath and added the correct version org.apache.flink:flink-shaded-hadoop-2:jar:2.7.5-10.0, which helped to solve:

Caused by: java.lang.IllegalAccessError: tried to access method org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object; from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider

The job now runs without an error.

As for the message saying "YarnJobClusterEntrypoint (Version: 1.7.0 ...": it turned out that Flink takes its version from META-INF/MANIFEST.MF, which, in case you use a shaded uber jar, will be the version of your application and not of Flink.

Best Regards,
Vitaliy

On Tue, Mar 31, 2020 at 9:46 AM Robert Metzger <[hidden email]> wrote:
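The dependency fix described above corresponds to pom.xml entries along these lines (a sketch assembled from the coordinates in this thread; where the exclusion goes depends on which of your dependencies pulled in the stale jar):

```xml
<!-- Pin the shaded Hadoop artifact matching the cluster (2.7.5-10.0, as in
     this thread) so the stale 2.4.1-9.0 jar no longer wins on the classpath. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-shaded-hadoop-2</artifactId>
  <version>2.7.5-10.0</version>
  <scope>runtime</scope>
</dependency>
```

Running mvn dependency:tree again afterwards, as earlier in the thread, is a quick way to confirm only one flink-shaded-hadoop-2 version remains.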