I am trying to test 1.4.0-RC3, in which the Hadoop libraries have been removed. I have created a custom Bucketer for the bucketing sink, and the class I am extending requires org.apache.hadoop.fs.Path. Since the Hadoop libraries were removed, compilation now fails with: "object hadoop is not a member of package org.apache". Do I have to include the Hadoop client libraries in my build.sbt dependencies?

Thanks
Shashank
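For reference, a custom bucketer against the 1.4 bucketing sink looks roughly like the sketch below. The class name, element type, and bucketing logic are invented for illustration; the point is that the `Bucketer` interface is expressed in terms of `org.apache.hadoop.fs.Path`, which is exactly the import that stops resolving once the Hadoop libraries are off the compile classpath.

```scala
import org.apache.flink.streaming.connectors.fs.Clock
import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer
import org.apache.hadoop.fs.Path // fails to resolve without a Hadoop dependency

// Illustrative bucketer: puts each (key, value) element into a
// sub-directory of the base path named after the element's key.
class KeyBucketer extends Bucketer[(String, String)] {
  override def getBucketPath(clock: Clock, basePath: Path, element: (String, String)): Path =
    new Path(basePath, element._1)
}
```

It would then be plugged into the sink with something like `sink.setBucketer(new KeyBucketer)`.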
Hi,

Is this a compile-time error or a runtime error? In general, yes: you have to include the Hadoop dependencies if they're not there.

Best,
Aljoscha
On Fri, Dec 8, 2017 at 6:54 PM, Aljoscha Krettek <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
On Fri, Dec 8, 2017 at 6:58 PM, shashank agarwal <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
I think hadoop-hdfs might be sufficient.
I needed both: hadoop-hdfs (for the HDFS configuration) and hadoop-common (for Path).

On Fri, Dec 8, 2017 at 7:38 PM, Aljoscha Krettek <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
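In build.sbt terms that would be something like the following sketch; the Hadoop version here is a placeholder and should match the cluster, and depending on the setup the two entries could also be marked "provided":

```scala
// Sketch only — hadoopVersion is a placeholder, pick the one matching your cluster.
val hadoopVersion = "2.7.3"

libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-common" % hadoopVersion, // org.apache.hadoop.fs.Path lives here
  "org.apache.hadoop" % "hadoop-hdfs"   % hadoopVersion  // HDFS configuration and client classes
)
```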
I see, thanks for letting us know!
I would recommend adding "flink-shaded-hadoop2". That is a bundle of all the Hadoop dependencies used by Flink.

On Fri, Dec 8, 2017 at 3:44 PM, Aljoscha Krettek <[hidden email]> wrote: …
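As an sbt line, that suggestion might look like this; note the single %, since flink-shaded-hadoop2 carries no Scala-version suffix (the same form appears in the dependency list later in this thread):

```scala
// flinkVersion is assumed to be defined elsewhere in the build.
libraryDependencies += "org.apache.flink" % "flink-shaded-hadoop2" % flinkVersion
```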
Sure, I'll try that. Thanks.

On Fri, 8 Dec 2017 at 9:18 PM, Stephan Ewen <[hidden email]> wrote: …

--
Sent from iPhone 5
Yes, it's working fine now; I'm no longer getting the compile-time error. But when I try to run this on the cluster or on YARN, I get the following runtime error:

```
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:405)
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:320)
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
	at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
	at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:277)
	at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.createCheckpointStreamFactory(StreamTask.java:787)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:247)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:694)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:682)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop File System abstraction does not support scheme 'hdfs'. Either no file system implementation exists for that scheme, or the relevant classes are missing from the classpath.
	at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:102)
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401)
	... 12 more
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2786)
	at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:99)
	... 13 more
```

While submitting the job it prints the following log, so I think it is including the Hadoop libs:

```
Using the result of 'hadoop classpath' to augment the Hadoop classpath: /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*
```

On Fri, Dec 8, 2017 at 9:24 PM, shashank agarwal <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
Hi Shashank,

It seems that HDFS is still not on the classpath. Could you quickly explain how I can reproduce the error?

Regards,
Timo

On 12/19/17, 12:38 PM, shashank agarwal wrote: …
In reply to this post by shashank734
You need to put flink-hadoop-compatibility*.jar in the lib folder of your Flink distribution, or on the classpath of your cluster nodes.
On Tue, Dec 19, 2017 at 11:28 PM, Jörn Franke <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
In reply to this post by shashank734
I am using the RocksDB state backend with an HDFS path. I have the following Flink dependencies in my sbt build:

```scala
"org.slf4j" % "slf4j-log4j12" % "1.7.21",
"org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
"org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
"org.apache.flink" %% "flink-cep-scala" % flinkVersion,
"org.apache.flink" %% "flink-connector-kafka-0.10" % flinkVersion,
"org.apache.flink" %% "flink-connector-filesystem" % flinkVersion,
"org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,
"org.apache.flink" %% "flink-connector-cassandra" % "1.3.2",
"org.apache.flink" % "flink-shaded-hadoop2" % flinkVersion,
```

When I start the Flink YARN session it works fine; it even creates the Flink checkpointing directory and copies the libs into HDFS. But when I submit the application to this YARN session it prints the following logs:

…

But the application fails continuously with the logs I sent earlier. I have tried adding flink-hadoop-compatibility*.jar as suggested by Jörn, but it's not working.

On Tue, Dec 19, 2017 at 5:08 PM, shashank agarwal <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
Hi,

Could you please list what exactly is in your submitted jar file, for example using "jar tf my-jar-file.jar"? And also, what files exactly are in your Flink lib directory?

Best,
Aljoscha
Please find attached the list of jar file contents and the flink/lib/ contents. I have removed my own class files from the jar listing. I later added flink-hadoop-compatibility_2.11-1.4.0.jar to flink/lib/, but with no success. I have also tried removing flink-shaded-hadoop2 from my project, but still no success.

Thanks
Shashank

On Wed, Dec 20, 2017 at 2:14 PM, Aljoscha Krettek <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
```
Using the result of 'hadoop classpath' to augment the Hadoop classpath: /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/flink/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.0.3-8/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
```

So I think it is adding the Hadoop libs to the classpath too, since it is able to create the checkpointing directories (configured in the flink-conf file) in HDFS.

On Wed, Dec 20, 2017 at 2:31 PM, shashank agarwal <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
Hi,

That jar file looks like it has too much stuff in it that shouldn't be there. This can explain the errors you're seeing, because of classloading conflicts. Could you try not building a fat jar and having only your own code in your jar?

Best,
Aljoscha
In that case it won't find the dependencies, because I have other dependencies as well. And what about CEP etc.? That is not part of flink-dist.

Best
Shashank

On Wed, Dec 20, 2017 at 3:16 PM, Aljoscha Krettek <[hidden email]> wrote: …

Thanks
Regards
SHASHANK AGARWAL
---
Trying to mobilize the things....
Libraries such as CEP or the Table API should have the "compile" scope and should be in both the fat and the non-fat jar. The non-fat jar should contain everything that is not in flink-dist or your lib directory.

Regards,
Timo

On 12/20/17, 3:07 PM, shashank agarwal wrote: …
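A sketch of that scoping in build.sbt, reusing the artifact names that already appear in this thread:

```scala
libraryDependencies ++= Seq(
  // Already part of flink-dist, so keep them out of the application jar:
  "org.apache.flink" %% "flink-scala"           % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
  // Not in flink-dist: default (compile) scope, so it gets packaged
  // into the application jar:
  "org.apache.flink" %% "flink-cep-scala"       % flinkVersion
)
```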