Ah, I assumed you were running 1.3.0 (since you mentioned the “new” ES connector).

Another thing to check: if you built Flink yourself, make sure you’re not using Maven 3.3+. There are known shading problems when Flink is built with Maven 3.3 or higher.

The flink-dist jar should not contain any non-shaded Guava dependencies; could you also quickly check that?

On 7 June 2017 at 5:42:28 PM, Flavio Pompermaier ([hidden email]) wrote:
I shaded the Elasticsearch dependency [1] and now the job works. So I cannot run a job that needs Guava 18 on Flink 1.2.1...
On Wed, Jun 7, 2017 at 5:33 PM, Tzu-Li (Gordon) Tai <[hidden email]> wrote:
Hi Flavio,

Could there be another dependency in your job that requires a conflicting version (w.r.t. ES 2.4.1) of Guava?

I’ve just double-checked the flink-dist jar; there doesn’t seem to be any non-shaded Guava dependency in there, so the conflict should not have been caused by Flink.

Cheers,
Gordon
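(For reference, a minimal sketch of such a check, assuming the distribution jar is available locally; the class name and the jar path in the usage comment are only placeholders. Any entry under com/google/common/ would indicate a non-shaded Guava class.)

import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Lists any non-shaded Guava classes bundled in a jar.
// Usage: java JarGuavaCheck /path/to/flink-dist_2.10-1.2.1.jar
public class JarGuavaCheck {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile(args[0])) {
            jar.stream()
               .map(JarEntry::getName)
               .filter(name -> name.startsWith("com/google/common/"))
               .forEach(System.out::println);
        }
    }
}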
On 7 June 2017 at 4:12:04 PM, Flavio Pompermaier ([hidden email]) wrote:
Hi to all,

I'm trying to use the new ES connector to index data from Flink (with ES 2.4.1). When I try to run it from Eclipse everything is ok, but when I run it on the cluster I get the following exception:

java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.directExecutor()Ljava/util/concurrent/Executor;
    at org.elasticsearch.threadpool.ThreadPool.<clinit>(ThreadPool.java:192)
    at org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:131)

In my fat jar there are the classes of Guava 18 (ES requires that version). Flink runs on CDH 5.9 (which uses Guava 11); I think the flink-dist jar contains the Guava 11 classes, while in flink-hadoop-compatibility the Guava 18 dependencies are shaded.

How can I make the job run successfully on the cluster?

Best,
Flavio
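(A minimal probe one could run inside the job to see which Guava actually wins on the cluster classpath; this is only a sketch, and the class name is illustrative. MoreExecutors.directExecutor() exists only since Guava 18, so with Guava 11 on the classpath the second call fails with the same NoSuchMethodError as above.)

import com.google.common.util.concurrent.MoreExecutors;

// Prints which jar the Guava MoreExecutors class was loaded from at runtime.
public class GuavaProbe {
    public static void main(String[] args) {
        System.out.println(MoreExecutors.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
        // directExecutor() was introduced in Guava 18; with Guava 11 on the
        // classpath this line throws java.lang.NoSuchMethodError.
        System.out.println(MoreExecutors.directExecutor());
    }
}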