.so linkage error in Cluster


.so linkage error in Cluster

Debaditya Roy
Hello users,

I am having a problem while running my Flink program in a cluster. It fails with an error saying it is unable to find a .so file in a tmp directory.

Caused by: java.lang.UnsatisfiedLinkError: no jniopencv_core in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:654)
    at org.bytedeco.javacpp.Loader.load(Loader.java:492)
    at org.bytedeco.javacpp.Loader.load(Loader.java:409)
    at org.bytedeco.javacpp.opencv_core.<clinit>(opencv_core.java:10)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.bytedeco.javacpp.Loader.load(Loader.java:464)
    at org.bytedeco.javacpp.Loader.load(Loader.java:409)
    at org.bytedeco.javacpp.helper.opencv_core$AbstractArray.<clinit>(opencv_core.java:109)
    at loc.video.FlinkStreamSource.run(FlinkStreamSource.java:95)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:225)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/javacpp5400264496782/libjniopencv_core.so: libgomp.so.1: cannot open shared object file: No such file or directory
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
    at java.lang.Runtime.load0(Runtime.java:809)
    at java.lang.System.load(System.java:1086)
    at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:637)


I searched for the temp directory, and on one of the nodes this directory and the .jar file were present. Is the file required to be present on all the nodes? If yes, is there any way to control this? The tmp directory and the .so file get extracted at runtime without any external manipulation.


Thanks in advance.

Regards,
Debaditya

Re: .so linkage error in Cluster

Ufuk Celebi
Yes, the BlobCache on each TaskManager node should fetch it from the
JobManager. How are you packaging your JAR?

On Tue, Jul 26, 2016 at 4:32 PM, Debaditya Roy <[hidden email]> wrote:


Re: .so linkage error in Cluster

Debaditya Roy
Hello,

I am using the JAR builder from the IntelliJ IDE (the mvn build was causing problems). The resulting JAR executed successfully locally, but on the remote cluster it causes this problem.

Warm Regards,
Debaditya

On Tue, Jul 26, 2016 at 4:36 PM, Ufuk Celebi <[hidden email]> wrote:


Re: .so linkage error in Cluster

Ufuk Celebi
What error message do you get from Maven?

On Tue, Jul 26, 2016 at 4:39 PM, Debaditya Roy <[hidden email]> wrote:


Re: .so linkage error in Cluster

Ufuk Celebi
Out of curiosity I've tried this locally by adding the following
dependencies to my Maven project:

<dependency>
   <groupId>org.bytedeco</groupId>
   <artifactId>javacpp</artifactId>
   <version>1.2.2</version>
</dependency>
<dependency>
   <groupId>org.bytedeco.javacpp-presets</groupId>
   <artifactId>opencv</artifactId>
   <version>3.1.0-1.2</version>
</dependency>

With this, running mvn clean package works as expected.



On Tue, Jul 26, 2016 at 7:09 PM, Ufuk Celebi <[hidden email]> wrote:


Re: .so linkage error in Cluster

Debaditya Roy
Hi,

As for the error: this is what I get when I run the .jar produced by mvn clean package:

java.lang.NoClassDefFoundError: org/bytedeco/javacpp/opencv_core$Mat
    at loc.video.Job.main(Job.java:29)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:505)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
    at org.apache.flink.client.program.Client.runBlocking(Client.java:248)
    at org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:866)
    at org.apache.flink.client.CliFrontend.run(CliFrontend.java:333)
    at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1192)
    at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1243)
Caused by: java.lang.ClassNotFoundException: org.bytedeco.javacpp.opencv_core$Mat
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 12 more

My pom.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>org.video</groupId>
<artifactId>OCR</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.build.outputEncoding>UTF-8</project.build.outputEncoding>

<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<start-class>loc.video.MainApp</start-class>

<javacv.version>1.2</javacv.version>
<opencv.version>3.1.0-${javacv.version}</opencv.version>
<ffmpeg.version>3.0.2-${javacv.version}</ffmpeg.version>

<flink.version>1.0.3</flink.version>


</properties>

<prerequisites>
<maven>3.1.0</maven>
</prerequisites>

<dependencies>
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>javacv</artifactId>
<version>${javacv.version}</version>
</dependency>

<dependency>
<groupId>org.bytedeco.javacpp-presets</groupId>
<artifactId>opencv</artifactId>
<version>${opencv.version}</version>
</dependency>

<dependency>
<groupId>org.bytedeco.javacpp-presets</groupId>
<artifactId>ffmpeg</artifactId>
<version>${ffmpeg.version}</version>
</dependency>

<dependency>
<groupId>org.bytedeco.javacpp-presets</groupId>
<artifactId>tesseract</artifactId>
<version>3.04.01-1.2</version>
</dependency>

<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>19.0</version>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
<version>1.4.192</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>${flink.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>${flink.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>

<!-- Fix Javadoc -->
<dependency>
<groupId>javax.interceptor</groupId>
<artifactId>javax.interceptor-api</artifactId>
<version>1.2</version>
<scope>provided</scope>
</dependency>

</dependencies>


<profiles>
<profile>
<!-- Generate the frontend -->
<!-- when run on travis or appveyor -->
<id>generate-frontend</id>
<activation>
<activeByDefault>false</activeByDefault>
<property>
<name>env.CI</name>
<!--Appveyor sets CI = True, but Travis sets it to true-->
<!--<value>true</value>-->
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>com.github.eirslett</groupId>
<artifactId>frontend-maven-plugin</artifactId>
<version>1.0</version>
<executions>
<execution>
<id>install node and npm</id>
<goals>
<goal>install-node-and-npm</goal>
</goals>
<configuration>
<nodeVersion>v4.4.2</nodeVersion>
<npmVersion>3.8.5</npmVersion>
</configuration>
</execution>
<execution>
<id>npm install</id>
<goals>
<goal>npm</goal>
</goals>
</execution>
<execution>
<id>npm build</id>
<goals>
<goal>npm</goal>
</goals>
<configuration>
<arguments>run build</arguments>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>


</profiles>

<build>
<plugins>
<plugin>
<inherited>true</inherited>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>1.4.1</version>
<executions>
<execution>
<id>enforce-maven-3.1</id>
<goals>
<goal>enforce</goal>
</goals>
<configuration>
<rules>
<requireMavenVersion>
<version>3.1.0</version>
</requireMavenVersion>
<requireJavaVersion>
<version>1.7.0</version>
</requireJavaVersion>
</rules>
<fail>true</fail>
</configuration>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-project-info-reports-plugin</artifactId>
<version>2.9</version>

<configuration>
<dependencyLocationsEnabled>false</dependencyLocationsEnabled>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>

</plugins>

<extensions>
<extension>
<groupId>kr.motd.maven</groupId>
<artifactId>os-maven-plugin</artifactId>
<version>1.5.0.Final</version>
</extension>
</extensions>
</build>


<reporting>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
<version>3.0.3</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.10.3</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-pmd-plugin</artifactId>
<version>3.5</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-report-plugin</artifactId>
<version>2.19</version>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>taglist-maven-plugin</artifactId>
<version>2.4</version>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jdepend-maven-plugin</artifactId>
<version>2.0</version>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>javancss-maven-plugin</artifactId>
<version>2.1</version>
</plugin>
</plugins>
</reporting>

</project>
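The NoClassDefFoundError above suggests the jar produced by this pom does not bundle its dependencies, so org.bytedeco.javacpp classes are missing at runtime. A minimal sketch of a fix, assuming the standard maven-shade-plugin (the plugin version and placement are assumptions, not taken from this thread), would be to add a shaded fat-jar build to the existing <plugins> section:

```xml
<!-- Hypothetical addition to the <build><plugins> section of the pom above:
     bundle all dependencies (including the JavaCPP presets jars) into a
     single self-contained fat jar during "mvn clean package".
     The version 2.4.3 is an assumption; pick one compatible with your Maven. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Note that shading only fixes the missing-class problem; the native .so files inside the JavaCPP jars are still extracted to a tmp directory at runtime on each node.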

Since this was giving an error, I used the .jar file created by the IDE and ran it by specifying the classpath; that worked fine locally but failed in the cluster.
Thanks in advance.
Warm Regards,
Debaditya


On Tue, Jul 26, 2016 at 8:06 PM, Ufuk Celebi <[hidden email]> wrote:


Re: .so linkage error in Cluster

Debaditya Roy
Hello users,

I rebuilt the project, and on running mvn clean package I now get two jar files. The fat jar runs properly in the local JVM; however, when executing in the cluster I get the following error:

Source: Custom Source -> Flat Map -> Sink: Unnamed(1/1) switched to FAILED
java.lang.UnsatisfiedLinkError: no jniopencv_core in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:726)
    at org.bytedeco.javacpp.Loader.load(Loader.java:501)
    at org.bytedeco.javacpp.Loader.load(Loader.java:418)
    at org.bytedeco.javacpp.opencv_core.<clinit>(opencv_core.java:10)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.bytedeco.javacpp.Loader.load(Loader.java:473)
    at org.bytedeco.javacpp.Loader.load(Loader.java:418)
    at org.bytedeco.javacpp.helper.opencv_core$AbstractArray.<clinit>(opencv_core.java:109)
    at org.myorg.quickstart.FlinkStreamSource.run(FlinkStreamSource.java:33)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:225)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/javacpp5798629402792/libjniopencv_core.so: libgomp.so.1: cannot open shared object file: No such file or directory
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
    at java.lang.Runtime.load0(Runtime.java:809)
    at java.lang.System.load(System.java:1086)
    at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:709)
    ... 14 more

07/27/2016 14:38:42    Job execution switched to status FAILING.
java.lang.UnsatisfiedLinkError: no jniopencv_core in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:726)
    at org.bytedeco.javacpp.Loader.load(Loader.java:501)
    at org.bytedeco.javacpp.Loader.load(Loader.java:418)
    at org.bytedeco.javacpp.opencv_core.<clinit>(opencv_core.java:10)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.bytedeco.javacpp.Loader.load(Loader.java:473)
    at org.bytedeco.javacpp.Loader.load(Loader.java:418)
    at org.bytedeco.javacpp.helper.opencv_core$AbstractArray.<clinit>(opencv_core.java:109)
    at org.myorg.quickstart.FlinkStreamSource.run(FlinkStreamSource.java:33)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:225)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/javacpp5798629402792/libjniopencv_core.so: libgomp.so.1: cannot open shared object file: No such file or directory
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
    at java.lang.Runtime.load0(Runtime.java:809)
    at java.lang.System.load(System.java:1086)
    at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:709)
    ... 14 more
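The second UnsatisfiedLinkError points at the actual root cause: the extracted libjniopencv_core.so depends on libgomp.so.1 (the GNU OpenMP runtime), which the dynamic linker cannot find on the TaskManager node. A quick diagnostic sketch (the /tmp/javacpp… path comes from the error message and changes on every run; the install command and package name are assumptions for Debian/Ubuntu-like distros):

```shell
#!/bin/sh
# Sketch: list the shared-library dependencies of a binary that the dynamic
# linker cannot resolve on this node. ldd prints lines of the form
# "libgomp.so.1 => not found" for each missing dependency.
missing_deps() {
    ldd "$1" | awk '/not found/ { print $1 }'
}

# Usage on each TaskManager node (path is the one from the error message;
# the javacpp* directory name changes per run):
#   missing_deps /tmp/javacpp5798629402792/libjniopencv_core.so
#
# If libgomp.so.1 is reported, install the GNU OpenMP runtime on that node,
# e.g. (package name is an assumption for your distribution):
#   sudo apt-get install libgomp1
```

Since the library has to be resolvable on every node where a task may run, the runtime would need to be installed cluster-wide, not just on the node where the jar happens to be cached.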

Do I need to do some additional tweaking to get this to run in the cluster?

Regards,
Debaditya

On Tue, Jul 26, 2016 at 8:56 PM, Debaditya Roy <[hidden email]> wrote:
Hi,

For the error I get this when I run the .jar made by mvn clean package

java.lang.NoClassDefFoundError: org/bytedeco/javacpp/opencv_core$Mat
    at loc.video.Job.main(Job.java:29)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:505)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
    at org.apache.flink.client.program.Client.runBlocking(Client.java:248)
    at org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:866)
    at org.apache.flink.client.CliFrontend.run(CliFrontend.java:333)
    at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1192)
    at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1243)
Caused by: java.lang.ClassNotFoundException: org.bytedeco.javacpp.opencv_core$Mat
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 12 more
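A NoClassDefFoundError like this usually means the JavaCPP presets jars did not end up on the classpath of the submitted jar. Note also that the presets ship their native binaries in platform-specific classifier jars; since the pom below already registers the os-maven-plugin extension, the classifier can be filled in automatically. A sketch (untested, version numbers taken from the pom below):

```xml
<!-- Hypothetical addition: pull in the Linux/OS-specific native binaries
     for the opencv preset via the classifier detected by os-maven-plugin. -->
<dependency>
    <groupId>org.bytedeco.javacpp-presets</groupId>
    <artifactId>opencv</artifactId>
    <version>3.1.0-1.2</version>
    <classifier>${os.detected.classifier}</classifier>
</dependency>
```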

My pom.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>org.video</groupId>
<artifactId>OCR</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.build.outputEncoding>UTF-8</project.build.outputEncoding>

<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<start-class>loc.video.MainApp</start-class>

<javacv.version>1.2</javacv.version>
<opencv.version>3.1.0-${javacv.version}</opencv.version>
<ffmpeg.version>3.0.2-${javacv.version}</ffmpeg.version>

<flink.version>1.0.3</flink.version>


</properties>

<prerequisites>
<maven>3.1.0</maven>
</prerequisites>

<dependencies>
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>javacv</artifactId>
<version>${javacv.version}</version>
</dependency>

<dependency>
<groupId>org.bytedeco.javacpp-presets</groupId>
<artifactId>opencv</artifactId>
<version>${opencv.version}</version>
</dependency>

<dependency>
<groupId>org.bytedeco.javacpp-presets</groupId>
<artifactId>ffmpeg</artifactId>
<version>${ffmpeg.version}</version>
</dependency>

<dependency>
<groupId>org.bytedeco.javacpp-presets</groupId>
<artifactId>tesseract</artifactId>
<version>3.04.01-1.2</version>
</dependency>

<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>19.0</version>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
<version>1.4.192</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>${flink.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>

<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>${flink.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>

<!-- Fix Javadoc -->
<dependency>
<groupId>javax.interceptor</groupId>
<artifactId>javax.interceptor-api</artifactId>
<version>1.2</version>
<scope>provided</scope>
</dependency>

</dependencies>


<profiles>
<profile>
<!-- Generate the frontend -->
<!-- when run on travis or appveyor -->
<id>generate-frontend</id>
<activation>
<activeByDefault>false</activeByDefault>
<property>
<name>env.CI</name>
<!--Appveyor sets CI = True, but Travis sets it to true-->
<!--<value>true</value>-->
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>com.github.eirslett</groupId>
<artifactId>frontend-maven-plugin</artifactId>
<version>1.0</version>
<executions>
<execution>
<id>install node and npm</id>
<goals>
<goal>install-node-and-npm</goal>
</goals>
<configuration>
<nodeVersion>v4.4.2</nodeVersion>
<npmVersion>3.8.5</npmVersion>
</configuration>
</execution>
<execution>
<id>npm install</id>
<goals>
<goal>npm</goal>
</goals>
</execution>
<execution>
<id>npm build</id>
<goals>
<goal>npm</goal>
</goals>
<configuration>
<arguments>run build</arguments>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>


</profiles>

<build>
<plugins>
<plugin>
<inherited>true</inherited>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>1.4.1</version>
<executions>
<execution>
<id>enforce-maven-3.1</id>
<goals>
<goal>enforce</goal>
</goals>
<configuration>
<rules>
<requireMavenVersion>
<version>3.1.0</version>
</requireMavenVersion>
<requireJavaVersion>
<version>1.7.0</version>
</requireJavaVersion>
</rules>
<fail>true</fail>
</configuration>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-project-info-reports-plugin</artifactId>
<version>2.9</version>

<configuration>
<dependencyLocationsEnabled>false</dependencyLocationsEnabled>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>

</plugins>

<extensions>
<extension>
<groupId>kr.motd.maven</groupId>
<artifactId>os-maven-plugin</artifactId>
<version>1.5.0.Final</version>
</extension>
</extensions>
</build>


<reporting>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>findbugs-maven-plugin</artifactId>
<version>3.0.3</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.10.3</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-pmd-plugin</artifactId>
<version>3.5</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-report-plugin</artifactId>
<version>2.19</version>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>taglist-maven-plugin</artifactId>
<version>2.4</version>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jdepend-maven-plugin</artifactId>
<version>2.0</version>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>javancss-maven-plugin</artifactId>
<version>2.1</version>
</plugin>
</plugins>
</reporting>

</project>

Since Maven was giving an error, I used the .jar file created by the IDE and ran it by specifying the classpath, which worked fine locally but failed in the cluster.
Thanks in advance.
Warm Regards,
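One common way to avoid depending on an IDE-built jar is to have Maven produce a single "fat" jar that bundles the JavaCPP/OpenCV dependencies, so the BlobCache ships everything to the TaskManagers. A sketch of a maven-shade-plugin configuration for the pom above (plugin version and main class are assumptions; adjust to your project), to be added under <build><plugins>:

```xml
<!-- Sketch: bundle all dependencies into one jar at package time. -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- Set the entry point in the jar manifest
                         (assumed main class from the stack trace). -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>loc.video.Job</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
```

Note that a fat jar only fixes the missing-class problem; the missing libgomp.so.1 is a system library and still has to be installed on each node.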
Debaditya


On Tue, Jul 26, 2016 at 8:06 PM, Ufuk Celebi <[hidden email]> wrote:
Out of curiosity I've tried this locally by adding the following
dependencies to my Maven project:

<dependency>
   <groupId>org.bytedeco</groupId>
   <artifactId>javacpp</artifactId>
   <version>1.2.2</version>
</dependency>
<dependency>
   <groupId>org.bytedeco.javacpp-presets</groupId>
   <artifactId>opencv</artifactId>
   <version>3.1.0-1.2</version>
</dependency>

With this, running mvn clean package works as expected.



On Tue, Jul 26, 2016 at 7:09 PM, Ufuk Celebi <[hidden email]> wrote:
> What error message do you get from Maven?
>
> On Tue, Jul 26, 2016 at 4:39 PM, Debaditya Roy <[hidden email]> wrote:
>> Hello,
>>
>> I am using the jar builder from IntelliJ IDE (the mvn one was causing
>> problems). After that I executed it successfully locally. But remotely it
>> is causing problems.
>>
>> Warm Regards,
>> Debaditya
>>
>> On Tue, Jul 26, 2016 at 4:36 PM, Ufuk Celebi <[hidden email]> wrote:
>>>
>>> Yes, the BlobCache on each TaskManager node should fetch it from the
>>> JobManager. How are you packaging your JAR?
>>>
>>> On Tue, Jul 26, 2016 at 4:32 PM, Debaditya Roy <[hidden email]>
>>> wrote:
>>> > Hello users,
>>> >
>>> > I am having a problem while running my flink program in a cluster. It
>>> > gives
>>> > me an error that it is unable to find an .so file in a tmp directory.
>>> >
>>> > Caused by: java.lang.UnsatisfiedLinkError: no jniopencv_core in
>>> > java.library.path
>>> >     at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
>>> >     at java.lang.Runtime.loadLibrary0(Runtime.java:870)
>>> >     at java.lang.System.loadLibrary(System.java:1122)
>>> >     at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:654)
>>> >     at org.bytedeco.javacpp.Loader.load(Loader.java:492)
>>> >     at org.bytedeco.javacpp.Loader.load(Loader.java:409)
>>> >     at org.bytedeco.javacpp.opencv_core.<clinit>(opencv_core.java:10)
>>> >     at java.lang.Class.forName0(Native Method)
>>> >     at java.lang.Class.forName(Class.java:348)
>>> >     at org.bytedeco.javacpp.Loader.load(Loader.java:464)
>>> >     at org.bytedeco.javacpp.Loader.load(Loader.java:409)
>>> >     at
>>> >
>>> > org.bytedeco.javacpp.helper.opencv_core$AbstractArray.<clinit>(opencv_core.java:109)
>>> >     at loc.video.FlinkStreamSource.run(FlinkStreamSource.java:95)
>>> >     at
>>> >
>>> > org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
>>> >     at
>>> >
>>> > org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
>>> >     at
>>> >
>>> > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:225)
>>> >     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>>> >     at java.lang.Thread.run(Thread.java:745)
>>> > Caused by: java.lang.UnsatisfiedLinkError:
>>> > /tmp/javacpp5400264496782/libjniopencv_core.so: libgomp.so.1: cannot
>>> > open
>>> > shared object file: No such file or directory
>>> >     at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>>> >     at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
>>> >     at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
>>> >     at java.lang.Runtime.load0(Runtime.java:809)
>>> >     at java.lang.System.load(System.java:1086)
>>> >     at org.bytedeco.javacpp.Loader.loadLibrary(Loader.java:637)
>>> >
>>> >
>>> > I searched for the temp directory and in one of the nodes this directory
>>> > and
>>> > the .jar file was present. Is it required to have the file across all
>>> > the
>>> > nodes? If yes is there any way to control it? Since this tmp directory
>>> > and
>>> > the .so file gets extracted during the runtime without any external
>>> > manipulation.
>>> >
>>> >
>>> > Thanks in advance.
>>> >
>>> > Regards,
>>> > Debaditya
>>
>>