Flink on YARN: Amazon S3 wrongly used instead of HDFS


Flink on YARN: Amazon S3 wrongly used instead of HDFS

VALLEE Charles

Hi everyone,

I followed the Flink on YARN setup documentation, but when I run ./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048 while authenticated to Kerberos, I get the following error:

2016-06-16 17:46:47,760 WARN  org.apache.hadoop.util.NativeCodeLoader                       - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-06-16 17:46:48,518 INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl     - Timeline service address: https://**host**:8190/ws/v1/timeline/
2016-06-16 17:46:48,814 INFO  org.apache.flink.yarn.FlinkYarnClient                         - Using values:
2016-06-16 17:46:48,815 INFO  org.apache.flink.yarn.FlinkYarnClient                         -   TaskManager count = 2
2016-06-16 17:46:48,815 INFO  org.apache.flink.yarn.FlinkYarnClient                         -   JobManager memory = 1024
2016-06-16 17:46:48,815 INFO  org.apache.flink.yarn.FlinkYarnClient                         -   TaskManager memory = 2048
Exception in thread "main" java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.s3a.S3AFileSystem could not be instantiated
    at java.util.ServiceLoader.fail(ServiceLoader.java:224)
    at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2623)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2634)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
    at org.apache.flink.yarn.FlinkYarnClientBase.deployInternal(FlinkYarnClientBase.java:531)
    at org.apache.flink.yarn.FlinkYarnClientBase$1.run(FlinkYarnClientBase.java:342)
    at org.apache.flink.yarn.FlinkYarnClientBase$1.run(FlinkYarnClientBase.java:339)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.flink.yarn.FlinkYarnClientBase.deploy(FlinkYarnClientBase.java:339)
    at org.apache.flink.client.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:419)
    at org.apache.flink.client.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:362)
Caused by: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2532)
    at java.lang.Class.getConstructor0(Class.java:2842)
    at java.lang.Class.newInstance(Class.java:345)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
    ... 18 more
Caused by: java.lang.ClassNotFoundException: com.amazonaws.AmazonServiceException
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 23 more

I have set the following properties in my ./flink-1.0.3/conf/flink-conf.yaml:

fs.hdfs.hadoopconf: /etc/hadoop/conf/
fs.hdfs.hdfssite: /etc/hadoop/conf/hdfs-site.xml
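
From the stack trace, it looks like the failure happens while Hadoop's ServiceLoader enumerates FileSystem implementations: it finds org.apache.hadoop.fs.s3a.S3AFileSystem (presumably from a hadoop-aws jar on the classpath), but the AWS SDK class com.amazonaws.AmazonServiceException is missing, so the provider cannot even be instantiated. As a rough check (assuming the standard hadoop CLI is available on the client machine, and keeping in mind this only approximates the classpath the Flink client builds), the following shows whether the S3A connector is present without its SDK dependency:

# Look for the S3A connector and the AWS SDK on the client-side Hadoop classpath.
# If a hadoop-aws-*.jar shows up but no aws-java-sdk*.jar does, the S3AFileSystem
# service entry will fail to load exactly as in the trace above.
hadoop classpath | tr ':' '\n' | grep -E 'hadoop-aws|aws-java-sdk'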

How can I use HDFS instead of Amazon's S3?

Thanks, Charles.

http://stackoverflow.com/questions/37864969/flink-on-yarn-amazon-s3-wrongly-used-instead-of-hdfs

 





Re: Flink on YARN: Amazon S3 wrongly used instead of HDFS

rmetzger0
Hi Charles,

Sorry for the late response. I put an answer on Stack Overflow.

Regards,
Robert

