Hi.
I’m trying to set an external HDFS as the state backend. My OS user name is ec2-user, but the HDFS user is hadoop, and I get a permission-denied exception. I want to specify the HDFS user name. I set hadoop.job.ugi in core-site.xml and HADOOP_USER_NAME on the command line, but neither works. What should I do? Thanks.
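(For context, the HADOOP_USER_NAME attempt mentioned above would typically look like the sketch below; the exact launch command is an assumption, and the variable only takes effect if it is visible to the JVMs that actually talk to HDFS:

    # assumed invocation: export the variable before starting Flink,
    # so the JobManager/TaskManager JVMs inherit it
    export HADOOP_USER_NAME=hadoop
    ./bin/start-cluster.sh
)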
Hi!

Do you register the Hadoop config in the Flink configuration? Also, do you use Flink standalone or on YARN?

Stephan
Hi. In this case I use a standalone cluster (AWS EC2) and I want to connect to a remote HDFS machine (AWS EMR). I register the location of core-site.xml as below. Does it need other properties?

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://…:8020</value>
      </property>
      <property>
        <name>hadoop.security.authentication</name>
        <value>simple</value>
      </property>
      <property>
        <name>hadoop.security.key.provider.path</name>
        <value>kms://....:9700/kms</value>
      </property>
      <property>
        <name>hadoop.job.ugi</name>
        <value>hadoop</value>
      </property>
    </configuration>

Thanks.
Do you also set fs.hdfs.hadoopconf in flink-conf.yaml (https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#common-options)?
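(For reference, a minimal flink-conf.yaml sketch of that option; the path is a placeholder for wherever core-site.xml and hdfs-site.xml live on the Flink machines:

    # flink-conf.yaml -- point Flink at the directory containing
    # core-site.xml and hdfs-site.xml (path is a placeholder)
    fs.hdfs.hadoopconf: /etc/hadoop/conf
)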
I have the same question.
I am setting fs.hdfs.hadoopconf to the location of a Hadoop config. However, when I start a job, I get an error message that it's trying to connect to the HDFS directory as user "flink":

    Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=flink, access=EXECUTE, inode="/user/site":site:hadoop:drwx------
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:206)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:158)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3495)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3478)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:3465)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:6596)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4377)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4355)

I have seen other threads on this list where people mention configuring an impersonation user in core-site.xml, but I've been unable to determine the correct setting.
Seems like there are 3 possibilities:

1. Change the user Flink runs as to the user with HDFS rights.
2. hdfs chown the directory you're writing to (or hdfs chmod to open up access).
3. I've seen where org.apache.hadoop.security.UserGroupInformation can be used to do something like this (see the sketch after this message):

    UserGroupInformation realUser = UserGroupInformation.createRemoteUser("theuserwithhdfsrights");
    UserGroupInformation.setLoginUser(realUser);
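(A minimal self-contained sketch of option 3, run before any HDFS access. The user name, host, and path are placeholders, and the doAs wrapper is an assumed addition, not from the message above:

    import java.security.PrivilegedExceptionAction;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class HdfsAsRemoteUser {
        public static void main(String[] args) throws Exception {
            // Placeholder: the account that actually has HDFS rights.
            UserGroupInformation realUser =
                UserGroupInformation.createRemoteUser("theuserwithhdfsrights");
            UserGroupInformation.setLoginUser(realUser);

            // Run the filesystem access under that identity.
            realUser.doAs((PrivilegedExceptionAction<Void>) () -> {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder URI
                FileSystem fs = FileSystem.get(conf);
                System.out.println("exists: " + fs.exists(new Path("/user/site")));
                return null;
            });
        }
    }

Note that with simple authentication this only changes the client-side identity; the NameNode trusts whatever user name the client presents.)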