flink how to access remote hdfs using namenode nameservice


flink how to access remote hdfs using namenode nameservice

wanglei2@geekplus.com.cn


I am deploying a standalone cluster with JobManager HA and need to set the HDFS address:

high-availability.storageDir: hdfs:///flink/recovery

My Hadoop is a remote cluster. I can write the address as hdfs://active-namenode-ip:8020, but that way loses NameNode HA.

Is there any way to configure it as hdfs://name-service:8020?

Thanks,
Lei




Re: flink how to access remote hdfs using namenode nameservice

Yang Wang
Do you mean to use the HDFS nameservice? You can find it under the config key
"dfs.nameservices" in hdfs-site.xml. For example: hdfs://myhdfs/flink/recovery.

Please keep in mind that you need to set the HADOOP_CONF_DIR environment variable beforehand, so that Flink's HDFS client can load the nameservice configuration.
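A nameservice URI only resolves if the HDFS client can see the HA configuration from the remote cluster. The following is a minimal sketch; the nameservice name `myhdfs`, the NameNode hostnames, and the paths are placeholders, not values from this thread — substitute the ones from your own cluster's hdfs-site.xml:

```xml
<!-- hdfs-site.xml, in the directory pointed to by HADOOP_CONF_DIR -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>myhdfs</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.myhdfs</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.myhdfs.nn1</name>
    <value>namenode1-host:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.myhdfs.nn2</name>
    <value>namenode2-host:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.myhdfs</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With that in place, flink-conf.yaml can reference the nameservice instead of a single NameNode (note that a nameservice URI takes no port; the client picks the active NameNode and fails over automatically):

```yaml
# flink-conf.yaml
high-availability.storageDir: hdfs://myhdfs/flink/recovery
```

Then export the config directory before starting the cluster:

```shell
export HADOOP_CONF_DIR=/path/to/hadoop/conf
```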


Best,
Yang

[hidden email] <[hidden email]> wrote on Thu, May 7, 2020 at 5:04 PM:
