Multiple hdfs

7 messages
Multiple hdfs

Raul Valdoleiros
Hi,

I want to store my data in one HDFS cluster and the Flink checkpoints in another. I didn't find a way to do it; can anyone point me in a direction?

Thanks in advance,
Raul
Re: Multiple hdfs

Kien Truong
Hi,

If your clusters are not high-availability (HA) clusters, just use the
full path to each cluster.

For example, to refer to directory /checkpoint on cluster1, use
hdfs://namenode1_ip:port/checkpoint

Likewise, /data on cluster2 will be hdfs://namenode2_ip:port/data


If your clusters are HA clusters, then you need to modify the
hdfs-site.xml as described in section 1 of this guide:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_administration/content/distcp_between_ha_clusters.html

Then use the full paths to the clusters: hdfs://cluster1ha/checkpoint and
hdfs://cluster2ha/data

Regards,
Kien
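For the non-HA case, a minimal sketch of the Flink side might look like this in flink-conf.yaml (hostnames and the 8020 port are hypothetical placeholders; use your actual namenode RPC endpoints):

```yaml
# flink-conf.yaml -- sketch with placeholder hostnames/ports.
# Checkpoints go to cluster1, addressed by its namenode's RPC endpoint:
state.backend: filesystem
state.checkpoints.dir: hdfs://namenode1.example.com:8020/checkpoint

# Application data lives on cluster2; jobs just use full URIs, e.g.
#   env.readTextFile("hdfs://namenode2.example.com:8020/data/input")
```

Because both directories are given as fully qualified hdfs:// URIs, the checkpoint writes and the data reads resolve to different namenodes without any extra configuration.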


Re: Multiple hdfs

Raul Valdoleiros
Hi Kien,

Thanks for your reply.

My goal is to store the checkpoints in one HDFS cluster and the data in another HDFS cluster.

So Flink should be able to connect to two different HDFS clusters.

Thanks


Re: Multiple hdfs

Deepak Sharma
Wouldn't two core-site and hdfs-site XMLs need to be provided in this case, then?

Thanks
Deepak


Re: Multiple hdfs

Kien Truong

No, you only need to modify the single core-site.xml and hdfs-site.xml read by Flink, declaring both clusters in it.

Regards,

Kiên
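Concretely, the hdfs-site.xml that Flink reads can declare both HA nameservices side by side, following the linked guide (the service names, namenode IDs, and hosts below are placeholders; the cluster1ha entries are elided and would mirror the cluster2ha ones):

```xml
<!-- hdfs-site.xml read by Flink: declare both HA nameservices.
     Hostnames and service names are illustrative placeholders. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster1ha,cluster2ha</value>
  </property>
  <!-- Namenodes of the second cluster; repeat analogously for cluster1ha. -->
  <property>
    <name>dfs.ha.namenodes.cluster2ha</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster2ha.nn1</name>
    <value>nn1.cluster2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster2ha.nn2</name>
    <value>nn2.cluster2.example.com:8020</value>
  </property>
  <!-- Lets the HDFS client fail over between the two namenodes. -->
  <property>
    <name>dfs.client.failover.proxy.provider.cluster2ha</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With both nameservices declared in the one file, paths like hdfs://cluster1ha/checkpoint and hdfs://cluster2ha/data both resolve from the same client configuration.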


Re: Multiple hdfs

Stephan Ewen
I think Hadoop recommends solving such setups with a viewfs:// file system that spans both HDFS clusters, so that the two clusters appear as different paths within one file system, similar to mounting different file systems into one directory tree in Unix.
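A minimal viewfs mount table in core-site.xml might look like the sketch below (the mount-table name "myview", the mount points, and the target cluster addresses are all illustrative assumptions):

```xml
<!-- core-site.xml: a viewfs spanning two HDFS clusters.
     "myview" and the target nameservices are placeholder names. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://myview</value>
  </property>
  <!-- /data is backed by one cluster ... -->
  <property>
    <name>fs.viewfs.mounttable.myview.link./data</name>
    <value>hdfs://cluster2ha/data</value>
  </property>
  <!-- ... and /checkpoints by the other. -->
  <property>
    <name>fs.viewfs.mounttable.myview.link./checkpoints</name>
    <value>hdfs://cluster1ha/checkpoint</value>
  </property>
</configuration>
```

Clients then address viewfs://myview/data and viewfs://myview/checkpoints without needing to know which physical cluster backs each path.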


Re: Multiple hdfs

Rong Rong
+1 on viewfs, I was going to add that :-)

To add to this, viewfs can be used as a federation layer supporting multiple HDFS clusters for checkpoint/savepoint HA purposes as well.

--
Rong
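With such a mount table in place, the Flink configuration only needs the mounted paths; a sketch, assuming a hypothetical viewfs mount table named "myview" whose /checkpoints entry points at a dedicated HDFS cluster:

```yaml
# flink-conf.yaml -- sketch; assumes a viewfs mount table "myview"
# with /checkpoints mounted on a separate HDFS cluster.
state.backend: filesystem
state.checkpoints.dir: viewfs://myview/checkpoints
state.savepoints.dir: viewfs://myview/savepoints
```

If the backing cluster for a mount point later changes, only the mount table needs updating; the Flink configuration stays the same.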
