[gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Mathias Dietz MDIETZ at de.ibm.com
Thu May 3 16:14:20 BST 2018


Yes, deleting all NFS exports that point to a given file system would 
allow you to unmount it without bringing down the other file systems. 
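
A minimal sketch of that procedure, assuming a remote file system mounted 
at /gpfs/remotefs1 (the device name and export path are placeholders):

    # List the configured NFS exports to find those backed by the file system
    mmnfs export list

    # Remove each export that points into that file system
    mmnfs export remove /gpfs/remotefs1/export1

    # Once no exports reference it, unmount it on all nodes
    mmunmount remotefs1 -a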


Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Chairwoman of the Supervisory Board: Martina Koederitz; Management: Dirk Wittkopp
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294



From:   valleru at cbio.mskcc.org
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   03/05/2018 16:41
Subject:        Re: [gpfsug-discuss] Spectrum Scale CES and remote file 
system mounts
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Thanks Mathias, 
Yes, I do understand the concern: if one of the remote file systems goes 
down abruptly, the others will go down too.

However, I suppose we could bring down one of the file systems before a 
planned downtime? 
For example, by unexporting the file system from NFS/SMB before the 
downtime?

I would rather not be in a situation where I have to bring down all the 
remote file systems because of a planned downtime of one of the remote 
clusters.

Regards,
Lohit

On May 3, 2018, 7:41 AM -0400, Mathias Dietz <MDIETZ at de.ibm.com>, wrote:
Hi Lohit,

>I am thinking of using a single CES protocol cluster, with remote mounts 
from 3 storage clusters.
Technically this should work fine (assuming all 3 clusters use the same 
UIDs/GIDs). However, this has not been tested in our test lab.


>One thing to watch, be careful if your CES root is on a remote fs, as if 
that goes away, so do all CES exports.
Not only the CES root file system is a concern: the whole CES cluster will 
go down if any remote file system with NFS exports becomes unavailable.
For example, if remote cluster 1 is not available, the CES cluster will 
unmount the corresponding file system, which will lead to an NFS failure 
on all CES nodes.


Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Chairwoman of the Supervisory Board: Martina Koederitz; Management: Dirk Wittkopp
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294



From:        valleru at cbio.mskcc.org
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        01/05/2018 16:34
Subject:        Re: [gpfsug-discuss] Spectrum Scale CES and remote file 
system mounts
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Thanks Simon.
I will make sure I am careful about the CES root, and I will test 
NFS-exporting more than 2 remote file systems.

Regards,
Lohit

On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) 
<S.J.Thompson at bham.ac.uk>, wrote:
You have been able to do this for some time, though I think it has only 
recently become officially supported.

We've been exporting remote mounts since CES was added.

At some point we've had two storage clusters supplying data and at least 3 
remote file-systems exported over NFS and SMB.
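
For anyone setting this up, a minimal sketch of wiring one remote file 
system into a CES cluster might look like the following (the cluster name, 
contact nodes, device names, paths and client subnet are all hypothetical, 
and the usual mmauth key exchange between the two clusters is assumed to 
already be in place):

    # On the protocol cluster: register the owning storage cluster
    mmremotecluster add storage1.example.com -n s1node1,s1node2 -k storage1_id_rsa.pub

    # Define a local device for the remote file system and mount it everywhere
    mmremotefs add rfs1 -f fs1 -C storage1.example.com -T /gpfs/rfs1
    mmmount rfs1 -a

    # Export it over NFS and/or SMB from the CES nodes
    mmnfs export add /gpfs/rfs1 --client "10.0.0.0/24(Access_Type=RW)"
    mmsmb export add rfs1share /gpfs/rfs1

Repeat for each additional storage cluster.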

One thing to watch: be careful if your CES root is on a remote fs, because 
if that goes away, so do all CES exports. We do have CES root on a remote 
fs and it works, just be aware...
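
To check whether your CES shared root sits on a remote file system, you 
can query the relevant configuration parameter:

    # Show the configured CES shared root path
    mmlsconfig cesSharedRoot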

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org 
[gpfsug-discuss-bounces at spectrumscale.org] on behalf of 
valleru at cbio.mskcc.org [valleru at cbio.mskcc.org]
Sent: 30 April 2018 22:11
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Hello All,

I read from the link below that it is now possible to export remote 
mounts over NFS/SMB.

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm


I am thinking of using a single CES protocol cluster, with remote mounts 
from 3 storage clusters.
May I know if I will be able to export the 3 remote mounts (from the 3 
storage clusters) over NFS/SMB from a single CES protocol cluster?

I ask because of the limitations mentioned in the link below:

https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm


It says "You can configure one storage cluster and up to five protocol 
clusters (current limit)."


Regards,
Lohit
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




