[gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Bryan Banister bbanister at jumptrading.com
Thu May 3 16:15:24 BST 2018


Hi Lohit,

Please see slides 13 and 14 in the presentation that DDN gave at the GPFS UG in the UK this April:  http://files.gpfsug.org/presentations/2018/London/2-5_GPFSUG_London_2018_VCC_DDN_Overheads.pdf

Multicluster setups with shared file access have a high probability of “MetaNode Flapping”
• “MetaNode role transfer occurs when the same files from a filesystem are accessed from two or more “client” clusters via a MultiCluster relationship.”

Cheers,
-Bryan

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org
Sent: Thursday, May 03, 2018 9:46 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Note: External Email
________________________________
Thanks Bryan,
Could you explain a bit more about the metadata updates issue?
I am not sure I understand why metadata updates would fail between file systems/between clusters, since every remote cluster has its own metadata pool/servers.
I would expect the metadata updates for each remote file system to go to that file system's own remote cluster/metadata servers, and not to depend on the metadata servers of the other remote clusters.
Please correct me if I am wrong.
As of now, our workload is to read and update files over NFS/SMB from different remote servers; it is not a heavy parallel read/write workload across different servers.

Thanks,
Lohit

On May 3, 2018, 10:25 AM -0400, Bryan Banister <bbanister at jumptrading.com>, wrote:

Hi Lohit,

Just another thought: you also have to consider that metadata updates between nodes in the CES cluster and nodes in the other clusters will have to fail, because nodes in separate remote clusters do not communicate directly for metadata updates. Whether that is an issue depends on your workload.

Cheers,
-Bryan

From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz
Sent: Thursday, May 03, 2018 6:41 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Note: External Email
________________________________
Hi Lohit,

>I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters.
Technically this should work fine (assuming all 3 clusters use the same UIDs/GIDs). However, this has not been tested in our test lab.
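
For reference, the usual multicluster steps to make one storage cluster's file system available on the protocol cluster look roughly like the following sketch (cluster names, contact nodes, key file paths, device names and mount points are only placeholders, not taken from your environment); the same steps would be repeated for each of the 3 storage clusters:

# On each cluster (once per cluster): create the cluster key and require authentication
mmauth genkey new
mmauth update . -l AUTHONLY

# On the storage (owning) cluster: authorize the protocol cluster and grant it
# access to the file system it should be able to mount
mmauth add ces.example.com -k /tmp/ces.example.com.pub
mmauth grant ces.example.com -f fs1

# On the CES protocol (accessing) cluster: register the remote cluster and the
# remote file system, then mount it on all nodes
mmremotecluster add storage1.example.com -n nsd1.storage1,nsd2.storage1 -k /tmp/storage1.example.com.pub
mmremotefs add rfs1 -f fs1 -C storage1.example.com -T /gpfs/rfs1
mmmount rfs1 -a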


>One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports.
Not only the CES root file system is a concern: the whole CES cluster will go down if any remote file system with NFS exports becomes unavailable.
E.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system, which will lead to an NFS failure on all CES nodes.
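
Just as an illustration (no specific names implied), you can check where the CES shared root lives and which file systems are actually exported, which is what determines how exposed you are to a remote file system going away:

# CES shared root location (if this sits on a remote file system,
# its availability is critical for the whole CES cluster)
mmlsconfig cesSharedRoot

# File systems/paths currently exported over NFS and SMB
mmnfs export list
mmsmb export list

# State of the CES nodes and protocol services
mmces state show -a
mmces service list -a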


Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Chairwoman of the Supervisory Board: Martina Koederitz, Management: Dirk Wittkopp / Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294



From:        valleru at cbio.mskcc.org
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        01/05/2018 16:34
Subject:        Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts
Sent by:        gpfsug-discuss-bounces at spectrumscale.org
________________________________



Thanks Simon.
I will make sure I am careful about the CES root and test NFS exporting more than two remote file systems.

Regards,
Lohit

On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) <S.J.Thompson at bham.ac.uk>, wrote:
You have been able to do this for some time, though I think it's only just supported.

We've been exporting remote mounts since CES was added.

At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB.

One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware...

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org]
Sent: 30 April 2018 22:11
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Hello All,

I read from the link below that it is now possible to export remote mounts over NFS/SMB.

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm

I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters.
Will I be able to export the 3 remote mounts (from the 3 storage clusters) over NFS/SMB from a single CES protocol cluster?
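
Just to make the intent concrete (paths, export names and client options below are only placeholders), once the 3 remote file systems are mounted on the protocol cluster I would expect to export them roughly like this:

# Assuming the remote file systems are mounted at /gpfs/remote1, /gpfs/remote2 and /gpfs/remote3
mmnfs export add /gpfs/remote1 --client "10.10.0.0/24(Access_Type=RW,Squash=no_root_squash)"
mmnfs export add /gpfs/remote2 --client "10.10.0.0/24(Access_Type=RW,Squash=no_root_squash)"
mmsmb export add remote3 /gpfs/remote3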

I ask because, according to the limitations mentioned in the link below:

https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm

It says “You can configure one storage cluster and up to five protocol clusters (current limit).”


Regards,
Lohit
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


________________________________

Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



More information about the gpfsug-discuss mailing list