[gpfsug-discuss] metadata replication question

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Sun Jan 3 22:18:24 GMT 2016


Yes, there is extended SAN in place. The failure groups for the storage are different in each DC, so we guarantee that data replication keeps one copy per DC.
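
For reference, that layout is easy to confirm with mmlsdisk; the failure group column should show one group per DC for the data NSDs (the file system name below is just a placeholder):

  # Long listing of all disks in the file system, including failure group assignment
  mmlsdisk gpfs0 -L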

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Barry Evans [bevans at pixitmedia.com]
Sent: 03 January 2016 22:10
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] metadata replication question

Can all 4 NSD servers see all existing Storwize arrays across both DCs?

Cheers,
Barry


On 03/01/2016 21:56, Simon Thompson (Research Computing - IT Services) wrote:

I currently have 4 NSD servers in a cluster, two pairs in two data centres. Data and metadata replication is currently set to 2, with metadata sitting on SAS drives in a Storwize array. I also have a VM floating between the two data centres to guarantee that, in the event of a split brain, only one site retains quorum.
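
For context, the current replication settings are what mmlsfs reports (gpfs0 is a placeholder name):

  # Default and maximum metadata/data replication for the file system
  mmlsfs gpfs0 -m -M -r -R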

I'd like to add some SSD for metadata.

Should I:

Add RAID 1 SSD to the Storwize?

Add local SSD to the NSD servers?

If I did the second, should I:
 add SSD to each NSD server (not RAID 1), set each in a different failure group and make metadata replication 4?
 add SSD to each NSD server as RAID 1, and use the same failure group for each data centre pair?
 add SSD to each NSD server (not RAID 1), and use the same failure group for each data centre pair?

Or something else entirely?

What I want to survive is a split data centre situation or the failure of a single NSD server at any point...

Thoughts? Comments?

I'm thinking the first of the NSD-local options uses 4 writes, as does the second, but each NSD server then has a local copy of the metadata, and if an SSD fails it should be able to get it from its local partner in the pair anyway (with readReplicaPolicy=local)?
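
To make that first option concrete, I'm imagining something along these lines (device, server and NSD names are made up; one metadata-only SSD per NSD server, each in its own failure group):

  # ssd_nsds.stanza: one metadata-only SSD NSD per server, each in its own failure group
  %nsd: device=/dev/sdx nsd=ssd_nsd1 servers=nsd1 usage=metadataOnly failureGroup=101 pool=system
  %nsd: device=/dev/sdx nsd=ssd_nsd2 servers=nsd2 usage=metadataOnly failureGroup=102 pool=system
  %nsd: device=/dev/sdx nsd=ssd_nsd3 servers=nsd3 usage=metadataOnly failureGroup=201 pool=system
  %nsd: device=/dev/sdx nsd=ssd_nsd4 servers=nsd4 usage=metadataOnly failureGroup=202 pool=system

  # Create the NSDs from the stanza file and prefer the local replica when reading
  mmcrnsd -F ssd_nsds.stanza
  mmchconfig readReplicaPolicy=local
  # Raising the default metadata replication would then be mmchfs -m, bounded by the -M value chosen at file system creation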

I'd like a cost-competitive solution that gives better performance than the current SAS drives.

I was also thinking I might add an SSD to each NSD server for the system.log pool, for HAWC, as well...
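
My rough understanding of the HAWC knobs (and I may be misremembering the exact flags, so treat this as a sketch) is that the log disks go into the system.log pool and HAWC is switched on with a non-zero write cache threshold:

  # Dedicated log SSD per server in the system.log pool (I believe the usage must be metadataOnly, but check the docs)
  %nsd: device=/dev/sdy nsd=log_nsd1 servers=nsd1 usage=metadataOnly failureGroup=101 pool=system.log

  # Enable HAWC on the file system
  mmchfs gpfs0 --write-cache-threshold 64K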

Thanks

Simon
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


--

Barry Evans
Technical Director & Co-Founder
Pixit Media
Mobile: +44 (0)7950 666 248
http://www.pixitmedia.com



