[gpfsug-discuss] RAID type for system pool

Stephen Ulmer ulmer at ulmer.org
Wed Sep 5 21:33:55 BST 2018


> On Sep 5, 2018, at 11:34 AM, Buterbaugh, Kevin L <Kevin.Buterbaugh at Vanderbilt.Edu> wrote:
> 
> 

[…]

> Of course, if we increase the size of the inodes by a factor of 8 then we also need 8 times as much space to store those inodes.  Given that enterprise-class SSDs are still very expensive and our budget is not unlimited, we’re trying to get the best bang for the buck.
> 

Nobody has gone in this direction yet, so I’ll play devil’s advocate:

Are you sure you need enterprise-class SSDs? The only practical difference between enterprise-class SSDs and "read-intensive" SSDs is their endurance rating, measured in DWPD[1]. Read-intensive SSDs usually have a DWPD of about 1; enterprise SSDs can have a DWPD as high as 30.

So, how many times do you think you’ll actually write all of the data on the SSDs per day?
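
As a back-of-the-envelope check (every number below is made up; plug in your own), the arithmetic looks something like this in Python:

    # Back-of-envelope DWPD check. All inputs are hypothetical --
    # substitute your own drive sizes and metadata write rate.
    drive_capacity_tb = 3.84        # usable capacity per drive, TB
    drives = 8                      # drives in the metadata pool
    daily_writes_tb = 2.0           # metadata written per day, TB
    replication = 2                 # GPFS metadata replicas

    # Each logical write lands on 'replication' drives.
    total_written_tb = daily_writes_tb * replication
    dwpd_needed = total_written_tb / (drive_capacity_tb * drives)
    print(f"DWPD required: {dwpd_needed:.2f}")  # ~0.13 here, well under 1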

I don’t know how much (meta)data you’ve got, but maybe consider buying the "cheap" SSDs (which will be *much* larger for your dollar) and just use fractions of them with GPFS replication[2] or maybe some vendor’s {distributed, de-clustered} RAID. Keep some spares.
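
For what it’s worth, the replication settings are just options at file system creation time. A minimal sketch (the file system name and stanza file name are placeholders):

    # Two copies of metadata, one copy of data; max replicas set to 2
    # so either can be raised later without recreating the file system.
    mmcrfs gpfs1 -F nsd.stanza -m 2 -M 2 -r 1 -R 2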

This is probably bad advice, but the thought exercise will help you find the edges of what you actually need. :)

[1] DWPD = Drive Writes Per Day: the number of times the entire capacity of the device can be written every 24 hours, sustained over its warranty period.
[2] Okay, somebody already said to use GPFS replication. ;)

-- 
Stephen



> We have always - even back in the day when our metadata was on spinning disk and not SSD - used RAID 1 mirrors and metadata replication of two.  However, we are wondering if it might be possible to switch to RAID 5?  Specifically, what we are considering doing is buying 8 new SSDs and creating two 3+1P RAID 5 LUNs (metadata replication would stay at two).  That would give us 50% more usable space than if we configured those same 8 drives as four RAID 1 mirrors.
> 
> Unfortunately, unless I’m misunderstanding something, that would mean that the RAID stripe size and the GPFS block size could not match.  Therefore, even though we don’t need the space, would we be much better off to buy 10 SSDs and create two 4+1P RAID 5 LUNs?
> 
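
To make the mismatch concrete, here is the stripe-width arithmetic with a hypothetical 128 KiB per-drive segment size:

    # RAID 5 full-stripe width vs. GPFS block size, assuming a
    # hypothetical 128 KiB per-drive segment size.
    segment_kib = 128
    for data_drives in (3, 4):          # 3+1P vs. 4+1P
        stripe_kib = segment_kib * data_drives
        is_pow2 = (stripe_kib & (stripe_kib - 1)) == 0
        print(f"{data_drives}+1P: full stripe = {stripe_kib} KiB, "
              f"power-of-two block size match: {is_pow2}")
    # 3+1P -> 384 KiB, which no power-of-two GPFS block size matches;
    # 4+1P -> 512 KiB, which lines up with a 512 KiB block size.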
> I’ve searched the mailing list archives and scanned the DeveloperWorks wiki and even glanced at the GPFS documentation and haven’t found anything that says “bad idea, Kevin”… ;-)
> 
> Expanding on this further … if we just present those two RAID 5 LUNs to GPFS as NSDs then we can only have two NSD servers as primary for them.  So another thing we’re considering is to take those RAID 5 LUNs and further sub-divide them into a total of 8 logical volumes, each of which could be a GPFS NSD and therefore would allow us to have each of our 8 NSD servers be primary for one of them.  Even worse idea?!?  Good idea?
> 
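
If you do subdivide, the NSD stanza file might look something like this (NSD names, device paths, and server names are all placeholders). Rotating the server list gives each NSD a different primary, and putting the logical volumes from each RAID 5 LUN in their own failure group keeps the two metadata replicas off the same array:

    %nsd:
      nsd=md_lun1_lv1
      device=/dev/mapper/md_lun1_lv1
      servers=nsd1,nsd2,nsd3,nsd4,nsd5,nsd6,nsd7,nsd8
      usage=metadataOnly
      failureGroup=10
      pool=system

    %nsd:
      nsd=md_lun2_lv1
      device=/dev/mapper/md_lun2_lv1
      servers=nsd2,nsd3,nsd4,nsd5,nsd6,nsd7,nsd8,nsd1
      usage=metadataOnly
      failureGroup=20
      pool=system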
> Anybody have any better ideas???  ;-)
> 
> Oh, and currently we’re on GPFS 4.2.3-10, but are also planning on moving to GPFS 5.0.1-x before creating the new filesystem.
> 
> Thanks much…
> 
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and Education
> Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633