[gpfsug-discuss] RAID type for system pool

Marc A Kaplan makaplan at us.ibm.com
Thu Sep 6 17:09:10 BST 2018


A somewhat smarter RAID controller will "only" need to read the old value 
of the single changed data segment and the corresponding parity segment; 
knowing the new value of the data, it can then compute the new parity 
segment value.

Not necessarily the entire stripe.  Still 2 reads and 2 writes plus access 
delay times (on spinning disks, guaranteed more than one full rotation, on 
average something like 1.7x the rotation time).
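
For illustration only (a sketch, not taken from any particular controller 
firmware): the partial-stripe update boils down to XORing the old data out 
of the parity and XORing the new data in.

    # Sketch of a RAID 5 partial-stripe parity update (illustrative only).
    # The controller reads the old data segment and the old parity segment,
    # then computes the new parity without touching the rest of the stripe.
    def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
        """new_parity = old_parity XOR old_data XOR new_data, byte by byte."""
        return bytes(p ^ od ^ nd
                     for p, od, nd in zip(old_parity, old_data, new_data))

    # Cost per single-segment update: 2 reads (old data, old parity) and
    # 2 writes (new data, new parity), independent of stripe width.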




From:   "Uwe Falke" <UWEFALKE at de.ibm.com>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   09/05/2018 04:07 PM
Subject:        Re: [gpfsug-discuss] RAID type for system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi, 

just bear in mind that a RAID controller using parity-backed redundancy 
needs to read the full stripe, modify it, and write it back (including 
parity) - the infamous read-modify-write (RMW) penalty.
Even if your users don't bulk-create inodes and only amend metadata now and 
then (create a file occasionally, say), the write of a 4k inode or the 
update of a 32k directory block causes your controller to read a full block 
(let's say you use 1MiB on MD) and write back the full block plus parity 
(on 4+1P RAID 5 at 1MiB that's 1.25MiB) - overhead two orders of magnitude 
above the payload.
SSDs have become better now, and expensive enterprise SSDs will endure 
quite a lot of full rewrites, but you need to estimate the MD change rate, 
apply the RMW overhead and see where you end up with respect to lifetime 
(and performance).
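
A rough back-of-the-envelope sketch of that overhead (assumed numbers: 
1MiB metadata block, 4+1P RAID 5, naive full-stripe read-modify-write):

    # Write amplification for a 4 KiB inode update, assuming a 1 MiB GPFS
    # metadata block on a 4+1P RAID 5 with a full-stripe read-modify-write.
    payload       = 4 * 1024                # 4 KiB of inode data actually changed
    block         = 1 * 1024 * 1024         # 1 MiB metadata block
    parity_ratio  = 5 / 4                   # 4 data segments + 1 parity segment
    bytes_read    = block                   # read the full block
    bytes_written = block * parity_ratio    # write block + parity = 1.25 MiB
    print(bytes_written / payload)          # 320.0 -> ~two orders of magnitude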
 


 
Mit freundlichen Grüßen / Kind regards

 
Dr. Uwe Falke
 
IT Specialist
High Performance Computing Services / Integrated Technology Services / 
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefalke at de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Managing directors: 
Thomas Wolter, Sven Schooß
Registered office: Ehningen / Commercial register: Amtsgericht Stuttgart, 
HRB 17122 




From:   "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   05/09/2018 17:35
Subject:        [gpfsug-discuss] RAID type for system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi All, 

We are in the process of finalizing the purchase of some new storage 
arrays (so no sales people who might be monitoring this list need contact 
me) to life-cycle some older hardware.  One of the things we are 
considering is the purchase of some new SSDs for our "/home" filesystem 
and I have a question or two related to that.

Currently, the existing home filesystem has its metadata on SSDs - two 
RAID 1 mirrors and metadata replication set to two.  However, the 
filesystem itself is old enough that it uses 512 byte inodes.  We have 
analyzed our users' files and know that if we create a new filesystem with 
4K inodes, a very significant portion of the files would then have their 
_data_ stored in the inode as well, due to the files being 3.5K or smaller 
(currently all data is on spinning HD RAID 1 mirrors).
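
(In case anyone wants to run a similar analysis, here is a minimal sketch - 
the path and the ~3.5K threshold are placeholders, and a policy scan with 
mmapplypolicy would be much faster than walking a large /home:)

    # Estimate what fraction of files would fit in-inode with 4K inodes.
    # The 3584-byte threshold is approximate; the exact in-inode capacity
    # depends on the inode's own metadata overhead.
    import os

    THRESHOLD = 3584
    small = total = 0
    for root, dirs, files in os.walk("/home"):      # placeholder path
        for name in files:
            try:
                size = os.lstat(os.path.join(root, name)).st_size
            except OSError:
                continue
            total += 1
            small += size <= THRESHOLD
    print(f"{small}/{total} files ({100.0 * small / max(total, 1):.1f}%) "
          "could have their data in the inode")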

Of course, if we increase the size of the inodes by a factor of 8 then we 
also need 8 times as much space to store those inodes.  Given that 
Enterprise class SSDs are still very expensive and our budget is not 
unlimited, we're trying to get the best bang for the buck.

We have always - even back in the day when our metadata was on spinning 
disk and not SSD - used RAID 1 mirrors and metadata replication of two. 
However, we are wondering if it might be possible to switch to RAID 5? 
Specifically, what we are considering doing is buying 8 new SSDs and 
creating two 3+1P RAID 5 LUNs (metadata replication would stay at two). 
That would give us 50% more usable space than if we configured those same 
8 drives as four RAID 1 mirrors.
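
(A quick sanity check of that arithmetic, counting raw drives only and 
ignoring the metadata replication, which stays at two either way:)

    # Usable capacity of 8 SSDs: four RAID 1 mirrors vs. two 3+1P RAID 5 LUNs.
    raid1_usable = 8 // 2        # 4 mirrors      -> 4 drives' worth of space
    raid5_usable = 2 * 3         # two 3+1P LUNs  -> 6 drives' worth of space
    print(raid5_usable / raid1_usable)   # 1.5 -> 50% more usable space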

Unfortunately, unless I'm misunderstanding something, that would mean that 
the RAID stripe size and the GPFS block size could not match.  Therefore, 
even though we don't need the space, would we be much better off to buy 10 
SSDs and create two 4+1P RAID 5 LUNs?
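
(Spelling the mismatch out with assumed, power-of-two controller segment 
sizes - actual controllers may differ:)

    # A 3+1P stripe is 3 x segment, which is never a power of two, so it can
    # never equal a power-of-two GPFS block size; a 4+1P stripe can.
    segment_kib = 256                       # assumed controller segment size
    print(3 * segment_kib)                  # 768 KiB  - no matching block size
    print(4 * segment_kib)                  # 1024 KiB - matches a 1 MiB block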

I've searched the mailing list archives and scanned the DeveloperWorks 
wiki and even glanced at the GPFS documentation and haven't found anything 
that says "bad idea, Kevin"... ;-)

Expanding on this further - if we just present those two RAID 5 LUNs to 
GPFS as NSDs then we can only have two NSD servers as primary for them.  So 
another thing we're considering is to take those RAID 5 LUNs and further 
sub-divide them into a total of 8 logical volumes, each of which could be 
a GPFS NSD and therefore would allow us to have each of our 8 NSD servers 
be primary for one of them.  Even worse idea?!?  Good idea?
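
(Purely to illustrate that layout - device paths, NSD names and server 
names below are made up - each logical volume would get its own NSD stanza 
with a rotated server preference list, something along these lines:)

    # Generate hypothetical mmcrnsd stanzas for 8 metadata NSDs, rotating the
    # server list so each of the 8 NSD servers is primary for exactly one NSD.
    # LVs 1-4 come from one RAID 5 LUN, LVs 5-8 from the other, so they land
    # in different failure groups for the metadata replication of two.
    servers = [f"nsdsrv{i:02d}" for i in range(1, 9)]     # placeholder names
    for i in range(8):
        order = servers[i:] + servers[:i]                 # server i is primary
        print(f"%nsd: device=/dev/mapper/md_lv{i+1} nsd=md_nsd{i+1} "
              f"servers={','.join(order)} usage=metadataOnly "
              f"failureGroup={1 + i // 4} pool=system")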

Anybody have any better ideas???  ;-)

Oh, and currently we're on GPFS 4.2.3-10, but are also planning on moving 
to GPFS 5.0.1-x before creating the new filesystem.

Thanks much...

--
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





