[gpfsug-discuss] Disabling individual Storage Pools by themselves? How about GPFS Native Raid?

Zachary Giles zgiles at gmail.com
Fri Jun 19 22:35:59 BST 2015


OK, back on topic:
Honestly, I'm really glad you said that. I have that exact problem
too -- a researcher will be funded for xTB of space, and we are told
by the grants office that if something is purchased on a grant, it
belongs to them and should have a sticker put on it that says
"property of the govt", etc.
We decided, as an institution, to put the money forward to purchase a
large system ahead of time and, as grants come in, recover the cost
back into the system by paying off our internal "negative balance". In
this way we get the benefits of a large storage system (performance
and purchase price) but provision storage as quotas as needed. We can
even put stickers on a handful of drives in the GSS tray if that makes
them feel happy.
Could they request us to hand over their drives and take them out of
our system? Maybe. If the grants office made us do it, sure, I'd drain
some pools off and hand them over... but that will never happen,
because the storage is more valuable to them in our cluster than
sitting on their table, and I'm not going to deliver the drives full
of their data. That's their responsibility.
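
For what it's worth, the bookkeeping behind the "negative balance" is
trivial. Here's a toy sketch in Python, with made-up names, rates, and
figures for illustration only (our actual numbers obviously differ):

# Toy model of the internal "negative balance" recharge scheme above.
# All figures are hypothetical.
SYSTEM_COST = 500_000.0   # what the institution fronted for the system
COST_PER_TB = 100.0       # assumed internal recharge rate per TB funded

balance = -SYSTEM_COST    # start in the red; grants pay this down
provisioned_tb = 0

def fund_quota(grant_tb):
    """Provision grant_tb of quota and credit the recharge to the balance."""
    global balance, provisioned_tb
    provisioned_tb += grant_tb
    balance += grant_tb * COST_PER_TB
    return balance

fund_quota(50)    # researcher A funded for 50 TB
fund_quota(200)   # researcher B funded for 200 TB
print(f"Provisioned {provisioned_tb} TB, balance {balance:,.0f}")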

Is it working? Yeah, but I'm not a grants admin nor an accountant, so
I'll let them figure that out, and they seem to be OK with this model.
And yes, it's not going to work for every institution unless you can
put the money up front or do a group purchase at the end of a year.

So I 100% agree: GNR doesn't really fit the model of purchasing a few
drives at a time, and the grants thing is still a problem.



On Fri, Jun 19, 2015 at 5:08 PM, Simon Thompson (Research Computing -
IT Services) <S.J.Thompson at bham.ac.uk> wrote:
> I'm not disputing that gnr is a cool technology.
>
> Just that, as scale-out, it doesn't work for our funding model.
>
> If we go back to the original question, it was pros and cons of GNR vs RAID-type storage.
>
> My point was really that I have research groups who come along and want to buy xTB at a time. That's relatively easy with a RAID/SAN-based approach. And at times that needs to be a direct purchase from our supplier based on the grant, rather than an internal recharge.
>
> And the overhead of a smaller GSS (twin servers) is a much higher cost compared to a Storwize tray. I'm also not really advocating that it take arbitrary storage. Just saying I'd really like to see shelf-at-a-time upgrades for it (and for shelf-only additions to be supported).
>
> Simon
> ________________________________________
> From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com]
> Sent: 19 June 2015 21:08
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] Disabling individual Storage Pools by themselves? How about GPFS Native Raid?
>
> It's comparable to other "large" controller systems. Take the DDN
> 10K/12K for example: you don't just buy one more shelf of disks, or 5
> disks at a time from Walmart. You buy 5, 10, or 20 trays and populate
> enough disks to hit either your bandwidth or your storage size requirement.
> Changing from 5 to 10 to 20 generally requires support to come on-site
> and recable it, and you generally buy either half or all of the disk
> slots' worth of disks. The whole system is a building block, and you buy
> N of them to get up to 10-20 PB of storage.
> GSS is the same way: there are a few models, and you just buy a packaged one.
>
> Technically, you can violate the above constraints, but then it may
> not work well and you probably can't buy it that way.
> I'm pretty sure DDN's going to look at you funny if you try to buy a
> 12K with 30 drives... :)
>
> For 1 PB (small), I guess just buy one GSS24 with smaller drives to save
> money. Or maybe just buy two NetApp / LSI / Engenio enclosures with
> built-in RAID, plus a pair of servers, and forget GNR.
> Or maybe a GSS22? :)
>
> From http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS114-098
> "
> Current high-density storage Models 24 and 26 remain available
> Four new base configurations: Model 21s (1 2u JBOD), Model 22s (2 2u
> JBODs), Model 24 (4 2u JBODs), and Model 26 (6 2u JBODs)
> 1.2 TB, 2 TB, 3 TB, and 4 TB hard drives available
> 200 GB and 800 GB SSDs are also available
> The Model 21s is comprised of 24 SSD drives, and the Model 22s, 24s,
> 26s is comprised of SSD drives or 1.2 TB hard SAS drives
> "
>
>
> On Fri, Jun 19, 2015 at 3:17 PM, Simon Thompson (Research Computing -
> IT Services) <S.J.Thompson at bham.ac.uk> wrote:
>>
>> My understanding is that GSS and IBM ESS are sold as pre-configured systems.
>>
>> So something like two servers with a fixed number of shelves, e.g. a GSS24 comes with 232 drives.
>>
>> So whilst that might be a 1 PB system (large scale), it's essentially an appliance-type approach and not scalable in the sense that adding more storage to it isn't supported.
>>
>> So maybe it's the way it has been productised, and perhaps GNR is technically capable of having more shelves added, but if that isn't a supported route for the product then it's not something that, as a customer, I'd be able to buy.
>>
>> Simon
>>  ________________________________________
>> From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com]
>> Sent: 19 June 2015 19:45
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] Disabling individual Storage Pools by themselves? How about GPFS Native Raid?
>>
>> Oops... here is the official statement:
>>
>> GPFS Native RAID (GNR) is available on the following:
>> - IBM Power® 775 Disk Enclosure.
>> - IBM System x GPFS Storage Server (GSS). GSS is a high-capacity, high-performance storage solution that combines IBM System x servers, storage enclosures, and drives, software (including GPFS Native RAID), and networking components. GSS uses a building-block approach to create highly-scalable storage for use in a broad range of application environments.
>>
>> I wonder what specifically are the problems you guys see with the "GSS building-block" approach to ... highly-scalable...?
>>
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at gpfsug.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
>
> --
> Zach Giles
> zgiles at gmail.com
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss



-- 
Zach Giles
zgiles at gmail.com


