[gpfsug-discuss] Tiers

Brian Marshall mimarsh2 at vt.edu
Mon Dec 19 15:15:45 GMT 2016


We are in a very similar situation.  VT-ARC has a layer of SSD for metadata
only, another layer of SSD for "hot" data, and a layer of 8 TB HDDs for
capacity.   We are just now in the process of getting it all into production.

On this topic:

What is everyone's favorite migration policy to move data from SSD to HDD
(and vice versa)?

Do you move large/old files to HDD nightly, or wait until the fast tier hits
some capacity limit?

Do you use QoS to limit the migration from SSD to HDD, i.e. to avoid killing
the file system with migration work?
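
To make the questions concrete, here is the sort of thing we have been
sketching out (purely a sketch, not something we have settled on; the file
system name, pool names, thresholds, work directory, and IOPS cap below are
all placeholders):

   /* policy.rules: when the SSD pool passes 85% full, migrate the
      least-recently-accessed files to HDD until it is back down to 70%.
      Pool names and thresholds are illustrative only. */
   RULE 'ssd_to_hdd' MIGRATE FROM POOL 'data'
     THRESHOLD(85,70)
     WEIGHT(DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME))
     TO POOL 'capacity'

run nightly from cron with something like:

   # -g points at a temp directory all the helper nodes can reach (placeholder path)
   mmapplypolicy gpfs0 -P policy.rules -g /gpfs0/.policytmp -I yes

and, if QoS turns out to be the answer to the last question, capping the
maintenance class so the migration cannot swamp user I/O:

   # 300 IOPS is an arbitrary example value
   mmchqos gpfs0 --enable pool=data,maintenance=300IOPS,other=unlimited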


Thanks,
Brian Marshall

On Thu, Dec 15, 2016 at 4:25 PM, Buterbaugh, Kevin L <
Kevin.Buterbaugh at vanderbilt.edu> wrote:

> Hi Mark,
>
> We just use an 8 Gb FC SAN.  For the data pool we typically have a dual
> active-active controller storage array fronting two big RAID 6 LUNs and one
> RAID 1 LUN (for /home).  For the capacity pool it might be the exact same
> model of controller, but the two controllers are now fronting that whole
> 60-bay array.
>
> But our users tend to have more modest performance needs than most…
>
> Kevin
>
> On Dec 15, 2016, at 3:19 PM, Mark.Bush at siriuscom.com wrote:
>
> Kevin, out of curiosity, what type of disk does your data pool use?  SAS,
> or just some SAN-attached system?
>
> From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of
> "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
> Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: Thursday, December 15, 2016 at 2:47 PM
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Tiers
>
> Hi Mark,
>
> We’re a “traditional” university HPC center with a very untraditional
> policy on our scratch filesystem … we don’t purge it and we sell quota
> there.  Ultimately, a lot of that disk space is taken up by stuff that,
> let’s just say, isn’t exactly in active use.
>
> So what we’ve done, for example, is buy a 60-bay storage array and stuff
> it with 8 TB drives.  It wouldn’t offer good enough performance for
> actively used files, but we use GPFS policies to migrate files to the
> “capacity” pool based on file atime.  So we have 3 pools:
>
> 1.  the system pool with metadata only (on SSDs)
> 2.  the data pool, which is where actively used files are stored and which
> offers decent performance
> 3.  the capacity pool, for data which hasn’t been accessed “recently”, and
> which is on slower storage
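>
> A rough sketch of the atime-driven migration described above, using the
> pool names from the list (the 90-day cutoff is purely illustrative, not
> necessarily what we actually run):
>
>    /* anything not read in 90 days moves to the slow tier */
>    RULE 'age_out' MIGRATE FROM POOL 'data' TO POOL 'capacity'
>      WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '90' DAYS
>
>    /* new files land in the data pool by default */
>    RULE 'default' SET POOL 'data'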
>
> I would imagine others do similar things.  HTHAL…
>
> Kevin
>
>
> On Dec 15, 2016, at 2:32 PM, Mark.Bush at siriuscom.com wrote:
>
> Just curious how many of you out there deploy SS with various tiers.  It
> seems like a lot are doing the system pool with SSDs, but do you routinely
> have clusters with more than the system pool and one other tier?
>
> I know that if you are doing Archive in connection with it, that’s an
> obvious choice for another tier, but I’m struggling to see why someone
> really needs more than two tiers.
>
> I’ve read all the fine manuals on how to do such a thing, and some of the
> marketing on why you might.  I’m still scratching my head on this, though.
> In fact, my understanding is that in the ESS there aren’t any different
> pools (tiers), as it’s all NL-SAS or SSD (DF150, etc.).
>
> It does make sense to me now with TCT, where I could create an ILM policy
> to get some of my data into the cloud.
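>
> From what I have read, that would just be an external pool plus a migrate
> rule, roughly like the sketch below (the EXEC script path is a placeholder;
> I have not yet looked at what the TCT tooling actually provides there):
>
>    /* 'cloud' is an external pool; the EXEC script path is a placeholder */
>    RULE EXTERNAL POOL 'cloud' EXEC '/path/to/tct-migrate-script'
>    RULE 'to_cloud' MIGRATE FROM POOL 'capacity' TO POOL 'cloud'
>      WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '365' DAYS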
>
> But in the real world, I would like to know what y’all do in this regard.
>
>
> Thanks
>
> Mark
>
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>

