[gpfsug-discuss] snapshots & tiering in a busy filesystem

J. Eric Wonderley eric.wonderley at vt.edu
Wed Mar 22 13:41:26 GMT 2017


The filesystem I'm working with has about 100M files and 80 TB of data.

What kind of metadata latency do you observe?
I ran mmdiag --iohist, pulled out the I/Os against the metadata (md) devices, and
averaged over reads and writes.  I'm seeing ~0.28 ms on a one-off dump.  The Pure
array we have is connected over 10 Gb iSCSI and reports an average of 0.25 ms.
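
In case it's useful, the filtering/averaging was nothing fancy; something along
these lines works (the field positions and the "md" device-name match are
assumptions -- check a sample line of your own mmdiag --iohist output first,
since the column layout varies between releases):

  # average the per-I/O latency ("time ms" column) for the metadata devices;
  # assumes latency is field 6 and the device name is field 8 -- adjust to match
  # your output
  mmdiag --iohist | awk '
      $8 ~ /md/ && $6+0 > 0 { sum += $6; n++ }   # keep only I/Os on the md devices
      END { if (n) printf "avg latency: %.3f ms over %d I/Os\n", sum/n, n }'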
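
On the QoS question in my older note quoted below: since 4.2 the "maintenance"
I/O class can be throttled per pool with mmchqos.  A minimal sketch, with a
made-up filesystem name (fs0) and a made-up IOPS cap -- check the mmchqos and
mmlsqos man pages for the exact syntax on your release:

  # cap maintenance-class I/O (restripes, snapshot deletes, policy scans, ...)
  # at 1000 IOPS in every pool; leave normal file I/O unthrottled
  mmchqos fs0 --enable pool=*,maintenance=1000IOPS,other=unlimited

  # see what the throttle is actually doing
  mmlsqos fs0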

On Wed, Mar 22, 2017 at 6:47 AM, Sobey, Richard A <r.sobey at imperial.ac.uk>
wrote:

> We’re also snapshotting 4 times a day. Filesystem isn’t tremendously busy
> at all but we’re creating snaps for each fileset.
>
>
>
> [root at cesnode tmp]# mmlssnapshot gpfs | wc -l
>
> 6916
>
>
>
> *From:* gpfsug-discuss-bounces at spectrumscale.org *On Behalf Of* J. Eric Wonderley
> *Sent:* 20 March 2017 14:03
> *To:* gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> *Subject:* [gpfsug-discuss] snapshots & tiering in a busy filesystem
>
>
>
> I found this link and it didn't give me much hope for doing snapshots &
> backup in a home (busy) filesystem:
>
> http://www.spectrumscale.org/pipermail/gpfsug-discuss/2013-February/000200.html
>
> I realize this is dated, and I wondered if QoS etc. have made it a tolerable
> thing to do now.  GPFS, I think, was barely above v3.5 in mid-2013.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>