[gpfsug-discuss] Can't take snapshots while re-striping

Alex Chekholko alex at calicolabs.com
Thu Oct 18 17:12:42 BST 2018


Re-striping uses a lot of I/O, so if your goal is user-facing performance,
the re-striping is definitely hurting in the short term, and its long-term
value is questionable, depending on how much churn there is on your
filesystem.
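
If you're on Spectrum Scale 4.2 or later, one option is to leave the
restripe running but throttle it with the QoS feature so it only gets a
slice of the I/O. A rough sketch, untested here, with "home" and the IOPS
cap as placeholder values you'd tune for your arrays:

    # Cap "maintenance" commands (mmrestripefs, mmdeldisk, ...) at a
    # modest IOPS budget while leaving normal file I/O unlimited.
    mmchqos home --enable pool=*,maintenance=300IOPS,other=unlimited

    # Watch the effect and adjust the cap up or down.
    mmlsqos home

    # Drop the throttle once the restripe completes.
    mmchqos home --disable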

One way to split the difference would be to run your 'mmrestripefs -b'
from midnight to 6am over many days, so it does not conflict with your
snapshots, or during whatever other window has lower user load.
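
For example, a pair of cron entries along these lines (times and the log
path are illustrative; mmrestripefs is documented as interruptible and
restartable, but do confirm that on your release before relying on it):

    # /etc/cron.d/restripe-window -- run the rebalance only off-hours.
    # Start the rebalance shortly after midnight.
    15 0 * * * root /usr/lpp/mmfs/bin/mmrestripefs home -b >> /var/log/restripe.log 2>&1
    # Interrupt it before the morning load picks up; the 11pm snapshot
    # window stays clear either way.
    45 5 * * * root pkill -INT -f 'mmrestripefs home -b'

Separately, when the snapshot fails to quiesce, running 'mmdiag --waiters'
on the nodes involved should show which long-running operations are
holding things up.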

On Thu, Oct 18, 2018 at 8:32 AM Peter Childs <p.childs at qmul.ac.uk> wrote:

> We've just added 9 RAID volumes to our main storage (5 RAID6 arrays
> for data and 4 RAID1 arrays for metadata).
>
> We are now attempting to rebalance our data across all the volumes.
>
> We started with the metadata, doing a "mmrestripefs -r", as we'd
> changed the failure groups on our metadata disks and wanted to ensure
> all our metadata was on known-good SSDs. No issues here; we could take
> snapshots, and I even tested it. (The new SSDs went into a new failure
> group, and all the old SSDs were moved to the same failure group.)
>
> We're now doing a "mmrestripefs -b" to rebalance the data across all
> 21 volumes; however, when we attempt to take a snapshot, as we do
> every night at 11pm, it fails with:
>
> sudo /usr/lpp/mmfs/bin/mmcrsnapshot home test
> Flushing dirty data for snapshot :test...
> Quiescing all file system operations.
> Unable to quiesce all nodes; some processes are busy or holding
> required resources.
> mmcrsnapshot: Command failed. Examine previous error messages to
> determine cause.
>
> Are you meant to be able to take snapshots while re-striping or not?
>
> I know a rebalance of the data is probably unnecessary, but we'd like
> to get the best possible speed out of the system, and we also kind of
> like balance.
>
> Thanks
>
>
> --
> Peter Childs
> ITS Research Storage
> Queen Mary, University of London
>