[gpfsug-discuss] Snapshots for backups

Fosburgh,Jonathan jfosburg at mdanderson.org
Wed May 9 13:16:03 BST 2018


Our existing environments are using Scale+Protect with tape.  Management wants us to move away from tape where possible.
We do one filesystem per cluster.  So, there will be two new clusters.
We are still finalizing the sizing, but the expectation is that both of them will be somewhere in the 3-5 PB range.

We understand that if we replicate corrupted data, the corruption will go with it.  But the same would be true for a backup (unless I am not quite following you).

The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data.

FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup.  Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup.  We abandoned tape backups on our NAS at around 600TB.
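For reference, a nightly incremental run of that kind might look roughly like the following (the filesystem and Protect server names are placeholders):

    # Nightly incremental backup of filesystem gpfs01 to a Spectrum Protect
    # server; "gpfs01" and "TSMSERVER1" are placeholder names.
    /usr/lpp/mmfs/bin/mmbackup gpfs01 -t incremental --tsm-servers TSMSERVER1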

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Andrew Beattie <abeattie at au1.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Tuesday, May 8, 2018 at 4:38 PM
To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Cc: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Snapshots for backups

Hi Jonathan,

First off, a couple of questions:

1) You're using Scale+Protect with tape today?
2) Your new filesystems will be within the same cluster?
3) What capacity are the new filesystems?

Based on the above then:

AFM-DR will give you the replication you are describing. Please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case.
Scale supports snapshots, but as mentioned, snapshots are not a backup of your filesystem: if you snapshot corrupted data, you will replicate that corruption to the DR location.
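For the snapshot side, a date-stamped global snapshot can be taken at the DR site along these lines (a sketch only; "gpfs_dr" is a placeholder device name):

    # Take, then list, a date-stamped global snapshot at the DR site.
    /usr/lpp/mmfs/bin/mmcrsnapshot gpfs_dr daily_$(date +%Y%m%d)
    /usr/lpp/mmfs/bin/mmlssnapshot gpfs_dr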

If you are going to spin up new infrastructure in a DR location, have you considered deploying an object store and using your existing Protect environment to HSM out to a disk-based object storage pool distributed over disparate geographic locations? (Obviously capacity dependent.)
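As a rough sketch of that idea, a cloud-container storage pool can be defined from the Protect administrative command line (dsmadmc) along these lines; the pool name, URL, and credentials are placeholders, and the exact parameters should be verified against your Protect level:

    define stgpool cloudpool stgtype=cloud cloudtype=s3 cloudurl=https://objectstore.example.com identity=ACCESSKEY password=SECRETKEY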

Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeattie at au1.ibm.com


----- Original message -----
From: "Fosburgh,Jonathan" <jfosburg at mdanderson.org>
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Cc:
Subject: [gpfsug-discuss] Snapshots for backups
Date: Tue, May 8, 2018 11:43 PM

We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect.  In particular, they are interested in the following:

Replicate to a remote filesystem (I assume this is best done via AFM; a rough sketch follows below).

Take periodic (probably daily) snapshots at the remote site.
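A minimal sketch of the AFM piece, assuming placeholder filesystem and fileset names and the GPFS transport (the mode and target syntax should be checked against the AFM documentation for your release):

    # Create and link a single-writer AFM fileset at the production site whose
    # home target is the DR cluster's filesystem; all names are placeholders.
    mmcrfileset gpfs_prod projects --inode-space new \
        -p afmMode=single-writer -p afmTarget=gpfs:///gpfs_dr/projects
    mmlinkfileset gpfs_prod projects -J /gpfs_prod/projects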

The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site.  Does anyone have experience with this kind of setup?  I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error.  Are there any other gotchas we should be aware of?
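On the scripting side, the cron piece could stay fairly small. A sketch, assuming date-stamped snapshot names like daily_YYYYMMDD, GNU date, and a placeholder device name (untested, illustration only):

    # crontab entry on a DR-site node: snapshot and prune daily at 01:00.
    0 1 * * * /usr/local/sbin/gpfs_snap_rotate.sh gpfs_dr 14

    #!/bin/sh
    # gpfs_snap_rotate.sh <filesystem> <days-to-keep>
    FS="$1"; KEEP="$2"
    /usr/lpp/mmfs/bin/mmcrsnapshot "$FS" "daily_$(date +%Y%m%d)"
    CUTOFF=$(date -d "-${KEEP} days" +%Y%m%d)   # GNU date extension
    # Prune snapshots older than the cutoff; relies on the daily_YYYYMMDD names.
    for s in $(/usr/lpp/mmfs/bin/mmlssnapshot "$FS" | awk '$1 ~ /^daily_/ {print $1}'); do
        [ "${s#daily_}" -lt "$CUTOFF" ] && /usr/lpp/mmfs/bin/mmdelsnapshot "$FS" "$s"
    done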
