[gpfsug-discuss] backup and hsm with gpfs

Ed Wahl ewahl at osc.edu
Tue Dec 10 20:34:02 GMT 2013


I have moderate experience with mmbackup, DMAPI (though NOT with custom apps), and both Tivoli HSM and the newer LTFS-EE product (which relies on dsmmigfs for a backend).

How much time does a standard dsmc backup scan take?  And an mmapplypolicy scan?

So you have both a normal backup with dsmc today and also want to push to HSM with the policy engine?  Are these separate storage destinations?  If they are the same, perhaps using mmbackup and making DR copies inside TSM is better?  Or would that affect other systems being backed up to TSM?  Or perhaps configure a storage pool for TSM that only handles the special files, so that they don't mix tapes?
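If it helps, what I mean on the TSM server side looks roughly like the sketch below -- all pool, device class, domain, policy set, and management class names are made up, and it assumes the domain/policy set already exist with a default management class.  A copy pool for the DR copies:

    define stgpool GPFSCOPY LTODEV pooltype=copy maxscratch=50
    backup stgpool GPFSTAPE GPFSCOPY

and a separate primary pool plus management class so the special files never share tapes:

    define stgpool SPECIALPOOL LTODEV maxscratch=20 collocate=filespace
    define mgmtclass GPFSDOM GPFSPS SPECIALMC
    define copygroup GPFSDOM GPFSPS SPECIALMC type=backup destination=SPECIALPOOL
    activate policyset GPFSDOM GPFSPS

On the client you would then bind just those files to the class with an include statement in dsm.sys/dsm.opt, something like "include /gpfs/fs1/special/.../* SPECIALMC".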


   mmbackup uses the standard policy engine scans, but (unfortunately) with a set # of directory and scan threads (it defaults to 24, but ends up somewhat higher on the first node of the backup), unlike a standard mmapplypolicy where you can adjust the thread levels.  The only adjustment mmbackup gives you is "-m #", which controls how many dsmc threads/processes run per node.
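For reference, a typical invocation looks something like this -- filesystem, node names, and the server stanza are placeholders for whatever your setup uses, and -m is that per-node dsmc knob:

    mmbackup /gpfs/fs1 -t incremental -N nsd01,nsd02 -m 4 --tsm-servers TSMSRV1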

Overall I find mmbackup with multi-node support to be _much_ faster than the linear dsmc scans.  It is _WAY_ too thread heavy, though, and puts insanely high IO loads on smaller GPFS clusters with mid-range to slower metadata (almost all IO wait, with load averages well over 100 on an 8-core server).

Depending on your version of both TSM and GPFS you can quickly convert from a dsmc schedule to mmbackup with snapshots using the -q or --rebuild options.  Be aware that there are some versions of GPFS that do NOT work with snapshots and mmbackup, and there are quite a few gotchas in the TSM integration.  The largest is if you normally use TSM virtualmountpoints: that is NOT supported in GPFS.  It will back up happily, but restoration is more amusing, and it creates a TSM filespace per virtual mount point.  This currently breaks the shadow DB badly and makes 'rebuilds' damn near impossible in the newest GPFS, and is just annoying in older versions.
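The conversion itself boils down to something like this -- filesystem and snapshot names are placeholders, and the first pass with -q (or a --rebuild run) is what syncs the shadow DB against what the TSM server already has:

    mmcrsnapshot fs1 mmbackupSnap
    mmbackup /gpfs/fs1 -t incremental -S mmbackupSnap -q -N nsd01,nsd02
    mmdelsnapshot fs1 mmbackupSnap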

All that being said, the latest version of GPFS and anything above roughly a TSM 6.4.x client seem to work well for us.

Ed Wahl
OSC

________________________________________
From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Stefan Fritzsche [stefan.fritzsche at slub-dresden.de]
Sent: Tuesday, December 10, 2013 5:16 AM
To: gpfsug-discuss at gpfsug.org
Subject: [gpfsug-discuss] backup and hsm with gpfs

Dear gpfsug,

We are the SLUB, the Saxon State and University Library Dresden.

Our goal is to build a long-term preservation system. We use GPFS and
TSM with HSM integration to back up, migrate and distribute the data across
two computing centers.
Currently, we are making backups with the normal TSM BA client.
Our premigration/migration runs with the GPFS policy engine to find all files
that are in the state "persistent" and match some additional rules.
After the scan, we create a file list and premigrate the data with dsmmigfs.
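Roughly, the scan and list generation look like this -- the 'user.state'
attribute name, the paths, and the file names are simplified placeholders
for what we actually run:

    cat > /tmp/premig.policy <<'EOF'
    RULE EXTERNAL LIST 'premig' EXEC ''
    RULE 'persistent' LIST 'premig'
      WHERE XATTR('user.state') = 'persistent'
    EOF
    mmapplypolicy /gpfs/fs1 -P /tmp/premig.policy -f /tmp/premig -I defer

mmapplypolicy then writes the matching files to a list (roughly
/tmp/premig.list.premig); each record carries inode and generation numbers
in front of the path, which we strip before handing the paths to the HSM
client for premigration.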

The normal backup takes a long time to scan the whole GPFS
filesystem, so we are looking for a better way to perform the backups.
I know that I can also use the policy engine to perform the backup, but my
questions are:

How do I perform backups with GPFS?

Is there anyone who uses the mmbackup command in production, or mmbackup
combined with snapshots?

Does anyone have any experience in writing an application with the GPFS API
and/or DMAPI?

Thank you for your answers and proposals.

Best regards,
Stefan


--
Stefan Fritzsche

SLUB
email: stefan.fritzsche at slub-dresden.de
Zellescher Weg 18
---------------------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


