[gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1)

Hughes, Doug Douglas.Hughes at DEShawResearch.com
Fri Jan 29 19:35:24 GMT 2016


I have found that a tar pipe is much faster than rsync for this sort of thing. The fastest of these is 'star' (Schily tar); on average it is about 2x-5x faster than rsync for a bulk copy like this. After one pass with this, you can use rsync for a subsequent or final-pass sync.

e.g.
$ cd /export/gpfs1/foo
$ star -c H=xtar | (cd /export/gpfs2/foo; star -xp)
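
For the follow-up pass, plain rsync works well once the bulk of the data is already in place. A minimal sketch, assuming the same example paths as above and an rsync built with ACL/xattr support (drop -A/-X otherwise); note the trailing slashes, which make rsync compare directory contents rather than the directories themselves:

$ rsync -aHAX --delete --numeric-ids /export/gpfs1/foo/ /export/gpfs2/foo/

The --delete flag removes anything on the destination that has since disappeared from the source, so the two trees converge; run the final pass during a quiet window before cutover.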

This will not preserve filesets and quotas, though. You should be able to automate that with a little bit of awk, perl, or whatnot; a rough sketch follows.
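Something along these lines is a starting point for the filesets (untested sketch: gpfs1/gpfs2 as the old and new device names and /export/gpfs2 as the new mount point are assumptions, and the awk header-skipping may need adjusting for your mmlsfileset output):

$ mmlsfileset gpfs1 | awk 'NR>2 && $1 != "root" {print $1}' > /tmp/filesets
$ while read f; do mmcrfileset gpfs2 "$f"; mmlinkfileset gpfs2 "$f" -J /export/gpfs2/"$f"; done < /tmp/filesets
$ mmrepquota -j gpfs1    # dump the old fileset quotas for reference

The quota limits then have to be reapplied on the new filesystem with mmsetquota; check the man page for the exact syntax on your release.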


From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Damir Krstic
Sent: Friday, January 29, 2016 2:32 PM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1)

We have recently purchased an ESS appliance from IBM (GL6) with 1.5PB of storage. We are in the planning stages of implementation and would like to migrate data from our existing GPFS installation (around 300TB) to the new solution.

We were planning on adding the ESS to our existing GPFS cluster, adding its disks, and then deleting our old disks so that the data migrates over that way. However, the block size on our existing projects filesystem is 1M, and to extract as much performance as possible from the ESS we would like its filesystem created with a larger block size. Besides rsync, do you have any suggestions for how to do this without downtime and as fast as possible?

I have looked at AFM, but it does not seem to migrate quotas and filesets, so that may not be an optimal solution.