[gpfsug-discuss] Using AFM to migrate files.

Peter Childs p.childs at qmul.ac.uk
Thu Oct 20 11:12:36 BST 2016


Yes, but not a great deal.

Peter Childs
Research Storage Expert
ITS Research Infrastructure
Queen Mary, University of London


________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Yaron Daniel <YARD at il.ibm.com>
Sent: Thursday, October 20, 2016 7:15:54 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Using AFM to migrate files.

Hi

Do you use NFSv4 ACLs in your old cluster?


Regards



________________________________





Yaron Daniel
Server, Storage and Data Services<https://w3-03.ibm.com/services/isd/secure/client.wss/Somt?eventType=getHomePage&somtId=115> - Team Leader
Global Technology Services
94 Em Ha'Moshavot Rd
Petach Tiqva, 49527
Israel
Phone:  +972-3-916-5672
Fax:    +972-3-916-5672
Mobile: +972-52-8395593
e-mail: yard at il.ibm.com
IBM Israel <http://www.ibm.com/il/he/>








From:        Peter Childs <p.childs at qmul.ac.uk>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        10/19/2016 05:34 PM
Subject:        [gpfsug-discuss] Using AFM to migrate files.
Sent by:        gpfsug-discuss-bounces at spectrumscale.org

________________________________




We are planning to use AFM to migrate our old GPFS file store to a new GPFS file store. This will give us the advantages of Spectrum Scale (GPFS) 4.2, such as larger block and inode sizes. I would like to get some feedback on my plan before I start.

The old file store was running GPFS 3.5 with 512-byte inodes and a 1MB block size. We have now upgraded it to 4.1 and are working towards 4.2; it holds 300TB of files (385TB maximum space). The upgrade is so that we can use both the old and the new storage via multi-cluster.

We are moving to a new GPFS cluster so that we can eventually use the new protocol nodes, and also make the new storage machines the cluster managers, which should be faster and more future-proof.

The new hardware has 1PB of space and is running GPFS 4.2.

We have multiple filesets, and would like to maintain our namespace as far as possible.

My plan is to:

1. Create a read-only (RO) AFM cache on the new storage.
2a. Move the old fileset aside and replace it with a symlink to the new one.
2b. Convert the RO AFM cache to a Local Update (LU) cache pointing at a new parking area for the old files.
2c. Move user access to the new location in the cache.
3. Flush everything into the cache and disconnect.
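
For steps 1 and 2b, this is roughly what I had in mind. The filesystem, fileset and path names below are placeholders, and I'd need to double-check the exact AFM attribute syntax on our release (and whether repointing the target is better done with mmafmctl failover than with mmchfileset):

    # 1. RO cache fileset on the new cluster; home is the old filesystem,
    #    remote-mounted on the new cluster via multi-cluster (GPFS/NSD protocol)
    mmcrfileset newfs data1 -p afmmode=ro,afmtarget=gpfs:///oldfs/data1 --inode-space new
    mmlinkfileset newfs data1 -J /newfs/data1

    # 2b. later, unlink and switch the cache to local-updates mode
    mmunlinkfileset newfs data1
    mmchfileset newfs data1 -p afmmode=lu
    mmlinkfileset newfs data1 -J /newfs/data1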

I've read the docs, including the ones on migration, but it's not clear whether it's safe to move the home of a cache and update the target. It looks like it should be possible, and my tests say it works.

An alternative plan is to use an Independent Writer (IW) AFM cache to move the home directories, which are pointed to by LDAP. That way we can move users one at a time and only have to drain the HPC cluster at the end to disconnect the cache. I assume that migrating users over an Independent Writer cache is safe as long as the users don't use both sides of the cache (i.e. home and target) at once.
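
For the IW route I was picturing a single cache fileset over the old home area, something like the following (names and paths are again placeholders), with users moved one at a time by repointing their LDAP home directory from /oldfs/home/<user> to /newfs/home/<user>:

    mmcrfileset newfs home -p afmmode=iw,afmtarget=gpfs:///oldfs/home --inode-space new
    mmlinkfileset newfs home -J /newfs/home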

I'm also interested in any recipes people have for GPFS policies to preseed and flush the cache.
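
What I've sketched so far for preseeding is a list-everything policy fed into AFM prefetch, roughly as below. The option names and the list-file format vary between releases (the policy output may need munging down to plain paths), so this would need checking against the 4.2 docs:

    # listall.pol -- policy that just lists every file in the cache fileset
    RULE EXTERNAL LIST 'tofetch' EXEC ''
    RULE 'all' LIST 'tofetch'

    # generate the file list without executing anything, then feed it to AFM prefetch
    mmapplypolicy /newfs/data1 -P listall.pol -f /tmp/prefetch -I defer
    mmafmctl newfs prefetch -j data1 --list-file /tmp/prefetch.list.tofetch

For flushing any queued updates before we disconnect, mmafmctl flushPending looked like the right tool, but I'd welcome corrections.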

We plan to do all the migration using AFM over GPFS; we're not currently using NFS and have no plans to start. I believe GPFS is the faster method to perform the migration.

Any suggestions and experience of doing similar migration jobs would be helpful.

Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



