[gpfsug-discuss] GPFS Remote Cluster Co-existence with CTDB/NFS Re-exporting

Buterbaugh, Kevin L Kevin.Buterbaugh at Vanderbilt.Edu
Tue Dec 8 14:33:26 GMT 2015


Hi Richard,

We went from GPFS 3.5.0.26 (where we also had zero problems with snapshot deletion) to GPFS 4.1.0.8 this past August and immediately hit the snapshot deletion bug (it’s some sort of race condition).  It’s not pleasant … to recover, we had to unmount the affected filesystem from both clusters, which didn’t exactly make our researchers happy.
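For anyone who runs into the same thing, the recovery itself was nothing exotic – just a cluster-wide unmount and remount.  A rough sketch (the filesystem name is a placeholder, and the remote cluster may know the filesystem by a different device name):

  # On the owning cluster, unmount on every node:
  mmumount gpfs0 -a

  # On the remote cluster, from one of its nodes, unmount the
  # remotely mounted filesystem the same way:
  mmumount gpfs0 -a

  # Once everything is unmounted and things settle, remount:
  mmmount gpfs0 -a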

But the good news is that there is an efix available for it if you’re on the 4.1.0 series, and I am 99% sure that the bug has also been fixed in the last several PTFs for the 4.1.1 series.

That’s not the only bug we hit when going to 4.1.0.8, so my personal advice / opinion would be to bypass 4.1.0 and go straight to 4.1.1, or to 4.2 when it comes out.  We are planning on going to 4.2 as soon as feasible … it looks like it’s much more stable, plus it has some new features (compression!) that we are very interested in.  Again, my 2 cents worth.

Kevin

On Dec 8, 2015, at 8:14 AM, Sobey, Richard A <r.sobey at imperial.ac.uk> wrote:

This may not be at all applicable to your situation, but we’re creating thousands of snapshots per day of many independent filesets. The same script(s) call mmdelsnapshot, too. We haven’t seen any particular issues with this.

GPFS 3.5.
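To give you an idea, there is nothing clever going on – conceptually the nightly job is just a loop over the filesets, along these lines (a simplified sketch only: the filesystem and fileset names are placeholders, and the -j fileset-snapshot form is the 3.5-era syntax, which differs a little in newer releases):

  #!/bin/bash
  # Simplified sketch of a per-fileset snapshot rotation.
  FS=gpfs0                                 # placeholder filesystem name
  TODAY=$(date +%Y%m%d)
  EXPIRE=$(date -d '7 days ago' +%Y%m%d)   # retention period is illustrative

  for FSET in fileset01 fileset02 fileset03; do
      # Create today's snapshot of this independent fileset
      mmcrsnapshot $FS snap-$TODAY -j $FSET
      # Delete the snapshot that has aged out
      # (complains harmlessly if it does not exist yet)
      mmdelsnapshot $FS snap-$EXPIRE -j $FSET
  done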

I note with interest your bug report below about 4.1.0.x though – are you able to elaborate?

From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Buterbaugh, Kevin L
Sent: 07 December 2015 17:53
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Remote Cluster Co-existence with CTDB/NFS Re-exporting

Hi Stewart,

We had been running mmcrsnapshot with a ~700-node remote cluster accessing the filesystem for a couple of years without issue.

However, we haven’t been running it for a little while because there is a very serious bug in GPFS 4.1.0.x relating to snapshot *deletion*.  There is an efix for it and we are in the process of rolling that out, but will not try to resume snapshots until both clusters are fully updated.

HTH…

Kevin

On Dec 7, 2015, at 11:23 AM, Howard, Stewart Jameson <sjhoward at iu.edu> wrote:

Hi All,

Thanks to Doug and Kevin for the replies.  In answer to Kevin's question about our choice of clustering solution for NFS:  the choice was made in the hope of maintaining some simplicity by not using more than one HA solution at a time.  However, it seems that this choice may have introduced more wrinkles than it has ironed out.

An update on our situation:  we have actually uncovered another clue since my last posting.  One thing that is now known to correlate *very* closely with instability in the NFS layer is running `mmcrsnapshot`.  We had noticed that the flapping happened like clockwork at midnight every night, which happens to be the same time at which our crontab was running `mmcrsnapshot`, so, as an experiment, we moved the snapshot to 1 AM.
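(For clarity, the change was nothing more than sliding the cron schedule back an hour – something like the following, where the script path and filesystem name are placeholders:)

  # Before: fileset snapshots kicked off at midnight
  # 0 0 * * * /root/bin/nightly_snapshots.sh gpfs0
  # After: same script, moved to 1 AM as an experiment
  0 1 * * * /root/bin/nightly_snapshots.sh gpfs0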

After this change, the late-night flapping has moved to 1 AM and now happens reliably every night at that time.  I saw a post on this list from 2013 stating that `mmcrsnapshot` was known to hang up the filesystem with race conditions that result in deadlocks, and I am wondering whether that is still a problem with the `mmcrsnapshot` command.  Running the snapshots had not been an obvious problem before, but it seems to have become one since we deployed ~300 additional GPFS clients in a remote cluster configuration about a week ago.
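Next time it flaps I will try to grab the waiter output from the manager/NSD nodes around the snapshot window – something like the command below – in case that helps narrow down whether things are stuck behind the snapshot quiesce (if there is a better way to catch this in the act, I'm all ears):

  # Dump the currently waiting threads on a server node while the
  # NFS layer is flapping; long waiters during mmcrsnapshot would
  # point at the quiesce.
  mmdiag --waiters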

Can anybody comment on the safety of running `mmcrsnapshot` with a ~300-node remote cluster accessing the filesystem?

Also, I would comment that this is not the only condition under which we see instability in the NFS layer.  We continue to see intermittent instability through the day.  The creation of a snapshot is simply the one well-correlated condition that we've discovered so far.

Thanks so much to everyone for your help  :)

Stewart

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633




—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633


