[gpfsug-discuss] Forcing an internal mount to complete

Jordan Robertson salut4tions at gmail.com
Sun Jun 9 04:24:47 BST 2019


Hey Bob,

Ditto on what Aaron said, it sounds as if the last fs manager might need a
nudge.  Things can get weird when a filesystem isn't mounted anywhere but a
manager is needed for an operation though, so I would keep an eye on the
ras logs of the cluster manager during the kick just to make sure the
management duty isn't bouncing (which in turn can cause waiters).
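
For what it's worth, this is roughly what I'd keep open during the kick -
just a sketch, so adjust the log path and node names for your setup:

  mmlsmgr                                # confirm which node currently has manager duty
  mmdiag --waiters                       # run on the cluster manager; look for long waiters
  tail -f /var/adm/ras/mmfs.log.latest   # watch the cluster manager's ras log for takeovers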

-Jordan

On Sat, Jun 8, 2019 at 9:16 PM Aaron Knister <aaron.knister at gmail.com>
wrote:

> Bob, I wonder if something like an “mmdf” or an “mmchmgr” would trigger
> the internal mounts to release.
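>
> Roughly along these lines - just a sketch, with <fsname> and <node>
> standing in for your actual device name and a candidate manager node:
>
>   mmdf <fsname>            # report free space, which exercises the filesystem
>   mmchmgr <fsname> <node>  # move the fs manager to another node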
>
> Sent from my iPhone
>
> On Jun 8, 2019, at 13:22, Oesterlin, Robert <Robert.Oesterlin at nuance.com>
> wrote:
>
> I have a few file systems that are showing “internal mount” on my NSD
> servers, even though they are not mounted. I’d like to force those internal
> mounts to release, without having to restart GPFS on those nodes - any
> options?
>
> Not mounted on any other (local cluster) nodes.
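>
> (For reference, I'm checking where each one shows up with something along
> the lines of:
>
>   mmlsmount <fsname> -L
>
> where <fsname> stands in for the actual device name.)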
>
> Bob Oesterlin
>
> Sr Principal Storage Engineer, Nuance
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>