[gpfsug-discuss] mmauth/mmremotecluster wonkyness?

Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] aaron.s.knister at nasa.gov
Thu Nov 30 16:35:04 GMT 2017


It’s my understanding and experience that all member nodes of two clusters that are multi-clustered must be able to make connections to any and all nodes in both clusters (and, given enough time and activity, eventually will). Even if you don’t designate the 2 protocol nodes as contact nodes, I would expect to see connections from remote clusters to the protocol nodes just because of the nature of the beast. If you don’t want remote nodes making connections to the protocol nodes, then I believe you would need to put the protocol nodes in their own cluster. CES/CNFS hasn’t always supported that, but I think it is now supported, at least with NFS.
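
As a rough way to see this in practice (a sketch, not specific to any particular setup), you can look at the daemon's connections on one of the protocol nodes, and at which remote clusters have been granted access locally:

    # on a protocol node: list the GPFS daemon's TCP connections;
    # nodes from remote clusters show up here once they have connected
    mmdiag --network

    # list the remote clusters that have been authorized to access local filesystems
    mmauth show all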

On November 30, 2017 at 11:28:03 EST, valdis.kletnieks at vt.edu wrote:
We have a 10-node cluster running GPFS 4.2.2.3, where 8 nodes are GPFS contact
nodes for 2 filesystems, and 2 are protocol nodes doing NFS exports of the
filesystems.
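
(For context, something along these lines, assuming 4.2's CES tooling, shows
which nodes are plain cluster members versus CES protocol nodes:

    # list all cluster member nodes
    mmlscluster
    # list the CES (protocol) nodes
    mmlscluster --ces
)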

But we see some nodes in remote clusters trying to make GPFS connections to
the 2 protocol nodes anyhow.

My reading of the manpages is that the remote cluster is responsible
for setting '-n contactNodes' when they do the 'mmremotecluster add',
and that there's no way at the local end to sanity-check or enforce that,
or to fail/flag connections to unintended non-contact nodes if the remote
admin forgets or botches the -n.
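
Roughly, the remote side's setup looks something like this (the cluster and
node names here are made up); nothing on the owning cluster's end validates
the -n list:

    # on the remote cluster: register the owning cluster, pointing only
    # at the intended contact nodes
    mmremotecluster add owning.cluster.example.com \
        -n contact1.example.com,contact2.example.com \
        -k owning_cluster_key.pub

    # verify which contact nodes were actually recorded
    mmremotecluster show all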

Is that actually correct? If so, is it time for an RFE?