[gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem

Tomer Perry TOMP at il.ibm.com
Fri Mar 15 18:12:14 GMT 2019


I also vote for using callbacks (as that's the "right" way for GPFS to 
report an event) instead of polling for status.
One exception is the unmount case: when using bind mounts (which are 
quite common for "namespace virtualization"), one should use the 
preunmount user exit instead of a callback (a callback won't work in 
these cases). For more info, see 
https://www.ibm.com/developerworks/community/wikis/home?lang=en-gb#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/User%20Exits
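
For anyone who hasn't used user exits before: they are scripts that GPFS 
invokes directly from /var/mmfs/etc/ on each node. Below is a minimal 
placeholder sketch of a preunmount exit; the exact script name and the 
arguments GPFS passes are assumptions here, so verify them against the 
wiki page above before relying on this:

    #!/bin/bash
    # Assumed location: /var/mmfs/etc/preunmount (must be executable).
    # GPFS is assumed to pass details of the pending unmount as
    # positional arguments; log them all rather than guess the order.
    logger "GPFS preunmount user exit invoked with: $*"
    exit 0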


Regards,

Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: tomp at il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel:    +1 720 3422758
Israel Tel:      +972 3 9188625
Mobile:         +972 52 2554625




From:   Simon Thompson <S.J.Thompson at bham.ac.uk>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   15/03/2019 10:53
Subject:        Re: [gpfsug-discuss] Systemd configuration to wait for 
mount of SS filesystem
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



+1 for using callbacks. We use the Mount and preUnmount callbacks for 
various things, e.g. before unmount we shut down all the VMs running on 
the host; i.e. we start and stop other things cleanly when the FS 
arrives and before it goes away.
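
For illustration, a minimal sketch of such a preUnmount callback script, 
assuming libvirt-managed VMs; the script path, file system name, and 
virsh calls below are placeholders rather than our exact setup:

    #!/bin/bash
    # Registered with something like:
    #   mmaddcallback stopVMs --command /usr/local/sbin/stop-vms.sh \
    #       --event preUnmount --parms "%fsName"
    # $1 is the file system name passed in via --parms "%fsName".
    [ "$1" = "gpfs01" ] || exit 0       # only act for the FS the VMs use
    for vm in $(virsh list --name); do  # every running guest
        virsh shutdown "$vm"            # ask each one to stop cleanly
    done
    exit 0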
 
Simon
 
From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of 
"stockf at us.ibm.com" <stockf at us.ibm.com>
Reply-To: "gpfsug-discuss at spectrumscale.org" 
<gpfsug-discuss at spectrumscale.org>
Date: Thursday, 14 March 2019 at 21:04
To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Cc: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Systemd configuration to wait for mount of 
SS filesystem
 
But if all you are waiting for is the mount to occur, the invocation of 
the callback informs you that the file system has been mounted. You 
would be free to start a command in the background, with appropriate 
protection, and exit the callback script. Also, making the callback 
script run asynchronously means GPFS will not wait for it to complete, 
which greatly mitigates any potential problems with GPFS commands, if 
you need to run them from the script.
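
For example, here is a minimal sketch of a mount callback script that 
backgrounds the real work and returns immediately; the paths and the 
file system name are placeholders:

    #!/bin/bash
    # $1 is %fsName, passed at registration time via --parms "%fsName".
    if [ "$1" = "dev01" ]; then
        # Detach stdio and background the job so this script exits at
        # once; GPFS never waits on the long-running work, even if it
        # touches GPFS files or runs GPFS commands.
        nohup /usr/local/sbin/start-app.sh </dev/null \
            >>/var/log/start-app.log 2>&1 &
    fi
    exit 0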

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com
 
 
----- Original message -----
From: "Stephen R Buchanan" <stephen.buchanan at us.ibm.com>
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Cc:
Subject: Re: [gpfsug-discuss] Systemd configuration to wait for mount of 
SS filesystem
Date: Thu, Mar 14, 2019 4:52 PM
 
The man page for mmaddcallback specifically cautions against running 
"commands that involve GPFS files" because it "may cause unexpected and 
undesired results, including loss of file system availability." While I 
can imagine some kind of Rube Goldberg-esque chain of commands that I 
could run locally that would trigger the GPFS-filesystem-based commands I 
really want, I don't think mmaddcallback is the droid I'm looking for.
 
Stephen R. Wall Buchanan
Sr. IT Specialist
IBM Data & AI North America Government Expert Labs
+1 (571) 299-4601
stephen.buchanan at us.ibm.com
 
 
----- Original message -----
From: "Frederick Stock" <stockf at us.ibm.com>
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Systemd configuration to wait for mount of 
SS filesystem
Date: Thu, Mar 14, 2019 4:17 PM
 
It is not systemd-based, but you might want to look at the user 
callback feature in GPFS (mmaddcallback). There is a file system mount 
callback you could register.
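
For example, registering an asynchronous callback for the mount event 
might look like the following; the identifier and script path are 
placeholders, and the mmaddcallback man page has the full option list:

    mmaddcallback startAppOnMount \
        --command /usr/local/sbin/start-app.sh \
        --event mount \
        --async \
        --parms "%eventName %fsName"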

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com
 
 
----- Original message -----
From: "Stephen R Buchanan" <stephen.buchanan at us.ibm.com>
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Cc:
Subject: [gpfsug-discuss] Systemd configuration to wait for mount of SS 
filesystem
Date: Thu, Mar 14, 2019 3:58 PM
 
I searched the list archives with no obvious results.
 
I have an application that runs completely from a Spectrum Scale 
filesystem, and I would like it to start automatically on boot, 
obviously after the SS filesystem mounts, on multiple nodes. There are 
groups of nodes for dev, test, and production (separate clusters), and 
the target filesystems differ between them (and are named differently, 
so the paths are different), but all nodes have an identical soft link 
from root (/) that points to the environment-specific path (see below 
for details).
 
My first effort, before doing any research, was simply to use an 
After=gpfs.service directive. Anyone who has tried this will know that 
gpfs.service reports "started" far in advance of (and independently of) 
when the filesystems are actually mounted.
 
What I want is to be able to deploy a systemd service-unit and path-unit 
pair of files (as close to identical as possible across the 
environments) that wait for /appbin/builds/ to be available (i.e., for 
/[dev|tst|prd]01/ to be mounted) and then start the application. The 
problem is that systemd.path units, specifically the 'PathExists=' 
directive, don't follow symbolic links, so I would need to customize the 
path unit file for each environment with the full (real) path. There are 
other differences between the environments that I believe I can handle 
with an EnvironmentFile directive, but that file would come from the SS 
filesystem (so as to be a single reference point), so it can't help with 
the path unit.
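
For reference, the pair I have in mind looks roughly like this, using 
the dev paths from below; the unit names, app.env, and start.sh are 
placeholders, and the PathExists= line is the one piece that would have 
to differ per environment:

    # /etc/systemd/system/appbin.path
    [Unit]
    Description=Wait for the Spectrum Scale build tree

    [Path]
    # PathExists= does not follow symbolic links, hence the real path
    PathExists=/dev01/app-bin/user-tree/builds
    Unit=app.service

    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/app.service
    [Unit]
    Description=Application served from the Spectrum Scale filesystem

    [Service]
    Type=oneshot
    # single reference point on the SS filesystem, as described above
    EnvironmentFile=/appbin/builds/app.env
    ExecStart=/appbin/builds/start.sh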
 
Any suggestions are welcome and appreciated.
 
dev: (path names have been slightly generalized, but the structure is 
identical)
SS filesystem: /dev01
full path: /dev01/app-bin/user-tree/builds/
soft link: /appbin/ -> /dev01/app-bin/user-tree/
 
test:
SS filesystem: /tst01
full path: /tst01/app-bin/user-tree/builds/
soft link: /appbin/ -> /tst01/app-bin/user-tree/
 
prod:
SS filesystem: /prd01
full path: /prd01/app-bin/user-tree/builds/
soft link: /appbin/ -> /prd01/app-bin/user-tree/
 
 
Stephen R. Wall Buchanan
Sr. IT Specialist
IBM Data & AI North America Government Expert Labs
+1 (571) 299-4601
stephen.buchanan at us.ibm.com
 
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




