[gpfsug-discuss] gpfsug-discuss Digest, Vol 73, Issue 60

Ray Coetzee coetzee.ray at gmail.com
Tue Feb 27 23:54:17 GMT 2018


Hi Lohit

Using mmap-based applications against GPFS has a number of challenges. For
me the main one is that mmap threads can fragment the IO into multiple
strided reads at random offsets, which defeats GPFS's attempts at
prefetching the file contents.
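
As a rough illustration (a hypothetical, standalone sketch, not taken from any
real workload), this is the kind of access pattern that defeats sequential
prefetch:

    #!/usr/bin/env python3
    # Hypothetical illustration: touching an mmap'ed file at scattered offsets
    # turns one logical pass into many small random reads that prefetch cannot
    # predict. Assumes "bigfile.dat" is a large (multi-GB) file on GPFS.
    import mmap
    import random

    with open("bigfile.dat", "rb") as f:
        with mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as m:
            page = 4096
            offsets = random.sample(range(0, len(m) - page, page), 1000)
            # Each access may fault in a different page at a random offset.
            checksum = sum(m[off] for off in offsets)
            print(checksum)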

LROC, as the name implies, is only a Local Read-Only Cache and functions as
an extension of the local pagepool on the client.

You would only see a performance improvement if the file(s) have been read
into the local pagepool on a previous occasion.

Depending on the dataset size & the NVMe/SSDs you have for LROC, you could
look at using a pre-job to read the file(s) in their entirety on the
compute node before the mmap process starts, as this would ensure the
relevant data blocks are in the local pagepool or LROC.
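
As a rough sketch of such a pre-job (the file list on stdin and the chunk size
are my assumptions, not a recommendation), something like this would pull each
file through the pagepool sequentially before the mmap job starts:

    #!/usr/bin/env python3
    # Hypothetical pre-job: read each listed file in full so its data blocks
    # are pulled into the client pagepool (and can spill into LROC) before the
    # mmap-heavy application runs.
    import sys

    CHUNK = 8 * 1024 * 1024  # 8 MiB reads; tune to the filesystem block size

    def prewarm(path):
        with open(path, "rb") as f:
            while f.read(CHUNK):
                pass

    if __name__ == "__main__":
        for line in sys.stdin:           # one file path per line, e.g. from a scheduler prolog
            if line.strip():
                prewarm(line.strip())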

Another solution I've seen is to stage the dataset into tmpfs.
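
If you go that route, a minimal staging sketch could look like the following
(the /dev/shm target and the file list on stdin are assumptions; make sure the
working set fits in node memory):

    #!/usr/bin/env python3
    # Hypothetical staging step: copy the working set from GPFS into tmpfs so
    # the mmap workload reads from RAM rather than over the network.
    import shutil
    import sys
    from pathlib import Path

    STAGE_DIR = Path("/dev/shm/dataset")   # assumed tmpfs location

    STAGE_DIR.mkdir(parents=True, exist_ok=True)
    for line in sys.stdin:                 # one source path per line
        src = Path(line.strip())
        if src.is_file():
            shutil.copy2(src, STAGE_DIR / src.name)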

Sven is working on improvements for mmap on GPFS that may make it into a
production release, so keep an eye out for an update.



Kind regards

Ray Coetzee



On Tue, Feb 27, 2018 at 10:25 PM, <gpfsug-discuss-request at spectrumscale.org>
wrote:

> Send gpfsug-discuss mailing list submissions to
>         gpfsug-discuss at spectrumscale.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> or, via email, send a message with subject or body 'help' to
>         gpfsug-discuss-request at spectrumscale.org
>
> You can reach the person managing the list at
>         gpfsug-discuss-owner at spectrumscale.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gpfsug-discuss digest..."
>
>
> Today's Topics:
>
>    1. Re: Problems with remote mount via routed IB (John Hearns)
>    2. Re: GPFS and Flash/SSD Storage tiered storage (Alex Chekholko)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 27 Feb 2018 09:17:36 +0000
> From: John Hearns <john.hearns at asml.com>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Problems with remote mount via routed IB
> Message-ID:
>         <VI1PR0202MB2862E6B955FF05B89D877B3388C00 at VI1PR0202MB2862.
> eurprd02.prod.outlook.com>
>
> Content-Type: text/plain; charset="us-ascii"
>
> Jan Erik,
>    Can you clarify whether you are routing IP traffic between the two
> Infiniband networks, or whether you are routing Infiniband traffic?
>
>
> If I can be of help: I manage an Infiniband network which connects to other
> IP networks using Mellanox VPI gateways, which proxy-ARP between IB and
> Ethernet. But I am not running GPFS traffic over these.
>
>
>
> -----Original Message-----
> From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-
> bounces at spectrumscale.org] On Behalf Of Sundermann, Jan Erik (SCC)
> Sent: Monday, February 26, 2018 5:39 PM
> To: gpfsug-discuss at spectrumscale.org
> Subject: [gpfsug-discuss] Problems with remote mount via routed IB
>
>
> Dear all,
>
> We are currently trying to remote-mount a file system in a routed
> Infiniband test setup and are facing problems with dropped RDMA connections.
> The setup is the following:
>
> - Spectrum Scale Cluster 1 is set up on four servers which are connected to
> the same Infiniband network. Additionally, they are connected to a fast
> Ethernet network providing IP communication in the network 192.168.11.0/24.
>
> - Spectrum Scale Cluster 2 is set up on four additional servers which are
> connected to a second Infiniband network. These servers have IPs on their
> IB interfaces in the network 192.168.12.0/24.
>
> - IP is routed between 192.168.11.0/24 and 192.168.12.0/24 on a dedicated
> machine.
>
> - We have a dedicated IB hardware router connected to both IB subnets.
>
>
> We tested that the routing, both IP and IB, is working between the two
> clusters without problems, and that RDMA is working fine for internal
> communication inside both cluster 1 and cluster 2.
>
> When trying to remote-mount a file system from cluster 1 in cluster 2,
> RDMA communication is not working as expected. Instead we see error
> messages on the remote host (cluster 2):
>
>
> 2018-02-23_13:48:47.037+0100: [I] VERBS RDMA connecting to 192.168.11.4
> (iccn004-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 index 2
> 2018-02-23_13:48:49.890+0100: [I] VERBS RDMA connected to 192.168.11.4
> (iccn004-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 sl 0
> index 2
> 2018-02-23_13:48:53.138+0100: [E] VERBS RDMA closed connection to
> 192.168.11.1 (iccn001-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1
> fabnum 0 error 733 index 3
> 2018-02-23_13:48:53.854+0100: [I] VERBS RDMA connecting to 192.168.11.1
> (iccn001-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 index 3
> 2018-02-23_13:48:54.954+0100: [E] VERBS RDMA closed connection to
> 192.168.11.3 (iccn003-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1
> fabnum 0 error 733 index 1
> 2018-02-23_13:48:55.601+0100: [I] VERBS RDMA connected to 192.168.11.1
> (iccn001-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 sl 0
> index 3
> 2018-02-23_13:48:57.775+0100: [I] VERBS RDMA connecting to 192.168.11.3
> (iccn003-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 index 1
> 2018-02-23_13:48:59.557+0100: [I] VERBS RDMA connected to 192.168.11.3
> (iccn003-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 sl 0
> index 1
> 2018-02-23_13:48:59.876+0100: [E] VERBS RDMA closed connection to
> 192.168.11.2 (iccn002-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1
> fabnum 0 error 733 index 0
> 2018-02-23_13:49:02.020+0100: [I] VERBS RDMA connecting to 192.168.11.2
> (iccn002-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 index 0
> 2018-02-23_13:49:03.477+0100: [I] VERBS RDMA connected to 192.168.11.2
> (iccn002-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 sl 0
> index 0
> 2018-02-23_13:49:05.119+0100: [E] VERBS RDMA closed connection to
> 192.168.11.4 (iccn004-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1
> fabnum 0 error 733 index 2
> 2018-02-23_13:49:06.191+0100: [I] VERBS RDMA connecting to 192.168.11.4
> (iccn004-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 index 2
> 2018-02-23_13:49:06.548+0100: [I] VERBS RDMA connected to 192.168.11.4
> (iccn004-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 sl 0
> index 2
> 2018-02-23_13:49:11.578+0100: [E] VERBS RDMA closed connection to
> 192.168.11.1 (iccn001-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1
> fabnum 0 error 733 index 3
> 2018-02-23_13:49:11.937+0100: [I] VERBS RDMA connecting to 192.168.11.1
> (iccn001-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 index 3
> 2018-02-23_13:49:11.939+0100: [I] VERBS RDMA connected to 192.168.11.1
> (iccn001-gpfs in gpfsstorage.localdomain) on mlx4_0 port 1 fabnum 0 sl 0
> index 3
>
>
> and in the cluster with the file system (cluster 1)
>
> 2018-02-23_13:47:36.112+0100: [E] VERBS RDMA rdma read error
> IBV_WC_RETRY_EXC_ERR to 192.168.12.5 (iccn005-ib in
> gpfsremoteclients.localdomain) on mlx4_0 port 1 fabnum 0 vendor_err 129
> 2018-02-23_13:47:36.112+0100: [E] VERBS RDMA closed connection to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 due to RDMA read error IBV_WC_RETRY_EXC_ERR index 3
> 2018-02-23_13:47:47.161+0100: [I] VERBS RDMA accepted and connected to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 sl 0 index 3
> 2018-02-23_13:48:04.317+0100: [E] VERBS RDMA rdma read error
> IBV_WC_RETRY_EXC_ERR to 192.168.12.5 (iccn005-ib in
> gpfsremoteclients.localdomain) on mlx4_0 port 1 fabnum 0 vendor_err 129
> 2018-02-23_13:48:04.317+0100: [E] VERBS RDMA closed connection to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 due to RDMA read error IBV_WC_RETRY_EXC_ERR index 3
> 2018-02-23_13:48:11.560+0100: [I] VERBS RDMA accepted and connected to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 sl 0 index 3
> 2018-02-23_13:48:32.523+0100: [E] VERBS RDMA rdma read error
> IBV_WC_RETRY_EXC_ERR to 192.168.12.5 (iccn005-ib in
> gpfsremoteclients.localdomain) on mlx4_0 port 1 fabnum 0 vendor_err 129
> 2018-02-23_13:48:32.523+0100: [E] VERBS RDMA closed connection to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 due to RDMA read error IBV_WC_RETRY_EXC_ERR index 3
> 2018-02-23_13:48:35.398+0100: [I] VERBS RDMA accepted and connected to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 sl 0 index 3
> 2018-02-23_13:48:53.135+0100: [E] VERBS RDMA rdma read error
> IBV_WC_RETRY_EXC_ERR to 192.168.12.5 (iccn005-ib in
> gpfsremoteclients.localdomain) on mlx4_0 port 1 fabnum 0 vendor_err 129
> 2018-02-23_13:48:53.135+0100: [E] VERBS RDMA closed connection to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 due to RDMA read error IBV_WC_RETRY_EXC_ERR index 3
> 2018-02-23_13:48:55.600+0100: [I] VERBS RDMA accepted and connected to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 sl 0 index 3
> 2018-02-23_13:49:11.577+0100: [E] VERBS RDMA rdma read error
> IBV_WC_RETRY_EXC_ERR to 192.168.12.5 (iccn005-ib in
> gpfsremoteclients.localdomain) on mlx4_0 port 1 fabnum 0 vendor_err 129
> 2018-02-23_13:49:11.577+0100: [E] VERBS RDMA closed connection to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 due to RDMA read error IBV_WC_RETRY_EXC_ERR index 3
> 2018-02-23_13:49:11.939+0100: [I] VERBS RDMA accepted and connected to
> 192.168.12.5 (iccn005-ib in gpfsremoteclients.localdomain) on mlx4_0 port 1
> fabnum 0 sl 0 index 3
>
>
>
> Any advice on how to configure the setup in a way that would allow the
> remote mount via routed IB would be much appreciated.
>
>
> Thank you and best regards
> Jan Erik
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 27 Feb 2018 14:25:30 -0800
> From: Alex Chekholko <alex at calicolabs.com>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc: gpfsug-discuss-bounces at spectrumscale.org
> Subject: Re: [gpfsug-discuss] GPFS and Flash/SSD Storage tiered
>         storage
> Message-ID:
>         <CANcy_PbVVwN7Uok7aiRDYm_4aDUaS9n0M=cXn3Do24YGL_SVhQ@
> mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> My experience has been that you could spend the same money to just make
> your main pool more performant.  Instead of doing two data transfers (one
> from cold pool to AFM or hot pools, one from AFM/hot to client), you can
> just make the direct access of the data faster by adding more resources to
> your main pool.
>
> Regards,
> Alex
>
> On Thu, Feb 22, 2018 at 5:27 PM, <valleru at cbio.mskcc.org> wrote:
>
> > Thanks, I will try the file heat feature, but I am really not sure if it
> > would work - since the code can access cold files too, not necessarily
> > only recently accessed/hot files.
> >
> > With respect to LROC, let me explain below:
> >
> > The use case is this:
> > As a first step, the code reads headers (a small region of data) from
> > thousands of files - for example about 30,000 of them, each about 300MB
> > to 500MB in size.
> > After that first step, with the help of those headers, it mmaps/seeks
> > across various regions of a set of files in parallel.
> > Since it is all small IOs and it was really slow at reading from GPFS over
> > the network directly from disks, our idea was to use AFM, which I believe
> > fetches all file data into flash/SSDs once the initial few blocks of the
> > files are read.
> > But again, AFM does not seem to solve the problem, so I want to know if LROC
> > behaves in the same way as AFM, where all of the file data is prefetched in
> > full block size utilizing all the worker threads if a few blocks of the
> > file are read initially.
> >
> > Thanks,
> > Lohit
> >
> > On Feb 22, 2018, 4:52 PM -0500, IBM Spectrum Scale <scale at us.ibm.com>,
> > wrote:
> >
> > My apologies for not being more clear on the flash storage pool.  I meant
> > that this would be just another GPFS storage pool in the same cluster, so
> > no separate AFM cache cluster.  You would then use the file heat feature to
> > ensure more frequently accessed files are migrated to that all-flash
> > storage pool.
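> >
> > For illustration, a minimal sketch of how that migration could be driven
> > (pool names 'nlsas' and 'flash', the device name 'gpfs0', and the period
> > and limit values are assumptions - check the ILM and file heat
> > documentation before using anything like this):
> >
> >     #!/usr/bin/env python3
> >     # Hypothetical helper: enable file heat tracking and apply a policy that
> >     # migrates the hottest files from a slow pool into an all-flash pool.
> >     import subprocess
> >     import tempfile
> >
> >     POLICY = """
> >     RULE 'migrate_hot' MIGRATE FROM POOL 'nlsas'
> >          WEIGHT(FILE_HEAT) TO POOL 'flash' LIMIT(90)
> >     """
> >
> >     # Turn on file heat tracking; the period (in minutes) is illustrative.
> >     subprocess.run(["mmchconfig", "fileHeatPeriodMinutes=1440"], check=True)
> >
> >     # Write the policy to a temporary file and apply it to the assumed device.
> >     with tempfile.NamedTemporaryFile("w", suffix=".pol", delete=False) as f:
> >         f.write(POLICY)
> >     subprocess.run(["mmapplypolicy", "gpfs0", "-P", f.name], check=True)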
> >
> > As for LROC, could you please clarify what you mean by a few headers/stubs
> > of the file?  In reading the LROC documentation and the LROC variables
> > available in the mmchconfig command, I think you might want to take a look
> > at the lrocDataStubFileSize variable since it seems to apply to your
> > situation.
> >
> > Regards, The Spectrum Scale (GPFS) team
> >
> > ------------------------------------------------------------
> > ------------------------------------------------------
> > If you feel that your question can benefit other users of Spectrum Scale
> > (GPFS), then please post it to the public IBM developerWorks Forum at
> > https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.
> >
> > If your query concerns a potential software error in Spectrum Scale (GPFS)
> > and you have an IBM software maintenance contract, please contact
> > 1-800-237-5511 in the United States or your local IBM
> > Service Center in other countries.
> >
> > The forum is informally monitored as time permits and should not be used
> > for priority messages to the Spectrum Scale (GPFS) team.
> >
> >
> >
> > From:        valleru at cbio.mskcc.org
> > To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> > Cc:        gpfsug-discuss-bounces at spectrumscale.org
> > Date:        02/22/2018 04:21 PM
> > Subject:        Re: [gpfsug-discuss] GPFS and Flash/SSD Storage tiered
> > storage
> > Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> > ------------------------------
> >
> >
> >
> > Thank you.
> >
> > I am sorry if I was not clear, but the metadata pool is all on SSDs in the
> > GPFS clusters that we use. It's just the data pool that is on near-line
> > rotating disks.
> > I understand that AFM might not be able to solve the issue, and I will try
> > and see if file heat works for migrating the files to the flash tier.
> > You mentioned an all-flash storage pool for heavily used files - do you
> > mean a different GPFS cluster just with flash storage, and manually copying
> > the files to flash storage whenever needed?
> > The IO performance I am talking about is predominantly for reads, so you
> > mention that LROC can work in the way I want it to? That is, prefetch all
> > the files into the LROC cache after only a few headers/stubs of data are
> > read from those files?
> > I thought LROC only keeps the blocks of data that are prefetched from
> > disk, and will not prefetch the whole file if a stub of data is read.
> > Please do let me know if I understood it wrong.
> >
> > On Feb 22, 2018, 4:08 PM -0500, IBM Spectrum Scale <scale at us.ibm.com>,
> > wrote:
> > I do not think AFM is intended to solve the problem you are trying to
> > solve.  If I understand your scenario correctly, you state that you are
> > placing metadata on NL-SAS storage.  If that is true, that would not be
> > wise, especially if you are going to do many metadata operations.  I
> > suspect your performance issues are partially due to the fact that metadata
> > is being stored on NL-SAS storage.  You stated that you did not think the
> > file heat feature would do what you intended, but have you tried it to see
> > if it could solve your problem?  I would think having metadata on SSD/flash
> > storage combined with an all-flash storage pool for your heavily used files
> > would perform well.  If you expect IO usage to be such that there will be
> > far more reads than writes, then LROC should be beneficial to your overall
> > performance.
> >
> > Regards, The Spectrum Scale (GPFS) team
> >
> > ------------------------------------------------------------
> > ------------------------------------------------------
> > If you feel that your question can benefit other users of Spectrum Scale
> > (GPFS), then please post it to the public IBM developerWorks Forum at
> > https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.
> >
> > If your query concerns a potential software error in Spectrum Scale (GPFS)
> > and you have an IBM software maintenance contract, please contact
> > 1-800-237-5511 in the United States or your local IBM
> > Service Center in other countries.
> >
> > The forum is informally monitored as time permits and should not be used
> > for priority messages to the Spectrum Scale (GPFS) team.
> >
> >
> >
> > From:        valleru at cbio.mskcc.org
> > To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> > Date:        02/22/2018 03:11 PM
> > Subject:        [gpfsug-discuss] GPFS and Flash/SSD Storage tiered storage
> > Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> > ------------------------------
> >
> >
> >
> > Hi All,
> >
> > I am trying to figure out a GPFS tiering architecture with flash storage
> > on the front end and near-line storage as the backend, for supercomputing.
> >
> > The backend storage will be a GPFS system on near-line disks of about 8-10PB.
> > The backend storage will/can be tuned to give out large streaming bandwidth
> > and enough metadata disks to make the stat of all these files fast enough.
> >
> > I was wondering if it would be possible to use a GPFS flash cluster or GPFS
> > SSD cluster on the front end that uses AFM and acts as a cache cluster with
> > the backend GPFS cluster.
> >
> > At the end of this, the workflow that I am targeting is one where:
> >
> >
> > If the compute nodes read headers of thousands of large files ranging from
> > 100MB to 1GB, the AFM cluster should be able to bring up enough threads to
> > bring all of the files from the backend to the faster SSD/Flash GPFS
> > cluster.
> > The working set might be about 100T at a time, which I want to be on a
> > faster/low-latency tier, with the rest of the files in the slower tier
> > until they are read by the compute nodes.
> >
> >
> > The reason I do not want to use GPFS policies to achieve the above is that
> > I am not sure if policies could be written in a way that files are moved
> > from the slower tier to the faster tier depending on how the jobs interact
> > with the files.
> > I know that policies could be written depending on heat and
> > size/format, but I don't think these policies work in a similar way as
> > described above.
> >
> > I did try the above architecture, where an SSD GPFS cluster acts as an AFM
> > cache cluster in front of the near-line storage. However, the AFM cluster
> > was really, really slow: it took about a few hours to copy the files from
> > the near-line storage to the AFM cache cluster.
> > I am not sure if AFM is not designed to work this way, or if AFM is not
> > tuned to work as fast as it should.
> >
> > I have tried LROC too, but it does not behave the same way as I guess AFM
> > works.
> >
> > Has anyone tried, or does anyone know, if GPFS supports an architecture
> > where the fast tier can bring up thousands of threads and copy the files
> > almost instantly/asynchronously from the slow tier, whenever the jobs from
> > compute nodes read a few blocks from these files?
> > I understand that, with respect to hardware, the AFM cluster should be
> > really fast, as well as the network between the AFM cluster and the backend
> > cluster.
> >
> > Please do also let me know if the above workflow can be done using GPFS
> > policies and be as fast as it needs to be.
> >
> > Regards,
> > Lohit
> >
> > _______________________________________________
> > gpfsug-discuss mailing list
> > gpfsug-discuss at spectrumscale.org
> >
> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> >
> >
> >
> >
> >
>
> ------------------------------
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> End of gpfsug-discuss Digest, Vol 73, Issue 60
> **********************************************
>

