[gpfsug-discuss] GPFS on ZFS?

Yuri L Volobuev volobuev at us.ibm.com
Tue Jun 14 22:05:53 BST 2016


GPFS proper (as opposed to GNR) isn't particularly picky about block
devices.  Any block device that GPFS can see, with help from an nsddevices
user exit if necessary, is fair game, for those willing to blaze new
trails.  This applies to "real" devices, e.g. disk partitions or hardware
RAID LUNs, and "virtual" ones, like software RAID devices.  The device has
to be capable of accepting IO requests of GPFS block size, but aside from
that, the Linux kernel does a pretty good job of abstracting the realities
of the low-level implementation behind the higher-level block device API.
The basic problem with software RAID approaches is the lack of efficient
HA.  Since a given device is only visible to one node, if that node goes
down, it takes its NSDs with it (as opposed to the more traditional
twin-tailed disk model, where another NSD server can take over).  So one
would have to rely on GPFS data/metadata replication to get HA, and that
is costly, in terms of both disk utilization efficiency and data write
cost.  This is still an attractive model for some use cases, but it's not
quite a one-to-one replacement for something like GNR for general use.
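For the nsddevices user exit mentioned above, a minimal sketch of what one
might look like for ZFS zvols follows.  This is a sketch only: the
/var/mmfs/etc/nsddevices path and the "deviceName deviceType" output
convention follow the GPFS sample user exit, the assumption that zvols show
up as /dev/zd* is Linux-specific, and the "generic" device type should be
checked against your GPFS release's documentation.

```shell
# Sketch of /var/mmfs/etc/nsddevices, the user exit GPFS device
# discovery consults to learn about block devices it would not find
# on its own.  It prints one "deviceName deviceType" pair per device,
# with names given relative to /dev.  Here we expose ZFS zvols.
for dev in /dev/zd*; do
    [ -b "$dev" ] || continue            # skip if no zvols exist on this node
    echo "${dev#/dev/} generic"          # e.g. "zd0 generic"
done
# The real user exit ends with "return 0" (it is sourced, not executed):
# returning 0 tells GPFS to use only the devices listed above, while a
# nonzero return appends GPFS's built-in discovery on top.
```

On a node where zvols exist, device discovery should then report the zd
devices as candidate NSD devices; on a node without them, the loop prints
nothing and discovery proceeds as usual.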

yuri



From:	"Jaime Pinto" <pinto at scinet.utoronto.ca>
To:	"gpfsug main discussion list"
            <gpfsug-discuss at spectrumscale.org>,
Date:	06/13/2016 09:11 AM
Subject:	Re: [gpfsug-discuss] GPFS on ZFS?
Sent by:	gpfsug-discuss-bounces at spectrumscale.org



I just came across this presentation on "GPFS with underlying ZFS
block devices", by Christopher Hoffman, Los Alamos National Lab,
though some of the implementation details remain obscure.

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

It would be great to have more details, in particular on the possibility
of straight use of GPFS on ZFS, instead of the 'archive' use case
described in the presentation.
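Purely as a hedged sketch of what such "straight use" might look like
(all pool, device, node, and filesystem names below are hypothetical;
the commands are standard GPFS and ZFS administration commands, not
taken from the presentation):

```shell
# On each NSD server: carve a zvol out of a local ZFS pool
# (pool name "tank" and the 10T size are placeholders)
zfs create -V 10T tank/gpfsvol0

# Describe one NSD per node.  Each zvol is visible only to its own
# server, so each node goes into its own failure group.
cat > /tmp/nsd.stanza <<'EOF'
%nsd: device=/dev/zd0
  nsd=nsd_node1_0
  servers=node1
  usage=dataAndMetadata
  failureGroup=1
%nsd: device=/dev/zd0
  nsd=nsd_node2_0
  servers=node2
  usage=dataAndMetadata
  failureGroup=2
EOF
mmcrnsd -F /tmp/nsd.stanza

# Create the filesystem with 2-way metadata (-m) and data (-r)
# replication across the failure groups.  This is the disk-utilization
# and write-cost penalty of replication-based HA: every block is
# written twice.
mmcrfs gpfs1 -F /tmp/nsd.stanza -m 2 -r 2
```

Whether this is acceptable depends on how much the doubled write cost and
halved usable capacity matter for the workload.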

Thanks
Jaime




Quoting "Jaime Pinto" <pinto at scinet.utoronto.ca>:

> Since we cannot get GNR outside of ESS/GSS appliances, is anybody using
> ZFS for software RAID on commodity storage?
>
> Thanks
> Jaime
>
>




          ************************************
           TELL US ABOUT YOUR SUCCESS STORIES
          http://www.scinethpc.ca/testimonials
          ************************************
---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477

----------------------------------------------------------------
This message was sent using IMP at SciNet Consortium, University of
Toronto.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



