[gpfsug-discuss] Mounting GPFS data on OpenStack VM

Brian Marshall mimarsh2 at vt.edu
Fri Jan 20 18:22:11 GMT 2017


Perfect.  Thanks for the advice.

Further, and this might be a basic question: are there design guides for
building CES protocol servers?
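
To be concrete, I am thinking of guidance that goes beyond the basic
enablement flow, which as I understand it (commands from memory; node names,
addresses, and paths below are just placeholders) is roughly:

    # Designate protocol nodes and enable CES on them
    mmchnode --ces-enable -N protocol01,protocol02

    # Assign a floating CES address and enable the NFS service
    mmces address add --ces-ip 10.1.1.100
    mmces service enable NFS

    # Export a path over NFS to the tenant network
    mmnfs export add /gpfs/fs0/projects --client "10.0.0.0/24(Access_Type=RW)"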

Brian

On Fri, Jan 20, 2017 at 1:04 PM, Gaurang Tapase <gaurang.tapase at in.ibm.com>
wrote:

> Hi Brian,
>
> For option #3, you can use the GPFS Manila (OpenStack shared file system
> service) driver to export data from the protocol servers to the OpenStack
> VMs.  It was updated to support CES in the Newton release.
>
> A new feature of bringing existing filesets under Manila management has
> also been added recently.
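>
> As a rough sketch only (option names from memory; please check the driver
> documentation for your release), the backend configuration and share
> workflow look something like the following, with placeholder IPs and paths:
>
>     # manila.conf backend stanza for the GPFS/Spectrum Scale share driver
>     [gpfs_ces]
>     share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver
>     driver_handles_share_servers = False
>     gpfs_share_export_ip = 10.1.1.100        # CES export IP (placeholder)
>     gpfs_mount_point_base = /ibm/gpfs0/manila
>     gpfs_nfs_server_type = CES
>
>     # Create a share type and a share, then allow access from a tenant network
>     manila type-create gpfs_nfs false
>     manila create NFS 100 --name webapp-data --share-type gpfs_nfs
>     manila access-allow webapp-data ip 10.0.0.0/24 --access-level rw
>
>     # Bring an existing export under Manila management (the feature mentioned above)
>     manila manage <service_host> NFS <export_path> --name existing-share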
>
> Thanks,
> Gaurang
> ------------------------------------------------------------------------
> Gaurang S Tapase
> Spectrum Scale & OpenStack
> IBM India Storage Lab, Pune (India)
> Email : gaurang.tapase at in.ibm.com
> Phone : +91-20-42025699 (W), +91-9860082042 (Cell)
> -------------------------------------------------------------------------
>
>
>
> From:        Brian Marshall <mimarsh2 at vt.edu>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        01/18/2017 09:52 PM
> Subject:        Re: [gpfsug-discuss] Mounting GPFS data on OpenStack VM
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> To answer some more questions:
>
> What sort of workload will your Nova VMs be running?
> This is largely TBD, but we anticipate web apps and other non-batch ways
> of interacting with and post-processing data that has been computed on
> HPC batch systems.  For example, a user might host a website that lets
> people view pieces of a large data set and do some processing in the
> private cloud, or kick off larger jobs on the HPC clusters.
>
> How many VMs are you running?
> This work is still in the design/build phase.  We have 48 servers slated
> for the project and, at most, maybe 500 VMs; again, this is a pretty wild
> estimate.  This is a new service we are looking to provide.
>
> What is your network interconnect between the Scale Storage cluster and
> the Nova Compute cluster?
> Each Nova node has a dual 10 GbE connection to switches that uplink to our
> core 40 GbE switches, where the NSD servers are directly connected.
>
> The information so far has been awesome.  Thanks, everyone.  I am
> definitely leaning towards option #3 of creating protocol servers.  Are
> there any design/build white papers targeting the virtualization use case?
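>
> For what it's worth, what I picture on the VM side is just a plain NFS
> mount of a CES export, something like the following (hostname and paths
> are placeholders):
>
>     # On a Nova VM: mount an NFS export published by the CES nodes
>     sudo mount -t nfs -o vers=4.1,hard,_netdev \
>         ces-vip.example.edu:/gpfs/fs0/projects /mnt/projects
>
>     # or the equivalent /etc/fstab entry:
>     # ces-vip.example.edu:/gpfs/fs0/projects  /mnt/projects  nfs  vers=4.1,hard,_netdev  0 0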
>
> Thanks,
> Brian
>
> On Tue, Jan 17, 2017 at 5:55 PM, Andrew Beattie <abeattie at au1.ibm.com>
> wrote:
> Hi Brian,
>
>
> Couple of questions for you:
>
> What sort of workload will your Nova VMs be running?
> How many VMs are you running?
> What is your network interconnect between the Scale Storage cluster and
> the Nova Compute cluster?
>
> I have cc'd Jake Carrol from the University of Queensland on this email, as
> I know they have done some basic performance testing using Scale to provide
> storage to OpenStack.
> One of the issues they found was that the OpenStack network translation
> was a performance-limiting factor.
>
> I think, from memory, the best-performing scenario they had was when they
> installed the Scale client locally inside the virtual machines.
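>
> In very rough terms (a sketch from memory, not a recipe; node and file
> system names below are placeholders), getting a VM going as a Scale
> client looks like:
>
>     # On the VM: build the GPFS portability layer for the running kernel
>     mmbuildgpl
>
>     # From an existing cluster node: add the VM as a client, then start and mount
>     mmaddnode -N vm01
>     mmchlicense client --accept -N vm01
>     mmstartup -N vm01
>     mmmount fs0 -N vm01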
>
>
> Andrew Beattie
> Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: abeattie at au1.ibm.com
>
>
> ----- Original message -----
> From: Brian Marshall <mimarsh2 at vt.edu>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [gpfsug-discuss] Mounting GPFS data on OpenStack VM
> Date: Wed, Jan 18, 2017 7:51 AM
>
> UG,
>
> I have a GPFS filesystem.
>
> I have an OpenStack private cloud.
>
> What is the best way for Nova Compute VMs to have access to data inside
> the GPFS filesystem?
>
> 1) Should VMs mount GPFS directly with a GPFS client?
> 2) Should the hypervisor mount GPFS and share it to the Nova computes?
> 3) Should I create GPFS protocol servers that allow the Nova computes to
> mount over NFS?
>
> All advice is welcome.
>
>
> Best,
> Brian Marshall
> Virginia Tech
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss